
Building a Modern CI/CD Pipeline in the Serverless Era with GitOps | Amazon Web Services

Guest post by AWS Community Hero Shimon Tolts, CTO and co-founder at Datree.io. He specializes in developer tools and infrastructure, running a company that is 100% serverless.

In recent years, there has been a major transition in the way you build and ship software. It was mainly about microservices, splitting code into small components, using infrastructure as code, and using Git as the single source of truth that glues it all together.

In this post, I discuss the transition and the different steps of modern software development to showcase the possible solutions for the serverless world. In addition, I list useful tools that were designed for this era.

What is serverless?

Before I dive into the wonderful world of serverless development and tooling, here’s what I mean by serverless. The AWS website talks about four main benefits:

  • No server management.
  • Flexible scaling.
  • Pay for value.
  • Automated high availability.

To me, serverless is any infrastructure that you don’t have to manage and scale yourself.
At my company, Datree.io, we run 95% of our workload on AWS Fargate and 5% on AWS Lambda. We are a serverless company; we have zero Amazon EC2 instances in our AWS account. For more information, see the AWS Fargate and AWS Lambda product pages.

What is GitOps?

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
According to Luis Faceira, a CI/CD consultant, GitOps is a way of working. You might look at it as an approach in which everything starts and ends with Git. Here are some key concepts:

  • Git as the SINGLE source of truth of a system
  • Git as the SINGLE place where we operate (create, change and destroy) ALL environments
  • ALL changes are observable/verifiable.

How you built software before the cloud

Back in the waterfall pre-cloud era, you used to have separate teams for development, testing, security, operations, monitoring, and so on.

Nowadays, in most organizations, there is a transition to full developer autonomy and developers owning the entire production path. The developer is the King – or Queen 🙂

Those teams (Ops, Security, IT, etc.) used to be gatekeepers who validated and controlled every developer change. Now they have become more of a satellite unit that drives policy and sets best practices and standards. Instead of being the production bottleneck, they provide organization-wide platforms and enablement solutions.

Everything is codified

With the transition into full developer ownership of the entire pipeline, developers automated everything. We have more code than ever, and processes that used to be manual are now described in code.

This is a good transition, in my opinion. Here are some of the benefits:

  • Automation: By storing all things as code, everything can be automated, reused, and re-created in moments.
  • Immutability: If anything goes wrong, create it again from the stored configuration.
  • Versioning: Changes can be applied and reverted, and are tracked to a single user who made the change.

GitOps: Git has become the single source of truth

The second major transition is that now everything is in one place! Git is the place where all of the code is stored and where all operations are initiated. Whether it’s testing, building, packaging, or releasing, nowadays everything is triggered through pull requests.

This is amplified by the codification of everything.

Useful tools in the serverless era

There are many useful tools on the market; here is a list of ones that were designed for serverless.

Code

Always store your code in a source control system. In recent years, more and more functions have been codified, such as BI, ops, security, and AI. For new developers, it is not always obvious that such functionality should also be kept in source control.

Build and test

The most common mistake I see is build jobs that are configured manually in the GUI. This might be fine for a small POC, but it is not scalable. Your jobs should be codified and stored in your Git repository, as in the sketch below. Here are some tools to help with building and testing:
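As a rough, hypothetical illustration of a codified job, a build project can be created from a definition kept in the repository with a few lines of boto3 instead of console clicks. The project name, repository URL, image, and role ARN below are placeholders:

# Hypothetical sketch: create a CodeBuild project from a definition kept in Git,
# instead of clicking it together in the console. Names and ARNs are placeholders.
import boto3

codebuild = boto3.client("codebuild")

codebuild.create_project(
    name="my-service-build",                                  # placeholder project name
    source={
        "type": "GITHUB",
        "location": "https://github.com/example/my-service.git",
        "buildspec": "buildspec.yml",                          # the build steps live in the repo
    },
    artifacts={"type": "NO_ARTIFACTS"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:2.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::123456789012:role/my-codebuild-role",  # placeholder role
)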

Security and governance

When working in a serverless way, you end up having many Git repos, and the number of code packages can be overwhelming. The demand for unified code standards remains, but it is now much harder to enforce them across your R&D organization. Here are some tools that might help you with the challenge:

Bundle and release

Building a serverless application means connecting microservices into one unit. For example, you might be using Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. Instead of configuring each one separately, you should use a bundler to hold the configuration in one place. That allows for easy versioning and replication of the app across several environments (see the sketch below). Here are a couple of bundlers:
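As a minimal, hypothetical sketch of this idea, using the AWS CDK for Python (covered later in this post) as the bundler, the three services above can be described together in one stack. The class name, asset path, and resource names are placeholders:

# Minimal sketch of bundling API Gateway, Lambda, and DynamoDB in one stack.
# Assumes AWS CDK v1 for Python; names and asset paths are placeholders.
from aws_cdk import core, aws_apigateway, aws_dynamodb, aws_lambda

class MyAppStack(core.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        # A DynamoDB table shared by the app
        table = aws_dynamodb.Table(
            self, "Items",
            partition_key=aws_dynamodb.Attribute(
                name="id", type=aws_dynamodb.AttributeType.STRING),
        )

        # A Lambda function that reads and writes the table
        handler = aws_lambda.Function(
            self, "ApiHandler",
            runtime=aws_lambda.Runtime.PYTHON_3_7,
            handler="index.main",
            code=aws_lambda.Code.asset("src"),            # placeholder asset directory
            environment={"TABLE_NAME": table.table_name},
        )
        table.grant_read_write_data(handler)

        # An HTTPS REST front door for the function
        aws_apigateway.LambdaRestApi(self, "ApiEndpoint", handler=handler)

Because the whole unit lives in one place, replicating it for a staging or test environment is just a matter of deploying the same stack again.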

Package

When working with many different serverless components, you should create small packages of shared tools that can be imported across different Lambda functions. You can use a language-specific store like npm or RubyGems, or use a more holistic solution (a minimal example follows). Here are several package artifact stores that allow hosting for multiple programming languages:
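As a hypothetical illustration, a small internal utility package could look like this, so that every Lambda function installs it from your package store instead of copying the code around. The package name and contents are placeholders:

# setup.py for a small shared utility package (hypothetical name and version).
# Publish it to your artifact store, then install it in each Lambda project.
from setuptools import setup, find_packages

setup(
    name="my-org-utils",        # placeholder package name
    version="0.1.0",
    packages=find_packages(),   # e.g. my_org_utils/ with logging and config helpers
    install_requires=["boto3"],
)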

Monitor

This part is especially tricky when working with serverless applications, as everything is split into small pieces. It’s important to use monitoring tools that support this mode of work. Here are some tools that can handle serverless:

Summary

The serverless era brings many transitions with it, such as the codification of the entire pipeline and Git becoming the single source of truth. This doesn't mean that the problems we used to have, such as security and logging, have disappeared. You should keep addressing them, and leverage tools that let you focus on your business.

AWS DeepLens – Now Orderable in Seven Additional Countries | Amazon Web Services

The new (2019) edition of the AWS DeepLens can now be purchased in seven countries (US, UK, Germany, France, Spain, Italy, and Canada), and preordered in Japan. The 2019 edition is easier to set up, and (thanks to Amazon SageMaker Neo) runs machine learning models up to twice as fast as the earlier edition.

New Tutorials
We are also launching a pair of new tutorials to help you get started:

aws-deeplens-coffee-leaderboard – This tutorial focuses on a demo that uses face detection to track the number of people that drink coffee. It watches a scene, and triggers a Lambda function when a face is detected. Amazon Rekognition is used to detect the presence of a coffee mug, and the face is added to a DynamoDB database that is maintained by (and private to) the demo. The demo also includes a leaderboard that tracks the number of coffees over time. Here’s the architecture:
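To give a rough idea of the detection step, here is a minimal, hypothetical sketch in Python using boto3. The event shape, label names, and table name are placeholders, not the actual tutorial code:

# Hypothetical sketch: check an uploaded frame for a coffee mug with Amazon Rekognition
# and record the sighting in DynamoDB. Event shape, names, and labels are placeholders.
import time
import boto3

rekognition = boto3.client("rekognition")
table = boto3.resource("dynamodb").Table("CoffeeLeaderboard")   # placeholder table name

def handler(event, context):
    # Assumed event shape: the captured frame has already been uploaded to S3.
    bucket, key = event["bucket"], event["key"]

    labels = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=80,
    )["Labels"]

    if any(label["Name"] in ("Coffee Cup", "Cup", "Mug") for label in labels):
        table.put_item(Item={
            "person_id": event["person_id"],    # placeholder: id from face matching
            "timestamp": int(time.time()),
        })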

And here’s the leaderboard:

To learn more, read Track the number of coffees consumed using AWS DeepLens.

aws-deeplens-worker-safety-project – This tutorial focuses on a demo that identifies workers who are not wearing safety helmets. The DeepLens detects faces, and uploads the images to S3 for further processing. The results are analyzed using AWS IoT and Amazon CloudWatch, and are displayed on a web dashboard. Here’s the architecture:
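In the same spirit, here is a rough, hypothetical sketch of the upload-and-notify portion of such a pipeline; the bucket name, topic, and payload shape are placeholders:

# Hypothetical sketch: store the frame in S3 and publish a finding over AWS IoT.
# Bucket name, topic, and payload shape are placeholders.
import json
import time
import boto3

s3 = boto3.client("s3")
iot = boto3.client("iot-data")

def report_unprotected_worker(frame_bytes, camera_id):
    key = f"frames/{camera_id}/{int(time.time())}.jpg"
    s3.put_object(Bucket="worker-safety-frames", Key=key, Body=frame_bytes)  # placeholder bucket

    iot.publish(
        topic="worker-safety/findings",        # placeholder topic
        qos=0,
        payload=json.dumps({
            "camera_id": camera_id,
            "s3_key": key,
            "finding": "no-safety-helmet",
        }),
    )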

To learn more, register for and then take the free 30-minute course: Worker Safety Project with AWS DeepLens.

Detecting Cats, and Cats with Rats
Finally, I would like to share a really cool video featuring my colleague Ben Hamm. After growing tired of cleaning up the remains of rats and other creatures that his cat Metric had killed, Ben decided to put his DeepLens to work. Using a hand-labeled training set, Ben created a model that could tell when Metric was carrying an unsavory item in his mouth, and then lock him out. Ben presented his project at Ignite Seattle and the video has been very popular. Take a look for yourself:

Order Your DeepLens Today
If you are in one of the countries that I listed above, you can order your DeepLens today and get started with Machine Learning in no time flat! Visit the DeepLens home page to learn more.

Jeff;

AWS Named as a Leader in Gartner’s Infrastructure as a Service (IaaS) Magic Quadrant for the 9th Consecutive Year | Amazon Web Services

My colleagues on the AWS service teams work to deliver what customers want today, and also do their best to anticipate what they will need tomorrow. This Customer Obsession, along with our commitment to Hire and Develop the Best (two of the fourteen Amazon Leadership Principles), helps us to figure out, and then to deliver on, our vision. It is always good to see that our hard work continues to delight customers, and to be recognized by Gartner and other leading analysts.

For the ninth consecutive year, AWS has secured the top-right corner of the Leader’s quadrant in Gartner’s Magic Quadrant for Cloud Infrastructure as a Service (IaaS), earning the highest placement for Ability to Execute and the furthest for Completeness of Vision:

The full report contains a lot of detail and is a great summary of the features and factors that our customers examine when choosing a cloud provider.

Jeff;

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Meet the Newest AWS News Bloggers! | Amazon Web Services

I wrote my first post for this blog way back in 2004! Over the course of the first decade, the amount of time that I devoted to the blog grew from a small fraction of my day to a full day. In the early days my email inbox was my primary source of information about upcoming launches, and also my primary tool for managing my work backlog. When that proved to be unscalable, Ana came onboard and immediately built a ticketing system and set up a process for teams to request blog posts. Today, a very capable team (Greg, Devin, and Robin) takes care of tickets, platforms, comments, metrics, and so forth so that I can focus on what I like to do best: using new services and writing about them!

Over the years we have experimented with a couple of different strategies to scale the actual writing process. If you are a long-time reader you may have seen posts from Mike, Jinesh, Randall, Tara, Shaun, and a revolving slate of guest bloggers.

News Bloggers
I would like to introduce you to our current lineup of AWS News Bloggers. Like me, the bloggers have a technical background and are prepared to go hands-on with every new service and feature. Here’s our roster:

Steve Roberts (@bellevuesteve) – Steve focuses on .NET tools and technologies.

Julien Simon (@julsimon) – Julien likes to help developers and enterprises to bring their ideas to life.

Brandon West (@bwest) – Brandon leads our developer relations team in the Americas, and has written a book on the topic.

Martin Beeby (@thebeebs) – Martin focuses on .NET applications, and has worked as a C# and VB developer since 2001.

Danilo Poccia (@danilop) – Danilo works with companies of any size to support innovation. He is the author of AWS Lambda in Action.

Sébastien Stormacq (@sebesto) – Sébastien works with builders to unlock the value of the AWS cloud, using his secret blend of passion, enthusiasm, customer advocacy, curiosity, and creativity.

We are already gearing up for re:Invent 2019, and can’t wait to bring you a rich set of blog posts. Stay tuned!

Jeff;

Learn about AWS Services & Solutions – July AWS Online Tech Talks | Amazon Web Services

Join us this July to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Blockchain

July 24, 2019 | 11:00 AM – 12:00 PM PT – Building System of Record Applications with Amazon QLDB – Dive deep into the features and functionality of our first-of-its-kind, purpose-built ledger database, Amazon QLDB.

Containers

July 31, 2019 | 11:00 AM – 12:00 PM PT – Machine Learning on Amazon EKS – Learn how to use KubeFlow and TensorFlow on Amazon EKS for your machine learning needs.

Data Lakes & Analytics

July 31, 2019 | 1:00 PM – 2:00 PM PT – How to Build Serverless Data Lake Analytics with Amazon Athena – Learn how to use Amazon Athena for serverless SQL analytics on your data lake, transform data with AWS Glue, and manage access with AWS Lake Formation.

August 1, 2019 | 11:00 AM – 12:00 PM PT – Enhancing Your Apps with Embedded Analytics – Learn how to add powerful embedded analytics capabilities to your applications, portals and websites with Amazon QuickSight.

Databases

July 25, 2019 | 9:00 AM – 10:00 AM PT – MySQL Options on AWS: Self-Managed, Managed, Serverless – Understand different self-managed and managed MySQL deployment options on AWS, and watch a demonstration of creating a serverless MySQL-compatible database using Amazon Aurora.

DevOps

July 30, 2019 | 9:00 AM – 10:00 AM PT – Build a Serverless App in Under 20 Minutes with Machine Learning Functionality Using AWS Toolkit for Visual Studio Code – Get a live demo on how to create a new, ready-to-deploy serverless application.

End-User Computing

July 23, 2019 | 1:00 PM – 2:00 PM PT – A Security-First Approach to Delivering End User Computing Services – Learn how AWS improves security and reduces cost by moving data to the cloud while providing secure, fast access to desktop applications and data.

IoT

July 30, 2019 | 11:00 AM – 12:00 PM PT – Security Spotlight: Best Practices for Edge Security with Amazon FreeRTOS – Learn best practices for building a secure embedded IoT project with Amazon FreeRTOS.

Machine Learning

July 23, 2019 | 9:00 AM – 10:00 AM PT – Get Started with Machine Learning: Introducing AWS DeepLens, 2019 Edition – Learn the basics of machine learning through building computer vision apps with the new AWS DeepLens.

August 1, 2019 | 9:00 AM – 10:00 AM PT – Implementing Machine Learning Solutions with Amazon SageMaker – Learn how machine learning with Amazon SageMaker can be used to solve industry problems.

Mobile

July 31, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Android Authentication on AWS with AWS Amplify – Learn the basics of Android authentication on AWS and leverage the built-in AWS Amplify Authentication modules to provide user authentication in just a few lines of code.

Networking & Content Delivery

July 23, 2019 | 11:00 AM – 12:00 PM PT – Simplify Traffic Monitoring and Visibility with Amazon VPC Traffic Mirroring – Learn to easily mirror your VPC traffic to monitor and secure traffic in real-time with monitoring appliances of your choice.

Productivity & Business Solutions

July 30, 2019 | 1:00 PM – 2:00 PM PT – Get Started in Minutes with Amazon Connect in Your Contact Center – See how easy it is to get started with Amazon Connect, based on the same technology used by Amazon Customer Service to power millions of customer conversations.

Robotics

July 25, 2019 | 11:00 AM – 12:00 PM PT – Deploying Robotic Simulations Using Machine Learning with Nvidia JetBot and AWS RoboMaker – Learn how to deploy robotic simulations (and find dinosaurs along the way) using machine learning with Nvidia JetBot and AWS RoboMaker.

Security, Identity & Compliance

July 24, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on AWS Certificate Manager Private CA – Creating and Managing Root and Subordinate Certificate Authorities – Learn how to quickly and easily create a complete CA hierarchy, including root and subordinate CAs, with no need for external CAs.

Serverless

July 24, 2019 | 1:00 PM – 2:00 PM PT – Getting Started with AWS Lambda and Serverless Computing – Learn how to run code without provisioning or managing servers with AWS Lambda.

AWS New York Summit 2019 – Summary of Launches & Announcements | Amazon Web Services

The AWS New York Summit just wrapped up! Here’s a quick summary of what we launched and announced:

Amazon EventBridge – This new service builds on the event-processing model that forms the basis for Amazon CloudWatch Events, and makes it easy for you to integrate your AWS applications with SaaS applications such as Zendesk, Datadog, SugarCRM, and Onelogin. Read my blog post, Amazon EventBridge – Event-Driven AWS Integration for your SaaS Applications, to learn more.

Werner announces EventBridge – Photo by Serena

Cloud Development Kit – CDK is now generally available, with support for TypeScript and Python. Read Danilo’s post, AWS Cloud Development Kit (CDK) – TypeScript and Python are Now Generally Available, to learn more.

Fluent Bit Plugins for AWS – Fluent Bit is a multi-platform, open source log processor and forwarder that is compatible with Docker and Kubernetes environments. You can now build a container image that includes new Fluent Bit plugins for Amazon CloudWatch and Amazon Kinesis Data Firehose. The plugins route logs to CloudWatch, Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service. Read Centralized Container Logging with Fluent Bit to learn more.

Nicki, Randall, Robert, and Steve – Photo by Deepak

AWS Toolkit for VS Code – This toolkit lets you develop and test locally (including step-through debugging) in a Lambda-like environment, and then deploy to the AWS Region of your choice. You can invoke Lambda functions locally or remotely, with full control of the function configuration, including the event payload and environment variables. To learn more, read Announcing AWS Toolkit for Visual Studio Code.

Amazon CloudWatch Container Insights (preview) – You can now create CloudWatch Dashboards that monitor the performance and health of your Amazon ECS and AWS Fargate clusters, tasks, containers, and services. Read Using Container Insights to learn more.

CloudWatch Anomaly Detection (preview) – This cool addition to CloudWatch uses machine learning to continuously analyze system and application metrics, determine a nominal baseline, and surface anomalies, all without user intervention. It adapts to trends, and helps to identify unexpected changes in performance or behavior. Read the CloudWatch Anomaly Detection documentation to learn more.

Amazon SageMaker Managed Spot Training (coming soon) – You will soon be able to use Amazon EC2 Spot to lower the cost of training your machine learning models. This upcoming enhancement to SageMaker will lower your training costs by up to 70%, and can be used in conjunction with Automatic Model Tuning.

Jeff;

AWS Cloud Development Kit (CDK) – TypeScript and Python are Now Generally Available | Amazon Web Services

Managing your Infrastructure as Code provides great benefits and is often a stepping stone for successfully applying DevOps practices. Instead of relying on manually performed steps, both administrators and developers can use configuration files to automate the provisioning of the compute, storage, network, and application services required by their applications.

For example, defining your Infrastructure as Code makes it possible to:

  • Keep infrastructure and application code in the same repository
  • Make infrastructure changes repeatable and predictable across different environments, AWS accounts, and AWS regions
  • Replicate production in a staging environment to enable continuous testing
  • Replicate production in a performance test environment that you use just for the time required to run a stress test
  • Release infrastructure changes using the same tools as code changes, so that deployments include infrastructure updates
  • Apply software development best practices to infrastructure management, such as code reviews, or deploying small changes frequently

Configuration files used to manage your infrastructure are traditionally implemented as YAML or JSON text files, but that way you miss most of the advantages of modern programming languages. With YAML in particular, it can be very difficult to detect a file that was truncated during a transfer to another system, or a line that went missing when copying and pasting from one template to another.

Wouldn’t it be better if you could use the expressive power of your favorite programming language to define your cloud infrastructure? For this reason, last year we introduced the AWS Cloud Development Kit (CDK) in developer preview: an extensible, open-source software development framework to model and provision your cloud infrastructure using familiar programming languages.

I am super excited to share that the AWS CDK for TypeScript and Python is generally available today!

With the AWS CDK you can design, compose, and share your own custom components that incorporate your unique requirements. For example, you can create a component setting up your own standard VPC, with its associated routing and security configurations. Or a standard CI/CD pipeline for your microservices using tools like AWS CodeBuild and CodePipeline.

Personally, I really like that with the AWS CDK you can build your application, including the infrastructure, in your IDE, using the same programming language and with the autocompletion and parameter suggestions that modern IDEs have built in, without having to mentally switch between one tool or technology and another. The AWS CDK makes it really fun to quickly code up your AWS infrastructure, configure it, and tie it together with your application code!

How the AWS CDK works
Everything in the AWS CDK is a construct. You can think of constructs as cloud components that can represent architectures of any complexity: a single resource, such as an S3 bucket or an SNS topic, a static website, or even a complex, multi-stack application that spans multiple AWS accounts and regions. To foster reusability, constructs can include other constructs. You compose constructs into stacks, which you can deploy into an AWS environment, and apps, which are collections of one or more stacks.

How to use the AWS CDK
We continuously add new features based on the feedback of our customers. That means that when creating an AWS resource, you often have to specify many options and dependencies. For example, if you create a VPC you have to think about how many Availability Zones (AZs) to use and how to configure subnets to give private and public access to the resources that will be deployed in the VPC.

To make it easy to define the state of AWS resources, the AWS Construct Library exposes the full richness of many AWS services with sensible defaults that you can customize as needed. In the case above, the VPC construct creates, by default, public and private subnets for all the AZs in the VPC, using three AZs if not otherwise specified.

For creating and managing CDK apps, you can use the AWS CDK Command Line Interface (CLI), a command-line tool that requires Node.js and can be installed quickly with:

npm install -g aws-cdk

After that, you can use the CDK CLI with different commands:

  • cdk init to initialize in the current directory a new CDK project in one of the supported programming languages
  • cdk synth to print the CloudFormation template for this app
  • cdk deploy to deploy the app in your AWS Account
  • cdk diff to compare what is in the project files with what has been deployed

Just run cdk to see more of the available commands and options.

You can easily include the CDK CLI in your deployment automation workflow, for example using Jenkins or AWS CodeBuild.

Let’s use the AWS CDK to build two sample projects, using different programming languages.

An example in TypeScript
For the first project I am using TypeScript to define the infrastructure:

cdk init app --language=typescript

Here’s a simplified view of what I want to build, without going into the details of the public/private subnets in the VPC. There is an online frontend, writing messages in a queue, and an asynchronous backend, consuming messages from the queue:

Inside the stack, the following TypeScript code defines the resources I need, and their relations:

  • First I define the VPC and an Amazon ECS cluster in that VPC. By using the defaults provided by the AWS Construct Library, I don’t need to specify any parameter here.
  • Then I use an ECS pattern that in a few lines of code sets up an Amazon SQS queue and an ECS service running on AWS Fargate to consume the messages in that queue.
  • The ECS pattern library provides higher-level ECS constructs which follow common architectural patterns, such as load balanced services, queue processing, and scheduled tasks.
  • A Lambda function receives the name of the queue created by the ECS pattern as an environment variable, and is granted permission to send messages to the queue.
  • The code of the Lambda function and the Docker image are passed as assets. Assets allow you to bundle files or directories from your project and use them with Lambda or ECS.
  • Finally, an Amazon API Gateway endpoint provides an HTTPS REST interface to the function.
// Imports assumed for this example (AWS CDK v1 modules):
import * as ec2 from "@aws-cdk/aws-ec2";
import * as ecs from "@aws-cdk/aws-ecs";
import * as ecs_patterns from "@aws-cdk/aws-ecs-patterns";
import * as lambda from "@aws-cdk/aws-lambda";
import * as apigateway from "@aws-cdk/aws-apigateway";
import { Duration } from "@aws-cdk/core";

const myVpc = new ec2.Vpc(this, "MyVPC");

const myCluster = new ecs.Cluster(this, "MyCluster", {
  vpc: myVpc
});

const myQueueProcessingService = new ecs_patterns.QueueProcessingFargateService(
  this, "MyQueueProcessingService", {
    cluster: myCluster,
    memoryLimitMiB: 512,
    image: ecs.ContainerImage.fromAsset("my-queue-consumer")
  });

const myFunction = new lambda.Function(
  this, "MyFrontendFunction", {
    runtime: lambda.Runtime.NODEJS_10_X,
    timeout: Duration.seconds(3),
    handler: "index.handler",
    code: lambda.Code.asset("my-front-end"),
    environment: {
      QUEUE_NAME: myQueueProcessingService.sqsQueue.queueName
    }
  });

myQueueProcessingService.sqsQueue.grantSendMessages(myFunction);

const myApi = new apigateway.LambdaRestApi(
  this, "MyFrontendApi", {
    handler: myFunction
  });

I find this code very readable and easier to maintain than the corresponding JSON or YAML. By the way, cdk synth in this case outputs more than 800 lines of plain CloudFormation YAML.

An example in Python
For the second project I am using Python:

cdk init app --language=python

I want to build a Lambda function that is executed every 10 minutes:

When you initialize a CDK project in Python, a virtualenv is set up for you. You can activate the virtualenv and install your project requirements with:

source .env/bin/activate

pip install -r requirements.txt

Note that Python autocompletion may not work with some editors, like Visual Studio Code, if you don’t start the editor from an active virtualenv.

Inside the stack, here’s the Python code defining the Lambda function and the CloudWatch Event rule to invoke the function periodically as target:

# Imports assumed for this example (AWS CDK v1 for Python):
from aws_cdk import core, aws_lambda, aws_events, aws_events_targets

myFunction = aws_lambda.Function(
    self, "MyPeriodicFunction",
    code=aws_lambda.Code.asset("src"),
    handler="index.main",
    timeout=core.Duration.seconds(30),
    runtime=aws_lambda.Runtime.PYTHON_3_7,
)

myRule = aws_events.Rule(
    self, "MyRule",
    schedule=aws_events.Schedule.rate(core.Duration.minutes(10)),
)
myRule.add_target(aws_events_targets.LambdaFunction(myFunction))

Again, this is easy to understand even if you don’t know the details of AWS CDK. For example, durations include the time unit and you don’t have to wonder if they are expressed in seconds, milliseconds, or days. The output of cdk synth in this case is more than 90 lines of plain CloudFormation YAML.

Available Now
There is no charge for using the AWS CDK; you pay only for the AWS resources that it deploys.

To quickly get hands-on with the CDK, start with this awesome step-by-step online tutorial!

More examples of CDK projects, using different programming languages, are available in this repository:

https://github.com/aws-samples/aws-cdk-examples

You can find more information on writing your own constructs here.

The AWS CDK is open source and we welcome your contribution to make it an even better tool:

https://github.com/awslabs/aws-cdk

Check out our source code on GitHub, start building your infrastructure today using TypeScript or Python, or try different languages in developer preview, such as C# and Java, and give us your feedback!

Amazon EventBridge – Event-Driven AWS Integration for your SaaS Applications | Amazon Web Services

Many AWS customers also make great use of SaaS (Software as a Service) applications. For example, they use Zendesk to manage customer service & support tickets, PagerDuty to handle incident response, and SignalFx for real-time monitoring. While these applications are quite powerful on their own, they are even more so when integrated into a customer’s own systems, databases, and workflows.

New Amazon EventBridge
In order to support this increasingly common use case, we are launching Amazon EventBridge today. Building on the powerful event processing model that forms the basis for CloudWatch Events, EventBridge makes it easy for our customers to integrate their own AWS applications with SaaS applications. The SaaS applications can be hosted anywhere, and simply publish events to an event bus that is specific to each AWS customer. The asynchronous, event-based model is fast, clean, and easy to use. The publisher (SaaS application) and the consumer (code running on AWS) are completely decoupled, and are not dependent on a shared communication protocol, runtime environment, or programming language. You can use simple Lambda functions to handle events that come from a SaaS application, and you can also route events to a wide variety of other AWS targets. You can store incident or ticket data in Amazon Redshift, train a machine learning model on customer support queries, and much more.

Everything that you already know (and hopefully love) about CloudWatch Events continues to apply, with one important change. In addition to the existing default event bus that accepts events from AWS services, calls to PutEvents, and from other authorized accounts, each partner application that you subscribe to will also create an event source that you can then associate with an event bus in your AWS account. You can select any of your event buses, create EventBridge Rules, and select Targets to invoke when an incoming event matches a rule.

As part of today’s launch we are also opening up a partner program. The integration process is simple and straightforward, and generally requires less than one week of developer time.

All About Amazon EventBridge
Here are some terms that you need to know in order to understand how to use Amazon EventBridge:

Partner – An organization that has integrated their SaaS application with EventBridge.

Customer – An organization that uses AWS, and that has subscribed to a partner’s SaaS application.

Partner Name – A unique name that identifies an Amazon EventBridge partner.

Partner Event Bus – An Event Bus that is used to deliver events from a partner to AWS.

EventBridge can be accessed from the AWS Management Console, AWS Command Line Interface (CLI), or via the AWS SDKs. There are distinct commands and APIs for partners and for customers. Here are some of the most important ones:

Partners – CreatePartnerEventSource, ListPartnerEventSourceAccounts, ListPartnerEventSources, PutPartnerEvents.

Customers – ListEventSources, ActivateEventSource, CreateEventBus, ListEventBuses, PutRule, PutTargets.

Amazon EventBridge for Partners & Customers
As I noted earlier, the integration process is simple and straightforward. You need to allow your customers to enter an AWS account number and to select an AWS region. With that information in hand, you call CreatePartnerEventSource in the desired region, inform the customer of the event source name and tell them that they can accept the invitation to connect, and wait for the status of the event source to change to ACTIVE. Then, each time an event of interest to the customer occurs, you call PutPartnerEvents and reference the event source.
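Here is a minimal, hypothetical sketch of that partner-side flow using boto3; the event source name format, account number, and event fields are placeholders:

# Hypothetical sketch of the partner-side integration. The event source name,
# account number, and event fields below are placeholders.
import json
import boto3

events = boto3.client("events")

# 1. Create an event source for the customer's account (done once per customer).
events.create_partner_event_source(
    Name="aws.partner/example.com/customer-1234/tickets",   # placeholder name
    Account="123456789012",                                  # customer's AWS account
)

# 2. Later, each time something interesting happens, publish an event to it.
events.put_partner_events(
    Entries=[{
        "Source": "aws.partner/example.com/customer-1234/tickets",
        "DetailType": "TicketCreated",
        "Detail": json.dumps({"ticket_id": "42", "priority": "high"}),
    }]
)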

The process is just as simple on the customer side. You accept the invitation to connect by calling CreateEventBus to create an event bus associated with the event source. You add rules and targets to the event bus, and prepare your Lambda functions to process the events. Associating the event source with an event bus also activates the source and starts the flow of events. You can use DeActivateEventSource and ActivateEventSource to control the flow.
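And here is a matching, hypothetical sketch of the customer side; the event source name, rule name, function ARN, and pattern are placeholders:

# Hypothetical sketch of the customer-side setup: accept the partner event source,
# then route matching events to a Lambda function. All names and ARNs are placeholders.
import json
import boto3

events = boto3.client("events")
source_name = "aws.partner/example.com/customer-1234/tickets"   # placeholder

# Accepting the invitation: creating an event bus for the source activates it.
events.create_event_bus(Name=source_name, EventSourceName=source_name)

# Route TicketCreated events from the partner to a Lambda function.
events.put_rule(
    Name="partner-ticket-created",                # placeholder rule name
    EventBusName=source_name,
    EventPattern=json.dumps({
        "source": [source_name],
        "detail-type": ["TicketCreated"],
    }),
)
events.put_targets(
    Rule="partner-ticket-created",
    EventBusName=source_name,
    Targets=[{"Id": "ticket-handler",
              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:ticket-handler"}],
)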

Here’s the overall flow (diagram created using SequenceDiagram):

Each partner has the freedom to choose the events that are relevant to their application, and to define the data elements that are included with each event.

Using EventBridge
Starting from the EventBridge Console, I click Partner event sources, find the partner of interest, and click it to learn more:

Each partner page contains additional information about the integration. I read the info, and click Set up to proceed:

The page provides me with a simple, three-step procedure to set up my event source:

After the partner creates the event source, I return to Partner event sources and I can see that the Zendesk event source is Pending:

I click the pending event source, review the details, and then click Associate with event bus:

I have the option to allow other AWS accounts, my Organization, or another Organization to access events on the event bus that I am about to create. After I have confirmed that I trust the origin and have added any additional permissions, I click Associate:

My new event bus is now available, and is listed as a Custom event bus:

I click Rules, select the event bus, and see the rules (none so far) associated with it. Then I click Create rule to make my first rule:

I enter a name and a description for my first rule:

Then I define a pattern, choosing Zendesk as the Service name:

Next, I select a Lambda function as my target:

I can also choose from many other targets:

After I create my rule, it will be activated in response to activities that occur within my Zendesk account. The initial set of events includes TicketCreated, CommentCreated, TagsChanged, AgentAssignmentChanged, GroupAssignmentChanged, FollowersChanged, EmailCCsChanged, CustomFieldChanged, and StatusChanged. Each event includes a rich set of properties; you’ll need to consult the documentation to learn more.

Partner Event Sources
We are launching with ten partner event sources, with more to come:

  • Datadog
  • Zendesk
  • PagerDuty
  • Whispir
  • Saviynt
  • Segment
  • SignalFx
  • SugarCRM
  • OneLogin
  • Symantec

If you have a SaaS application and you are ready to integrate, read more about EventBridge Partner Integration.

Now Available
Amazon EventBridge is available now and you can start using it today in all public AWS regions in the aws partition. Support for the AWS regions in China, and for the Asia Pacific (Osaka) Local Region, is in the works.

Pricing is based on the number of events published to the event buses in your account, billed at $1 for every million events. There is no charge for events published by AWS services.

Jeff;

PS – As you can see from this post, we are paying even more attention to the overall AWS event model, and have a lot of interesting goodies on the drawing board. With this launch, CloudWatch Events has effectively earned a promotion to a top-level service, and I’ll have a lot more to say about that in the future!

Amazon Aurora PostgreSQL Serverless – Now Generally Available | Amazon Web Services

The database is usually the most critical part of a software architecture and managing databases, especially relational ones, has never been easy. For this reason, we created Amazon Aurora Serverless, an auto-scaling version of Amazon Aurora that automatically starts up, shuts down and scales up or down based on your application workload.

The MySQL-compatible edition of Aurora Serverless has been available for some time now. I am pleased to announce that the PostgreSQL-compatible edition of Aurora Serverless is generally available today.

Before moving on to the details, let me take the opportunity to congratulate the Amazon Aurora development team, which has just won the 2019 Association for Computing Machinery’s (ACM) Special Interest Group on Management of Data (SIGMOD) Systems Award!

When you create a database with Aurora Serverless, you set the minimum and maximum capacity. Your client applications transparently connect to a proxy fleet that routes the workload to a pool of resources that are automatically scaled. Scaling is very fast because resources are “warm” and ready to be added to serve your requests.

Aurora Serverless does not change how Aurora manages storage. The storage layer is independent of the compute resources used by the database, and there is no need to provision storage in advance. The minimum storage is 10 GB and, based on database usage, Amazon Aurora storage automatically grows, up to 64 TB, in 10 GB increments with no impact on database performance.

Creating an Aurora Serverless PostgreSQL Database
Let’s start an Aurora Serverless PostgreSQL database and see the automatic scalability at work. From the Amazon RDS console, I choose to create a database using Amazon Aurora as the engine. Currently, Aurora Serverless is compatible with PostgreSQL version 10.7. When I select that version, the serverless option becomes available.

I give the new DB cluster an identifier, choose my master username, and let Amazon RDS generate a password for me. I will be able to retrieve my credentials during database creation.

I can now select the minimum and maximum capacity for my database, in terms of Aurora Capacity Units (ACUs), and in the additional scaling configuration I choose to pause compute capacity after 5 minutes of inactivity. Based on my settings, Aurora Serverless automatically creates scaling rules for thresholds for CPU utilization, connections, and available memory.
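The same configuration can also be expressed in code. Here is a minimal, hypothetical sketch using boto3; the cluster identifier, credentials, and capacity values are placeholders:

# Hypothetical sketch: create an Aurora Serverless PostgreSQL cluster with boto3,
# mirroring the console settings above. Identifiers and credentials are placeholders.
import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="my-serverless-postgres",    # placeholder identifier
    Engine="aurora-postgresql",
    EngineVersion="10.7",
    EngineMode="serverless",
    MasterUsername="postgres",
    MasterUserPassword="REPLACE_ME",                  # placeholder password
    ScalingConfiguration={
        "MinCapacity": 2,
        "MaxCapacity": 32,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,   # pause after 5 minutes of inactivity
    },
)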

Testing Some Load on the Database
To generate some load on the database I am using sysbench on an EC2 instance. There are a couple of Lua scripts bundled with sysbench that can help generate an online transaction processing (OLTP) workload:

  • The first script, parallel_prepare.lua, generates 100,000 rows per table for 24 tables.
  • The second script, oltp.lua, generates an OLTP workload against that data using 128 worker threads.

By using those scripts, I start generating load on my database. As you can see from the graph below, taken from the RDS console monitoring tab, the serverless database capacity grows and shrinks to follow my requirements. The metric shown on this graph is the number of ACUs used by the database cluster:

  • First it scales up to accommodate the sysbench workload.
  • When I stop the load generator, it pauses after a few minutes of inactivity.
  • If I restart the load generator, the database resumes at the same capacity it had when it paused.
  • If I disable the pause for inactivity option, or if I just leave a very small amount of workload, it scales down to the configured minimum capacity.

The pause option is great for databases with batch workloads that need to start at full speed, do what they need to do, and then pause.

Available Now
Aurora Serverless PostgreSQL is available now in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). With Aurora Serverless, you pay on a per-second basis for the database capacity you use when the database is active, plus the usual Aurora storage costs.

For more information on Amazon Aurora, I recommend this great post explaining why and how it was created:

Amazon Aurora ascendant: How we designed a cloud-native relational database

It’s never been so easy to use a relational database in production. I am so excited to see what you are going to use it for!

AWS Project Resilience – Up to $2K in AWS Credits to Support DR Preparation | Amazon Web Services

We want to help state and local governments, community organizations, and educational institutions to better prepare for natural and man-made disasters that could affect their ability to run their mission-critical IT systems.

Today we are launching AWS Project Resilience. This new element of our existing Disaster Response program offers up to $2,000 in AWS credits to organizations of the types that I listed above. The program is open to new and existing customers, with distinct benefits for each:

New Customers – Eligible new customers can submit a request for up to $2,000 in AWS Project Resilience credits that can be used to offset costs incurred by storing critical datasets in Amazon Simple Storage Service (S3).

Existing Customers – Eligible existing customers can submit a request for up to $2,000 in AWS Project Resilience credits to offset the costs incurred by engaging CloudEndure and AWS Disaster Response experts to do a deep dive on an existing business continuity architecture.

Earlier this month I sat down with my colleague Ana Visneski to learn more about disaster preparedness, disaster recovery, and AWS Project Resilience. Here’s our video:

To learn more and to apply to the program, visit the AWS Project Resilience page!

Jeff;