This is part two of a two-part series on Amazon Inspector. This article covers implementing Amazon Inspector in automated AMI pipelines. The first article is an Introduction to Amazon Inspector.

Automated security scanning is an essential part of DevSecOps; however, setting up the scanning can be cumbersome. This quickstart incorporates Amazon Inspector and helps eliminate these pain points:

  • Provisioning of dedicated hardware to perform scanning
  • Deciding what rules to use in scans
  • Installing the scanning agents or provisioning credentials for the scanning tool
  • Configuring security groups/firewalls
  • Automating results consolidation
  • Initiating new scans whenever new applications are deployed

Integrating Amazon Inspector into the image creation process removes many of the pain points listed above. This article walks through using Inspector scanning in an AMI creation pipeline. The pipeline uses only the following Inspector rules packages (a sketch for looking up their ARNs follows the list):

  • Common Vulnerabilities and Exposures
  • CIS Benchmarks
  • Security Best Practices
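
If you want to script this selection, here is a minimal boto3 sketch (not part of the quickstart code) that resolves the region-specific rules package ARNs by name; the name substrings are assumptions about Inspector Classic's package naming and may need adjusting.

```python
import boto3

# Region is a placeholder; rules package ARNs differ per region, so look
# them up by name instead of hard-coding them.
inspector = boto3.client("inspector", region_name="us-east-1")

arns = inspector.list_rules_packages()["rulesPackageArns"]
packages = inspector.describe_rules_packages(rulesPackageArns=arns)["rulesPackages"]

# Match by substring because the exact package names may vary.
wanted = ("Common Vulnerabilities", "CIS", "Security Best Practices")
selected = [p["arn"] for p in packages if any(w in p["name"] for w in wanted)]
print(selected)
```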

AWS Services Used in This Quickstart

Below is a list of the services used in this implementation:

  • CloudFormation – All the AWS resources used here are defined in CloudFormation.
  • IAM – IAM is used to set up the CodePipeline, CodeBuild, and Lambda roles. 
  • VPC – A VPC with two public subnets is created so the AMI creation and Inspector scans can run in isolation from other resources that may be on the AWS account. 
  • S3 – Two S3 buckets are used. The first serves as the artifact store for CodePipeline. The second stores the finished scan results.
  • SNS – Two SNS topics are used. One topic triggers the Lambda function when a new Inspector scan from the pipeline completes. The second topic sends a notification when new reports are stored in the reports S3 bucket.
  • CodePipeline – The pipeline starts whenever a new commit is made to the repository it monitors.
  • CodeBuild – Three CodeBuild jobs are used within CodePipeline to perform the following tasks:
    • Building AMIs and tagging them with the Git commit ID
    • Deploying the AMI based on commit ID along with the Inspector resources using CloudFormation. 
    • Linking the Inspector assessment template to the SNS topic used to trigger the Lambda function and initiating the scan. These tasks are performed with Python code running on the CodeBuild instance utilizing the boto3 library.
  • Inspector – Inspector performs the scans on the instances based on the selected rules packages.
  • EC2 – The AMIs are built from Amazon Linux EC2 instances using Packer. The Inspector agent is installed as part of the AMI creation process.
  • Lambda – The Lambda function is written in Python and uses boto3 to download the report of the finished scan, push the report to S3, and send an SNS notification containing the AMI ID used to launch the scanned instance and the S3 URI of the finished results. The Lambda function is triggered on Inspector scan completion.


Pipeline Architecture Diagram

The CloudFormation template aws-inspector-pipeline.yaml deploys all the resources after the required parameters are filled in. 


Launching the Pipeline CloudFormation Template

Launching the pipeline requires a few prerequisite steps, but once they are complete, everything stands up with CloudFormation in a one-click deployment style. The template can be launched from the CloudFormation console or with the AWS CLI.

Required Prerequisites

Forking Repository

You will need to create a fork of the quickstart repository into your own GitHub account.

Creating GitHub Access Token

  • Create a GitHub personal access token that allows CodePipeline to pull from the forked repository; the token value is needed when launching the pipeline template.

Creating EC2 keypair

  • Create an EC2 keypair in the region you’re planning to launch the pipeline template in. The name of the EC2 keypair needs to be used in the parameter `KeyPair` of the `aws-inspector-pipeline.yaml` template file.
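
If you prefer to script this step, a boto3 sketch might look like the following; the key name is a placeholder and must match the value you pass to the `KeyPair` parameter.

```python
import boto3

# Region and key name are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")
key = ec2.create_key_pair(KeyName="inspector-pipeline-key")

# Save the private key locally in case the scanned instances ever need
# to be accessed directly.
with open("inspector-pipeline-key.pem", "w") as f:
    f.write(key["KeyMaterial"])
```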

Repository Parameters

All parameters are located in the `aws-inspector-pipeline.yaml` template file.

  • Set the `RepositoryOwner` parameter to the owner of the GitHub account that the repository has been forked to.
  • Set the `RepositoryName` parameter to the repository name.
  • Set the `BranchName` parameter to the branch the pipeline is going to run against if it isn't the master branch.

Once all these steps have been completed, the CloudFormation stack can be launched.
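
As an illustration of a scripted launch, here is a hedged boto3 sketch; the stack name, key pair, and repository values are placeholders, and the exact parameter keys (including any GitHub token parameter) should be taken from the template's Parameters section rather than from this sketch.

```python
import boto3

cfn = boto3.client("cloudformation")

# IAM capabilities are required because the template creates IAM roles.
cfn.create_stack(
    StackName="inspector-pipeline",
    TemplateBody=open("aws-inspector-pipeline.yaml").read(),
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
    Parameters=[
        {"ParameterKey": "KeyPair", "ParameterValue": "inspector-pipeline-key"},
        {"ParameterKey": "RepositoryOwner", "ParameterValue": "my-github-user"},
        {"ParameterKey": "RepositoryName", "ParameterValue": "my-forked-repo"},
        {"ParameterKey": "BranchName", "ParameterValue": "master"},
        # Add the GitHub token parameter here using the key name the
        # template actually defines.
    ],
)
```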

Optional Prerequisites

The desired Inspector scan duration can be set by updating the value of the `ScanLength` parameter in the `aws-inspector-pipeline.yaml`. The value is in seconds; the default is 180 seconds (3 minutes).

Verifying Deployment

Once the CloudFormation template has finished launching, a four-stage pipeline named `<Your CloudFormation Stack Name>-codepipeline` will appear in CodePipeline.

There should also be a Lambda function named `<Your CloudFormation Stack Name>-InspectorLambda*` in the Lambda console.

An acceptance test named `verify_resources.py` has been included in the repository to verify the resource creation. The test calls the AWS API using boto3 and prints the CodePipeline name, Lambda function name, S3 reports bucket name, and the ARN of the SNS topic used to notify the user when new scans complete. A sketch of this kind of check appears after the steps below.

Steps to run `verify_resources.py`

  1. Configure a default AWS profile or the AWS environment variables on the machine intended to run the script
  2. Have Python 3 installed on the machine intended to run the script
  3. Install the dependencies listed in the `requirements.txt` file from the repository
  4. Provide the name of the Inspector Pipeline CloudFormation stack as a command line argument
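
The sketch below illustrates the kind of checks the acceptance test performs; it is not the repository's actual script. The `ReportsBucket` and `NotificationSNS` output keys come from the pipeline stack, while the printed name patterns follow the conventions described above.

```python
import sys

import boto3

# The stack name is passed as a command line argument, as in step 4.
stack_name = sys.argv[1]
cfn = boto3.client("cloudformation")

# Read the pipeline stack's outputs and print the resources of interest.
outputs = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["Outputs"]
outputs = {o["OutputKey"]: o["OutputValue"] for o in outputs}

print("Reports bucket:", outputs.get("ReportsBucket"))
print("Notification SNS topic:", outputs.get("NotificationSNS"))
print("CodePipeline:", f"{stack_name}-codepipeline")
print("Lambda function prefix:", f"{stack_name}-InspectorLambda")
```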

Breaking Down Pipeline Stages

The full pipeline consists of four stages:

  1. Checkout from source
  2. Build AMI
  3. Deploy AMI and Inspector resources
  4. Initiate Inspector scan on the deployed instance 

When an Inspector scan is complete, the Lambda function is triggered via SNS. On execution the function puts the report in S3 and cleans up the remaining resources launched from the pipeline. 

BuildAMI Stage

The build stage of the pipeline starts by installing Packer on the CodeBuild instance, then builds the AMI from the Packer template `ami.json`.

The `ami.json` file is:

  • Part of the repository the pipeline pulls
  • Configured to build the AMI in one of the subnets deployed from the `aws-inspector-pipeline.yaml` template file
  • Configured to install the Inspector agent on the AMI

After the image build finishes, the AMI is tagged with the commit ID for later use in the pipeline.
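
The repository's build step may apply this tag directly in the Packer template or through the CLI; purely as an illustration, a boto3 version of the tagging could look like the sketch below, with the tag key, AMI ID, and commit ID as placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Tag the newly built AMI with the commit that produced it so the
# DeployAMI stage can find it later. Key and values are placeholders.
ec2.create_tags(
    Resources=["ami-0123456789abcdef0"],
    Tags=[{"Key": "CommitId", "Value": "abc1234"}],
)
```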

DeployAMI Stage

The deploy stage of the pipeline pulls the AMI from the previous stage by using the AWS CLI to query for the AMI with the commit ID tag placed on the image in the BuildAMI stage.

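The pipeline performs this lookup with the AWS CLI; for illustration, the boto3 equivalent below sketches the same query. The tag key `CommitId` and the commit value are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Find the AMI owned by this account that carries the current commit's tag.
images = ec2.describe_images(
    Owners=["self"],
    Filters=[{"Name": "tag:CommitId", "Values": ["abc1234"]}],
)["Images"]
ami_id = images[0]["ImageId"]
print(ami_id)
```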

Once the AMI ID has been pulled, the AMI is deployed along with the respective Inspector resources for the scan using a dedicated CloudFormation template from the repository. The CloudFormation stack is launched under the name of the stack used to stand up the CodePipeline instance with the commit ID appended to it, so the stack can be referenced by commit ID in later parts of the pipeline.

The CloudFormation template takes in the AMI ID and commit ID and adds them as UserAttributesForFindings for the Inspector assessment template resource. The stack name resulting from this template is also added to the UserAttributesForFindings of the Inspector assessment template.

Each Inspector resource ARN is placed in the stack outputs of this template.
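
The assessment template is defined as a CloudFormation resource in the deploy template; as an illustration of what that resource amounts to, here is a hedged boto3 equivalent. The ARNs, names, and attribute keys below are placeholders rather than values from the quickstart.

```python
import boto3

inspector = boto3.client("inspector")

# Placeholder inputs -- in the pipeline these come from the deploy stack
# and the commit being built.
target_arn = "arn:aws:inspector:us-east-1:111111111111:target/0-EXAMPLE"
rules_package_arns = ["arn:aws:inspector:us-east-1:111111111111:rulespackage/0-EXAMPLE"]
ami_id = "ami-0123456789abcdef0"
commit_id = "abc1234"
stack_name = "inspector-pipeline-abc1234"

template = inspector.create_assessment_template(
    assessmentTargetArn=target_arn,
    assessmentTemplateName=f"{stack_name}-template",
    durationInSeconds=180,  # matches the ScanLength default of 3 minutes
    rulesPackageArns=rules_package_arns,
    userAttributesForFindings=[
        {"key": "AmiId", "value": ami_id},
        {"key": "CommitId", "value": commit_id},
        {"key": "StackName", "value": stack_name},
    ],
)
print(template["assessmentTemplateArn"])
```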

ScanAMI Stage

In the scan stage of the pipeline, the Inspector scan is initiated using the boto3 Python code in `start_scan.py`. The methods for the class used in this file are located in `InspectorQuickstart.py`. Within the `InspectorQuickstart` class, the name of the CloudFormation stack launched in the previous stage is used to look up the stack outputs so a run can be initiated from the assessment template in that stack. The assessment template is also subscribed to the SNS topic launched in the CodePipeline template so it can send SNS messages to the Lambda function on completion.
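
A minimal sketch of that flow is shown below; the stack output key and topic ARN are assumptions, and the repository's `InspectorQuickstart` class will differ in detail.

```python
import boto3

cfn = boto3.client("cloudformation")
inspector = boto3.client("inspector")

# Stack name = pipeline stack name with the commit ID appended, as above.
stack_name = "inspector-pipeline-abc1234"

# Look up the Inspector resource ARNs placed in the deploy stack's outputs.
# The output key below is an assumption about the template's naming.
outputs = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["Outputs"]
outputs = {o["OutputKey"]: o["OutputValue"] for o in outputs}
template_arn = outputs["AssessmentTemplateArn"]

# Topic from the CodePipeline stack that triggers the Lambda (placeholder ARN).
trigger_topic_arn = "arn:aws:sns:us-east-1:111111111111:inspector-run-complete"

# Send the run-completed event to the Lambda's topic, then start the run.
inspector.subscribe_to_event(
    resourceArn=template_arn,
    event="ASSESSMENT_RUN_COMPLETED",
    topicArn=trigger_topic_arn,
)
run = inspector.start_assessment_run(
    assessmentTemplateArn=template_arn,
    assessmentRunName=f"{stack_name}-run",
)
print(run["assessmentRunArn"])
```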

Lambda Function

When the Lambda function is triggered via SNS, it parses the message to find the ARN of the completed assessment run. Once the Lambda has the ARN of the assessment run, the results are uploaded to S3, and the function sends an SNS message to the ScanCompleteTopic notifying subscribers with the AMI ID used in the scan and the S3 location of the full report. The Lambda function also deletes the CloudFormation stack deployed in the DeployAMI stage. The function pulls information about the Inspector deployment by querying the UserAttributesForFindings placed on the Inspector assessment template in the DeployAMI stage.

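The following is a minimal sketch of the handler's flow, not the repository's actual code; the SNS message field name, the user-attribute keys, the bucket name, and the topic ARN are assumptions, and real code would poll until the report is ready before downloading it.

```python
import json
import urllib.request

import boto3

inspector = boto3.client("inspector")
s3 = boto3.client("s3")
sns = boto3.client("sns")
cfn = boto3.client("cloudformation")

REPORTS_BUCKET = "my-reports-bucket"  # the stack's ReportsBucket
SCAN_COMPLETE_TOPIC = "arn:aws:sns:us-east-1:111111111111:ScanCompleteTopic"


def handler(event, context):
    # Inspector publishes a JSON message to SNS; the "run" field (assumed
    # name) carries the ARN of the completed assessment run.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    run_arn = message["run"]

    # Read back the user attributes the DeployAMI stage attached to the
    # assessment template (keys match the illustrative ones used earlier).
    run = inspector.describe_assessment_runs(assessmentRunArns=[run_arn])["assessmentRuns"][0]
    attrs = {a["key"]: a["value"] for a in run["userAttributesForFindings"]}

    # Download the full report and store it under date-of-scan/commit-id/.
    # Production code should poll until the report status is COMPLETED.
    report = inspector.get_assessment_report(
        assessmentRunArn=run_arn, reportFileFormat="HTML", reportType="FULL"
    )
    body = urllib.request.urlopen(report["url"]).read()
    key = f"{run['completedAt']:%Y-%m-%d}/{attrs['CommitId']}/report.html"
    s3.put_object(Bucket=REPORTS_BUCKET, Key=key, Body=body)

    # Notify subscribers with the AMI ID and report location, then remove
    # the stack that was stood up for the scan.
    sns.publish(
        TopicArn=SCAN_COMPLETE_TOPIC,
        Message=f"AMI {attrs['AmiId']} scanned; report at s3://{REPORTS_BUCKET}/{key}",
    )
    cfn.delete_stack(StackName=attrs["StackName"])
```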

Viewing Scan Results

To view the scan results, navigate to the S3 bucket where they are stored. After standing up a CloudFormation stack from the `aws-inspector-pipeline.yaml`, go to the outputs section of the CloudFormation stack and look for the value of the `ReportsBucket` output to find the name of the S3 bucket the stack created. Navigate to the bucket to see your reports. The reports are stored in a folder structure of `date of scan/commit id/`. If you want to receive notifications whenever scans complete, you can subscribe to the SNS topic shown in the `NotificationSNS` stack output. Be sure to confirm the subscription after subscribing. The SNS notification will contain the AMI ID that the scanned instance was launched from and the S3 URI of the results.
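
Subscribing can also be done programmatically; a small boto3 sketch is shown below, with the topic ARN and email address as placeholders. AWS sends a confirmation email that must be accepted before notifications arrive.

```python
import boto3

sns = boto3.client("sns")

# TopicArn is the NotificationSNS output value from the pipeline stack.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111111111111:ScanCompleteTopic",
    Protocol="email",
    Endpoint="you@example.com",
)
```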

Switching the Packer Script

The default configuration of this quickstart builds the AMI from a Packer script that does not address any of the findings in the report. A second Packer script named `hardened_ami.json` is included in the repository and applies some lockdowns during the image creation process. This Packer script can be used instead of the `ami.json` script by changing the `PackerFile` parameter value in the `aws-inspector-pipeline.yaml` file. After making this change, launch the pipeline and inspect the findings. The findings count should be lower than in the scan results of AMIs created with the previous Packer script.

The difference between the two images comes from the extra hardening steps `hardened_ami.json` runs during image creation; comparing it with `ami.json` in the repository shows exactly which lockdowns address which findings.

Next Steps

This demo is a proof of concept for using Inspector in an AMI pipeline. The pipeline could be extended to include other steps or security checks depending on the use case. The `ami.json` file could be customized further based on the application meant to run on the image. Security hardening could be added to the AMI configuration, with the intent of adding more lockdowns whenever new findings come in. This demo also provides a seamless way to start using Inspector without worrying about unknown configuration issues up front. If you need to learn how to interpret Inspector scan results, our article Introduction to Amazon Inspector goes over scan results.

Use this quickstart if you're trying to start creating an AMI pipeline. If you already have an AMI pipeline, use the lessons in this quickstart to add automated instance scanning to it. You should extend or modify pieces of this quickstart to fit your specific use cases. This quickstart gives you everything you need to get up and running with Amazon Inspector. If you decide to try the quickstart, please reach out to us and let us know how it goes!

Featured image by Craig Whitehead on Unsplash
