<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Ezequiel Gioia]]></title><description><![CDATA[Ezequiel Gioia]]></description><link>https://ezequielgioia.com</link><generator>RSS for Node</generator><lastBuildDate>Sat, 18 Apr 2026 19:36:18 GMT</lastBuildDate><atom:link href="https://ezequielgioia.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Implementing Approval Processes for AWS Patch Manager]]></title><description><![CDATA[Managing security updates on your EC2 instances is crucial for protecting your system against vulnerabilities. AWS Systems Manager Patch Manager is a powerful tool for automating this process, but sometimes you may need to patch instances that cannot...]]></description><link>https://ezequielgioia.com/implementing-approval-processes-for-aws-patch-manager</link><guid isPermaLink="true">https://ezequielgioia.com/implementing-approval-processes-for-aws-patch-manager</guid><category><![CDATA[AWS]]></category><category><![CDATA[Security]]></category><category><![CDATA[stepfunction]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Ezequiel Gioia]]></dc:creator><pubDate>Thu, 11 May 2023 03:08:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/wK-elt11pF0/upload/ba5e81de7b311182f6c445e0b4073db7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Managing security updates on your EC2 instances is crucial for protecting your system against vulnerabilities. AWS Systems Manager Patch Manager is a powerful tool for automating this process, but sometimes you may need to patch instances that cannot be updated during a maintenance window. In these cases, implementing a manual approval workflow can be an option.</p>
<p>This post explores the process of setting up an approval workflow for Patch Manager, so you can delegate the responsibility of applying patches to your EC2 instance owners. I assume you are already familiar with AWS Systems Manager Patch Manager and have it configured.</p>
<h2 id="heading-a-side-note-about-aws-patch-manager">A side note about AWS Patch Manager:</h2>
<p>Patch Manager policies can be created for Scan-and-Install or Scan-only purposes. Start by creating a patch policy that only scans, since we want to delegate the patching decision to the end user. I like the approach of targeting the instances to be patched using a <strong>[patch-policy]</strong> tag. This option provides a lot of flexibility. For example, you can create two patch policies, one for scan-and-install-on-schedule based on a maintenance window and a second for <em>scan-on-approval</em>. With a [patch-policy] tag that accepts either of those values, you can let your users choose the strategy they want to follow to keep their EC2 instances up to date in terms of security.</p>
<p>Check the following article in case you want to enforce a tagging strategy: <a target="_blank" href="https://aws.amazon.com/blogs/mt/implement-aws-resource-tagging-strategy-using-aws-tag-policies-and-service-control-policies-scps/">Implement an AWS resource tagging strategy using AWS Tag Policies and Service Control Policies (SCPs)</a>.</p>
<h3 id="heading-possible-tag-convention">Possible tag convention:</h3>
<ul>
<li><p>Tag name: <strong>patch-policy</strong></p>
</li>
<li><p>Tag values:</p>
<ul>
<li><p><strong>patch-on-approval</strong>: This tag value indicates that the instance should only be patched after explicit approval. It is useful when the instance can't be patched during a maintenance window or if you want to give more control to the instance owner over when the patching is done.</p>
</li>
<li><p><strong>patch-on-schedule</strong>: This tag value indicates that the instance should be patched according to a schedule. You can configure a maintenance window for your instances using AWS Systems Manager Maintenance Windows and apply the patches during the window. This option is useful to ensure that your instances are always up-to-date without requiring manual intervention from the instance owner.</p>
</li>
</ul>
</li>
</ul>
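<p>As an illustration, the tag convention above can be applied with the AWS CLI; the instance ID below is a placeholder:</p>
<pre><code class="lang-bash"># Mark an instance so it is only patched after explicit approval
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=patch-policy,Value=patch-on-approval
</code></pre>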
<h1 id="heading-1-workflow-overview">1. Workflow Overview</h1>
<p>If you're building basic integrations in AWS, your first instinct might be to create some Lambda functions to "glue" services together. While Lambdas are great, if you're planning to grow your solution beyond a couple of functions, consider using Step Functions. After learning about Step Functions, you may even consider using them for one-step workflows.</p>
<p>Main benefits of Step Functions:</p>
<ul>
<li><p><strong>Less Lambda development:</strong> With Step Functions you can directly interact with many AWS services from within the state machine tasks. This means you don't have to write and manage as many Lambda functions, which saves time and effort.</p>
</li>
<li><p><strong>Easier Debugging:</strong> Step Functions provide a visual representation of your state machine, showing you all the inputs and outputs of each step in your workflow. This makes it easier to diagnose and fix any issues that may occur during execution.</p>
</li>
<li><p><strong>Task re-run:</strong> If a task fails, you can easily re-run that task within the state machine. This can help to recover from any issues encountered during execution quickly.</p>
</li>
<li><p><strong>Manual Workflow Execution:</strong> You can manually trigger a state machine execution with a custom input, which can be useful for testing and debugging.</p>
</li>
<li><p><strong>Retry mechanisms:</strong> AWS Step Functions provides a built-in retry mechanism that allows you to configure retries for each step in your workflow. If a step fails, the retry mechanism automatically retries the step based on the configuration you set. This helps to increase the reliability of your workflow and reduces the need for custom retry logic in your Lambda functions.</p>
</li>
</ul>
<p>We're going to implement the workflow with AWS Step Functions. The workflow should be triggered every time a critical patch is missing on an EC2 instance tagged with the patch policy <strong>[patch-on-approval]</strong>.</p>
<p>Assuming the patch policy, EC2 instances, and tagging are already in place, the integration between Systems Manager Patch Manager and the workflow will be handled by AWS EventBridge. EventBridge is a message bus that receives and delivers events from AWS services or your applications. All the AWS-supported messages are already passing through that bus. We just need to create a rule that triggers the workflow. We will work on that later. Before doing that, we need to develop the workflow.</p>
<p>This is an overview of the workflow we'll develop with Step Functions:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683667876519/66879edf-015a-4739-b2a0-786632a33ed8.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Validate Event:</strong> The workflow starts with some validation to make sure that we are proceeding with the correct type of event. I'm using this validation step during the development phase to be more permissive about the type of events that can trigger the workflow and have more flexibility and control. It lets you inspect the payload of the events that trigger the workflow so you can fine-tune your EventBridge rule later. This step will probably go away in production.</p>
</li>
<li><p><strong>Prompt User for Approval:</strong> We already know a critical patch is pending, so the next step is to notify the user and wait for their answer. A Lambda function will handle this task: it composes a message explaining that there is a critical patch pending approval and publishes it to an SNS topic.</p>
<p>  Also, this is where we will pause the workflow execution until we get approval from the user to proceed with the patching.</p>
<p>  We are also going to need an API Gateway public endpoint that the user can access to approve the patch, and a second Lambda function will handle the approval response and reactivate the execution of the state machine. These two components are external to the workflow, but they are essential to process the user response and reactivate the execution of the state machine.</p>
</li>
<li><p><strong>Approval Choice:</strong> This is the conditional branching logic that proceeds with patching the instance if approval is given, or rejects the patch if any other answer is received.</p>
</li>
<li><p><strong>Patch Instance:</strong> Our last main component is a Systems Manager SendCommand task that installs the security patches. This is exactly the kind of task that can save you from writing another Lambda function; the integration is provided by AWS out of the box.</p>
</li>
</ul>
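<p>The pause-and-wait behavior in the approval step is typically built with the <code>.waitForTaskToken</code> service integration: the task passes a token to the Lambda, and the execution stays paused until something calls <code>SendTaskSuccess</code> with that token. A minimal sketch of that state, where the function name and payload fields are assumptions for illustration:</p>
<pre><code class="lang-json">{
  "Prompt User for Approval": {
    "Type": "Task",
    "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
    "Parameters": {
      "FunctionName": "notify-approver",
      "Payload": {
        "instanceId.$": "$.detail.resource-id",
        "taskToken.$": "$$.Task.Token"
      }
    },
    "Next": "Approval Choice"
  }
}
</code></pre>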
<h1 id="heading-2-creating-the-sns-topic">2. Creating the SNS topic</h1>
<p>Outbound communication to the end user to get their approval is handled by an SNS topic. SNS gives us the flexibility to add or remove people who need to be notified and abstracts the workflow from the communication channel used to send the message. We are going to start simple, with email notifications, but this can be easily extended with Lambda functions to send the notifications via Slack, MS Teams, etc.</p>
<p>So, browse your AWS Console, choose the SNS service, and create a topic. Take note of the topic's ARN once created; you are going to need it later.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683670118286/8c333d57-4b1b-4601-a629-20b1b8e21a10.png" alt class="image--center mx-auto" /></p>
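<p>If you prefer the CLI, the topic and an email subscription can be created like this; the topic name, account ID, and email address are placeholders:</p>
<pre><code class="lang-bash"># Create the topic; the command prints the topic ARN
aws sns create-topic --name patch-approval-notifications

# Subscribe an approver's email address to the topic
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:patch-approval-notifications \
  --protocol email \
  --notification-endpoint approver@example.com
</code></pre>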
<h1 id="heading-3-creating-the-workflow">3. Creating the Workflow</h1>
<p>Step Functions is an AWS service that enables you to design and build applications using a visual workflow editor or the JSON-based Amazon States Language (ASL). ASL allows you to define the states and transitions of your state machine using a structured format that is easy to read and understand.</p>
<p>Here you have an example of a single-step workflow that invokes a Lambda function:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Comment"</span>: <span class="hljs-string">"A simple state machine that executes a single task named HelloWorld"</span>,
  <span class="hljs-attr">"StartAt"</span>: <span class="hljs-string">"HelloWorld"</span>,
  <span class="hljs-attr">"States"</span>: {
    <span class="hljs-attr">"HelloWorld"</span>: {
      <span class="hljs-attr">"Type"</span>: <span class="hljs-string">"Task"</span>,
      <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME"</span>,
      <span class="hljs-attr">"End"</span>: <span class="hljs-literal">true</span>
    }
  }
}
</code></pre>
<p>In this example, the <strong>Comment</strong> field is optional and allows you to add a description or notes about the state machine.</p>
<p>The <strong>StartAt</strong> field is required and specifies the initial state of the state machine.</p>
<p>The <strong>States</strong> field is required and contains one or more states that define the logic of the state machine.</p>
<p>In this example, we have only one state named <strong>HelloWorld</strong>. The <strong>Type</strong> field specifies the type of the state, which in this case is <strong>Task</strong>. The <strong>Resource</strong> field specifies the Amazon Resource Name (ARN) of the Lambda function that will be executed as the task.</p>
<p>The <strong>End</strong> field is optional and indicates that this is the final state of the state machine. We set <strong>End: true</strong> to indicate that the state machine should terminate after executing the task.</p>
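<p>Applying the same building blocks to our workflow, the "Approval Choice" and "Patch Instance" states could be sketched like this. The state names, the <code>$.approvalStatus</code> variable, and the SendCommand parameters are assumptions based on the diagram above, not a drop-in implementation:</p>
<pre><code class="lang-json">{
  "Approval Choice": {
    "Type": "Choice",
    "Choices": [
      {
        "Variable": "$.approvalStatus",
        "StringEquals": "approved",
        "Next": "Patch Instance"
      }
    ],
    "Default": "Patch Rejected"
  },
  "Patch Instance": {
    "Type": "Task",
    "Resource": "arn:aws:states:::aws-sdk:ssm:sendCommand",
    "Parameters": {
      "DocumentName": "AWS-RunPatchBaseline",
      "InstanceIds.$": "States.Array($.detail.resource-id)",
      "Parameters": {
        "Operation": ["Install"]
      }
    },
    "End": true
  }
}
</code></pre>
<p>Note how the patching step uses the AWS SDK service integration directly, which is exactly what saves us from writing another Lambda function.</p>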
<h1 id="heading-4-triggering-the-workflow-from-systems-manager-patch-manager">4. Triggering the workflow from Systems Manager Patch Manager</h1>
<p>As we stated before, the integration between Patch Manager and the Step Functions workflow will be handled by AWS EventBridge. All AWS events already flow on the default bus of each account, perhaps without you even noticing it. Let's go to the default bus in the account and create a new rule. This rule will trigger our Step Functions state machine every time an EC2 instance goes out of compliance in terms of patching. EventBridge will also pass the original event payload as input to the state machine; we will get the instance ID that needs to be patched from there.</p>
<h4 id="heading-example-of-the-event-payload">Example of the event payload:</h4>
<pre><code class="lang-json">{
  <span class="hljs-attr">"version"</span>: <span class="hljs-string">"0"</span>,
  <span class="hljs-attr">"id"</span>: <span class="hljs-string">"da802c23-0364-5de4-02c4-8f2389f89233"</span>,
  <span class="hljs-attr">"detail-type"</span>: <span class="hljs-string">"Configuration Compliance State Change"</span>,
  <span class="hljs-attr">"source"</span>: <span class="hljs-string">"aws.ssm"</span>,
  <span class="hljs-attr">"account"</span>: <span class="hljs-string">"252525252525"</span>,
  <span class="hljs-attr">"time"</span>: <span class="hljs-string">"2023-05-01T18:48:11Z"</span>,
  <span class="hljs-attr">"region"</span>: <span class="hljs-string">"us-east-1"</span>,
  <span class="hljs-attr">"resources"</span>: [
    <span class="hljs-string">"arn:aws:ssm:us-east-1:252525252525:managed-instance/i-0933335a5598"</span>
  ],
  <span class="hljs-attr">"detail"</span>: {
    <span class="hljs-attr">"severity"</span>: <span class="hljs-string">"UNSPECIFIED"</span>,
    <span class="hljs-attr">"patch-baseline-id"</span>: <span class="hljs-string">"pb-0be1ccde3be43f5"</span>,
    <span class="hljs-attr">"compliance-status"</span>: <span class="hljs-string">"non_compliant"</span>,
    <span class="hljs-attr">"resource-type"</span>: <span class="hljs-string">"managed-instance"</span>,
    <span class="hljs-attr">"resource-id"</span>: <span class="hljs-string">"i-0933335a5598"</span>,
    <span class="hljs-attr">"compliance-type"</span>: <span class="hljs-string">"Patch"</span>
  }
}
</code></pre>
<h2 id="heading-create-the-eventbridge-rule">Create the EventBridge rule</h2>
<h3 id="heading-step-1-define-rule-detail">Step 1: Define rule detail</h3>
<p>Name your rule, and make sure that it is created on the default event bus. Also, check that the rule type is "Rule with an event pattern".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683671453370/35fc449d-9d63-4ad0-aab0-f113b58c177d.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-build-event-pattern">Step 2: Build event pattern</h3>
<p>Select the event source as:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683671462916/e032051a-0486-4047-9436-df75994a56d1.png" alt class="image--center mx-auto" /></p>
<p>Scroll down and enter the event pattern that will be used by the rule to filter what messages you are interested in:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683671479076/d574989e-a8e8-4a83-a514-c3e063189cea.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"source"</span>: [<span class="hljs-string">"aws.ssm"</span>],
  <span class="hljs-attr">"detail-type"</span>: [<span class="hljs-string">"Configuration Compliance State Change"</span>]
}
</code></pre>
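<p>The pattern above matches every compliance state change, which is convenient while testing. Once the workflow is stable, you can narrow the rule so that only non-compliant Patch events trigger it; a stricter pattern, based on the payload fields shown earlier, might look like:</p>
<pre><code class="lang-json">{
  "source": ["aws.ssm"],
  "detail-type": ["Configuration Compliance State Change"],
  "detail": {
    "compliance-type": ["Patch"],
    "compliance-status": ["non_compliant"]
  }
}
</code></pre>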
<h3 id="heading-step-3-select-targets">Step 3: Select target(s)</h3>
<p>Select AWS Service as the target type. Then select "Step Functions state machine" as the target and choose the state machine that you created before:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1683671491238/1c559e84-f1bd-4616-ba3b-808fb848865a.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-5-resources">5. Resources</h1>
<p>You can refer to this article for an example of how to implement a manual approval workflow: <a target="_blank" href="https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-human-approval.html">https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-human-approval.html</a>. This is the article that I used to learn about manual approvals in AWS Step Functions.</p>
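<p>For reference, the Lambda behind the API Gateway approval endpoint essentially just resumes the paused execution. A minimal sketch, where the query-string parameter names and the output shape are assumptions:</p>
<pre><code class="lang-python">import json
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # The approval link embeds the task token and the user's decision
    # (parameter names here are illustrative, not from the original post)
    params = event.get("queryStringParameters") or {}
    token = params.get("taskToken")
    action = params.get("action", "reject")

    # Resume the paused state machine with the user's answer
    sfn.send_task_success(
        taskToken=token,
        output=json.dumps(
            {"approvalStatus": "approved" if action == "approve" else "rejected"}
        ),
    )
    return {"statusCode": 200, "body": "Thanks, your response was recorded."}
</code></pre>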
<h1 id="heading-6-conclusion">6. Conclusion</h1>
<p>The integration of AWS Step Functions and Systems Manager Patch Manager provides a resilient and scalable solution for automating and delegating the patch management of EC2 instances. Using Step Functions instead of plain Lambda functions enables easier troubleshooting, built-in retries, and manual invocation with different parameters.</p>
<p>However, the current implementation has some limitations and drawbacks:</p>
<ul>
<li><p>Potential for delayed patching if users do not respond to the patch notice.</p>
</li>
<li><p>No backup of the instance is taken right before the patches are applied.</p>
</li>
<li><p>The deployment is manual; there is no AWS CDK or Terraform script to automate it yet.</p>
</li>
</ul>
<p>Future improvements:</p>
<ul>
<li><p>Take backups of instances before applying the security patches</p>
</li>
<li><p>Automate the deployment process with Terraform or CDK</p>
</li>
<li><p>Add more notifications, for example, when the process fails or when the instance is up to date</p>
</li>
</ul>
<p>Overall, this project provides a foundation for those looking to streamline their patch management process and improve their security posture.</p>
]]></content:encoded></item><item><title><![CDATA[Deploying a Phoenix Web App to AWS App Runner: A Step-by-Step Guide]]></title><description><![CDATA[🌩
This guide deploys Phoenix apps to AWS App Runner, but LiveView apps won't work due to missing WebSocket support on App Runner. Stick to standard Phoenix apps for now, or consider alternatives for LiveView deployments.


Nowadays, deploying web ap...]]></description><link>https://ezequielgioia.com/deploying-a-phoenix-web-app-to-aws-app-runner-a-step-by-step-guide</link><guid isPermaLink="true">https://ezequielgioia.com/deploying-a-phoenix-web-app-to-aws-app-runner-a-step-by-step-guide</guid><category><![CDATA[AWS]]></category><category><![CDATA[app runner]]></category><category><![CDATA[Phoenix framework]]></category><category><![CDATA[Elixir]]></category><category><![CDATA[Web Development]]></category><dc:creator><![CDATA[Ezequiel Gioia]]></dc:creator><pubDate>Mon, 23 Jan 2023 04:42:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/1cqIcrWFQBI/upload/v1669597780484/a8xG9Xq-9.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div data-node-type="callout">
<div data-node-type="callout-emoji">🌩</div>
<div data-node-type="callout-text">This guide deploys Phoenix apps to AWS App Runner, but <strong>LiveView apps won't work</strong> due to <a target="_blank" href="https://github.com/aws/apprunner-roadmap/issues/13">missing WebSocket support on App Runner</a>. Stick to standard Phoenix apps for now, or consider alternatives for LiveView deployments.</div>
</div>

<p>Nowadays, deploying web applications with containers is the norm, but who wants to go down the rabbit hole managing containers when you can instead use that time to work on your product?</p>
<p>In this post, we will create and deploy a Phoenix web application in AWS App Runner. App Runner simplifies the deployment and management of your dockerized application, providing some extra features out of the box like:</p>
<ul>
<li><p>Automated Deployments</p>
</li>
<li><p>Load Balancing</p>
</li>
<li><p>Auto Scaling</p>
</li>
<li><p>Logs and Metrics</p>
</li>
<li><p>SSL/TLS Certificate Management</p>
</li>
</ul>
<p>App Runner takes care of starting, running, scaling, and load balancing your service. It works with two types of repository sources: source code (directly from GitHub) or Docker images (from <a target="_blank" href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html">Amazon Elastic Container Registry (ECR)</a>). When using a code-based source like GitHub, App Runner supports several runtimes such as Go, Java, .NET, Node.js, PHP, Python, and Ruby. I found the Docker image-based flavor handy when your programming platform is not supported directly by App Runner or you need full control of the environment that will be running your application.</p>
<p>Because Elixir is not supported by App Runner, we will be deploying our application through the Docker image option.</p>
<p>During this step-by-step guide we are going to focus on three areas:</p>
<ol>
<li><p><strong>Creating the App:</strong> We need a dockerized web application to deploy in App Runner, and that is what we are going to build in this section. If you already have a web application or API running in docker, feel free to jump to the next section.</p>
</li>
<li><p><strong>Automating the Docker image build:</strong> Our next step will be to automate the build of the Docker image, from GitHub to Amazon ECR.</p>
</li>
<li><p><strong>Hosting the app in App Runner:</strong> Lastly, we will configure App Runner to host our web application.</p>
</li>
</ol>
<p>Let's start!</p>
<h1 id="heading-1-creating-the-app">1. Creating the App</h1>
<h2 id="heading-installing-dependencies">Installing dependencies</h2>
<p>There are two dependencies that we need to install to create a Phoenix app: the programming language, Elixir, and the Phoenix Framework.</p>
<p><strong>Installing Elixir</strong></p>
<p>On macOS, Elixir can be easily installed through <a target="_blank" href="https://brew.sh">Homebrew</a>; for other options, check Elixir's installation guide: https://elixir-lang.org/install.html.</p>
<pre><code class="lang-bash">brew install elixir
</code></pre>
<p>The command above will install Elixir, Erlang, and all the required dependencies on macOS.</p>
<p><strong>Installing the Phoenix Framework</strong></p>
<p>Let's use <a target="_blank" href="https://hexdocs.pm/mix/Mix.html">Mix</a>, Elixir's built-in tool to manage dependencies and other tasks to install the Phoenix Framework.</p>
<pre><code class="lang-bash">mix archive.install hex phx_new
</code></pre>
<p>Once Phoenix is installed, the <code>phx.new</code> generator will be available to create our first application. Full instructions to install Phoenix here: https://hexdocs.pm/phoenix/installation.html.</p>
<h2 id="heading-creating-a-new-phoenix-app">Creating a new Phoenix app</h2>
<p>To keep things as simple as possible, we are creating a web app without a database. Ecto is the go-to database toolkit for Elixir apps, and it is configured by default for new Phoenix apps, so we need to tell Phoenix to create an app omitting the database layer. We will use the <code>--no-ecto</code> flag to achieve that.</p>
<p>Our app is called <strong>demo</strong> and we are telling Phoenix to create it without support for databases:</p>
<pre><code class="lang-bash">mix phx.new demo --no-ecto
</code></pre>
<p>Full details in the official guide: <a target="_blank" href="https://hexdocs.pm/phoenix/up_and_running.html">https://hexdocs.pm/phoenix/up_and_running.html</a></p>
<p><strong>Locally testing our app</strong></p>
<p>Once the application has been created, browse in your terminal to the application directory and launch a local phoenix server:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> demo
mix phx.server
</code></pre>
<p>Yes, that is all you need to run a local Phoenix server. The output will be similar to this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669600763659/Mhx0cEB1R.png" alt="Screenshot 2022-11-27 at 8.59.12 PM.png" /></p>
<p>Point your web browser to <code>http://localhost:4000</code> and our Phoenix demo application will be there:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1669600890936/5N2kTSfoN.png" alt="Screenshot 2022-11-27 at 9.00.51 PM.png" /></p>
<h2 id="heading-creating-a-docker-image">Creating a Docker image</h2>
<p>To create our Docker image we need a <em>Dockerfile</em>. This file specifies the steps to build the image, like a cooking recipe. Again, Phoenix can give you a working Dockerfile for your project; just ask for it with the <em>phx.gen.release</em> generator, then build the Docker image. Run the following commands in the main folder of your application:</p>
<pre><code class="lang-bash">mix phx.gen.release --docker
docker image build -t demo .
</code></pre>
<p><strong>Generating a secret key</strong></p>
<p>The secret key base is used to sign/encrypt cookies and other secrets. A default value is used in <code>config/dev.exs</code> and <code>config/test.exs</code>, but you want to use a different value for production, and you most likely don't want to check this value into version control, so we use an environment variable instead:</p>
<pre><code class="lang-bash">
<span class="hljs-comment"># generate a new secret key</span>
mix phx.gen.secret
XevO6j...JZHk/JQ...jY2rb/4K...pCk/O83...UY+i

<span class="hljs-comment"># set the secret key as an environment variable</span>
<span class="hljs-built_in">export</span> SECRET_KEY_BASE=XevO6j...JZHk/JQ...jY2rb/4K...pCk/O83...UY+i
</code></pre>
<p><strong>Locally testing our docker image</strong></p>
<p>Before moving forward to the next step, make sure that your web application can run inside a Docker container with no issues. The command below will run a Docker container, exposing port 4000 to localhost. The <em>-e</em> flag tells Docker to read the environment variable SECRET_KEY_BASE and pass it inside the container:</p>
<pre><code class="lang-bash">docker run -dp 4000:4000 -e SECRET_KEY_BASE demo
</code></pre>
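<p>A quick way to verify that the container is healthy, assuming curl is available on your machine:</p>
<pre><code class="lang-bash"># The container built from the demo image should be listed as running
docker ps --filter ancestor=demo

# The app should answer on port 4000 with an HTTP 200
curl -I http://localhost:4000
</code></pre>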
<h1 id="heading-2-automating-the-docker-image-build"><strong>2. Automating the Docker image build</strong></h1>
<p>Our next step is to automate the build process of our Docker image.</p>
<p>The image-based flavor version of App Runner that we are going to use can automatically deploy our application every time we push a new image to Amazon ECR. The process looks like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670206361758/bndb6LjBJ.png" alt /></p>
<p>App Runner gets notified when a new image is available in ECR and deploys a new version of our containers. But, how can we automate the deployment workflow to detect when a new commit has been pushed to our git repository? To solve this part of the deployment workflow we are going to introduce AWS CodePipeline and AWS CodeBuild:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670206461452/AUc8Wq4-A.png" alt /></p>
<p>CodePipeline can detect changes in our GitHub repository and invoke CodeBuild with the latest version of our code in the main branch. Then CodeBuild is going to generate a docker image with the latest version of our code and push it into our private ECR repository. Then we already know the rest of the story, App Runner will get notified about the new image in ECR and deploy one or more containers based on our App Runner Deployment Configuration.</p>
<p>So, let's start by creating our Amazon ECR repository. Then we will move forward with setting up CodePipeline and CodeBuild to complete our CI/CD pipeline.</p>
<h2 id="heading-creating-a-git-repository">Creating a Git repository</h2>
<p>I'm using GitHub, but CodePipeline also supports AWS CodeCommit and Atlassian Bitbucket.</p>
<p>I'm not covering the details of this step but here you have the GitHub documentation to create a repository and upload your existing local code. I'm sure that you are already familiar with this: <a target="_blank" href="https://docs.github.com/en/get-started/importing-your-projects-to-github/importing-source-code-to-github/adding-locally-hosted-code-to-github">https://docs.github.com/en/get-started/importing-your-projects-to-github/importing-source-code-to-github/adding-locally-hosted-code-to-github</a>.</p>
<h2 id="heading-creating-the-amazon-ecr-repository">Creating the Amazon ECR repository</h2>
<p>In your web browser, access the Amazon ECR console https://us-east-1.console.aws.amazon.com/ecr/repositories to create a new ECR repository. This is your docker registry repository, where your docker images will be stored.</p>
<p>I'll go with all the defaults in my case for a <strong>private</strong> repository:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670218372780/DbgIXs4SR.png" alt /></p>
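<p>The same repository can be created from the CLI; the repository name and region below are placeholders:</p>
<pre><code class="lang-bash">aws ecr create-repository \
  --repository-name demo \
  --region us-east-1
</code></pre>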
<p><em>Note: at the time of writing, ECR is the only Docker registry supported by App Runner. Other registries like Docker Hub will not work.</em></p>
<p><strong>Verify if ECR is working:</strong></p>
<p>Once the ECR repository is created, access it in the ECR web console and click on the <code>View push commands</code> button located at the top bar.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670377394124/efuxGWhqb.png" alt /></p>
<p>Follow the instructions provided for your operating system (macOS/Linux or Windows):</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670377466777/uwBTDQzkQ.png" alt /></p>
<p>After running your docker push, you should see your docker image listed in the ECR repository web console.</p>
<h2 id="heading-codebuild-and-codepipeline">CodeBuild and CodePipeline</h2>
<p>With the git repository and the ECR registry created, it is time to configure our CI/CD pipeline with CodePipeline and CodeBuild AWS services.</p>
<p>Let's start by creating the pipeline. This is a five-step wizard that walks us through all the settings.</p>
<p><strong>Define a new CodePipeline and CodeBuild step</strong></p>
<p>Access the CodePipeline web console and click on "Create pipeline" button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670777863357/MKAKq3GPF.png" alt class="image--center mx-auto" /></p>
<p><strong>Add source stage:</strong> this is where we are going to connect our GitHub repository. Select GitHub (Version 2) as the source provider, then click on Connect to GitHub. This will open a dialog to grant the AWS Connector for GitHub access to your GitHub repository. As a best practice, I like to give access to the repository involved instead of all the repositories in my GitHub account. Once the AWS Connector is configured, the pipeline screen should look similar to this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670777817476/R6sWbnaGi.png" alt class="image--center mx-auto" /></p>
<p><strong>Add build stage:</strong> this is where CodeBuild will create a new Docker image and push it to ECR every time a code change in the main branch is detected.</p>
<p>Choose <em>CodeBuild</em> as the build provider and click on the "Create project" button.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1670778087114/2LJEwL2gc.png" alt class="image--center mx-auto" /></p>
<p>We need to define a CodeBuild project. This is where we can set up all the configuration related to the build process, like the operating system used for the build machine, pre-installed runtimes, support for GPU during the build, etc.</p>
<p>In my case, I choose:</p>
<ul>
<li><p>Operating system: Amazon Linux 2</p>
</li>
<li><p>Runtime(s): standard</p>
</li>
<li><p>Image: latest image available</p>
</li>
<li><p>Image version: "Always use the latest image for this runtime version"</p>
</li>
<li><p>Environment type: Linux</p>
</li>
<li><p>Privileged: check this box. The build instance needs privileged access to build Docker images.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671310573060/CzWtZ-2VV.png" alt class="image--center mx-auto" /></p>
<p><strong>Buildspec:</strong> make sure to keep the default values for the Buildspec section. The expectation is that CodeBuild will look for a file named buildspec.yml at the root level of our git repository. That is the file where we can define the commands for the build process. We are going to configure this in a few minutes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674407882898/1464f6fe-5b02-46f2-a026-886ecacacdb2.png" alt class="image--center mx-auto" /></p>
<p>Click on "Continue to CodePipeline". This will create the CodeBuild project and bring us back to the pipeline wizard:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671310342244/x7-Rpwj8C.png" alt class="image--center mx-auto" /></p>
<p>Click on "Next" and then "Skip deploy stage". The deployment component of our pipeline will be provided by App Runner itself, which is why we are skipping this stage. Review the settings and click on "Create pipeline".</p>
<p>The CI/CD pipeline is almost ready. We need to adjust some permissions and add our buildspec.yml file to the repository.</p>
<p><strong>Setting up the IAM permissions:</strong></p>
<p>CodeBuild needs permission to push new Docker images to Amazon ECR. Access the IAM console, search for the role created by CodeBuild, and attach a new IAM policy to the role with the following permissions, adjusting the ARN of the ECR repository:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"ReadAndWriteAccessToDemoAppRepository"</span>,
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"ecr:GetDownloadUrlForLayer"</span>,
                <span class="hljs-string">"ecr:BatchGetImage"</span>,
                <span class="hljs-string">"ecr:CompleteLayerUpload"</span>,
                <span class="hljs-string">"ecr:UploadLayerPart"</span>,
                <span class="hljs-string">"ecr:InitiateLayerUpload"</span>,
                <span class="hljs-string">"ecr:PutImage"</span>,
                <span class="hljs-string">"ecr:BatchCheckLayerAvailability"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:ecr:us-east-1:552334911378:repository/demo-app-repository"</span>
        },
        {
            <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"ListECRImages"</span>,
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Action"</span>: [
                <span class="hljs-string">"ecr:DescribeImages"</span>,
                <span class="hljs-string">"ecr:GetAuthorizationToken"</span>
            ],
            <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"*"</span>
        }
    ]
}
</code></pre>
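<p>If you prefer the CLI over the console, the same policy can be attached as an inline policy with <em>put-role-policy</em>. Note that the role and file names below are assumptions (CodeBuild usually generates a role named <em>codebuild-&lt;project-name&gt;-service-role</em>), so check the exact role name in the IAM console first:</p>
<pre><code class="lang-bash"># Save the policy above as ecr-access-policy.json (hypothetical file name),
# then attach it to the CodeBuild service role as an inline policy.
# The role name is an assumption; verify it in the IAM console.
aws iam put-role-policy \
  --role-name codebuild-demo-app-service-role \
  --policy-name ReadAndWriteAccessToDemoAppRepository \
  --policy-document file://ecr-access-policy.json
</code></pre>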
<p><strong>Buildspec.yml:</strong> create a new file named buildspec.yml at the root level of your git repository and paste the code below, replacing the ECR repository URI with your own. The buildspec file defines two phases: a pre_build phase that retrieves the ECR credentials and pulls the latest existing image from ECR, and a build phase with the commands to build a new Docker image and push it to the ECR repository:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-number">0.2</span>

<span class="hljs-attr">env:</span>
  <span class="hljs-attr">variables:</span>
    <span class="hljs-attr">CONTAINER_REPOSITORY_URL:</span> <span class="hljs-number">552334911378.</span><span class="hljs-string">dkr.ecr.us-east-1.amazonaws.com/demo-app-repository</span>
    <span class="hljs-attr">TAG_NAME:</span> <span class="hljs-string">latest</span>

<span class="hljs-attr">phases:</span>
  <span class="hljs-attr">pre_build:</span>
    <span class="hljs-attr">commands:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">aws</span> <span class="hljs-string">ecr</span> <span class="hljs-string">get-login-password</span> <span class="hljs-string">--region</span> <span class="hljs-string">us-east-1</span> <span class="hljs-string">|</span> <span class="hljs-string">docker</span> <span class="hljs-string">login</span> <span class="hljs-string">--username</span> <span class="hljs-string">AWS</span> <span class="hljs-string">--password-stdin</span> <span class="hljs-number">552334911378.</span><span class="hljs-string">dkr.ecr.us-east-1.amazonaws.com</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">pull</span> <span class="hljs-string">$CONTAINER_REPOSITORY_URL:$TAG_NAME</span> <span class="hljs-string">||</span> <span class="hljs-literal">true</span>

  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">commands:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">build</span> <span class="hljs-string">--tag</span> <span class="hljs-string">demo-app-repository</span> <span class="hljs-string">.</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">tag</span> <span class="hljs-string">demo-app-repository:latest</span> <span class="hljs-number">552334911378.</span><span class="hljs-string">dkr.ecr.us-east-1.amazonaws.com/demo-app-repository:latest</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">docker</span> <span class="hljs-string">push</span> <span class="hljs-number">552334911378.</span><span class="hljs-string">dkr.ecr.us-east-1.amazonaws.com/demo-app-repository:latest</span>
</code></pre>
<p>Now it is time to commit and push the buildspec.yml file. This time it should trigger the CI/CD pipeline, and after a couple of minutes we should see a new Docker image in our ECR repository.</p>
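<p>Besides checking the ECR console, you can confirm that the pipeline pushed a new image from the command line. The repository name and region below are the ones used throughout this guide; adjust them to your own:</p>
<pre><code class="lang-bash"># Show the tags and push timestamp of the most recently pushed image
aws ecr describe-images \
  --repository-name demo-app-repository \
  --region us-east-1 \
  --query 'sort_by(imageDetails, &amp;imagePushedAt)[-1].[imageTags, imagePushedAt]' \
  --output text
</code></pre>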
<h1 id="heading-3-hosting-the-app-in-app-runner">3. <strong>Hosting the app in App Runner</strong></h1>
<h2 id="heading-configure-app-runner">Configure App Runner</h2>
<p>Open the AWS App Runner console and click on "Create service". Once in the App Runner wizard, select <em>Container registry</em> and Amazon ECR as the source type. Then click on <em>Browse</em> and select the <em>latest</em> Docker image.</p>
<p>Select the deployment settings that you prefer and click on Next. I choose <em>Automatic</em> deployments:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671317867280/G8hoIy6jZ.png" alt class="image--center mx-auto" /></p>
<p>The next and last step is to configure the details for the App Runner service. Here we have some control over the environment that will run our app, like the number of vCPUs, the memory per container, and when App Runner should automatically scale the number of containers.</p>
<p>For this application I'll go with the minimum environment possible: 1 vCPU, 2 GB of memory, and a custom auto-scaling configuration that creates up to 4 instances. By default, App Runner scales up to 25 instances, deploying a new instance when the number of simultaneous requests per instance exceeds 100.</p>
<p>Another important setting on this page is the port number. Our Phoenix app is running on the default port, 4000. We also need to specify the SECRET_KEY_BASE environment variable:</p>
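<p>In case you still need to generate a value for SECRET_KEY_BASE: the Phoenix way is <code>mix phx.gen.secret</code>, and if Elixir is not available on the machine at hand, OpenSSL produces an equivalent 64-character random string:</p>
<pre><code class="lang-bash"># Generate a 64-character secret suitable for SECRET_KEY_BASE
# (48 random bytes base64-encode to exactly 64 characters)
SECRET_KEY_BASE=$(openssl rand -base64 48)
echo "${SECRET_KEY_BASE}"
</code></pre>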
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1671318318844/9xfkVBdJV.png" alt class="image--center mx-auto" /></p>
<p>Click on "Create and deploy", and after a couple of minutes (it took around 4 minutes in my case) you will see your app running. App Runner creates a domain name for us that will look similar to this:</p>
<p><code>https://SOME-RANDOM-NAME.REGION.awsapprunner.com</code></p>
<p>And that is the end of this guide. I hope it helped you with your project!</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>We learned how to create an Elixir Phoenix web application, store the code in GitHub, containerize the application, create a CI/CD pipeline, and host a web application in AWS without having to worry about any piece of infrastructure. We are not managing virtual machines, container infrastructure, or tools; everything is managed by AWS.</p>
<p>We also got some interesting features out of the box like:</p>
<ul>
<li><p>SSL/TLS certificate</p>
</li>
<li><p>Auto-scaling and load-balancing capability based on the number of requests</p>
</li>
<li><p>Easy access to the application logs</p>
</li>
<li><p>Important metrics like:</p>
<ul>
<li><p>Number of successful requests (2XX)</p>
</li>
<li><p>Number of failed requests (5XX)</p>
</li>
<li><p>Number of active instances</p>
</li>
<li><p>Request latency</p>
</li>
</ul>
</li>
</ul>
<p>Some of the limitations that I have faced so far:</p>
<ul>
<li><p>Access to databases in a VPC requires more advanced network configuration since App Runner lives in a VPC managed by AWS</p>
</li>
<li><p>The observability feature can be configured only during the App Runner Service definition. Once you create the service, this cannot be changed.</p>
</li>
<li><p>When using the App Runner <em>source code repository</em> option instead of ECR, only GitHub is supported.</p>
</li>
<li><p>When using the <em>Container registry</em> option, only Amazon ECR is supported. There is no way to host your Docker image on Docker Hub and connect it with App Runner.</p>
</li>
</ul>
<p>App Runner is a great service, especially for those developing minimum viable products. It allows you to focus on your project, providing in minutes an initial setup that can serve your customers for years to come.</p>
<h1 id="heading-resources">Resources</h1>
<p>Official AWS App Runner Workshop:</p>
<ul>
<li>Getting started with AWS App Runner :: AWS App Runner Workshop: <a target="_blank" href="https://www.apprunnerworkshop.com/getting-started/">https://www.apprunnerworkshop.com/getting-started/</a></li>
</ul>
<p>For those who want to go the extra mile, here is some great information on optimizing the Docker build process and general best practices for your Dockerfiles:</p>
<ul>
<li><p><em>Speed up your Docker builds with --cache-from:</em> <a target="_blank" href="https://lipanski.com/posts/speed-up-your-docker-builds-with-cache-from">https://lipanski.com/posts/speed-up-your-docker-builds-with-cache-from</a></p>
</li>
<li><p><em>Best practices when writing a Dockerfile for a Ruby application:</em></p>
<p>  <a target="_blank" href="https://lipanski.com/posts/dockerfile-ruby-best-practices">https://lipanski.com/posts/dockerfile-ruby-best-practices</a></p>
</li>
<li><p><em>Reducing Docker image build time on AWS CodeBuild using an external cache:</em> <a target="_blank" href="https://aws.amazon.com/blogs/devops/reducing-docker-image-build-time-on-aws-codebuild-using-an-external-cache/">https://aws.amazon.com/blogs/devops/reducing-docker-image-build-time-on-aws-codebuild-using-an-external-cache/</a></p>
</li>
</ul>
<p>Enjoy!</p>
]]></content:encoded></item><item><title><![CDATA[Quick deployment of MS Active Directory in AWS]]></title><description><![CDATA[Were you trying to deploy a Windows Active Directory (AD) in AWS? Maybe you are here because you just tried that using the CloudFormation template referred to in this AWS post (Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML...]]></description><link>https://ezequielgioia.com/aws-deploying-active-directory-with-cloud-formation</link><guid isPermaLink="true">https://ezequielgioia.com/aws-deploying-active-directory-with-cloud-formation</guid><category><![CDATA[AWS]]></category><category><![CDATA[Active Directory]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[cloudformation]]></category><dc:creator><![CDATA[Ezequiel Gioia]]></dc:creator><pubDate>Fri, 29 Jul 2022 11:30:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/bONdDmhHZGs/upload/v1659067966805/riGD_HdUd.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Were you trying to deploy a Windows Active Directory (AD) in AWS? Maybe you are here because you just tried that using the CloudFormation template referred to in this AWS post (<a target="_blank" href="https://aws.amazon.com/blogs/security/enabling-federation-to-aws-using-windows-active-directory-adfs-and-saml-2-0/">Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0</a>), and it failed with the following error:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">API: ec2:RunInstances Not authorized for images:</span> [<span class="hljs-string">ami-003a480a3489b1c5c</span>]
</code></pre>
<p>If you are in a hurry, below, you can find a version of this template that includes a fix and some other minor improvements like support to deploy in a non-default VPC. And please, feel free to read the rest of this post to find out what was wrong with the original template, how to fix it, and the improvements.</p>
<p>CloudFormation template: https://public-cf-templates.s3.amazonaws.com/windows-active-directory/Windows_Single_Server_Active_Directory.template.json</p>
<h2 id="heading-full-story">Full story</h2>
<p>I was looking for help to get an AD working on AWS so I could run a proof of concept of a Single Sign-On solution based on Cognito + SAML 2.0 + Active Directory Federation Services (ADFS). Windows administration is not my thing, so I was looking for a shortcut to start from a working Active Directory installation. In that search, I found the post below, which explains how to get an AD up and running quickly. It failed.</p>
<p>Original post: Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0: https://aws.amazon.com/blogs/security/enabling-federation-to-aws-using-windows-active-directory-adfs-and-saml-2-0/</p>
<h3 id="heading-what-was-wrong-with-the-original-cf-template">What was wrong with the original CF template?</h3>
<p>The AMI IDs embedded in the template are outdated. The original post is from 2013; yep, nine years old. Maybe you are thinking: how can I trust an ancient post like that? OK, you are right; you can expect an ancient blog post like that to point you to a broken CF template. I get that. But what happens if I tell you that the CloudFormation web console also has a link to the same broken template? Here it is:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1659068547118/Y8iuQBouh.png" alt="image.png" class="image--center mx-auto" /></p>
<p>In the case of the web console, it's using a different S3 endpoint, but it's pointing to the same template in the same bucket.</p>
<p>Link 1 - Found in the blog post: https://s3.amazonaws.com/cloudformation-templates-us-east-1/Windows_Single_Server_Active_Directory.template</p>
<p>Link 2 - Found in CloudFormation Web Console: https://s3-external-1.amazonaws.com/cloudformation-templates-us-east-1/Windows_Single_Server_Active_Directory.template</p>
<h3 id="heading-how-to-fix-it">How to fix it?</h3>
<p>To fix it, you need to edit the template and add an updated AMI id. You can get the AMI ID from the EC2 launch wizard or the AWS CLI. Here is an example to list the latest Windows Server 2022 images in the North Virginia region. You can use <a target="_blank" href="https://aws.amazon.com/cloudshell/">CloudShell</a> to run the following command:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Listing Windows Server 2022 AMIs on the us-east-1 region: </span>
aws ec2 describe-images --owners amazon --filters <span class="hljs-string">"Name=name,Values=Windows_Server-2022*"</span> --query <span class="hljs-string">'sort_by(Images, &amp;CreationDate)[].[ImageId, Name, Description]'</span> --region us-east-1 --no-include-deprecated
</code></pre>
<p>Note: I removed the AMI mapping on my version of the template. It defaults to <em>Windows Server 2022 English Full Base</em> in the <em>us-east-1</em> region. You can get the AMI you need by running the AWS CLI command above.</p>
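<p>As an alternative to filtering <em>describe-images</em>, AWS also publishes the latest Windows AMI IDs as public Systems Manager parameters, so a single query returns an always-current AMI ID. A sketch for the Windows Server 2022 English Full Base image (adjust the parameter name and region as needed):</p>
<pre><code class="lang-bash"># Get the latest Windows Server 2022 English Full Base AMI ID
# from the public SSM Parameter Store namespace
aws ssm get-parameters \
  --names /aws/service/ami-windows-latest/Windows_Server-2022-English-Full-Base \
  --query 'Parameters[0].Value' \
  --output text \
  --region us-east-1
</code></pre>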
<h3 id="heading-still-failing-no-default-vpc">Still failing, no default VPC</h3>
<p>I fixed the AMI ID issue, but the template still wasn't working. Now what? Well, a limitation I found is that the original template only works on accounts with a default VPC. In my scenario, the VPC is shared from another account, so I don't have a default VPC.</p>
<p>To solve the default VPC issue, I included two new parameters in the template. These are the <strong>VPC</strong> and <strong>Subnet</strong> parameters where you can specify the network to deploy the template's resources.</p>
<p>Then, I faced another issue related to how the security groups are created. The original template relies on the property <em>SourceSecurityGroupName</em> when creating the <em>SecurityGroupIngress</em> rules. But that is not compatible with a non-default VPC. I replaced it with <strong><em>SourceSecurityGroupId</em></strong>.</p>
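<p>As a sketch of that change (the resource names here are illustrative, not copied from the actual template), the ingress rule goes from referencing a security group by name to referencing it by ID, which works in any VPC:</p>
<pre><code class="lang-json">{
    "Type": "AWS::EC2::SecurityGroupIngress",
    "Properties": {
        "GroupId": { "Fn::GetAtt": ["DomainControllerSG", "GroupId"] },
        "IpProtocol": "tcp",
        "FromPort": 3389,
        "ToPort": 3389,
        "SourceSecurityGroupId": { "Fn::GetAtt": ["DomainMemberSG", "GroupId"] }
    }
}
</code></pre>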
<h3 id="heading-defining-the-instance-type">Defining the instance type</h3>
<p>The EC2 Instance Type parameter is no more than a preconfigured list of allowed values. This approach ages poorly as new instance types are released, so I converted it into a string with <em>m5.large</em> as the default value. I believe that if you are still reading this, you are very familiar with the EC2 instance types and can change the parameter to any other valid value.</p>
<p>I hope this post has helped you get an Active Directory up and running in AWS. I'm not pretending to use this template in production, but it can be handy when testing things. Enjoy! 🙂</p>
<h2 id="heading-other-resources">Other resources:</h2>
<p><a target="_blank" href="https://aws-quickstart.github.io/quickstart-microsoft-activedirectory/">Active Directory Domain Services on the AWS Cloud - Quick Start Reference Deployment</a></p>
<p><a target="_blank" href="https://aws.amazon.com/blogs/mt/query-for-the-latest-windows-ami-using-systems-manager-parameter-store/">Query for the Latest Windows AMI Using Systems Manager Parameter Store</a></p>
]]></content:encoded></item></channel></rss>