AWS SAP Notes 14 - Deployment and Management

Service Catalog

  • A service catalog is a document or database created by the IT team containing an organized collection of products
  • Used when different teams in the business operate under a service-charge model
  • Key product information: Product Owner, Cost, Requirements, Support Information, Dependencies
  • Defines approval of provisioning from IT and customer side
  • Designed for managing cost and scale service delivery

AWS Service Catalog

  • Portal for end users, from which they can launch products predefined by admins
  • End user permissions can be controlled
  • Admins define those products using CloudFormation, along with the permissions required to launch them
  • Admins build products into portfolios which are made visible to the end users

CI/CD

  • Each pipeline has stages
  • Each pipeline should be linked to a single branch in a repository
  • CodeBuild and CodeDeploy are configured via files in the repository (buildspec.yml and appspec.[yaml|json], covered below)

AWS CodeCommit

  • Managed git service
  • Basic entity of CodeCommit is a repository
  • Authentication can be configured via the IAM console. CodeCommit supports HTTPS, SSH and HTTPS over GRC (git-remote-codecommit)
  • Triggers and notifications:
    • Notification rules: can send notifications based on events happening in the repo, for example commits, pull requests, status changes, etc. Notifications can be sent to SNS topics or AWS Chatbot
    • Triggers: allow generating event-driven processes based on things that happen in the repo. Events can be sent to SNS topics or Lambda functions

AWS CodePipeline

  • It is a continuous delivery tool
  • Controls the flow from source code, through build towards deployment
  • A pipeline is built from stages. These contain actions, which can be sequential or parallel
  • Movement between stages can happen automatically or it can require a manual approval
  • Actions can consume artifacts or they can generate artifacts
  • Artifacts are just files which are generated and/or consumed by actions
  • Any changes to the state of pipelines, stages or actions generate events which are published to EventBridge
  • CloudTrail or Console UI can be used to view/interact with the pipeline
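The stage/action structure described above can be sketched as a CloudFormation fragment. This is a minimal, hypothetical example: the repository name, build project, role resource and artifact bucket are placeholders, not values from the notes:

```yaml
# Sketch of an AWS::CodePipeline::Pipeline resource (CloudFormation).
# Assumes a CodeCommit repo "my-repo" and a CodeBuild project "my-build"
# already exist; PipelineRole and the bucket are hypothetical.
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    RoleArn: !GetAtt PipelineRole.Arn
    ArtifactStore:
      Type: S3
      Location: my-artifact-bucket
    Stages:
      - Name: Source
        Actions:
          - Name: FetchSource
            ActionTypeId:
              Category: Source
              Owner: AWS
              Provider: CodeCommit
              Version: "1"
            Configuration:
              RepositoryName: my-repo
              BranchName: main        # pipeline linked to a single branch
            OutputArtifacts:
              - Name: SourceOutput    # artifact produced by this action
      - Name: Build
        Actions:
          - Name: BuildApp
            ActionTypeId:
              Category: Build
              Owner: AWS
              Provider: CodeBuild
              Version: "1"
            Configuration:
              ProjectName: my-build
            InputArtifacts:
              - Name: SourceOutput    # artifact consumed by this action
            OutputArtifacts:
              - Name: BuildOutput
```

Note how each action declares the artifacts it consumes and produces, which is how data moves between stages.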

AWS CodeBuild

  • CodeBuild is a build as a service product
  • It is fully managed, we pay only for the resources consumed during builds
  • CodeBuild is an alternative to third-party solutions such as Jenkins
  • CodeBuild uses Docker for build environments which can be customized by us
  • CodeBuild integrates with other AWS services such as KMS, IAM, VPC, CloudTrails, S3, etc.
  • Architecturally CodeBuild gets source material from GitHub, CodeCommit, CodePipeline or even S3
  • It builds and tests code. The build can be customized via a buildspec.yml file, which has to be located in the root of the source
  • CodeBuild output logs are published to CloudWatch Logs, metrics are also published to CloudWatch Metrics and events to EventBridge (or CloudWatch Events)
  • CodeBuild supports build environments such as Java, Ruby, Python, Node.JS, PHP, .NET, Go and many more

buildspec.yml

  • It is used to customize the build process
  • It has to be located in the root folder of the repository
  • It can contain four main phases:
    • install: used to install packages in the build environment
    • pre_build: sign in to services or install code dependencies
    • build: commands run during the build process
    • post_build: used for packaging artifacts, pushing Docker images, sending explicit notifications
  • It can reference environment variables: shell variables, plain variables, Parameter Store and Secrets Manager values
  • Artifacts section of the file: specifies which output files to put where
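Putting the phases and sections above together, a minimal buildspec.yml might look like the sketch below. It assumes a hypothetical Node.js project; the parameter name and commands are illustrative, not from the notes:

```yaml
version: 0.2

env:
  variables:
    BUILD_ENV: "production"           # plain environment variable
  parameter-store:
    DB_HOST: "/myapp/db/host"         # hypothetical Parameter Store key

phases:
  install:
    runtime-versions:
      nodejs: 18                      # pick a runtime in the build image
  pre_build:
    commands:
      - npm ci                        # install code dependencies
  build:
    commands:
      - npm run build                 # commands run during the build
  post_build:
    commands:
      - echo "Build completed on $(date)"

artifacts:
  files:
    - 'dist/**/*'                     # what output to hand to the next stage
```

Reading from Parameter Store requires the CodeBuild service role to have the corresponding `ssm:GetParameters` permission.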

AWS CodeDeploy

  • Is a code deployment as a service product
  • It is an alternative to third-party services such as Jenkins, Ansible, Chef, Puppet and even CloudFormation
  • It is used to deploy code, not resources (use CloudFormation for that)
  • CodeDeploy can deploy code to EC2, on-premises, Lambda and ECS
  • Besides code, it can deploy configurations, executables, packages, scripts, media and many more
  • CodeDeploy integrates with other AWS services such as AWS Code*
  • In order to deploy code on EC2 and on-premises, CodeDeploy requires the presence of an agent

appspec.[yaml|json]

  • It controls how deployments occur on the target
  • Manages deployments: configurations + lifecycle event hooks
  • Configuration section:
    • Files: applies to EC2/on-premises. Provides information about which files should be installed on the instance
    • Resources: applies to ECS/Lambda. For Lambda it contains the name, alias, current version and target version of a Lambda function. For ECS contains things like the task definition and container details (ports, traffic routing)
    • Permissions: applies to EC2/on-premises. Details any special permissions and how they should be applied to the files and folders from the files section
  • Lifecycle event hooks:
    • ApplicationStop: happens before the application is downloaded. Used to gracefully stop the application
    • DownloadBundle: agent copies the application to a temp location
    • BeforeInstall: used for pre-installation tasks
    • Install: agent copies the application from the temp folder to the final location
    • AfterInstall: perform post-install steps
    • ApplicationStart: used to restart/start services which were stopped during the ApplicationStop hook
    • ValidateService: verify the deployment was completed successfully
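A minimal appspec.yml for an EC2/on-premises deployment, tying the files, permissions and hooks sections together, might look like this. All paths and script names are hypothetical placeholders:

```yaml
version: 0.0
os: linux

files:
  - source: /                      # everything in the revision bundle...
    destination: /var/www/myapp    # ...lands here (hypothetical path)

permissions:
  - object: /var/www/myapp         # special permissions for deployed files
    owner: www-data
    group: www-data

hooks:
  ApplicationStop:                 # gracefully stop the old version
    - location: scripts/stop_app.sh
      timeout: 60
  BeforeInstall:                   # pre-installation tasks
    - location: scripts/install_deps.sh
  ApplicationStart:                # start services stopped earlier
    - location: scripts/start_app.sh
      timeout: 60
  ValidateService:                 # verify the deployment succeeded
    - location: scripts/health_check.sh
```

DownloadBundle and Install do not appear in the hooks section because they are reserved for the CodeDeploy agent itself and cannot run custom scripts.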

Elastic Beanstalk - EB


  • It is a platform as a service (PaaS) product in AWS, meaning the vendor handles all the infrastructure; we only provide the code
  • EB is a developer focused product, providing managed application environments
  • At a high level, developers provide code and EB handles infrastructure
  • EB is fully customizable - uses AWS products under the covers provisioned with CloudFormation
  • Using EB requires application support; it does not come for free

EB Platforms

  • EB is capable of accepting code in many languages known as platforms
  • EB has support for built-in languages, Docker and custom platforms
  • Built-in supported languages: Go, Java SE, Java Tomcat, .NET Core (Linux) and .NET (Windows), Node.JS, PHP, Python, Ruby
  • Docker options: single container docker and multicontainer docker (ECS)
  • Preconfigured Docker: way to provide runtimes which are not yet natively supported, example Java with Glassfish
  • We can create our own custom platforms using Packer, which can be used with Beanstalk

EB Terminology

  • Elastic Beanstalk Application: is a collection of things relating to an application - a container/folder
  • Application Version: specific labeled version of deployable code for an application. The source bundle is stored in S3
  • Environments: are containers of infrastructure and configuration for a specific version
  • Each environment is either a web server tier or a worker tier. The web server tier is designed to communicate with the end-users. The worker tier is designed to process work from the web tiers. Web server tier and worker tier communicate using SQS queues
  • Each environment is running a specific version at any given time
  • Each environment has its own CNAME; a CNAME swap can be done to exchange two environments' DNS

Deployment Policies

  • All at once:
    • Deploy to all instances at once
    • It is quick and simple, but it will cause a brief outage
    • Recommended for testing and development environments
  • Rolling:
    • Application code is deployed in rolling batches
    • It is safer, since the deployment will continue only if the previous batch is healthy
      • The application will run at reduced capacity while batches are being replaced
  • Rolling with additional batch:
    • Same as rolling deployment, with the addition of having a new batch in order to maintain capacity during the deployment process
    • Recommended for production environment with real load
  • Immutable:
    • New temporary ASG is created with the newer version of the application
    • Once the validation is complete, the older stack is removed
    • It is easier to roll back
  • Traffic Splitting:
    • Fresh instances are created in a similar way as in case of immutable deployment
    • Traffic will be split between the older and the newer version
    • Allows us to perform A/B testing on the application
    • It does not have capacity drops, but it will come with an additional cost
  • Blue/Green:
    • Not automatically supported by EB
    • Requires manual CNAME swap between 2 environments
    • Provides full control in terms of when we would want to switch to the new environment

EB and RDS

  • In order to access an RDS instance from EB we can create an RDS instance within an EB environment
  • If we do this, the RDS is linked to the environment
  • If we delete the environment, the database will also be deleted
  • If we link a database to an environment, we get access to the following environment properties:
    • RDS_HOSTNAME
    • RDS_PORT
    • RDS_DB_NAME
    • RDS_USERNAME
    • RDS_PASSWORD
  • Another alternative is to create the RDS instance outside of EB
  • The environment properties above are not automatically provided in this case; we can create them manually
  • With this method the RDS lifecycle is not tied to the EB environment
  • Decoupling an existing RDS from an EB environment:
    1. Create a Snapshot
    2. Enable Delete Protection
    3. Create a new EB environment with the same app version
    4. Ensure new environment can connect to the DB
    5. Swap environment (CNAME or DNS)
    6. Terminate the old environment - this will try to terminate the RDS instance
    7. Locate the DELETE_FAILED stack in CFN, manually delete the stack and choose to retain the stuck resources

Customizing via .ebextensions

  • We can include directives to customize EB environments using the .ebextensions folder
  • Anything added to this folder in YAML or JSON format ending in .config is regarded as a configuration file
  • These files have to be formatted as CloudFormation templates
  • EB will use CFN to create the additional resources specified in the .config files within the environment
  • These files can also contain a number of config elements:
    • option_settings: allows us to set options for resources
    • Resources: allows us to create new resources using CFN elements
    • packages, sources, files, users, groups, commands, container_commands and services
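A small, hypothetical .ebextensions config combining several of the elements above could look like this (the filename, bucket resource and option values are illustrative):

```yaml
# .ebextensions/01-options.config -- hypothetical example
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_ENV: production            # sets an environment variable for the app
  aws:autoscaling:asg:
    MinSize: 2                     # tune the environment's Auto Scaling group
    MaxSize: 4

Resources:
  # Extra resources are written as plain CloudFormation
  AppBucket:
    Type: AWS::S3::Bucket

commands:
  01_install_htop:
    command: yum install -y htop   # runs on the instance before app setup
```

Files are processed in alphabetical order, which is why numeric prefixes like `01-` are a common convention.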

EB with HTTPS

  • To use HTTPS with EB we need to apply an SSL certificate to the load balancer
  • We can do this using the EB console or we can use the .ebextensions/securelistener-[alb|nlb].config feature
  • We can also configure the security group to allow HTTPS connections
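A sketch of the securelistener approach for an ALB-based environment is shown below; the certificate ARN is a placeholder and must be replaced with a real ACM certificate:

```yaml
# .ebextensions/securelistener-alb.config -- sketch, assuming an ALB
# environment and an existing ACM certificate (ARN is hypothetical)
option_settings:
  aws:elbv2:listener:443:
    ListenerEnabled: 'true'
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/example
```

With this in place the load balancer terminates TLS on port 443 and forwards plain HTTP to the instances unless backend encryption is also configured.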

Environment Cloning

  • Cloning allows us to create a new EB environment by cloning an existing environment
  • By cloning an environment we don't have to manually configure options, env. variables, resources and other settings
  • A clone does copy any RDS instance defined, but the data is not copied by default
  • EB cloning does not include any un-managed changes to resources from the environment
  • To clone an environment from the EB command line we can use the eb clone <ENV> command

EB and Docker

Single Container Mode

  • We can only run one container per Docker host
  • This mode uses EC2 with Docker, not ECS
  • In order to use this mode we have to provide a few configurations:
    • Dockerfile: used to create a new container image from this file
    • Dockerrun.aws.json (version 1): to use an existing docker image. We can configure ports, volumes and other Docker attributes
    • docker-compose.yml: if we want to use Docker Compose
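A minimal Dockerrun.aws.json (version 1) for single-container mode might look like the following sketch; the image name and paths are hypothetical:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 8080 }
  ],
  "Volumes": [
    {
      "HostDirectory": "/var/app/data",
      "ContainerDirectory": "/data"
    }
  ],
  "Logging": "/var/log/myapp"
}
```

The `Ports` mapping exposes the container port to the host, and `Logging` tells EB where in the container the application writes its logs.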

Multi-Container Mode

  • Elastic Beanstalk uses ECS to create a cluster
  • ECS uses EC2 instances provisioned in the cluster and an ELB for HA
  • EB takes care of ECS tasks, cluster creation, task definition and task execution
  • We need to provide a Dockerrun.aws.json (version 2) file in the application source bundle (root level)
  • Any images need to be stored in a container registry such as ECR

AWS OpsWorks

  • OpsWorks is a configuration management service which provides AWS managed implementations of Chef and Puppet
  • OpsWorks functions in one of 3 modes:
    • Puppet Enterprise: we can create an AWS Managed Puppet Master Server (desired state architecture)
    • Chef Automate: we can create AWS Managed Chef Servers (similar as IaC, set of steps using Ruby)
    • OpsWorks: AWS implementation of Chef with no servers to manage. Chef at a basic level, with little admin overhead
  • Generally we should only choose to use them if we are required to use Chef or Puppet, for example in case of a migration
  • Another use case would be a requirement for automation

OpsWorks Concepts

  • Stacks: core components of OpsWorks, containers of resources similar to CFN stacks
  • Layers: represent a specific function in a stack, example layer of load balancers, layer of database, layer of EC2 instances running a web application
  • Recipes and Cookbooks: they are applied to layers. We can use them to install packages, deploy applications, run scripts, perform reconfigurations. Cookbooks are collections of recipes which can be stored on GitHub
  • Lifecycle Events: special events which run on a layer, examples:
    • Setup
    • Configure: generally executed when instances are removed or added to the stack. It will run on all instances, including already existing ones
    • Deploy
    • Undeploy
    • Shutdown
  • Instances: compute instances (EC2 instances or on-premise servers). They can be:
    • 24/7 instances: started manually
    • Time-Based instances: configured to start and stop on a schedule
    • Load-Based instances: turn on or off based on system metrics (similar to ASG)
  • Instance auto-healing: OpsWorks automatically restarts instances if they fail for some reason
  • Apps: they can be stored in repositories such as S3. Each app is represented by an OpsWorks App, which specifies the application type and contains any information needed to deploy the app

AWS Systems Manager (SSM)

  • It is a product which lets us manage and control AWS and on-premises infrastructure
  • SSM is agent based, which means an agent needs to be installed on Windows and Linux based AMIs
  • SSM manages inventory (which applications are installed, files, network config, hardware details, services, etc.) and can patch assets
  • It can also run commands and manage desired state of instances (example: block certain ports)
  • It provides a parameters store for configurations and secrets
  • Finally, it provides Session Manager, used to securely connect to EC2 instances even in private VPCs

Agent Architecture

  • An instance needs the SSM agent installed in order to be managed by the service
  • It also needs an EC2 instance role attached to it which allows communication with the service
  • Instances require connectivity to the SSM service. This can be done via IGW or VPCE
  • On the on-premises side we need to create managed instance activations
  • For each activation we will receive an activation code and an activation ID

SSM Run Command

  • It allows us to run commands on managed instances
  • It takes a Command Document and executes it using the agent installed on the instance
  • It does this without using SSH/RDP protocol
  • Command documents can be executed on individual instances, multiple instances based on tags or resource groups
  • Command documents can be reused and they can have input parameters
  • Rate Control: if we are running commands on a lot of instances, we can control the rollout by using rate control. It can be based on:
    • Concurrency: how many instances run the command at a time
    • Error Threshold: defines how many individual commands can fail before the run is stopped
  • Output of commands can be sent to S3 or we can send SNS notifications
  • Commands can be integrated with EventBridge
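Run Command executes Command Documents; a small custom one (schemaVersion 2.2) could look like the sketch below. The document name, parameter and command are illustrative, not from the notes:

```yaml
# Hypothetical custom SSM Command Document for Linux managed instances
schemaVersion: "2.2"
description: "Restart a service on managed instances"
parameters:
  ServiceName:
    type: String
    description: "Name of the systemd service to restart"
    default: "nginx"          # hypothetical default
mainSteps:
  - action: aws:runShellScript
    name: restartService
    inputs:
      runCommand:
        - "systemctl restart {{ ServiceName }}"   # parameter substitution
```

The `{{ ServiceName }}` syntax is how input parameters make documents reusable across many invocations.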

SSM Patch Manager

  • Allows to patch Linux and Windows instances running in EC2 or on-premises
  • Concepts:
    • Patch Baseline: we can have many of these defined. Defines what should be installed (what patches, what hot-fixes)
    • Patch Groups: what resources we want to patch
    • Maintenance Windows: time slots when patches can take place
    • Run Command: base level functionality to perform the patching process. The command used for patching is AWS-RunPatchBaseline
    • Concurrency and Error Threshold: (see above)
    • Compliance: after patches are applied, Systems Manager can check patch compliance against what is expected
  • Patch Baselines:
    • For Linux: AWS-[OS]DefaultPatchBaseline - explicitly defines patches, example: AWS-AmazonLinux2DefaultPatchBaseline, AWS-UbuntuDefaultPatchBaseline - contain security updates and any critical update
    • For Windows: AWS-DefaultPatchBaseline, AWS-WindowsPredefinedPatchBaseline-OS, AWS-WindowsPredefinedPatchBaseline-OS-Application
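Beyond the predefined baselines, a custom patch baseline can be defined in CloudFormation; the sketch below approves Amazon Linux 2 security patches after 7 days. The baseline name and patch group are hypothetical:

```yaml
# Sketch of a custom patch baseline (CloudFormation)
MyPatchBaseline:
  Type: AWS::SSM::PatchBaseline
  Properties:
    Name: custom-amazonlinux2-baseline   # hypothetical name
    OperatingSystem: AMAZON_LINUX_2
    PatchGroups:
      - web-servers                      # hypothetical patch group tag value
    ApprovalRules:
      PatchRules:
        - ApproveAfterDays: 7            # auto-approve a week after release
          ComplianceLevel: CRITICAL      # severity reported if missing
          PatchFilterGroup:
            PatchFilters:
              - Key: CLASSIFICATION
                Values: [Security]
```

Instances are matched to the baseline through the `Patch Group` tag on the instance, whose value must equal one of the listed patch groups.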
