Develop Infrastructure as Code with CloudFormation

Yunus Kılıç · Analytics Vidhya · Dec 30, 2019

The software development industry changes very rapidly. Since the term DevOps became popular, writing code alone is no longer enough: you should automate the software development process as much as possible. In this post, I will describe how to define our infrastructure just by writing code.

“Infrastructure as code is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools”.[1]

Why do we need infrastructure as code?

  • After moving to a containerized world, infrastructure is created, deleted, and changed much more often.
  • Keeping track of infrastructure settings also becomes very hard, because there are tons of different settings. Keeping these settings in code helps to manage that complexity.

In this post, I will use CloudFormation to define infrastructure.

“AWS CloudFormation provides a common language for you to model and provision AWS and third party application resources in your cloud environment. AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS and third party resources.”[2]

I drew the architecture diagram below to illustrate the final setup.

Let’s define the components that appear inside the diagram.

Region: a separate geographic area that AWS uses to house its infrastructure. Regions are distributed around the world so that customers can choose the one closest to them to host their cloud infrastructure.[3]

Availability Zone (AZ): the logical building block that makes up an AWS Region. There are currently 69 AZs, which are isolated locations (data centers) within a region.[3]

VPC: enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.[4]

Subnet: a “part of the network”, in other words, a part of an Availability Zone. Each subnet must reside entirely within one Availability Zone and cannot span zones.[5]

For simplicity, I will create 1 VPC, 2 public subnets, and 2 private subnets. The subnets stand in different Availability Zones.

PS: The commands used to package and deploy the template:

$ sam package --template-file template.yaml --s3-bucket BUCKETNAME --output-template-file packaged.yaml
$ sam deploy --template-file FILEPATH/packaged.yaml --stack-name test --capabilities CAPABILITY_IAM
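
PS: The snippets in the rest of this post are fragments of a single template.yaml. As a rough sketch of the surrounding skeleton (not shown in the original post), a SAM-compatible template needs a header roughly like the one below; the Transform line is what lets CloudFormation understand the AWS::Serverless::Function resources used later, and every resource definition lives under Resources:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Network, cache, DB and Lambda resources defined as code

Parameters:
  # parameter definitions (next section) go here

Resources:
  # VPC, subnet, ElastiCache, RDS and Lambda resources go here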

First of all, let’s define the parameters below, which we will use later.

Parameters:
  EnvironmentName:
    Description: An environment name that is prefixed to resource names
    Type: String
    Default: Test

  VpcCIDR:
    Description: Please enter the IP range (CIDR notation) for this VPC
    Type: String
    Default: 10.0.0.0/16

  PublicSubnet1CIDR:
    Description: Please enter the IP range (CIDR notation) for the public subnet in the first Availability Zone
    Type: String
    Default: 10.0.1.0/24

  PublicSubnet2CIDR:
    Description: Please enter the IP range (CIDR notation) for the public subnet in the second Availability Zone
    Type: String
    Default: 10.0.2.0/24

  PrivateSubnet1CIDR:
    Description: Please enter the IP range (CIDR notation) for the private subnet in the first Availability Zone
    Type: String
    Default: 10.0.3.0/24

  PrivateSubnet2CIDR:
    Description: Please enter the IP range (CIDR notation) for the private subnet in the second Availability Zone
    Type: String
    Default: 10.0.4.0/24

PS: In 10.0.0.0/16, each of the four numbers represents 8 bits, and the /16 means the first 16 bits (the first two numbers) are fixed as the network prefix. The remaining 16 bits can vary, so 256*256 = 65,536 IP addresses can be assigned.

CREATING VPC

VPC:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: !Ref VpcCIDR
    EnableDnsSupport: true
    EnableDnsHostnames: true
    Tags:
      - Key: Name
        Value: !Ref EnvironmentName

Let’s connect this VPC to the internet. An Internet Gateway is used to reach the internet from the VPC.

InternetGateway:
  Type: AWS::EC2::InternetGateway
  Properties:
    Tags:
      - Key: Name
        Value: !Ref EnvironmentName

InternetGatewayAttachment:
  Type: AWS::EC2::VPCGatewayAttachment
  Properties:
    InternetGatewayId: !Ref InternetGateway
    VpcId: !Ref VPC

CREATING SUBNETS

PublicSubnet1:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [ 0, !GetAZs '' ]
    CidrBlock: !Ref PublicSubnet1CIDR
    MapPublicIpOnLaunch: true
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Public Subnet (AZ1)

PublicSubnet2:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [ 1, !GetAZs '' ]
    CidrBlock: !Ref PublicSubnet2CIDR
    MapPublicIpOnLaunch: true
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Public Subnet (AZ2)

PrivateSubnet1:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [ 0, !GetAZs '' ]
    CidrBlock: !Ref PrivateSubnet1CIDR
    MapPublicIpOnLaunch: false
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Private Subnet (AZ1)

PrivateSubnet2:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [ 1, !GetAZs '' ]
    CidrBlock: !Ref PrivateSubnet2CIDR
    MapPublicIpOnLaunch: false
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Private Subnet (AZ2)

PublicSubnet1 and PrivateSubnet1 stand in the first Availability Zone, while PublicSubnet2 and PrivateSubnet2 stand in the second.

Public subnets should be connected to the internet. To handle this, we need a route table.

PublicRouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC
    Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Public Routes

DefaultPublicRoute:
  Type: AWS::EC2::Route
  DependsOn: InternetGatewayAttachment
  Properties:
    RouteTableId: !Ref PublicRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref InternetGateway

PublicSubnet1RouteTableAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PublicRouteTable
    SubnetId: !Ref PublicSubnet1

PublicSubnet2RouteTableAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PublicRouteTable
    SubnetId: !Ref PublicSubnet2

With this default route, any traffic from the associated public subnets destined outside the VPC is routed to the Internet Gateway.

PS: If you want to do the same thing for the private subnets, you should use a NAT Gateway, as sketched below.
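
A minimal sketch of what that could look like (these resources are my own addition, not part of this post's template): an Elastic IP, a NAT Gateway placed in a public subnet, and a private route table that sends outbound traffic from the private subnets through it.

NatGatewayEIP:
  Type: AWS::EC2::EIP
  DependsOn: InternetGatewayAttachment
  Properties:
    Domain: vpc

NatGateway:
  Type: AWS::EC2::NatGateway
  Properties:
    AllocationId: !GetAtt NatGatewayEIP.AllocationId
    SubnetId: !Ref PublicSubnet1

PrivateRouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC

DefaultPrivateRoute:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref PrivateRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    NatGatewayId: !Ref NatGateway

PrivateSubnet1RouteTableAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PrivateRouteTable
    SubnetId: !Ref PrivateSubnet1
    # a similar association would be needed for PrivateSubnet2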

CREATING ELASTICACHE INSTANCE WITH CODE

ServerlessSecurityGroup:
  DependsOn: VPC
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: SecurityGroup for Serverless Functions
    VpcId:
      Ref: VPC

ServerlessStorageSecurityGroup:
  DependsOn: VPC
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Ingress for Redis Cluster
    VpcId:
      Ref: VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: '6379'
        ToPort: '6379'
        SourceSecurityGroupId:
          Ref: ServerlessSecurityGroup

ServerlessCacheSubnetGroup:
  Type: AWS::ElastiCache::SubnetGroup
  Properties:
    Description: "Cache Subnet Group"
    SubnetIds:
      - Ref: PrivateSubnet1

ElasticCacheCluster:
  DependsOn: ServerlessStorageSecurityGroup
  Type: AWS::ElastiCache::CacheCluster
  Properties:
    AutoMinorVersionUpgrade: true
    Engine: redis
    CacheNodeType: cache.t2.micro
    NumCacheNodes: 1
    VpcSecurityGroupIds:
      - "Fn::GetAtt": ServerlessStorageSecurityGroup.GroupId
    CacheSubnetGroupName:
      Ref: ServerlessCacheSubnetGroup

For security reasons, I put the ElastiCache instance inside a private subnet, so we need a security group that opens port 6379 for Redis connections coming from the functions' security group.

CONNECTING TO REDIS FROM A LAMBDA FUNCTION

CacheClientFunction:
  Type: AWS::Serverless::Function
  Properties:
    Tracing: Active
    CodeUri: bin/cacheClient
    Handler: cacheClient
    Runtime: go1.x
    Role: !GetAtt RootRole.Arn
    VpcConfig:
      SecurityGroupIds:
        - Ref: ServerlessSecurityGroup
      SubnetIds:
        - Ref: PublicSubnet1
    Environment:
      Variables:
        redis_url: !GetAtt ElasticCacheCluster.RedisEndpoint.Address
        redis_port: !GetAtt ElasticCacheCluster.RedisEndpoint.Port

The Lambda function stands in the public subnet. We set redis_url and redis_port from the ElastiCache cluster created by our definition, and in the application code we use these environment variables to connect to Redis.

!!!Important!!!

The Lambda function requires some role and policy setup; otherwise, creating the stack will fail with an error. Let's create the role and policy with code.

SampleManagedPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Sid: AllowAllUsersToListAccounts
          Effect: Allow
          Action:
            - ec2:CreateNetworkInterface
            - ec2:DescribeNetworkInterfaces
            - ec2:DeleteNetworkInterface
            - xray:PutTraceSegments
          Resource: "*"

RootRole:
  Type: 'AWS::IAM::Role'
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    Path: /
    ManagedPolicyArns:
      - !Ref SampleManagedPolicy

The policy above grants only the minimum access the function needs: the network-interface permissions required to run inside a VPC, plus X-Ray tracing.
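
As an alternative (my own suggestion, not part of the original template), AWS publishes managed policies that cover roughly the same permissions, so the role could reference those instead of a hand-written policy:

RootRole:
  Type: 'AWS::IAM::Role'
  Properties:
    # AssumeRolePolicyDocument stays the same as above
    ManagedPolicyArns:
      # ENI permissions needed by a Lambda function attached to a VPC
      - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
      # xray:PutTraceSegments and related permissions for Tracing: Active
      - arn:aws:iam::aws:policy/AWSXRayDaemonWriteAccess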

Cache Client code:

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/go-redis/redis"
)

func HandleRequest(ctx context.Context) (string, error) {
	// Connection details come from the environment variables set in the template.
	redisUrl := os.Getenv("redis_url")
	redisPort := os.Getenv("redis_port")
	client := redis.NewClient(&redis.Options{
		Addr:     fmt.Sprintf("%s:%s", redisUrl, redisPort),
		Password: "", // no password set
		DB:       0,  // use default DB
	})

	// Write a sample key and read it back to verify connectivity.
	client.Set("1", "1", 0)

	return client.Get("1").Result()
}

func main() {
	lambda.Start(HandleRequest)
}

CREATING DATABASE INSTANCE WITH CODE

ServerlessDBSecurityGroup:
  DependsOn: VPC
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Ingress for PostgreSQL RDS instance
    VpcId:
      Ref: VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: '5432'
        ToPort: '5432'
        SourceSecurityGroupId:
          Ref: ServerlessSecurityGroup

ServerlessDBSubnetGroup:
  DependsOn: ServerlessDBSecurityGroup
  Type: AWS::RDS::DBSubnetGroup
  Properties:
    DBSubnetGroupDescription: "DB Subnet Group"
    SubnetIds:
      - Ref: PrivateSubnet1
      - Ref: PrivateSubnet2

PostgresqlInstance:
  DependsOn: VPC
  Type: AWS::RDS::DBInstance
  Properties:
    AllocatedStorage: 30
    DBInstanceClass: db.t2.micro
    DBName: postgres
    Engine: postgres
    MasterUsername: CacheClient
    MasterUserPassword: ChangeIt2
    DBSubnetGroupName: !Ref ServerlessDBSubnetGroup
    VPCSecurityGroups:
      - "Fn::GetAtt": ServerlessDBSecurityGroup.GroupId

DbClientFunction:
  Type: AWS::Serverless::Function
  Properties:
    Tracing: Active
    CodeUri: bin/dbClient
    Handler: dbClient
    Runtime: go1.x
    Role: !GetAtt RootRole.Arn
    VpcConfig:
      SecurityGroupIds:
        - Ref: ServerlessSecurityGroup
      SubnetIds:
        - Ref: PublicSubnet1
    Environment:
      Variables:
        db_url: !GetAtt PostgresqlInstance.Endpoint.Address
        db_port: !GetAtt PostgresqlInstance.Endpoint.Port

Creating the DB is very similar to the cache. The important point is that the DB subnet group must span at least two Availability Zones, which is why it references both private subnets.
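
PS: If you also want automatic failover for the database itself (not used in this post's template), the instance can be made Multi-AZ with a single extra property, roughly like this:

PostgresqlInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    # ... same properties as above ...
    MultiAZ: true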

DB client code (the Entity type lives in a separate file of the same package):

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/postgres"
)

type MyEvent struct {
	Name string `json:"name"`
}

func HandleRequest(ctx context.Context, name MyEvent) (string, error) {
	// The DB endpoint comes from the environment variable set in the template.
	dbUrl := os.Getenv("db_url")
	dbURI := fmt.Sprintf("host=%s user=CacheClient dbname=postgres sslmode=disable password=ChangeIt2", dbUrl)
	fmt.Println(dbURI)
	db, err := gorm.Open("postgres", dbURI)
	if err != nil {
		return "err", err
	}

	// Create the table if needed, then persist the incoming name.
	db.AutoMigrate(&Entity{})
	ent := &Entity{}
	ent.Text = name.Name
	db.Save(ent)

	return fmt.Sprint(ent.ID), nil
}

func main() {
	lambda.Start(HandleRequest)
}

// entity.go (separate file, same package)
package main

import "github.com/jinzhu/gorm"

type Entity struct {
	gorm.Model
	Text string
}

All is done. We have now created the network, the cache instance, and the DB instance with code, wired their outputs into our serverless functions, and the cache and DB clients use the architecture created by that code.
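
One optional addition (my own, not in this post's template): an Outputs section makes the created resource identifiers easy to read from the console or from other stacks, for example:

Outputs:
  VPCId:
    Description: The ID of the created VPC
    Value: !Ref VPC
  RedisEndpoint:
    Description: Endpoint address of the ElastiCache cluster
    Value: !GetAtt ElasticCacheCluster.RedisEndpoint.Address
  PostgresEndpoint:
    Description: Endpoint address of the RDS instance
    Value: !GetAtt PostgresqlInstance.Endpoint.Address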

The full version of template.yaml:

https://github.com/yunuskilicdev/infrastructureascode

References:

[1] https://stackify.com/what-is-infrastructure-as-code-how-it-works-best-practices-tutorials/

[2] https://aws.amazon.com/cloudformation/

[3] https://cloudacademy.com/blog/aws-regions-and-availability-zones-the-simplest-explanation-you-will-ever-find-around/

[4] https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html

[5] https://www.infoq.com/articles/aws-vpc-explained/
