With any AWS implementation, time really is money.
You need to get things moving as quickly and efficiently as possible from the get-go, but where do you start when you’re working with Infrastructure as a Service (IaaS)-type functions in the public cloud? Let’s look at a sample AWS implementation guide together.
Don’t forget that IaaS refers to the use of the cloud for hosting the networking, servers, and different devices required to provide a full Information Technology (IT) infrastructure. IaaS is just one of the many potential ‘as a service’ offerings that we have today with the cloud, but a prevalent one.
The AWS root account and IAM
When you sign up for AWS (typically with a Free Tier account), you provide the email address that is used to log in to AWS for root account access.
This email address represents the most powerful account in the Identity and Access Management (IAM) system of AWS. It can not only configure every aspect of every service in AWS but also manage things like payment and billing.
As a result of the incredible power of this account, it’s recommended that you use it as sparingly as possible. For example, you should quickly create another account for yourself in AWS IAM and make that account an AWS administrator account.
You can then use that account instead of the AWS root account for the day-to-day administration of your infrastructure. You might even create more accounts for yourself with varying privilege levels and use each one only for the tasks it has permission over.
If, for instance, you have an account with full administrative rights over the Simple Storage Service (S3), you’d use that account only when you need to work with S3. This approach is part of a robust security design known as the principle of least privilege.
Figure 1: The IAM Dashboard
Notice in this example how I have:
- Deleted my root account access keys, so there is less chance of compromise
- Activated Multi-Factor Authentication on the root account
- Created individual IAM users that I can log in with, each having reduced privileges compared with the root account
- Used groups to assign permissions so that my IAM approach is scalable
- Applied an IAM password policy so that my users do not compromise security with easy-to-crack passwords
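To make the least-privilege idea concrete, here is a sketch of an IAM policy document granting full access to S3 and nothing else, built with Python's standard library. The policy itself is hypothetical; the `Version` string and `s3:*` action follow AWS's IAM policy language, and in practice you would attach a document like this to a group rather than to individual users:

```python
import json

# A hypothetical least-privilege policy: full S3 access, nothing else.
# The Version date and "s3:*" wildcard follow the IAM policy grammar.
s3_admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
        }
    ],
}

# Serialize the document to the JSON form the IAM console expects.
policy_json = json.dumps(s3_admin_policy, indent=2)
print(policy_json)
```

A user or group holding only this policy can administer S3 buckets and objects but cannot touch EC2, billing, or any other service, which is exactly the separation described above.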
Want the best people on your AWS implementation?
Tell us what you’re looking for and we’ll put together a job spec that’ll attract professionals with the skills and experience you need.
Your own Virtual Private Cloud
While it is amazing that you can have your own IT infrastructure in the cloud, you certainly want privacy (when required), and you need to have full control over your networking components. Amazon provides this capability thanks to the Virtual Private Cloud (VPC).
When you create your AWS account, AWS creates your default VPC for you. This default VPC consists of the following components:
- An IPv4 private address space that accommodates 65,536 private IPv4 addresses
- A default subnet in each of the Availability Zones in your AWS Region; each subnet accommodates 4,096 addresses
- A default Internet Gateway connected to your default VPC
- A default Security Group associated with the default VPC; this Security Group permits you to control traffic flows in and out of virtual machines (EC2 instances) you might create in AWS
- A default Network Access Control List (NACL) associated with your default VPC; this security structure permits you to control traffic into and out of your subnets
- A default set of DHCP options for your VPC
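The address counts above fall straight out of CIDR math: the default VPC uses a /16 block (65,536 addresses) and each default subnet is a /20 (4,096 addresses). A quick check with Python's standard `ipaddress` module, using 172.31.0.0/16 — the range AWS typically assigns to default VPCs:

```python
import ipaddress

# AWS typically assigns 172.31.0.0/16 to the default VPC.
vpc = ipaddress.ip_network("172.31.0.0/16")
print(vpc.num_addresses)   # 2**16 = 65,536 addresses

# Each default subnet is a /20 carved out of that block.
subnet = ipaddress.ip_network("172.31.0.0/20")
print(subnet.num_addresses)  # 2**12 = 4,096 addresses

# A /16 holds exactly sixteen /20 subnets, one per possible AZ and then some.
print(len(list(vpc.subnets(new_prefix=20))))  # 16
```

(AWS reserves the first four addresses and the last address in every subnet, so the usable count per /20 is slightly lower than 4,096 in practice.)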
Some AWS architects recommend leaving these default constructs intact and not using them for anything.
They design a new, custom VPC from the ground up with the exact specifications they need. Others gladly use the default VPC and modify it for their needs—it really is up to you, and I have done both successfully in the past.
Figure 2: A Default VPC
Need servers? No problem thanks to EC2!
When it comes to your server needs, the sky is the limit! AWS provides Amazon Machine Images (AMIs) to quickly spin up Elastic Compute Cloud (EC2) instances. You can easily size these virtual machines by choosing an appropriate hardware configuration (called an instance type) to ensure your servers get the required amount of RAM, CPU, disk, network capacity, and more.
One of the reasons that architecting servers on AWS is so exciting is the fact that you can easily scale your server footprint as demand increases, or even shrinks. This property is called elasticity and is a major reason why cloud technologies are so incredibly popular.
Who needs VMs anyway?
One very exciting area of AWS that is exploding in popularity is called serverless computing.
The primary serverless compute service in AWS is called Lambda. In this design model, you don’t need to worry about spinning up virtual machines or maintaining them at all; instead, AWS provides compute resources for you when you need them based on function calls from your various applications.
This solution almost sounds too good to be true, especially when you consider that it can be very affordable and scalable.
Lambda currently offers 1,000,000 free requests per month and up to 3.2 million seconds of compute time per month.
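The compute-time figure follows from the free tier's allowance of 400,000 GB-seconds per month: at the minimum 128 MB memory setting, that stretches to 3.2 million seconds. A quick back-of-the-envelope check (figures as of this writing; AWS adjusts pricing over time):

```python
# Lambda free tier arithmetic (figures as of this writing; subject to change).
free_gb_seconds = 400_000      # GB-seconds of compute granted per month
memory_gb = 128 / 1024         # minimum memory setting: 128 MB expressed in GB

free_seconds = free_gb_seconds / memory_gb
print(free_seconds)            # 3,200,000 seconds at 128 MB

# The same allowance at 1 GB of memory is only 400,000 seconds,
# so memory sizing directly trades off against free compute time.
print(free_gb_seconds / 1.0)
```

In other words, the "3.2 million seconds" holds only for the smallest functions; larger memory allocations consume the GB-second allowance proportionally faster.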
Where should we store stuff?
Another great advantage of using AWS for IaaS is the variety of options that exist for affordable and scalable storage.
Let’s run through some of the major storage services of AWS and make sure you understand the intent of each:
- Simple Storage Service (S3): S3 provides simple and flexible object storage with an easy-to-use web service interface to store and retrieve any amount of data from anywhere on the web. It is designed to deliver 99.999999999 percent (eleven nines) durability. You can use Amazon S3 for a vast number of purposes, such as:
- Primary storage for cloud-native applications
- A bulk repository, or “data lake,” for analytics
- A target for backup and recovery and disaster recovery
- Storage for serverless computing
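That eleven-nines durability figure has a concrete interpretation that AWS itself uses: store 10,000,000 objects, and you can expect to lose a single object roughly once every 10,000 years. The arithmetic:

```python
# Durability of 99.999999999% implies an annual loss
# probability of about 1e-11 per object.
annual_loss_prob = 1 - 0.99999999999

objects_stored = 10_000_000
expected_losses_per_year = objects_stored * annual_loss_prob
print(expected_losses_per_year)   # ~1e-4 objects lost per year

years_per_lost_object = 1 / expected_losses_per_year
print(years_per_lost_object)      # ~10,000 years per lost object
```

Note that durability (will my data survive?) is a separate guarantee from availability (can I reach it right now?), which S3 quotes at a much lower number of nines.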
- Elastic Block Store (EBS): EBS provides persistent block storage volumes for use with EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure. This permits EBS to offer high availability and durability. EBS volumes feature the consistent and low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes—all while paying a low price for only what you provision.
- Elastic File System (EFS): EFS provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud. Amazon EFS offers a simple interface that allows you to create and configure file systems quickly.
- Amazon Glacier: Glacier is a secure, durable, and extremely low-cost storage service for data archiving and long-term backup. With Glacier, you can:
- Reliably store large or small amounts of data for as little as $0.004 per gigabyte per month
- Save money compared to on-premises storage options
- Keep costs low while still meeting a range of retrieval needs
- Choose from three options for access to archives, from a few minutes to several hours
- Snowball: Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. With Snowball, you don’t need to write any code or purchase any hardware to transfer your data.
- AWS Storage Gateway: The Storage Gateway service seamlessly enables hybrid storage between on-premises storage environments and the AWS Cloud. Features include:
- High Performance: AWS Storage Gateway combines a multiprotocol storage appliance with highly efficient network connectivity to Amazon cloud storage services, delivering local performance with virtually unlimited scale.
- Hybrid Cloud Support: You can use it in remote offices and data centers for hybrid cloud workloads involving migration, bursting, and storage tiering.
Data, data, and more data
AWS also provides you with many options for database services. These include:
- Aurora: Amazon’s own cloud-native relational database engine, compatible with MySQL and PostgreSQL
- Relational Database Service (RDS): this service permits you to choose among several popular relational database engines, including SQL Server, Oracle, MariaDB, and more
- DynamoDB: a fast and flexible NoSQL database service
- ElastiCache: an in-memory, cloud-based caching service
- Redshift: Amazon’s data warehousing solution
- Database Migration Service (DMS): this service makes it simple to migrate data from your on-premises location to the cloud, or vice versa
With so many rich services at our fingertips, it’s no wonder that AWS popularity is surging. Remember, while we’ve explored the core services that you’ll find in any IaaS implementation, there are countless other services that could take your infrastructure to new and exciting heights.
More AWS talent than anyone else
Take a look at our database of pre-screened AWS professionals and take the first step toward landing the best administrators, developers, and consultants in the market.
About the author
Anthony Sequeira, CCIE No. 15626, is a seasoned trainer and author regarding various levels and tracks of Cisco, Microsoft, Juniper, and AWS certifications.
In 1994, Anthony formally began his career in the information technology industry with IBM in Tampa, Florida. He quickly formed his own computer consultancy and discovered his true passion—teaching and writing about information technologies. He is a full-time instructor at CBT Nuggets.