Leveraging Terraform to reduce cloud infrastructure costs

Luis Barral
December 19, 2023
Cloud
Integration

The rise of cloud hosting and its operational implications


Nowadays, many organizations are reaping the benefits of hosting their systems on the cloud.

However, this model can become unsustainable if covering its monthly operational cost is not viable for your business. Even though it is possible to rapidly increase or decrease cloud resources, you will still end up paying for any unutilized capacity.

Additionally, adopting the cloud will require your business to rethink its work processes, as inactive resources and over-provisioning easily become commonplace.

Unnecessary costs add up: on average, around 20% of total annual cloud spend is attributed to wasted resources.

In this article, we will provide a practical example of how we approached these challenges on a recent project. Specifically, you will learn how HashiCorp Terraform allows for the automation, standardization and governance of the cloud infrastructure supply, all while making it possible to reduce unnecessary expenses.

A brief introduction to Terraform


Terraform is an open-source infrastructure as code software tool created by HashiCorp. With Terraform, users define and provide data center infrastructure using a declarative configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON.

Terraform manages external resources (such as public and private cloud infrastructure, network appliances, software as a service, and platform as a service) through "providers". HashiCorp maintains an extensive list of official providers, and Terraform can also integrate with community-developed ones. Users interact with providers by declaring resources or by calling data sources.

Rather than using imperative commands to provision resources, Terraform uses declarative configuration to describe the desired final state. Once a user invokes Terraform on a given resource, Terraform performs the CRUD actions needed on the user's behalf to reach that state. The infrastructure code can be organized into modules, promoting reusability and maintainability.
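As a minimal illustration of this declarative style (the region, bucket name, and resource labels below are hypothetical examples, not values from our project), a Terraform configuration might look like this:

```hcl
# Configure the AWS provider (region is an example value)
provider "aws" {
  region = "us-east-1"
}

# Declare the desired final state: one S3 bucket.
# Terraform compares this to the real infrastructure and
# creates, updates, or deletes resources to match it.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket-name" # must be globally unique
}

# Data sources read existing infrastructure without managing it
data "aws_caller_identity" "current" {}
```

Running `terraform plan` shows the actions Terraform would take to reach this state, and `terraform apply` executes them.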

Our client and project background

Our client for this cloud infrastructure implementation project is an established organization with over 15 years of market presence. Their business offers tech support services and quality management software solutions for exceptional companies seeking to be competitive around the globe. After experiencing issues and significant delays when planning for a new version of one of their cloud-based software products, they decided to hire nearshore experts to build their software architecture quickly and without errors, ultimately choosing our team of architects to carry out their project.

Project requirements and challenges

This project presented us with an interesting challenge: the goal was to design and implement a microservices-based infrastructure that also included the frontend. It was also necessary to set up the repositories, integration processes, and continuous delivery pipelines to deploy the applications.

After performing an in-depth analysis of the requirements, we proposed working with AWS, since it is the leading cloud solution provider and the one our teams have the most experience with:

Cloud Providers Market Share

The proposed solution was based on a three-layer architecture consisting of seven microservices, implemented as a serverless solution with AWS Fargate.

This setup allowed us to scale each individual microservice horizontally without affecting the rest of them, and without the need to manage servers. These services are accessed via a load balancer that handles redirection of incoming Internet traffic:

Proposed Cloud Infrastructure Solution

Each service has its own Postgres database, which can only be accessed through its designated service, as enforced by specific AWS policies.
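One common way to express this kind of restriction in Terraform is with security groups: the database's security group only allows ingress from the security group of its own service. The sketch below assumes a hypothetical "orders" service and that `var.vpc_id` and the service's security group are defined elsewhere:

```hcl
# Security group for the microservice's database.
# Ingress is allowed only from the service's own security group,
# so no other service (and nothing on the public Internet) can
# reach this Postgres instance.
resource "aws_security_group" "orders_db" {
  name   = "orders-db-sg" # hypothetical name
  vpc_id = var.vpc_id

  ingress {
    from_port       = 5432 # Postgres port
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.orders_service.id]
  }
}
```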

We also designed the structure to store the frontend application that consumes these services, based on S3 and CloudFront (AWS's CDN service).
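A simplified sketch of that frontend setup in Terraform follows; bucket and origin names are hypothetical, and a real distribution would typically add an origin access control and a custom domain certificate:

```hcl
# S3 bucket holding the frontend build artifacts
resource "aws_s3_bucket" "frontend" {
  bucket = "example-frontend-bucket" # hypothetical name
}

# CloudFront distribution serving the bucket through the CDN
resource "aws_cloudfront_distribution" "frontend" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    domain_name = aws_s3_bucket.frontend.bucket_regional_domain_name
    origin_id   = "frontend-s3"
  }

  default_cache_behavior {
    target_origin_id       = "frontend-s3"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```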

Optimal setup for prioritizing cloud cost reduction

When the time came to implement this architecture, there were some key aspects to consider. Our client placed a strong emphasis on cost savings, as they needed to roll out the architecture progressively and across different environments (development, testing, production, and so on).

This was an important consideration: the typical approach would have been to implement the entire infrastructure directly on the client's AWS account, but that would have been unnecessarily costly, given that to get started they only needed the development environment and a few specific services.

Faced with this issue, our team analyzed the alternatives and decided to use infrastructure as code: instead of creating AWS resources manually through the console, we would define them as code and have a tool create the resources for us whenever they are needed.

The perfect tool for this job was Terraform, which allows you to write all the infrastructure in a modular, organized way:

  • Networking, firewalls, security, DNS services and certificates
  • Backend: Fargate services, databases, AWS cluster, among others
  • Services associated with the frontend application
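A root module can then wire these layers together. The module names and output variables below are hypothetical, but they show the kind of layout this organization leads to:

```hcl
# Root module composing the three layers of the project

module "networking" {
  source = "./modules/networking" # VPC, firewalls, DNS, certificates
}

module "backend" {
  source     = "./modules/backend" # Fargate services, databases, cluster
  vpc_id     = module.networking.vpc_id
  subnet_ids = module.networking.private_subnet_ids
}

module "frontend" {
  source = "./modules/frontend" # S3 bucket and CloudFront distribution
}
```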

This way, we ensured that the Terraform project could scale in an organized manner and be maintained by our client’s technical team.

A very useful Terraform feature is input variables, which let you parameterize configurations. This helped us tremendously in creating a distinct resource configuration for each work environment.

For example:

  • For the development environment, the microservices are created with minimal specifications, which significantly reduces costs.
  • For the production environment, the client can specify how much memory and CPU each service should use, as well as the number of instances running simultaneously.
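One way to sketch this per-environment sizing (variable names and values are illustrative, not our client's actual configuration) is to give each sizing variable a minimal default and override it from a production variable file:

```hcl
# variables.tf — per-service sizing, with minimal defaults
# that keep the development environment cheap

variable "service_cpu" {
  type    = number
  default = 256 # 0.25 vCPU, the smallest Fargate spec
}

variable "service_memory" {
  type    = number
  default = 512 # MiB
}

variable "desired_count" {
  type    = number
  default = 1 # a single task per service in development
}

# prod.tfvars — production overrides, applied with:
#   terraform apply -var-file=prod.tfvars
#
# service_cpu    = 1024
# service_memory = 2048
# desired_count  = 3
```

With this pattern, running `terraform apply` with no extra flags provisions the low-cost development sizing, while production deployments pass the overrides explicitly.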

Terraform's straightforward knowledge transfer process


Once all the code necessary for executing the required infrastructure was written, and after running a series of successful tests, we were ready to begin the delivery and implementation process.

Terraform is very simple to use when it comes to executing the code. A couple of meetings with our client's technical team were enough for them to install and configure the tool on their computers and create a couple of services in the development environment.

This way, they progressively began building out their own business model without having an entire infrastructure created but unused, which allowed them to perform gradual cost analysis based only on the services already created in AWS.

Bringing it all together


Terraform allowed us to automate our client's infrastructure, letting them decide what to provision, and when and how to do it. They will also be able to create additional work environments beyond those configured by our team, and even migrate the whole setup to a new AWS account.

Another of Terraform’s biggest benefits is that it allows you to automatically delete an entire development environment along with its associated resources. This solves a very common problem that affects thousands of users, which is unintentionally keeping active AWS resources and therefore generating high, unnecessary expenses due to services not being properly disabled.

Need help implementing your cloud infrastructure on your upcoming project? Get in touch with our team to discuss your specific goals, explore the ideal setup for your needs, and learn how to reduce project costs.
