Today AWS is one of the leading cloud computing providers, and arguably the most popular one. The main reason is its robust platform, which offers a wide range of computing services across app development, AI, robotics, security, and even satellites.
A big part of its appeal is that most of its services are built with scalability in mind, and its pricing model lets the end user pay only for the resources actually in use, without committing to a fixed-period license or hidden fees in large contracts. This fits the serverless model well.
AWS also offers a free tier: below certain usage thresholds, many of its services cost nothing. For example, you can start developing an API using API Gateway, Lambda, Cognito, and RDS, and it will cost you $0. If you don’t believe me, you can check for yourself:
Great, right? So, how should you start the development process? The most common way is to open each service in the AWS console and configure it by hand. This works, but it doesn’t let you recreate the stack from scratch (say, to test it offline or deploy it as a new environment), and it lacks the versioning and code-based workflow you are normally used to.
This is where the Serverless Framework comes into play. It allows us to build and deploy serverless applications to AWS (among other providers, such as Azure or Google Cloud) entirely from code templates.
Now that we are ready, let’s create a directory and initialize our serverless project by running:
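If the snippet did not carry over, the usual commands look like this (assuming the Serverless CLI is installed globally and the project is named wallpost, as in the deploy output later on):

```shell
mkdir wallpost && cd wallpost
# Scaffold a Node.js serverless project in the current directory
serverless create --template aws-nodejs
```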
This will create three files in the directory: a .gitignore, a handler.js containing a sample function, and a serverless.yml, the main file the framework uses to build and deploy the resources. The serverless.yml also contains helpful comments, which you should read before removing.
The serverless.yml includes the basic configuration needed to build and deploy to AWS: a service name related to your app, a provider section pointing at the AWS platform, and a function that is later deployed as a Lambda.
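Stripped of its comments, the generated file boils down to something like this (runtime version may differ depending on your CLI version):

```yaml
# serverless.yml (trimmed) - service name matches this article's project
service: wallpost

provider:
  name: aws
  runtime: nodejs12.x

functions:
  hello:
    handler: handler.hello
```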
By default, when you deploy the app, the Serverless Framework creates the basic resources needed to build and run it. This includes a versioned Lambda and a log group for each function, an S3 bucket to hold the deployment artifacts, and an execution role the Lambdas need to run.
The next step is to set up the other AWS resources our serverless app will use. We do this with the AWS SAM template specification, which provides a simple, clean syntax to describe the functions, APIs, permissions, configurations, and events that make up a serverless application. This is translated into a CloudFormation stack on deployment.
For that purpose, we create a folder in the project root called “Resources”, where we define each resource in its own file using the SAM specification.
For Cognito, we create a cognito.yml file inside the Resources folder with templates for a user pool, a user pool client, a user pool domain, and an endpoint authorizer based on that user pool:
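A sketch of what that file could contain is shown below; the logical IDs and property values are assumptions, and `ApiGatewayRestApi` is the logical ID the Serverless Framework gives the REST API it creates:

```yaml
# Resources/cognito.yml - illustrative sketch
Resources:
  CognitoUserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      UserPoolName: ${self:service}-${self:provider.stage}-user-pool
      UsernameAttributes:
        - email
      AutoVerifiedAttributes:
        - email
  CognitoUserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: ${self:service}-${self:provider.stage}-client
      UserPoolId:
        Ref: CognitoUserPool
      ExplicitAuthFlows:
        - ALLOW_USER_PASSWORD_AUTH
        - ALLOW_REFRESH_TOKEN_AUTH
      GenerateSecret: false
  CognitoUserPoolDomain:
    Type: AWS::Cognito::UserPoolDomain
    Properties:
      Domain: ${self:service}-${self:provider.stage}
      UserPoolId:
        Ref: CognitoUserPool
  ApiGatewayAuthorizer:
    Type: AWS::ApiGateway::Authorizer
    Properties:
      Name: cognito-authorizer
      Type: COGNITO_USER_POOLS
      IdentitySource: method.request.header.Authorization
      RestApiId:
        Ref: ApiGatewayRestApi
      ProviderARNs:
        - Fn::GetAtt: [CognitoUserPool, Arn]
```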
For RDS, we create an rds.yml with a template for a PostgreSQL DB instance using environment variables provided by the serverless project and, for the sake of simplicity, a security group rule granting public access, which lets us avoid a VPC with a paid NAT Gateway.
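A sketch of that template could look like this; the instance sizing and the environment variable names are assumptions, and the public security group is for demo purposes only:

```yaml
# Resources/rds.yml - illustrative sketch (public access is demo-only)
Resources:
  PostgresSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow public Postgres access (demo only)
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 5432
          ToPort: 5432
          CidrIp: 0.0.0.0/0
  PostgresInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      DBInstanceClass: db.t2.micro   # free-tier eligible
      AllocatedStorage: 20
      DBName: ${self:provider.environment.DB_NAME}
      MasterUsername: ${self:provider.environment.DB_USER}
      MasterUserPassword: ${self:provider.environment.DB_PASSWORD}
      PubliclyAccessible: true
      VPCSecurityGroups:
        - Fn::GetAtt: [PostgresSecurityGroup, GroupId]
```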
Then we import these resources into our serverless project like this:
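The import is a short `resources` section in serverless.yml referencing each file (paths assume the Resources folder created earlier):

```yaml
# serverless.yml
resources:
  - ${file(Resources/cognito.yml)}
  - ${file(Resources/rds.yml)}
```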
With this, we are declaring our AWS resources using SAM plus dynamic variables coming from the serverless.yml file, which lets us customize the resources based on the environment configured at the project level.
Now that we have the “static” AWS resources, it is time to build our endpoints, which will be used to authenticate, post on the wall, and read the wall. For that, we just need to work on our serverless.yml to create new Lambdas and expose endpoints for them.
We first add the stage and environment properties to the provider. This lets us deploy the project to a particular environment and define the database credentials. For the sake of simplicity, we enter the database credentials directly here, but they could also come from the environment or be securely encrypted with KMS.
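A sketch of that provider section could look like this; the variable names and values are assumptions, and the `Fn::GetAtt`/`Ref` lookups assume the logical IDs used in the resource templates:

```yaml
provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  environment:
    DB_NAME: wallpost
    DB_USER: wallpost_admin
    DB_PASSWORD: super-secret-password   # demo only; prefer env vars or KMS
    DB_PORT: '5432'
    DB_HOST:
      Fn::GetAtt: [PostgresInstance, Endpoint.Address]
    COGNITO_CLIENT_ID:
      Ref: CognitoUserPoolClient
```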
Next, we add the lambdas and their endpoints with an individual package configuration. For each new lambda, copy the default handler and rename accordingly:
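The functions block could be sketched as follows; handler and path names are assumptions, and the `authorizerId` references the authorizer defined in cognito.yml:

```yaml
package:
  individually: true

functions:
  signup:
    handler: signup.signup
    events:
      - http:
          path: signup
          method: post
  signin:
    handler: signin.signin
    events:
      - http:
          path: signin
          method: post
  getWall:
    handler: getWall.getWall
    package:
      include:
        - node_modules/**
    events:
      - http:
          path: getWall
          method: get
          authorizer:
            type: COGNITO_USER_POOLS
            authorizerId:
              Ref: ApiGatewayAuthorizer
  post:
    handler: post.post
    package:
      include:
        - node_modules/**
    events:
      - http:
          path: post
          method: post
          authorizer:
            type: COGNITO_USER_POOLS
            authorizerId:
              Ref: ApiGatewayAuthorizer
```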
Notice that the last two endpoints, getWall and post, have an authorizer property linking to the endpoint authorizer we created inside cognito.yml. That way, only these two endpoints are protected. The project structure will then be as follows:
Now we are in good shape to start coding the Lambdas. Let’s add the authentication logic. To interact with Cognito, we will use aws-sdk, which is available by default in the Lambda runtime. For the signup process, we write the code in the signup.js file as follows:
Here we are forwarding the username and password to the Cognito sign up process, ensuring that both client and server errors are handled properly. If all goes well, the endpoint will report to check your email for email verification.
For the sign in process inside the signin.js we use the following:
As with the previous endpoint, we are forwarding the credentials to the Cognito sign in process. If the user is already confirmed, it will return the authorization tokens.
For the next two endpoints, which query an RDS Postgres database, we will need an adapter, because the native AWS Data API does not support the free Postgres instance we use in this example. It requires an Aurora database, AWS’s highly performant and scalable engine compatible with MySQL and Postgres, which is also not available in the free tier.
The adapter we are going to use is node-postgres. Let’s add this to the project by initializing a package in our serverless project and installing the dependency:
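The commands look like this; note that node-postgres is published on npm under the package name `pg`:

```shell
npm init -y
# node-postgres is published on npm as "pg"
npm install pg
```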
Now we can require this dependency in the Lambdas that interact with the database. Notice that we already set up the node_modules packaging configuration for these two endpoints.
In the getWall.js we connect to the database, ensure the posts table is created and return the rows on that table:
In the post.js we almost do the same, but instead of fetching the records on the table, we insert a single record:
And that’s it. The next step is to deploy our serverless project. By default, it deploys to a dev environment, and internally the framework translates our stack into an AWS CloudFormation stack:
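The deploy is a single command (pass `--stage` to target a different environment):

```shell
# Deploys to the default "dev" stage
serverless deploy
```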
The AWS resources for our CloudFormation stack are now created, and the final diagram shows them:
Finally, after deploying, the serverless cli will output the service information with the endpoints we can consume:
```
Service Information
service: wallpost
stage: dev
region: us-east-1
stack: wallpost-dev
resources: 34
api keys:
  None
endpoints:
  POST - https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/signup
  POST - https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/signin
  GET - https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/getWall
  POST - https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/post
functions:
  signup: wallpost-dev-signup
  signin: wallpost-dev-signin
  getWall: wallpost-dev-getWall
  post: wallpost-dev-post
layers:
  None
Serverless: Removing old service artifacts from S3...
Serverless: Run the "serverless" command to setup monitoring, troubleshooting and testing.
```
Let’s test using Postman. First, let’s sign up:
The endpoints are already handling the validation errors coming from Cognito, so before signing in, we need to verify the account by clicking the link on the received email.
After confirming we can sign in, like this:
The response will include all the auth tokens provided by Cognito. For the next two endpoints, which require authentication, we need to take the ID token and set it as a Bearer token in the Authorization header, like this:
After that we can post on the wall by consuming the post endpoint like this:
Finally, to retrieve all the wall posts, we consume the getWall endpoint like this:
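The same flow can also be exercised from the command line with curl; the credentials are placeholders, and `$ID_TOKEN` stands for the IdToken returned by the signin call:

```shell
# Base URL as printed by "serverless deploy"
BASE=https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev

# 1. Sign up (then confirm the account via the emailed link)
curl -X POST "$BASE/signup" \
  -d '{"username":"me@example.com","password":"Passw0rd!"}'

# 2. Sign in and grab the IdToken from the response
curl -X POST "$BASE/signin" \
  -d '{"username":"me@example.com","password":"Passw0rd!"}'

# 3. Post on the wall (protected endpoint)
curl -X POST "$BASE/post" \
  -H "Authorization: Bearer $ID_TOKEN" \
  -d '{"content":"Hello wall"}'

# 4. Read the wall (also protected)
curl "$BASE/getWall" -H "Authorization: Bearer $ID_TOKEN"
```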
Job well done! Recapping, we just developed a small REST API using a set of AWS resources that we declared, linked, and managed entirely within our serverless project, without paying for anything.
Just remember not to update anything on AWS outside the serverless project, because those changes will fall out of sync and the project might not be deployable later. If you need to make updates, check each resource’s documentation to see whether a given property change can be applied with a stack update or requires regenerating the stack.
You can find this project’s source code on my GitHub. Beyond this example, I’ve added an ESLint configuration just to keep the code clean.