Development of a REST API the serverless way


Reynaldo Rodríguez

March 11, 2020

Today AWS is one of the leading cloud computing providers, and it is safe to say the most popular one. The main reason is its robust platform, which offers a wide range of computing services across app development, AI, robotics, security and even satellites.

The top reason to pick it as a cloud computing provider is that most of its services are built with scalability in mind, and its pricing model lets the end user pay only for the resources actually in use, with no commitment to a fixed-period license and no hidden fees in large contracts. This fits the serverless definition.

There is even a minimum usage threshold under which their services are free, the free tier. For example, you can start developing an API using API Gateway, Lambda, Cognito and RDS, and it will cost you $0. If you don't believe me, check for yourself:


https://aws.amazon.com/api-gateway/pricing/
https://aws.amazon.com/lambda/pricing/
https://aws.amazon.com/es/rds/free/
https://aws.amazon.com/es/cognito/pricing/


Great, right? So, how should you start the development process? The most common way is to go into each of the services in the AWS console and configure them by hand. This works, but it gives you no way to recreate the stack from scratch, say, to test it offline or deploy it as a new environment, and it lacks the versioning and code-based workflow you are probably used to.

This is where the Serverless Framework comes into play. It allows us to build and deploy serverless applications using any AWS service (and other providers such as Azure or Google Cloud), with everything described in code as templates.

So, let's put our hands to work. The plan is to build a small API that allows logged-in users to post on a common private wall. From AWS we will use Cognito to handle our users and auth flows, API Gateway to create and expose our REST API, and Lambda to run the JavaScript logic that resolves each endpoint.

First of all, you must have an AWS account and the Serverless Framework installed.

Once those are set up, you must grant the Serverless Framework access to AWS. You can do this by creating an AWS access key and configuring the credentials in Serverless.
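For example, once you have created an access key in the IAM console, you can register it with a single command (the key and secret values below are placeholders):

$ serverless config credentials --provider aws --key <ACCESS_KEY_ID> --secret <SECRET_ACCESS_KEY>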

Now that we are ready, let’s create a directory and initialize our serverless project by running:

$ mkdir wallpost && cd wallpost
$ serverless create --template aws-nodejs

 

This will create three files in the directory: a .gitignore, a handler.js containing a sample function, and a serverless.yml, which is the main file Serverless uses to build and deploy the resources. The serverless.yml also comes with helpful comments, which you should read before removing them.
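The generated handler.js contains a single sample function that returns a 200 response; it looks roughly like this (the exact message varies between framework versions):

'use strict';

// Sample function generated by the aws-nodejs template
module.exports.hello = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v1.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};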


The serverless.yml includes the basic configuration for building and deploying to AWS: a service name related to your app, a provider block tied to the AWS platform, and a function that is later deployed as a Lambda.
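Stripped of its comments, the generated serverless.yml boils down to something like this (here the service name is assumed to have been renamed to wallpost, matching the deploy output later in the post):

service: wallpost

provider:
  name: aws
  runtime: nodejs10.x

functions:
  hello:
    handler: handler.hello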


By default, when you deploy the app, the Serverless Framework creates the baseline resources it needs: a versioned Lambda and a log group for each function, an S3 bucket to handle the deployment artifacts, and a Lambda execution role required to invoke the Lambdas.


 

The next step is to set up the other AWS resources our serverless app is going to use. We do this with CloudFormation resource templates, a simple, declarative syntax for describing the resources, permissions and configuration that make up the application. The Serverless Framework merges these definitions into the CloudFormation stack it generates on deployment.

For that purpose, we create a folder in the project root called "resources", where we define each resource in its own file.

For Cognito, we create a cognito.yml file inside the resources folder with the templates to create a user pool, a user pool client, a user pool domain, and an endpoint authorizer based on this user pool:

Resources:
  UserPool:
    Type: 'AWS::Cognito::UserPool'
    Properties:
      UserPoolName: '${self:service}-${self:provider.stage}-user-pool'
      UsernameAttributes:
        - 'email'
      AutoVerifiedAttributes:
        - 'email'
      VerificationMessageTemplate:
        DefaultEmailOption: CONFIRM_WITH_LINK
  UserPoolClient:
    Type: 'AWS::Cognito::UserPoolClient'
    Properties:
      ClientName: '${self:service}-${self:provider.stage}-user-pool-client'
      UserPoolId:
        Ref: UserPool
      ExplicitAuthFlows:
        - 'ALLOW_USER_PASSWORD_AUTH'
        - 'ALLOW_REFRESH_TOKEN_AUTH'
      PreventUserExistenceErrors: 'ENABLED'
      SupportedIdentityProviders:
        - 'COGNITO'
  UserPoolDomain:
    Type: 'AWS::Cognito::UserPoolDomain'
    Properties:
      UserPoolId:
        Ref: UserPool
      Domain: '${self:service}-${self:provider.stage}'
  ApiGatewayAuthorizer:
    DependsOn:
      - ApiGatewayRestApi
    Type: 'AWS::ApiGateway::Authorizer'
    Properties:
      Name: EndpointAuthorizer
      IdentitySource: method.request.header.Authorization
      RestApiId:
        Ref: ApiGatewayRestApi
      Type: COGNITO_USER_POOLS
      ProviderARNs:
        - Fn::GetAtt: [UserPool, Arn]

 

For RDS, we create an rds.yml with the template for a PostgreSQL DB instance that reads its credentials from environment variables defined in the serverless project, plus, for the sake of simplicity, a security group rule granting public access, which lets us avoid a VPC with a paid NAT Gateway:

Resources:
  DBSecurityGroup:
    Type: AWS::RDS::DBSecurityGroup
    Properties: 
      DBSecurityGroupIngress: 
        -  CIDRIP: '0.0.0.0/0'
      GroupDescription: 'Group for Lambda Access'
  DBInstance:
    Type: AWS::RDS::DBInstance
    Properties:
      DBInstanceClass: db.t2.micro
      Engine: postgres
      EngineVersion: '11.5'
      AllocatedStorage: '20'
      DBInstanceIdentifier: '${self:service}-${self:provider.stage}-instance'
      DBName: '${self:service}_${self:provider.stage}_db'
      MasterUsername: '${self:provider.environment.DB_USERNAME}'
      MasterUserPassword: '${self:provider.environment.DB_PASSWORD}'
      DBSecurityGroups: 
        - Ref: DBSecurityGroup
      AvailabilityZone:
        Fn::Select:
          - 0
          - Fn::GetAZs: ''


 

Then we import these resources into our serverless project like this:

resources:
  - ${file(resources/cognito.yml)}
  - ${file(resources/rds.yml)}

 

With these imports we are declaring our AWS resources along with some dynamic variables coming from the serverless.yml file, which customize each resource for the environment configured at the project level. For example, '${self:service}-${self:provider.stage}-user-pool' resolves to wallpost-dev-user-pool when the wallpost service is deployed to the dev stage.

Now that we have the "static" AWS resources, it is time to build our endpoints, which will be used to authenticate, post on the wall and read the wall. For that we just need to extend the serverless.yml to create the new Lambdas and expose an endpoint for each.

We first add the stage and environment properties to the provider. This allows us to deploy the project to a particular environment and to define the database credentials. For the sake of simplicity we are entering the database credentials directly here, but they could also come from environment variables or be securely encrypted with KMS.

provider:
  name: aws
  runtime: nodejs10.x
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'us-east-1'}
  environment:
    DB_HOSTNAME:
      Fn::GetAtt:
        - DBInstance
        - Endpoint.Address
    DB_PORT:
      Fn::GetAtt:
        - DBInstance
        - Endpoint.Port
    DB_NAME: '${self:service}_${self:provider.stage}_db'
    DB_USERNAME: 'wallpost'
    DB_PASSWORD: '123456789'
    COGNITO_USER_POOL_ID:
      Ref: UserPool
    COGNITO_CLIENT_ID: 
      Ref: UserPoolClient

 

Next, we add the Lambdas and their endpoints with an individual package configuration. For each new Lambda, copy the default handler and rename it accordingly (signup.js, signin.js, getWall.js and post.js):

package:
  individually: true
  exclude:
    - '*'
    - '**/*'

functions:
  signup:
    handler: 'signup.handler'
    package:
      include:
        - signup.js
    events:
      - http:
          path: signup
          method: post
  signin:
    handler: 'signin.handler'
    package:
      include:
        - signin.js
    events:
      - http:
          path: signin
          method: post
  getWall:
    handler: 'getWall.handler'
    package:
      include:
        - getWall.js
        - node_modules/**
    events:
      - http:
          path: getWall
          method: get
          authorizer:
            type: COGNITO_USER_POOLS
            authorizerId:
              Ref: ApiGatewayAuthorizer
  post:
    handler: 'post.handler'
    package:
      include:
        - post.js
        - node_modules/**
    events:
      - http:
          path: post
          method: post
          authorizer:
            type: COGNITO_USER_POOLS
            authorizerId:
              Ref: ApiGatewayAuthorizer

 

Notice that the last two endpoints, getWall and post, have an authorizer property linking them to the endpoint authorizer we created in cognito.yml. That way, only these two endpoints are protected: API Gateway answers requests that lack a valid token with a 401 Unauthorized. The project structure will now look as follows:

[Screenshot: project structure showing resources/cognito.yml, resources/rds.yml, signup.js, signin.js, getWall.js, post.js and serverless.yml]

 

Now we are in good shape to start coding the Lambdas. Let's add the authentication logic. To interact with Cognito we will use aws-sdk, which is available globally in the Lambda runtime, so we don't need to package it. For the sign-up process, we write the following code in signup.js:

'use strict';

const AWS = require('aws-sdk');
const cognitoIdentityServiceProvider = new AWS.CognitoIdentityServiceProvider();

module.exports.handler = async event => {
  const body = JSON.parse(event.body);
  const { username, password } = body;
  if (!username || !password) {
    return response(400, 'You must specify the username and password');
  }
  
  return cognitoIdentityServiceProvider.signUp({
    Username: username,
    Password: password,
    ClientId: process.env.COGNITO_CLIENT_ID
  }).promise().then((result) => {
    return response(200, 'Signed up successfully, please check your email');
  }).catch((error) => {
    return response(error.statusCode, error.message);
  });
};

const response = (responseCode, message) => {
  return {
    statusCode: responseCode,
    body: JSON.stringify(
      {
        message,
      },
      null,
      2
    ),
  };
}

 

Here we are forwarding the username and password to the Cognito sign-up call, making sure that both client and server errors are handled properly. If all goes well, the endpoint will respond asking the user to check their email for verification.

For the sign-in process, we use the following inside signin.js:

'use strict';

const AWS = require('aws-sdk');
const cognitoIdentityServiceProvider = new AWS.CognitoIdentityServiceProvider();

module.exports.handler = async event => {
  const body = JSON.parse(event.body);
  const { username, password } = body;
  if (!username || !password) {
    return response(400, 'You must specify the username and password');
  }
  
  return cognitoIdentityServiceProvider.initiateAuth({
    AuthFlow: 'USER_PASSWORD_AUTH',
    AuthParameters: {
      USERNAME: username,
      PASSWORD: password
    },
    ClientId: process.env.COGNITO_CLIENT_ID
  }).promise().then((result) => {
    return response(200, result.AuthenticationResult);
  }).catch((error) => {
    return response(error.statusCode, error.message);
  });
};

const response = (responseCode, message) => {
  return {
    statusCode: responseCode,
    body: JSON.stringify(responseCode === 200 ? 
      {
        ...message,
      } :
      {
        message
      },
      null,
      2
    ),
  };
}

 

As with the previous endpoint, we are forwarding the credentials to the Cognito sign-in call. If the user is already confirmed, it will return the authorization tokens.
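A successful response body carries the AuthenticationResult fields from Cognito, roughly in this shape (token values shortened):

{
  "AccessToken": "eyJraWQiOi...",
  "ExpiresIn": 3600,
  "TokenType": "Bearer",
  "RefreshToken": "eyJjdHkiOi...",
  "IdToken": "eyJraWQiOi..."
}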

For the next two endpoints, which query the RDS Postgres database, we will need a client library, because the native alternative, the AWS Data API, does not support the free Postgres instance we are using in this example: it requires an Aurora database, AWS's highly performant, scalable, MySQL- and Postgres-compatible engine, which is also not available on the free tier.

The client library we are going to use is node-postgres. Let's add it to the project by initializing a package and installing the dependency:

$ npm init --yes
$ npm install pg


 

Now we can require this dependency in the Lambdas that interact with the database. Notice that the packaging configuration for those two endpoints already includes node_modules.

In the getWall.js we connect to the database, ensure the posts table is created and return the rows on that table:

const { Client } = require('pg');

module.exports.handler = async () => {
  try {
    const client = new Client({
      host: process.env.DB_HOSTNAME,
      database: process.env.DB_NAME,
      port: parseInt(process.env.DB_PORT, 10),
      user: process.env.DB_USERNAME,
      password: process.env.DB_PASSWORD,
    });

    await client.connect();

    const tableExists = await client
      .query('SELECT EXISTS (SELECT FROM pg_tables WHERE schemaname = \'public\' AND tablename  = \'posts\');')
      .then((result) => result.rows[0].exists);

    if (tableExists) {
      const result = await client.query('SELECT * FROM public.posts');
      await client.end();
      return response(200, { posts: result.rows });
    }

    await client.query(`
      CREATE TABLE public.posts (id serial, message text NOT NULL);
    `);

    await client.end();

    return response(200, { posts: [] });
  } catch (e) {
    // Error objects serialize to {} in JSON, so return the message instead
    return response(500, e.message);
  }
};

const response = (responseCode, message) => ({
  statusCode: responseCode,
  body: JSON.stringify(
    responseCode === 200 ? { ...message } : { message },
    null,
    2
  ),
});

 

In post.js we do almost the same, but instead of fetching the records from the table, we insert a single record:

const { Client } = require('pg');

module.exports.handler = async (event) => {
  try {
    const body = JSON.parse(event.body);
    const { message } = body;
    if (!message) {
      return response(400, 'You must specify the message');
    }

    const client = new Client({
      host: process.env.DB_HOSTNAME,
      database: process.env.DB_NAME,
      port: parseInt(process.env.DB_PORT, 10),
      user: process.env.DB_USERNAME,
      password: process.env.DB_PASSWORD,
    });

    await client.connect();

    const tableExists = await client
      .query('SELECT EXISTS (SELECT FROM pg_tables WHERE schemaname = \'public\' AND tablename  = \'posts\');')
      .then((result) => result.rows[0].exists);
    if (!tableExists) {
      await client.query(`
        CREATE TABLE public.posts (id serial, message text NOT NULL);
      `);
    }

    // Use a parameterized query so a malicious message can't inject SQL
    const result = await client.query(
      'INSERT INTO public.posts (message) VALUES ($1) RETURNING posts.id, posts.message',
      [message],
    );

    await client.end();

    return response(200, { posts: result.rows });
  } catch (e) {
    // Error objects serialize to {} in JSON, so return the message instead
    return response(500, e.message);
  }
};

const response = (responseCode, message) => ({
  statusCode: responseCode,
  body: JSON.stringify(
    responseCode === 200 ? { ...message } : { message },
    null,
    2
  ),
});

 

And that's it. The next step is to deploy our serverless project. By default it will deploy to a dev environment, and internally the framework will translate our stack into an AWS CloudFormation template:

$ serverless deploy

 

The AWS resources for our CloudFormation stack are now created, and the final diagram shows them:

[Diagram: the AWS resources created by the CloudFormation stack]

 

Finally, after deploying, the Serverless CLI will output the service information with the endpoints we can consume:

Service Information
service: wallpost
stage: dev
region: us-east-1
stack: wallpost-dev
resources: 34
api keys:
  None
endpoints:
  POST - https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/signup
  POST - https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/signin
  GET - https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/getWall
  POST - https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/post
functions:
  signup: wallpost-dev-signup
  signin: wallpost-dev-signin
  getWall: wallpost-dev-getWall
  post: wallpost-dev-post
layers:
  None
Serverless: Removing old service artifacts from S3...
Serverless: Run the "serverless" command to setup monitoring, troubleshooting and testing.

 

Let’s test using Postman. First, let’s sign up:

[Screenshot: Postman POST request to the signup endpoint]
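If you prefer the command line, the equivalent request looks roughly like this (the URL comes from the deploy output above; the email and password are placeholders):

$ curl -X POST https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/signup \
    -H 'Content-Type: application/json' \
    -d '{"username": "user@example.com", "password": "Passw0rd!"}'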

 

The endpoints already surface the validation errors coming from Cognito, so before signing in we need to verify the account by clicking the link in the email we received:

[Screenshot: verification email with the confirmation link]

 

After confirming, we can sign in like this:

[Screenshot: Postman POST request to the signin endpoint]
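The same request via curl, with placeholder credentials:

$ curl -X POST https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/signin \
    -H 'Content-Type: application/json' \
    -d '{"username": "user@example.com", "password": "Passw0rd!"}'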

 

The response will include all the auth tokens provided by Cognito. For the next two endpoints, which require authentication, we need to take the IdToken and send it as a Bearer token in the Authorization header, like this:

[Screenshot: Authorization header set with the IdToken in Postman]

 

After that, we can post on the wall by consuming the post endpoint, like this:

[Screenshot: Postman POST request to the post endpoint]
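From the command line this would look roughly as follows, where <ID_TOKEN> is a placeholder for the IdToken returned on sign-in:

$ curl -X POST https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/post \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer <ID_TOKEN>' \
    -d '{"message": "Hello wall!"}'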

 

Finally, to retrieve all the wall posts, we consume the getWall endpoint, like this:

[Screenshot: Postman GET request to the getWall endpoint]
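Or, roughly, via curl with the same Authorization header:

$ curl https://tktdwsch71.execute-api.us-east-1.amazonaws.com/dev/getWall \
    -H 'Authorization: Bearer <ID_TOKEN>'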

 

Job well done! Recapping: we just developed a small REST API using a set of AWS resources that we designed, linked and managed automatically within our serverless project, without paying for anything.

Just remember not to update anything on AWS outside the serverless project, because the changes will drift out of sync and the project might not be deployable later. If you want to make updates, check each resource's documentation to see which property changes can be applied with a stack update and which require regenerating the stack.

You can find this project's source code on my GitHub. Beyond this example, I've added an ESLint configuration just to keep the code clean.

Reynaldo Rodríguez

Reynaldo is a Computer Science graduate with eight years of experience in web and mobile development working with the latest technologies. As a senior full-stack developer on the VAIRIX team, he works on a variety of projects for US clients. Reynaldo is currently in charge of designing solutions for our projects using technologies like React, React Native and Node.
