Serverless GraphQL API with Hasura and AWS stack

Reynaldo Rodríguez

May 5, 2020

As we saw in our previous post, we can code and deploy a fully scripted REST API on AWS using the Serverless Framework. Today we will dive deeper by creating a GraphQL API boilerplate based on Hasura and the AWS stack. Hasura is a popular open source engine that auto-generates a GraphQL API, with subscriptions support, from a PostgreSQL database model. Previous knowledge of Serverless, Hasura and GraphQL is assumed.

We will also use Lambda microservices to enable some Hasura capabilities (Authentication, Remote Schema and Event Triggers) within the same stack. We will need other AWS resources as well: some simple ones like Cognito, Aurora RDS and ECS, and a few more complex ones like a VPC, NAT Gateways and an ELB. Together they give us full control over the stack and over how its components connect to each other and to the Internet.

The goal of this tutorial is to create a ready-to-use boilerplate for deploying a backend with Hasura on AWS. Just keep in mind that this stack includes some resources outside the AWS Free Tier, and they will be billed accordingly. After deploying, I will show you how to estimate the cost of the stack.

Let’s start as usual by initializing the serverless project by specifying the path where it will be created and the project name:


serverless create --template aws-nodejs --path graphqlApi --name graphqlApi && cd graphqlApi && npm i --save-dev hasura-cli serverless-dotenv-plugin


Now that we have the project initialized, we can fire up our favorite IDE to open it.

First let's add a .env file to hold the credentials for Hasura and the database. It will be picked up by serverless and, on first deployment, by the resources, setting these values:


HASURA_ADMIN_SECRET=tempestpass
DATABASE_USERNAME=tempest
DATABASE_PASSWORD=123456789


The next thing to do is to organize the project structure according to the services it will hold.

Let’s create the following folder structure:

- functions
  - cognito-triggers
  - event-triggers
  - remote-schema
    - mutations
    - queries
    - types
- migrations
- resources

Inside the resources folder, we are going to add the AWS resources as YAML templates. Here are the main resources and why we are using them:

- VPC (to group all project resources into a separate section of the AWS Cloud and provide them with network addressing).

- ECS (to build and deploy the container which will run Hasura).

- ELB (to secure and distribute traffic to the VPC).

- Cloudfront (to serve our instance through a distributed network across the globe).

- Cognito (to enable user authentication).

- RDS (to create an Aurora database which Hasura will connect to).

Let’s add each one of them:

For the VPC let's create vpc.yml. Here we define the template to create a Virtual Private Cloud with its own networking configuration, which includes two public subnets, two private subnets, two NAT gateways and proper routing between them:


Parameters:
  VPCCidrBlock:
    Type: 'String'
    Default: '10.192.0.0/16'
  PublicSubnet1CidrBlock:
    Type: 'String'
    Default: '10.192.10.0/24'
  PublicSubnet2CidrBlock:
    Type: 'String'
    Default: '10.192.11.0/24'
  PrivateSubnet1CidrBlock:
    Type: 'String'
    Default: '10.192.20.0/24'
  PrivateSubnet2CidrBlock:
    Type: 'String'
    Default: '10.192.21.0/24'

Resources:
  VPC:
    Type: 'AWS::EC2::VPC'
    Properties:
      CidrBlock:
        Ref: 'VPCCidrBlock'
      EnableDnsSupport: true
      EnableDnsHostnames: true

  InternetGateway:
    Type: 'AWS::EC2::InternetGateway'

  InternetGatewayAttachment:
    Type: 'AWS::EC2::VPCGatewayAttachment'
    Properties:
      InternetGatewayId:
        Ref: 'InternetGateway'
      VpcId:
        Ref: 'VPC'

  PublicSubnet1:
    Type: 'AWS::EC2::Subnet'
    Properties:
      VpcId:
        Ref: 'VPC'
      AvailabilityZone:
        Fn::Select:
          - 0
          - Fn::GetAZs: ""
      CidrBlock:
        Ref: 'PublicSubnet1CidrBlock'
      MapPublicIpOnLaunch: true

  PublicSubnet2:
    Type: 'AWS::EC2::Subnet'
    Properties:
      VpcId:
        Ref: 'VPC'
      AvailabilityZone:
        Fn::Select:
          - 1
          - Fn::GetAZs: ""
      CidrBlock:
        Ref: 'PublicSubnet2CidrBlock'
      MapPublicIpOnLaunch: true

  PrivateSubnet1:
    Type: 'AWS::EC2::Subnet'
    Properties:
      VpcId:
        Ref: 'VPC'
      AvailabilityZone:
        Fn::Select:
          - 0
          - Fn::GetAZs: ""
      CidrBlock:
        Ref: 'PrivateSubnet1CidrBlock'
      MapPublicIpOnLaunch: false

  PrivateSubnet2:
    Type: 'AWS::EC2::Subnet'
    Properties:
      VpcId:
        Ref: 'VPC'
      AvailabilityZone:
        Fn::Select:
          - 1
          - Fn::GetAZs: ""
      CidrBlock:
        Ref: 'PrivateSubnet2CidrBlock'
      MapPublicIpOnLaunch: false

  NatGateway1EIP:
    Type: 'AWS::EC2::EIP'
    DependsOn: 'InternetGatewayAttachment'
    Properties:
      Domain: 'vpc'

  NatGateway2EIP:
    Type: 'AWS::EC2::EIP'
    DependsOn: 'InternetGatewayAttachment'
    Properties:
      Domain: 'vpc'

  NatGateway1:
    Type: 'AWS::EC2::NatGateway'
    Properties:
      AllocationId:
        Fn::GetAtt: ['NatGateway1EIP', 'AllocationId']
      SubnetId:
        Ref: 'PublicSubnet1'

  NatGateway2:
    Type: 'AWS::EC2::NatGateway'
    Properties:
      AllocationId:
        Fn::GetAtt: ['NatGateway2EIP', 'AllocationId']
      SubnetId:
        Ref: 'PublicSubnet2'

  PublicRouteTable:
    Type: 'AWS::EC2::RouteTable'
    Properties:
      VpcId:
        Ref: 'VPC'

  DefaultPublicRoute:
    Type: 'AWS::EC2::Route'
    DependsOn: ['InternetGatewayAttachment']
    Properties:
      RouteTableId:
        Ref: 'PublicRouteTable'
      DestinationCidrBlock: '0.0.0.0/0'
      GatewayId:
        Ref: 'InternetGateway'

  PublicSubnet1RouteTableAssociation:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      RouteTableId:
        Ref: 'PublicRouteTable'
      SubnetId:
        Ref: 'PublicSubnet1'

  PublicSubnet2RouteTableAssociation:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      RouteTableId:
        Ref: 'PublicRouteTable'
      SubnetId:
        Ref: 'PublicSubnet2'

  PrivateRouteTable1:
    Type: 'AWS::EC2::RouteTable'
    Properties:
      VpcId:
        Ref: 'VPC'

  DefaultPrivateRoute1:
    Type: 'AWS::EC2::Route'
    Properties:
      RouteTableId:
        Ref: 'PrivateRouteTable1'
      DestinationCidrBlock: '0.0.0.0/0'
      NatGatewayId:
        Ref: 'NatGateway1'

  PrivateSubnet1RouteTableAssociation:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      RouteTableId:
        Ref: 'PrivateRouteTable1'
      SubnetId:
        Ref: 'PrivateSubnet1'

  PrivateRouteTable2:
    Type: 'AWS::EC2::RouteTable'
    Properties:
      VpcId:
        Ref: 'VPC'

  DefaultPrivateRoute2:
    Type: 'AWS::EC2::Route'
    Properties:
      RouteTableId:
        Ref: 'PrivateRouteTable2'
      DestinationCidrBlock: '0.0.0.0/0'
      NatGatewayId:
        Ref: 'NatGateway2'

  PrivateSubnet2RouteTableAssociation:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      RouteTableId:
        Ref: 'PrivateRouteTable2'
      SubnetId:
        Ref: 'PrivateSubnet2'


In a new file called elb.yml let's create an Elastic Load Balancer to route public traffic to a specific port on the public subnets of the VPC. Note that the internal port is 8080, as this is the port the Hasura docker image exposes when deployed:


Parameters:
  HTTPPort:
    Type: 'Number'
    Default: 80
  InternalHTTPPort:
    Type: 'Number'
    Default: 8080

Resources:
  HTTPSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: '${self:service}-${self:provider.stage}-http-security-group'
      VpcId:
        Ref: 'VPC'
      SecurityGroupIngress:
        - IpProtocol: 'tcp'
          FromPort:
            Ref: 'HTTPPort'
          ToPort:
            Ref: 'HTTPPort'
          CidrIp: '0.0.0.0/0'
        - IpProtocol: 'tcp'
          FromPort:
            Ref: 'InternalHTTPPort'
          ToPort:
            Ref: 'InternalHTTPPort'
          CidrIp: '0.0.0.0/0'

  LoadBalancer:
    Type: 'AWS::ElasticLoadBalancingV2::LoadBalancer'
    Properties:
      Name: '${self:service}-${self:provider.stage}-load-balancer'
      Subnets:
        - Ref: 'PublicSubnet1'
        - Ref: 'PublicSubnet2'
      SecurityGroups:
        - Ref: 'HTTPSecurityGroup'

  TargetGroup:
    Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
    Properties:
      Name: '${self:service}-${self:provider.stage}-target-group'
      HealthCheckEnabled: true
      HealthCheckPath: '/healthz'
      Port:
        Ref: 'ContainerPort'
      Protocol: 'HTTP'
      TargetType: 'ip'
      VpcId:
        Ref: 'VPC'

  Listener:
    Type: 'AWS::ElasticLoadBalancingV2::Listener'
    Properties:
      Port:
        Ref: 'HTTPPort'
      Protocol: 'HTTP'
      LoadBalancerArn:
        Ref: 'LoadBalancer'
      DefaultActions:
        - Type: 'forward'
          TargetGroupArn:
            Ref: 'TargetGroup'


Next, in rds.yml, let's add the Aurora PostgreSQL-based database in the private segment of the VPC, which Hasura will use to store its data. Note how we reference environment variables: they are declared in serverless.yml and picked up from the .env file:


Parameters:
  DBUsername:
    Type: 'String'
    Default: '${self:provider.environment.DATABASE_USERNAME}'
  DBPassword:
    Type: 'String'
    Default: '${self:provider.environment.DATABASE_PASSWORD}'
  EngineVersion:
    Type: 'String'
    Default: '10.7'
  DBPort:
    Type: 'Number'
    Default: 5432
  DBName:
    Type: 'String'
    Default: '${self:service}'

Resources:
  DBSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: '${self:service}-${self:provider.stage}-db-security-group'
      VpcId:
        Ref: 'VPC'
      SecurityGroupIngress:
        - IpProtocol: 'tcp'
          FromPort:
            Ref: 'DBPort'
          ToPort:
            Ref: 'DBPort'
          SourceSecurityGroupId:
            Fn::GetAtt: ['VPC', 'DefaultSecurityGroup']

  SubnetGroup:
    Type: 'AWS::RDS::DBSubnetGroup'
    Properties:
      DBSubnetGroupDescription: 'Private'
      SubnetIds:
        - Ref: 'PrivateSubnet1'
        - Ref: 'PrivateSubnet2'

  DB:
    Type: 'AWS::RDS::DBCluster'
    Properties:
      DBClusterIdentifier: '${self:service}-${self:provider.stage}-db'
      DatabaseName:
        Ref: 'DBName'
      DBSubnetGroupName:
        Ref: 'SubnetGroup'
      Engine: 'aurora-postgresql'
      EngineMode: 'serverless'
      EngineVersion:
        Ref: 'EngineVersion'
      MasterUsername:
        Ref: 'DBUsername'
      MasterUserPassword:
        Ref: 'DBPassword'
      Port:
        Ref: 'DBPort'
      VpcSecurityGroupIds:
        - Ref: 'DBSecurityGroup'


In a new cognito.yml file, we will create the template to deploy a Cognito User Pool and a User Pool Client. This will be the identity provider, and it is what Hasura will use to validate user access through the Hasura claims attached to the JWT:


Parameters:
  RefreshTokenValidity:
    Type: 'Number'
    Default: 30

Resources:
  UserPool:
    Type: 'AWS::Cognito::UserPool'
    Properties:
      UserPoolName: '${self:service}-${self:provider.stage}-user-pool'
      UsernameAttributes:
        - 'email'
      AutoVerifiedAttributes:
        - 'email'
      VerificationMessageTemplate:
        DefaultEmailOption: CONFIRM_WITH_LINK

  UserPoolClient:
    Type: 'AWS::Cognito::UserPoolClient'
    Properties:
      ClientName: '${self:service}-${self:provider.stage}-user-pool-client'
      UserPoolId:
        Ref: 'UserPool'
      ExplicitAuthFlows:
        - 'ALLOW_USER_PASSWORD_AUTH'
        - 'ALLOW_REFRESH_TOKEN_AUTH'
      PreventUserExistenceErrors: 'ENABLED'
      SupportedIdentityProviders:
        - 'COGNITO'


In a new ecs.yml file, we will create the template to deploy the Hasura container within the public segment of the VPC by pulling the latest stable version from Docker Hub. We will also add the environment variables Hasura requires, such as the database connection and authentication configuration, plus two static container environment variables which refer to two functions in our stack: the remote schema and the event triggers:


Parameters:
  ServiceDiscoveryTTL:
    Type: 'Number'
    Default: 60
  ServiceDiscoveryNamespaceName:
    Type: 'String'
    Default: '${self:service}-${self:provider.stage}'
  ContainerName:
    Type: 'String'
    Default: '${self:service}-${self:provider.stage}-container'
  ContainerPort:
    Type: 'Number'
    Default: 8080
  ContainerImage:
    Type: 'String'
    Default: 'registry.hub.docker.com/hasura/graphql-engine:v1.1.1'
  DesiredCount:
    Type: 'Number'
    Default: 1
  TaskCpu:
    Type: 'Number'
    Default: 512
  TaskMemory:
    Type: 'Number'
    Default: 1024
  AdminSecret:
    Type: 'String'
    Default: '${self:provider.environment.HASURA_ADMIN_SECRET}'
  EnableConsole:
    Type: 'String'
    Default: 'true'
  EnableTelemetry:
    Type: 'String'
    Default: 'false'
  UnauthorizedRole:
    Type: 'String'
    Default: 'anonymous'

Resources:
  ExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: 'ECSExecutionRole'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: 'Allow'
            Principal:
              Service:
                - 'ecs-tasks.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'

  LogGroup:
    Type: 'AWS::Logs::LogGroup'
    Properties:
      LogGroupName: '/ecs/${self:service}-${self:provider.stage}'

  Cluster:
    Type: 'AWS::ECS::Cluster'
    Properties:
      ClusterName: '${self:service}-${self:provider.stage}-cluster'

  TaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      ExecutionRoleArn:
        Ref: 'ExecutionRole'
      RequiresCompatibilities:
        - 'FARGATE'
      NetworkMode: 'awsvpc'
      Family: '${self:service}-${self:provider.stage}-task-definition'
      Cpu:
        Ref: 'TaskCpu'
      Memory:
        Ref: 'TaskMemory'
      ContainerDefinitions:
        - Name:
            Ref: 'ContainerName'
          Image:
            Ref: 'ContainerImage'
          PortMappings:
            - ContainerPort:
                Ref: 'ContainerPort'
          Environment:
            - Name: 'HASURA_GRAPHQL_ADMIN_SECRET'
              Value:
                Ref: 'AdminSecret'
            - Name: 'HASURA_GRAPHQL_ENABLE_CONSOLE'
              Value:
                Ref: 'EnableConsole'
            - Name: 'HASURA_GRAPHQL_ENABLE_TELEMETRY'
              Value:
                Ref: 'EnableTelemetry'
            - Name: 'HASURA_GRAPHQL_UNAUTHORIZED_ROLE'
              Value:
                Ref: 'UnauthorizedRole'
            - Name: 'HASURA_GRAPHQL_DATABASE_URL'
              Value:
                Fn::Join:
                  - ''
                  - - 'postgres://'
                    - '${self:provider.environment.DATABASE_USERNAME}'
                    - ':'
                    - '${self:provider.environment.DATABASE_PASSWORD}'
                    - '@'
                    - Fn::GetAtt: ['DB', 'Endpoint.Address']
                    - '/'
                    - Ref: 'DBName'
            - Name: 'HASURA_GRAPHQL_JWT_SECRET'
              Value:
                Fn::Join:
                  - ''
                  - - '{"type":"RS256","jwk_url":"https://cognito-idp.'
                    - '${self:provider.region}'
                    - '.amazonaws.com/'
                    - Ref: 'UserPool'
                    - '/.well-known/jwks.json","claims_format":"stringified_json"}'
            - Name: 'REMOTE_SCHEMA'
              Value:
                Fn::Join:
                  - ''
                  - - 'https://'
                    - Ref: 'ApiGatewayRestApi'
                    - '.execute-api.${self:provider.region}.amazonaws.com/${self:provider.stage}'
                    - '/remote-schema'
            - Name: 'EVENT_TRIGGER'
              Value:
                Fn::Join:
                  - ''
                  - - 'https://'
                    - Ref: 'ApiGatewayRestApi'
                    - '.execute-api.${self:provider.region}.amazonaws.com/${self:provider.stage}'
                    - '/event-triggers'
          LogConfiguration:
            LogDriver: 'awslogs'
            Options:
              'awslogs-group':
                Ref: 'LogGroup'
              'awslogs-region': '${self:provider.region}'
              'awslogs-stream-prefix': 'ecs'

  ServiceDiscoveryNamespace:
    Type: 'AWS::ServiceDiscovery::PrivateDnsNamespace'
    Properties:
      Name: '${self:service}-${self:provider.stage}'
      Vpc:
        Ref: 'VPC'

  ServiceDiscovery:
    Type: 'AWS::ServiceDiscovery::Service'
    Properties:
      NamespaceId:
        Ref: 'ServiceDiscoveryNamespace'
      Name: 'service'
      DnsConfig:
        DnsRecords:
          - Type: 'A'
            TTL:
              Ref: 'ServiceDiscoveryTTL'

  Service:
    Type: 'AWS::ECS::Service'
    DependsOn: 'Listener'
    Properties:
      ServiceName: '${self:service}-${self:provider.stage}-service'
      LaunchType: 'FARGATE'
      DesiredCount:
        Ref: 'DesiredCount'
      Cluster:
        Ref: 'Cluster'
      TaskDefinition:
        Ref: 'TaskDefinition'
      HealthCheckGracePeriodSeconds: 3600
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: 'ENABLED'
          SecurityGroups:
            - Fn::GetAtt: ['VPC', 'DefaultSecurityGroup']
            - Ref: 'HTTPSecurityGroup'
          Subnets:
            - Ref: 'PublicSubnet1'
            - Ref: 'PublicSubnet2'
      LoadBalancers:
        - TargetGroupArn:
            Ref: 'TargetGroup'
          ContainerName:
            Ref: 'ContainerName'
          ContainerPort:
            Ref: 'ContainerPort'
      ServiceRegistries:
        - RegistryArn:
            Fn::GetAtt: ['ServiceDiscovery', 'Arn']
          ContainerName:
            Ref: 'ContainerName'


Last but not least among the resources is the cloudfront.yml file. Here we will add the template to deploy a Cloudfront Distribution, which will serve Hasura through the ELB to frontend clients around the globe with low latency:


Parameters:
  ELBHTTPPort:
    Type: 'Number'
    Default: 80

Resources:
  Distribution:
    Type: 'AWS::CloudFront::Distribution'
    Properties:
      DistributionConfig:
        Comment: '${self:service}-${self:provider.stage}'
        DefaultRootObject: ''
        Enabled: true
        IPV6Enabled: true
        HttpVersion: 'http2'
        Origins:
          - Id: 'ecs'
            DomainName:
              Fn::GetAtt: ['LoadBalancer', 'DNSName']
            CustomOriginConfig:
              HTTPPort:
                Ref: 'ELBHTTPPort'
              OriginProtocolPolicy: 'http-only'
              OriginSSLProtocols:
                - 'TLSv1.2'
        DefaultCacheBehavior:
          AllowedMethods:
            - 'GET'
            - 'HEAD'
            - 'OPTIONS'
            - 'PUT'
            - 'PATCH'
            - 'POST'
            - 'DELETE'
          Compress: true
          ForwardedValues:
            QueryString: true
          TargetOriginId: 'ecs'
          ViewerProtocolPolicy: 'redirect-to-https'


Now that we are done with the resources, let's move on to the functions. Inside the cognito-triggers folder we need a Pre Token Generation Lambda trigger, which the declared User Pool will use to attach Hasura claims to the JWT generated on user login. At the moment there is no way to link the trigger and the User Pool through serverless due to a known bug, so we will link them manually later. Add the following in cognito-triggers/pre-token-generation.js:


'use strict';

module.exports.handler = (event, context, callback) => {
  event.response = {
    claimsOverrideDetails: {
      claimsToAddOrOverride: {
        'https://hasura.io/jwt/claims': JSON.stringify({
          'x-hasura-allowed-roles': ['anonymous', 'user'],
          'x-hasura-default-role': 'user',
          'x-hasura-user-id': event.request.userAttributes.sub
        })
      }
    }
  };

  callback(null, event);
};

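To sanity-check what the trigger does, you can exercise the same handler locally with a mock Cognito event. The sub value below is made up for illustration:

```javascript
// Same handler as above, inlined so this snippet runs standalone
const handler = (event, context, callback) => {
  event.response = {
    claimsOverrideDetails: {
      claimsToAddOrOverride: {
        'https://hasura.io/jwt/claims': JSON.stringify({
          'x-hasura-allowed-roles': ['anonymous', 'user'],
          'x-hasura-default-role': 'user',
          'x-hasura-user-id': event.request.userAttributes.sub
        })
      }
    }
  };
  callback(null, event);
};

// Mock Cognito event; the sub is a made-up user id
const mockEvent = { request: { userAttributes: { sub: '1234-abcd' } } };

handler(mockEvent, {}, (err, result) => {
  const claims = JSON.parse(
    result.response.claimsOverrideDetails.claimsToAddOrOverride['https://hasura.io/jwt/claims']
  );
  console.log(claims['x-hasura-user-id']); // prints: 1234-abcd
});
```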

Inside the event-triggers folder, we will add an index.js with a function which will be the entry point for the Hasura Event Triggers:


// Import and use a function to handle each trigger (by trigger name) per operation
const triggersHandle = {
  INSERT: {},
  UPDATE: {},
  DELETE: {}
};

exports.handler = async args => {
  const body = JSON.parse(args.body);
  const { event: { op, data: { old: oldData, new: newData } }, table } = body;

  if (triggersHandle[op] && triggersHandle[op][table.name]) {
    return triggersHandle[op][table.name](newData, oldData).then(() => {
      return {
        statusCode: 200,
        body: 'success',
      };
    }).catch((error) => {
      console.log('error', error);
      return {
        statusCode: 404,
        body: error
      };
    });
  } else {
    return {
      statusCode: 404,
      body: 'No trigger associated'
    };
  }
};

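To illustrate the dispatch convention, here is a standalone sketch that registers a hypothetical handler for INSERTs on a users table and feeds it a simplified version of the payload Hasura POSTs to the endpoint (only the fields the dispatcher reads are included):

```javascript
// Standalone sketch of the dispatcher above, with one hypothetical handler registered
const triggersHandle = {
  INSERT: {
    // Invoked for INSERTs on the (hypothetical) users table
    users: async (newData, oldData) => console.log('new user:', newData.email)
  },
  UPDATE: {},
  DELETE: {}
};

const handler = async args => {
  const body = JSON.parse(args.body);
  const { event: { op, data: { old: oldData, new: newData } }, table } = body;
  if (triggersHandle[op] && triggersHandle[op][table.name]) {
    return triggersHandle[op][table.name](newData, oldData)
      .then(() => ({ statusCode: 200, body: 'success' }))
      .catch(error => ({ statusCode: 404, body: error }));
  }
  return { statusCode: 404, body: 'No trigger associated' };
};

// Simplified shape of a Hasura event trigger payload (for INSERTs, old is null)
const sample = {
  body: JSON.stringify({
    event: { op: 'INSERT', data: { old: null, new: { email: 'user@example.com' } } },
    table: { schema: 'public', name: 'users' }
  })
};

handler(sample).then(res => console.log(res.statusCode)); // prints: 200
```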

In the remote-schema folder we will develop a Lambda GraphQL service based on apollo-server-lambda. It will serve as a remote schema for our Hasura instance and allow us to develop new endpoints with custom logic, such as third-party integrations. In this case we will add authentication endpoints.

Initialize the service and install dependencies by calling:


npm init -y && npm i --save apollo-server-lambda graphql && npm i --save-dev aws-sdk


Let's start by adding a remote-schema/index.js with the following:


const { ApolloServer, gql } = require('apollo-server-lambda');

const typeDefs = gql`
  ${require('./types').types}
`;

const resolvers = {
  Query: {
    ...require('./queries').queries
  },
  Mutation: {
    ...require('./mutations').mutations
  }
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ event, context }) => ({
    headers: event.headers,
    functionName: context.functionName,
    event,
    context,
  }),
});

exports.handler = server.createHandler({
  cors: {
    origin: '*',
    credentials: true,
  },
});


Inside remote-schema/mutations let's add an index.js, which reads every other file in the folder and treats it as the resolver for the mutation of the same name:


const fs = require('fs');

const mutations = fs.readdirSync('./functions/remote-schema/mutations')
  .reduce((p, f) => {
    if (f === 'index.js') return p;
    p[f.replace('.js', '')] = require(`./${f}`).default;
    return p;
  }, {});

exports.mutations = mutations;


Also add a signUp.js with the logic to sign up a user on the User Pool. Note that most of this code comes from the previous serverless post:


const AWS = require('aws-sdk');
const apollo = require('apollo-server-lambda');

const cognitoIdentityServiceProvider = new AWS.CognitoIdentityServiceProvider();

const signUp = async (parent, args) => {
  const { email, password } = args;

  if (!email || !password) {
    throw new apollo.UserInputError('You must specify the email and password');
  }

  return cognitoIdentityServiceProvider.signUp({
    Username: email,
    Password: password,
    ClientId: process.env.COGNITO_CLIENT_ID,
  }).promise()
    .then(() => 'Signed up successfully, please check your email')
    .catch((error) => { throw new apollo.AuthenticationError(error.message) });
};

exports.default = signUp;


Now inside remote-schema/queries let’s do the same but for queries, adding an index.js with the following:


const fs = require('fs');

const queries = fs.readdirSync('./functions/remote-schema/queries')
  .reduce((p, f) => {
    if (f === 'index.js') return p;
    p[f.replace('.js', '')] = require(`./${f}`).default;
    return p;
  }, {});

exports.queries = queries;


Also add a signIn.js with the logic to sign in a user against the User Pool:


const AWS = require('aws-sdk');
const apollo = require('apollo-server-lambda');

const cognitoIdentityServiceProvider = new AWS.CognitoIdentityServiceProvider();

const signIn = async (parent, args) => {
  const { email, password } = args;

  if (!email || !password) {
    throw new apollo.UserInputError('You must specify the email and password');
  }

  return cognitoIdentityServiceProvider.initiateAuth({
    AuthFlow: 'USER_PASSWORD_AUTH',
    AuthParameters: {
      USERNAME: email,
      PASSWORD: password,
    },
    ClientId: process.env.COGNITO_CLIENT_ID,
  }).promise()
    .then((result) => result.AuthenticationResult)
    .catch((error) => { throw new apollo.AuthenticationError(error.message) });
};

exports.default = signIn;


For the remote-schema/types folder, let's add an index.js too, which picks up all the files in the folder as the schema types:


const fs = require('fs');

const types = fs.readdirSync('./functions/remote-schema/types')
  .reduce((p, f) => {
    if (f === 'index.js') return p;
    p += require(`./${f}`).default;
    return p;
  }, '');

exports.types = types;


Now define the query and mutation types in query.js and mutation.js respectively, so the GraphQL server can pick them up:


exports.default = `type Query {
  signIn(email: String!, password: String!): AuthResult
}`;


exports.default = `type Mutation {
  signUp(email: String!, password: String!): String
}`;


Also, we are adding a custom type on AuthResult.js, which will be the response of the authentication provider:


exports.default = `type AuthResult {
  AccessToken: String
  ExpiresIn: Int
  TokenType: String
  RefreshToken: String
  IdToken: String
}`;


All done for the remote schema part. Now that we have all the AWS resources and our functions in place, let's wire them up in serverless.yml as follows. A few important things to note here:

- We use the serverless-dotenv-plugin to load the environment variables from the .env file into serverless.yml.
- The COGNITO_CLIENT_ID environment variable takes its value from a reference to the deployed resource.
- We specify a VPC configuration for the service.
- We use an individual package configuration for each Lambda function.
- The remote schema requires both GET and POST methods.
- The pre token generation function has no event due to the mentioned bug; serverless deploys both the trigger and the Cognito resources, but they must be linked manually on the AWS Console.


service: graphqlApi

plugins:
  - serverless-dotenv-plugin

provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'us-east-1'}
  # versionFunctions: true
  vpc:
    securityGroupIds:
      - Fn::GetAtt:
          - 'VPC'
          - 'DefaultSecurityGroup'
    subnetIds:
      - Ref: 'PrivateSubnet1'
      - Ref: 'PrivateSubnet2'
  environment:
    HASURA_ADMIN_SECRET: ${env:HASURA_ADMIN_SECRET}
    DATABASE_USERNAME: ${env:DATABASE_USERNAME}
    DATABASE_PASSWORD: ${env:DATABASE_PASSWORD}
    COGNITO_CLIENT_ID:
      Ref: UserPoolClient

package:
  individually: true
  exclude:
    - '*'
    - '**/*'

functions:
  remote-schema:
    handler: functions/remote-schema/index.handler
    package:
      include:
        - functions/remote-schema/**
    events:
      - http:
          path: remote-schema
          method: post
          cors: true
      - http:
          path: remote-schema
          method: get
          cors: true
  event-triggers:
    handler: functions/event-triggers/index.handler
    package:
      include:
        - functions/event-triggers/**
    events:
      - http:
          path: event-triggers
          method: post
          cors: true
  pre-token-generation:
    handler: functions/cognito-triggers/pre-token-generation.handler
    package:
      include:
        - functions/cognito-triggers/pre-token-generation.js

resources:
  - ${file(resources/cognito.yml)}
  - ${file(resources/vpc.yml)}
  - ${file(resources/elb.yml)}
  - ${file(resources/rds.yml)}
  - ${file(resources/ecs.yml)}
  - ${file(resources/cloudfront.yml)}


Finally, we can deploy our stack by issuing the deploy command on the shell as follows:


serverless deploy


After a few minutes, the whole stack will be deployed on AWS. It may take a while, as the Cloudfront distribution needs to propagate across all regions. Once that's done, we can navigate to the Cloudfront management section in the AWS Console to find the URL where Hasura is reachable. Opening that URL will look like this:

[Screenshot: Hasura console login page]


Type the password you specified in the HASURA_ADMIN_SECRET variable of the .env file and you will be redirected to the dashboard:

[Screenshot: Hasura console dashboard]


To start adding migrations to the project, you can open the Hasura console locally by creating a file called config.yaml with the following:


admin_secret: tempestpass
endpoint: https://d1xm0mmo4ampw6.cloudfront.net/


Here admin_secret is the value of HASURA_ADMIN_SECRET and endpoint is the Cloudfront URL. With this file in place, the Hasura CLI can communicate with the instance, open the Hasura console locally and persist changes into the migrations folder. Once the file is created, you can issue the following on the shell:


hasura console


Let's test the migration persistence by going to the Remote Schema tab and adding the service we created as the remote schema. As the URL for that service is already available as an environment variable on the container, we can load the value from there by specifying it as follows:

[Screenshot: adding the remote schema from the REMOTE_SCHEMA environment variable]


Once created, it will look like this:

[Screenshot: the created remote schema]


And the migrations folder will have that change persisted:

[Screenshot: the change persisted in the migrations folder]


We can do this same procedure to add the needed event triggers on the tables when we have them.

After attaching the remote schema, we will be able to see the custom endpoints we created to handle user authentication:

[Screenshot: the custom authentication endpoints in the GraphiQL explorer]


In order for the Cognito authentication to return the proper JWT, we need to link the pre token generation trigger we created and deployed with the deployed User Pool. To do that, we can go to the Cognito User Pool management, specifically the Triggers section, and select the deployed Lambda:

[Screenshot: selecting the pre token generation Lambda in the Cognito Triggers section]


Now, let’s test the user authentication. We will use the signUp mutation to sign up a new user:

[Screenshot: running the signUp mutation]
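In the GraphiQL editor, the call looks like this (the email and password are placeholders):

```graphql
mutation {
  signUp(email: "user@example.com", password: "Passw0rd!")
}
```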


And the signIn query to sign in a user:

[Screenshot: running the signIn query]
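This one returns the AuthResult fields we defined earlier (again, the credentials are placeholders):

```graphql
query {
  signIn(email: "user@example.com", password: "Passw0rd!") {
    AccessToken
    ExpiresIn
    TokenType
    RefreshToken
    IdToken
  }
}
```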


If we inspect the generated idToken on https://jwt.io/ it will have the attached claims:

[Screenshot: decoded JWT showing the attached Hasura claims]
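Given the pre token generation trigger we wrote, the decoded payload will contain a claim along these lines. Because the JWT secret is configured with claims_format stringified_json, the claim value is a JSON string; the user id below is illustrative:

```json
{
  "https://hasura.io/jwt/claims": "{\"x-hasura-allowed-roles\":[\"anonymous\",\"user\"],\"x-hasura-default-role\":\"user\",\"x-hasura-user-id\":\"<cognito-sub>\"}"
}
```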


This token is what the frontend needs to send to execute Hasura queries and mutations once the database modeling is complete.

Regarding our deployed stack on AWS, if you go to the Cost Management section of the AWS Console you can find the current and estimated cost for each deployed service. The estimated monthly base cost will be approximately:

[Screenshot: AWS Cost Management monthly estimate]


Consider this just the base cost, which holds while you do the database modeling or develop the frontend. It will go higher once the stack is deployed to production and service usage increases with traffic.

The final stack architecture diagram is:

[Diagram: final stack architecture]


With this we conclude the creation of a Serverless GraphQL API boilerplate based on Hasura on AWS. The full source code can be found at https://github.com/ReyRod/graphql-api

Reynaldo Rodríguez

Reynaldo is a Computer Science graduate with eight years of experience in web and mobile development working with the latest technologies. As a senior full-stack developer in the VAIRIX team, he works on a variety of projects for clients in the USA. Reynaldo is currently in charge of designing solutions for our projects using technologies like React, React Native and Node.
