How to build your own Infura on AWS using serverless framework

In one of our previous articles we demonstrated how to run a single mainnet Parity node on AWS. In many cases having one Parity node will be sufficient, and it certainly beats relying on external public infrastructure like Infura.

Drawbacks of having only one node

However, in many situations one node is simply not enough. I can think of several reasons why you may need more than one node.

  1. Downtime. If the node has to be taken down for any reason, there will be no node left. For example, there will be downtime whenever you need to upgrade the Parity version.

Infrastructure of jsonrpc-proxy

The big picture of what the proxy does is as follows:

  1. A list of URLs of our Parity nodes is kept in DynamoDB. There is also a special URL, called the leader, that points to Infura.
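The exact schema is defined in the repository; conceptually, each DynamoDB entry might look something like the following (the attribute names here are illustrative, not necessarily the stack's actual schema):

```json
{
  "name":   { "S": "parity-1" },
  "url":    { "S": "http://10.0.1.15:8545" },
  "leader": { "BOOL": false }
}
```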

Monitoring nodes

The upside of using AWS services is that they come with default metrics out of the box.

Requests per second

Using built-in ELB metrics we can see the breakdown of requests coming to the proxy by response status code.

Lag behind Infura

The jsonrpc-proxy stack also pushes its own metric that shows the difference between each node's block number and Infura's.
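The actual implementation lives in the repository, but conceptually the metric is just the difference between the two `eth_blockNumber` results, which JSON-RPC returns as hex strings. A minimal sketch:

```shell
# Compute how many blocks a node is behind the leader, given the
# hex block numbers returned by eth_blockNumber on each endpoint.
block_lag() {
  local leader_hex=$1 node_hex=$2
  # Bash arithmetic understands 0x-prefixed hex literals.
  echo $(( leader_hex - node_hex ))
}

block_lag 0x4b7 0x4b5   # prints 2
```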

Number of healthy nodes

Another custom metric shows the number of healthy nodes.

This metric is very useful for setting a CloudWatch alarm.
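For instance, you could alert when the healthy node count stays too low. A sketch with the AWS CLI (the namespace, metric name, and SNS topic ARN below are assumptions for illustration; check what the stack actually publishes):

```shell
# Illustrative only: alarm when fewer than 2 nodes are healthy
# for 3 consecutive minutes.
aws cloudwatch put-metric-alarm \
  --alarm-name jsonrpc-proxy-healthy-nodes \
  --namespace "JsonRpcProxy" \
  --metric-name HealthyNodes \
  --statistic Minimum \
  --period 60 \
  --evaluation-periods 3 \
  --threshold 2 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```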

Deploying your own jsonrpc-proxy

The code of our solution can be found in the git repository.

Build ECR container with nginx

Our service runs an nginx container on an ECS cluster, which fetches its config from an S3 bucket. The image of this service needs to be available in some AWS account as jsonrpc-proxy. You can build and upload the image with the following commands:

$ cd docker 
$ AWS_DEFAULT_PROFILE=yourProfileName bash -x build_and_upload.sh

This will create an ECR repository, build the image, and push it. As a result you will see the ARN of the created ECR repository, which you need to put into the config file in the next step.

The above assumes that you use named profiles for command-line access. If you use session tokens, skip the profile part.

Create config file

The stack is designed to re-use external resources, which have to be passed as parameters in the config file. You can start by creating your own config file from the template provided:

$ cd services 
$ cp config.yml.sample config.dev.yml # this assumes `dev` as stage name to be used later

Edit the file and specify:

  • the VPC you run in and the subnet ids of your private subnets
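To give an idea of the shape of the file, a fragment might look like this (the keys and values below are hypothetical; check config.yml.sample for the exact names your version of the stack expects):

```yaml
# Hypothetical example values -- replace with your own resources.
vpcId: vpc-0123456789abcdef0
privateSubnetIds:
  - subnet-0aaaaaaaaaaaaaaaa
  - subnet-0bbbbbbbbbbbbbbbb
# ARN of the ECR repository created in the previous step:
ecrRepositoryArn: arn:aws:ecr:eu-west-1:123456789012:repository/jsonrpc-proxy
```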

Deploy stack

You will need to have serverless installed on your machine.

npm install -g serverless

Then you need to install the stack dependencies:

cd services 
npm install

Once this is done, you can deploy the stack to your AWS account:

$ AWS_DEFAULT_PROFILE=yourProfileName sls deploy -s dev

Finally, you need to configure your DNS to point the name used by the stack at your Application Load Balancer.
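If your zone is hosted in Route 53, this can be done with a single CLI call. The hosted zone ID, record name, and ALB DNS name below are placeholders; substitute your own:

```shell
# Hypothetical values -- replace the hosted zone ID, record name
# and ALB DNS name with those from your deployment.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "rpc.example.com",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [{"Value": "my-alb-1234567890.eu-west-1.elb.amazonaws.com"}]
      }
    }]
  }'
```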

Add nodes to monitor

The next step is to actually tell the proxy which nodes to monitor:
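Since the stack keeps its node list in DynamoDB, adding a node amounts to writing an item to that table. A sketch with the AWS CLI (the table and attribute names below are assumptions, not necessarily the stack's actual schema; check the repository for the exact command):

```shell
# Illustrative only: register a node URL in the stack's DynamoDB table.
aws dynamodb put-item \
  --table-name jsonrpc-proxy-nodes-dev \
  --item '{"name": {"S": "parity-1"}, "url": {"S": "http://10.0.1.15:8545"}}'
```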

If everything works fine, within a minute you should see that the monitoring kicks in and that the nodes are now healthy: