
Set up the proxy

This guide shows you how to set up a proxy for your application. The proxy acts as an intermediary between your application and the Microblink API. This is necessary to protect your API keys and other sensitive data.

You can find a sample implementation of a proxy service in our GitHub repository.

Why use a proxy?

A proxy adds a layer of security to your application. It prevents your API keys from being exposed client-side. The initial request from your application is routed through the proxy. The proxy then adds your API keys to the requests before forwarding them to the Microblink API.

This setup also allows you to implement custom logic. For example, you can add end-user authentication, rate limiting, or logging.
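Stripped of HTTP details, the proxy's core job can be reduced to one step: take the client's payload, attach the server-side credentials, and forward the result. A minimal sketch in Python (the function name `build_forward_request` is illustrative and not part of the sample repository):

```python
def build_forward_request(client_payload, access_token):
    """Attach server-side credentials to a client payload before forwarding.

    The client only ever talks to the proxy; the access token is added
    here and never leaves the server.
    """
    return {
        "url": "https://api.us-east.platform.microblink.com/agent/api/v1/transaction",
        "headers": {
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        "json": client_payload,
    }
```

The sections below show complete implementations of this idea for several deployment styles.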

Deployment options

You have several options for deploying the proxy. You can:

  1. Run it as a separate service in a container or VM.
  2. Integrate it into your existing codebase.
  3. Deploy it as a serverless function.

Run as a container or VM

You can run the proxy as a separate service in a container or a virtual machine. This is a good option if you want to decouple the proxy from your application.

Docker container

You can use the provided Dockerfile to build a Docker image from our sample implementation. Then, you can run the image as a container.

# Build the Docker image
docker build -t mb-proxy .

# Run the Docker container
docker run -p 8081:8080 mb-proxy

Alternatively, you can package your own proxy implementation as a Docker container. This gives you the flexibility to use any language or framework you prefer. For example, you could containerize the Python class wrapper shown in the "Integrate into your codebase" section.
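As a sketch, a Dockerfile for that Python wrapper might look like the following. The file names, base image, and port are assumptions for illustration, not part of the sample repository:

```dockerfile
# Hypothetical Dockerfile for a Python proxy implementation.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the proxy implementation (assumed to live in proxy.py).
COPY proxy.py .

EXPOSE 8080
CMD ["python", "proxy.py"]
```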

Nginx proxy

You can use Nginx as a reverse proxy. This is a good option if you are already using Nginx in your infrastructure.

# Example of an Nginx configuration for transaction creation.
# This forwards POST requests from your-proxy.com/create-transaction
# to the Agent API's creation endpoint.
server {
    listen 80;
    server_name your-proxy.com;

    # The SDK should be configured to make a POST request to this location.
    location /create-transaction {
        # Only allow POST requests
        if ($request_method != POST) {
            return 405 'Method Not Allowed';
        }

        proxy_pass https://api.us-east.platform.microblink.com/agent/api/v1/transaction;
        proxy_set_header Authorization "Bearer <your_access_token>";
        proxy_set_header Content-Type "application/json";
        proxy_set_header Host api.us-east.platform.microblink.com;
    }
}

Integrate into your codebase

You can add the proxy logic directly into your application's back end. This is a good option if you want to keep your architecture simple.

Class wrapper

Create a class that encapsulates the logic for forwarding requests to the Microblink API. This class will be responsible for adding the API keys to the requests.

# Example of a Python class wrapper
import os
import time

import requests


class MicroblinkProxy:
    def __init__(self):
        self.client_id = os.environ.get("MB_CLIENT_ID")
        self.client_secret = os.environ.get("MB_CLIENT_SECRET")
        self.agent_api_url = "https://api.us-east.platform.microblink.com/agent"
        self.auth_url = "https://account.platform.microblink.com/oauth/token"
        self.access_token = None
        self.token_expiry = 0

    def get_access_token(self):
        if self.access_token and time.time() < self.token_expiry:
            return self.access_token

        payload = {
            "client_id": self.client_id,
            "client_secret": self.client_secret,
            "grant_type": "client_credentials",
            "audience": "idv-api"
        }
        response = requests.post(self.auth_url, json=payload)
        response.raise_for_status()
        token_data = response.json()
        self.access_token = token_data["access_token"]
        # Refresh the token 5 minutes before it expires
        self.token_expiry = time.time() + token_data["expires_in"] - 300
        return self.access_token

    def create_transaction(self, transaction_data):
        """
        Creates a transaction by forwarding the request to the Agent API.
        The response contains the transactionId, ephemeralKey, and edgeApiUrl
        needed by the SDK to continue the transaction.
        """
        access_token = self.get_access_token()
        headers = {
            "Authorization": f"Bearer {access_token}"
        }

        # The proxy is only used for creating transactions.
        creation_url = f"{self.agent_api_url}/api/v1/transaction"

        response = requests.post(
            creation_url,
            headers=headers,
            json=transaction_data
        )
        return response.json()

Internal library

If you have multiple services that need to access the Microblink API, you can create an internal library. This library can be imported by your services. Here is how you would import and use the library in a few popular languages.

Python

# In your service
from microblink_internal_sdk import microblink_api

# Now you can use the methods from the library
# to make API calls.
response = microblink_api.create_transaction(transaction_data)

Node.js

// In your service
const microblinkApi = require('microblink-internal-sdk');

// Now you can use the methods from the library
// to make API calls.
const response = await microblinkApi.createTransaction(transactionData);

Java

// In your service
import com.yourcompany.microblink.MicroblinkApi;
import com.yourcompany.microblink.Transaction;

// ...

MicroblinkApi microblinkApi = new MicroblinkApi();
Transaction transaction = microblinkApi.createTransaction(transactionData);

Run as a serverless function

You can deploy the proxy as a serverless function. This is a good option if you want to minimize infrastructure management. The function's sole purpose is to handle the initial POST request from the SDK to create a transaction.

A common pattern is to cache the access token in a global variable to reuse it across function invocations until it expires.
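The caching pattern itself is language-agnostic. Here is a minimal Python sketch of it; the `fetch_token` callable is injected for clarity (and testability), whereas in a real function it would POST to the OAuth endpoint:

```python
import time

# Module-level cache, reused across invocations while the process is warm.
_cache = {"token": None, "expiry": 0.0}

def get_cached_token(fetch_token, now=time.time):
    """Return the cached token, refreshing via fetch_token() when expired.

    fetch_token() must return (access_token, expires_in_seconds).
    """
    if _cache["token"] and now() < _cache["expiry"]:
        return _cache["token"]
    token, expires_in = fetch_token()
    _cache["token"] = token
    # Refresh 5 minutes before the token actually expires.
    _cache["expiry"] = now() + expires_in - 300
    return _cache["token"]
```

The Lambda and Netlify examples below implement the same idea with module-scope variables in Node.js.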

AWS Lambda

This function is designed to be triggered by an API Gateway endpoint. It only responds to POST requests to create a transaction.

// Example of an AWS Lambda function (Node.js) for transaction creation
const fetch = require('node-fetch');

// Cache the token outside the handler to reuse across invocations
let cachedToken = null;
let tokenExpiry = 0;

async function getAccessToken() {
    if (cachedToken && Date.now() < tokenExpiry) {
        return cachedToken;
    }

    const authUrl = "https://account.platform.microblink.com/oauth/token";
    const payload = {
        client_id: process.env.MB_CLIENT_ID,
        client_secret: process.env.MB_CLIENT_SECRET,
        grant_type: "client_credentials",
        audience: "idv-api"
    };

    const response = await fetch(authUrl, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload)
    });

    if (!response.ok) {
        throw new Error('Failed to get access token');
    }

    const tokenData = await response.json();
    // Refresh the token 5 minutes before it expires
    tokenExpiry = Date.now() + (tokenData.expires_in * 1000) - 300000;
    cachedToken = tokenData.access_token;
    return cachedToken;
}

exports.handler = async (event) => {
    if (event.httpMethod !== 'POST') {
        return { statusCode: 405, body: 'Method Not Allowed' };
    }

    try {
        const accessToken = await getAccessToken();
        const creationUrl = "https://api.us-east.platform.microblink.com/agent/api/v1/transaction";

        const response = await fetch(creationUrl, {
            method: 'POST',
            headers: {
                "Authorization": `Bearer ${accessToken}`,
                "Content-Type": "application/json"
            },
            body: event.body
        });

        const data = await response.json();

        // The response body contains the transactionId, ephemeralKey, and edgeApiUrl
        // that the SDK needs to continue the process.
        return {
            statusCode: response.status,
            body: JSON.stringify(data)
        };
    } catch (error) {
        return {
            statusCode: 500,
            body: JSON.stringify({ message: 'Internal server error' })
        };
    }
};

Netlify function

The logic for a Netlify function is very similar. It should also be restricted to handling only POST requests for transaction creation.

For a detailed example of how to implement a Netlify function that also injects data into the request, see the Match face images against selfies how-to guide.

// Example of a Netlify function (Node.js) for transaction creation
const fetch = require('node-fetch');

// The getAccessToken function would be the same as in the Lambda example.
// ...

exports.handler = async (event) => {
    if (event.httpMethod !== 'POST') {
        return { statusCode: 405, body: 'Method Not Allowed' };
    }

    // The rest of the handler logic is identical to the AWS Lambda example above.
    // It should get the access token and POST the event.body to the
    // .../agent/api/v1/transaction endpoint.
};

Secure your proxy

A proxy that is open to the public can be abused. It's critical to add a layer of authentication to ensure that only legitimate users from your application can create transactions. This protects your API quota and prevents malicious actors from creating transactions on your behalf.

A common approach is to have your client application (web or mobile) send a credential along with the request to the proxy. The proxy then validates this credential before proceeding.

For example, using an Express.js server, you could implement a middleware to check for a valid JSON Web Token (JWT) in the Authorization header.

// Example of a JWT authentication middleware in Express.js
const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json());

function authenticateToken(req, res, next) {
    const authHeader = req.headers['authorization'];
    const token = authHeader && authHeader.split(' ')[1];

    if (token == null) {
        return res.sendStatus(401); // Unauthorized
    }

    jwt.verify(token, process.env.YOUR_JWT_SECRET, (err, user) => {
        if (err) {
            return res.sendStatus(403); // Forbidden
        }
        req.user = user;
        next();
    });
}

// Apply this middleware to your transaction creation route
app.post('/create-transaction', authenticateToken, (req, res) => {
    // If we get here, the user is authenticated.
    // Now we can proceed with creating the Microblink transaction.
    // ...
});

Rate limiting

To prevent abuse, you should limit the number of requests a single user or IP address can make to your proxy. In Express.js, you can add this with a middleware library such as express-rate-limit.

// Example of rate limiting in Express.js
const rateLimit = require('express-rate-limit');

const limiter = rateLimit({
    windowMs: 15 * 60 * 1000, // 15 minutes
    max: 100, // Limit each IP to 100 requests per window
    standardHeaders: true,
    legacyHeaders: false,
});

// Apply the rate limiter to your route
app.post('/create-transaction', limiter, authenticateToken, (req, res) => {
    // ...
});

Input validation

Before forwarding a request to the Microblink API, your proxy should validate the incoming data from the client. This ensures that the payload is well-formed and contains all required fields, preventing errors and potential security vulnerabilities.

// Example of input validation using Joi
const Joi = require('joi');

const transactionSchema = Joi.object({
    workflowId: Joi.string().required(),
    consent: Joi.object({
        userId: Joi.string().required(),
        // ... other consent fields
    }).required(),
    // ... other expected fields from the SDK
});

function validateInput(req, res, next) {
    const { error } = transactionSchema.validate(req.body);
    if (error) {
        return res.status(400).send(error.details[0].message);
    }
    next();
}

// Apply validation before other logic
app.post('/create-transaction', validateInput, limiter, authenticateToken, (req, res) => {
    // ...
});

Scale your proxy with a load balancer

For production environments with significant traffic, running a single instance of your proxy creates a single point of failure and a performance bottleneck. To ensure high availability and scalability, you should run multiple instances of your proxy behind a load balancer.

A load balancer distributes incoming traffic across your proxy instances, so if one instance fails or becomes overloaded, traffic is automatically redirected to healthy instances.

Cloud-based load balancers

Most cloud providers (AWS, Google Cloud, Azure) offer managed load balancing services that are easy to configure and integrate with your virtual machines, containers, or serverless functions. This is the recommended approach for most use cases.

Self-hosted load balancer

Alternatively, you can run your own load balancer using software like Nginx or HAProxy. This gives you more control but requires more maintenance.

Here is a snippet for an Nginx configuration that load balances traffic between two proxy instances running on the same machine at ports 8081 and 8082.

# Define the group of servers to balance traffic across
upstream my_proxy_servers {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}

server {
    listen 80;
    server_name your-proxy.com;

    location / {
        proxy_pass http://my_proxy_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}