Mitigate Errors
In this guide, you’ll learn how to identify and mitigate common errors such as server errors, reauthentication errors, and rate limit errors.
Finch aims to ensure our APIs always return data in the exact format you expect, but sometimes errors still happen. This guide discusses our customers’ most frequently encountered errors and how to mitigate them. Your application should anticipate these error types and handle errors from the Finch API gracefully.
Handle null values and 202 Response Codes
Review our Handling API Responses guide to learn how to handle null values and 202 response codes from the Finch API.
Server errors
While unlikely, any robust application should still plan to encounter server errors (HTTP 500). If this type of error occurs, a few error-handling mechanisms in your application can help maintain a good user experience while still allowing you to diagnose the issue.
- Friendly error page - Always display a user-friendly error message instead of raw server errors. This helps maintain user trust.
- Log the error - Capture detailed server logs, including the error message, stack trace, request details, and any other relevant context. Make sure to specifically log the finch-request-id found in the response headers. This will help in diagnosing and rectifying the error on Finch’s side.
- Health checks - By calling the /introspect endpoint on a regular basis, you can implement a health check that reports the status and health of your Finch integration and its connections. This can help you identify the source of 500 errors faster.
- Retry - Sometimes, simply retrying the request after a specified delay or adjusting the request parameters solves the problem, as in the sketch below.
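The sketch below shows one way to combine these mechanisms, assuming Node 18+ with a global fetch. The finchRequest wrapper, its logging, and the retry delay are illustrative choices, not part of a Finch SDK.

// Minimal sketch: log the finch-request-id on HTTP 500 responses and retry once.
async function finchRequest(url, accessToken, retries = 1) {
  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
  });

  if (response.status >= 500) {
    // Capture the finch-request-id from the response headers so it can be
    // shared with Finch Support when diagnosing the error.
    const finchRequestId = response.headers.get('finch-request-id');
    console.error(`Finch server error ${response.status}`, { url, finchRequestId });

    if (retries > 0) {
      // Back off briefly, then retry the same request once.
      await new Promise((resolve) => setTimeout(resolve, 2000));
      return finchRequest(url, accessToken, retries - 1);
    }

    // At this point, surface a friendly error page to the user instead of the raw error.
    throw new Error('Finch request failed; please try again later.');
  }

  return response.json();
}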
Product Outages
Finch reports all API outages and provider integration incidents. You can visit https://status.tryfinch.com and subscribe to receive email notifications from Finch whenever we create, update, or resolve an incident.
If any server error persists, contact the Finch Support team and attach the finch-request-id present in the headers of the response for further assistance.
Reauthentication errors
When connections are set up in Finch, a long-lived session is established between you, Finch, and your users’ employment systems. There is no need to refresh the connection on a recurring basis. However, if a user changes security settings on their account or an employment system makes changes to its infrastructure, Finch’s connection can get disconnected. This will result in an error with the HTTP status code of 401 Unauthorized and a finch_code of reauthenticate_user (see Finch API errors). When an access token returns this error, your user will need to re-authenticate by going through the Finch Connect flow again.
Since a reauthentication error means that a connection is disconnected and the access token no longer works, your customer must be reengaged to reconnect their provider through Finch Connect and generate a new access token. We recommend using automatic in-app or email notifications to alert your customer that there is an issue with their connection, explain why reconnecting is beneficial, and outline the steps they need to follow to reconnect.
Finch recommends the following steps to handle reauthentication errors:
- Catch 401 HTTP status code error responses with the finch_code of reauthenticate_user in your application while handling Finch API responses.
- To create a more seamless experience and avoid unintended duplicate connections, create a new Finch Connect session using the /connect/sessions/reauthenticate endpoint. The endpoint requires that you pass in the connection_id of your customer, which you can retrieve by calling the /introspect endpoint with their access_token.
- Notify the customer that their connection is broken and that they can resolve it by reconnecting their account. To increase conversion, we recommend letting your customer know why reconnecting is beneficial to them and the services they will miss out on in the interim. You can prompt your user to log in to their application dashboard, where they can reconnect through Finch Connect embedded in your app, or send them an email with the Finch Connect redirect link.
- Once your customer goes through Finch Connect successfully, an authorization code is generated, which you will need to exchange for a new access_token that you can use to send requests to the Finch API again. Make sure to save this new token in your database. A sketch of this flow is shown after the note below.
While you will receive a new access token for the employer, everything else will remain the same. All Finch identifiers, like individual_id or payment_id, are the same across tokens.
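The sketch below shows one way to wire these steps together, assuming Node 18+ with a global fetch. The finch_code, connection_id, and connect_url field names and the Basic authentication on /connect/sessions/reauthenticate follow the API reference, but treat them as assumptions and confirm them against the current documentation before relying on them.

// Minimal sketch: detect a reauthentication error and create a new Connect session.
async function handleReauthentication(response, accessToken, clientId, clientSecret) {
  if (response.status !== 401) return null;

  const error = await response.json();
  if (error.finch_code !== 'reauthenticate_user') return null;

  // Look up the connection behind the broken access token.
  const introspect = await fetch('https://api.tryfinch.com/introspect', {
    headers: { Authorization: `Bearer ${accessToken}` },
  }).then((res) => res.json());

  // Create a reauthentication session for that connection (assumes Basic auth
  // with your client_id and client_secret, per the Connect Sessions docs).
  const basicAuth = Buffer.from(`${clientId}:${clientSecret}`).toString('base64');
  const session = await fetch('https://api.tryfinch.com/connect/sessions/reauthenticate', {
    method: 'POST',
    headers: {
      Authorization: `Basic ${basicAuth}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ connection_id: introspect.connection_id }),
  }).then((res) => res.json());

  // Send this URL to the customer (in-app or by email) so they can reconnect,
  // then exchange the resulting authorization code for a new access token.
  return session.connect_url;
}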
Rate limit errors
Finch will return a rate limit error with the HTTP status code 429 Too Many Requests when the request rate limit for an application or an individual IP address has been exceeded. Familiarize yourself with Finch’s API rate limits before continuing.
Finch’s rate limits work on a per-endpoint basis for applications, and we refer to each distinct endpoint as a unique product. Rate limits are summed on a rolling 60-second basis for each unique product. This is commonly referred to as a Sliding or Rolling Window rate limit.
You can think of a product rate limit like a “bucket”. When a request is made to a product (which corresponds directly to an API endpoint), a single gallon of water is added to that endpoint’s bucket, starting that bucket’s 60-second time-to-live (TTL) timer. After the product’s rate limit resets 60 seconds later, the first request to that product starts the 60-second TTL again.
Ensure that you stay within these rate limits to avoid API request failures. If you encounter rate limit errors, implement a “back-off and retry” strategy in your application. For example, you could wait for 60 seconds (since Finch’s rate limit buckets reset after this period) and then retry the request. You could also exponentially increase the wait time between retries or add a random delay if you prefer. This allows your application to wait for the rate limit to reset and then resume making API requests.
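A minimal back-off-and-retry sketch is shown below, assuming Node 18+ with a global fetch; the retry count, 60-second base delay (mirroring Finch’s rolling window), and jitter are arbitrary choices you can tune.

// Minimal sketch: retry 429 responses with an exponentially increasing delay plus jitter.
async function fetchWithBackoff(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) return response;

    // Wait 60s, then 120s, then 240s, plus up to 1s of random jitter.
    const delay = 60000 * 2 ** attempt + Math.random() * 1000;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  throw new Error(`Rate limited after ${maxRetries} retries: ${url}`);
}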
Below are a few ways to fix rate limit errors.
Leverage batched requests
A quick fix if you are hitting rate limit errors is to implement batched requests.
If you are trying to retrieve individual details for a 1000-person company, calling the /individual endpoint 1000 times will quickly hit the API’s rate limits. Since there is no limit to the number of IDs that you can send in a single request, you can batch the 1000 individual_ids and send them as a single request. You will receive back a single response from Finch with an array of 1000 objects containing each individual’s details.
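The sketch below shows a single batched call, assuming Node 18+ with a global fetch; the { requests: [{ individual_id }] } request body and the responses array reflect the documented shape of the /individual endpoint, but confirm the exact fields against the API reference.

// Minimal sketch: fetch details for many individuals in one batched request.
async function getIndividuals(individualIds, accessToken) {
  const response = await fetch('https://api.tryfinch.com/employer/individual', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      requests: individualIds.map((id) => ({ individual_id: id })),
    }),
  });

  // One entry is returned per requested individual_id.
  const { responses } = await response.json();
  return responses;
}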
Rate Limit Scenario
Let’s study a hypothetical scenario of how your application could encounter application-level rate limits. Assume your application has five access tokens (Token A, Token B, Token C, Token D, Token E) and you are making API requests to any of the company, directory, individual, employment, payment, and pay-statement endpoints.
When a request is sent to an endpoint, a single gallon of water is added to the application-level product endpoint “bucket” each time. The bucket counts all requests across all the application’s access tokens (Token A, Token B, Token C, Token D, Token E).
Organization endpoints have a capacity of 20 max requests per minute. Pay endpoints have a capacity of 12 max requests per minute.
- Token A makes 5 requests to the /company endpoint, 4 requests to the /directory endpoint, and 3 requests to the /payment endpoint within a minute. Each endpoint (i.e. bucket) is below the 20 and 12 limit capacities, so all of Token A’s requests succeed.
Application-level rate limits:

| Bucket | Capacity |
| --- | --- |
| company | 5/20 - success |
| directory | 4/20 - success |
| individual | 0/20 |
| employment | 0/20 |
| payment | 3/12 - success |
| pay-statement | 0/12 |
- Token B makes 5 more requests to the /company endpoint, 4 requests to the /directory endpoint, and 3 requests to the /payment endpoint within the same minute. Each endpoint is still below the 20 and 12 limit capacities, so all of Token B’s requests succeed as well.
Application-level rate limits:

| Bucket | Capacity |
| --- | --- |
| company | 10/20 - success |
| directory | 8/20 - success |
| individual | 0/20 |
| employment | 0/20 |
| payment | 6/12 - success |
| pay-statement | 0/12 |
- Tokens C and D repeat the same process as Tokens A and B, making 5 requests to the /company endpoint, 4 requests to the /directory endpoint, and 3 requests to the /payment endpoint, all within the same minute. The /company and /payment endpoints are now at full capacity, but all of Token C’s and D’s requests still succeed because the limits have not been exceeded (yet).
Application-level rate limits:

| Bucket | Capacity |
| --- | --- |
| company | 20/20 (FULL) - success |
| directory | 16/20 - success |
| individual | 0/20 |
| employment | 0/20 |
| payment | 12/12 (FULL) - success |
| pay-statement | 0/12 |
- Now, suppose Token E makes 1 request to the /company endpoint, 1 request to the /payment endpoint, and 1 request to the /directory endpoint. The company and payment requests will both fail and return a 429 rate limit error because those application-level buckets are now full. No requests from any additional token will succeed when calling the /company and /payment endpoints until those buckets’ 60-second Time-To-Live (TTL) timers reset. However, Token E’s request to the /directory endpoint will succeed because the application-level directory bucket is not full (yet).
Application-level rate limits:

| Bucket | Capacity |
| --- | --- |
| company | 20/20 (FULL) - error |
| directory | 16/20 - success |
| individual | 0/20 |
| employment | 0/20 |
| payment | 12/12 (FULL) - error |
| pay-statement | 0/12 |
Note: Only successful requests count towards the application-level rate limits. Similarly, every 5th request to the /company endpoint for each token fails and returns a 429 rate limit error.
Rate limit example
You can use the following code example to enforce this rate limit quota at the application level. The RateLimiter class allows requests up to the specified rate limit (for example, 20 requests per minute for the /directory endpoint) and pauses further requests until the rate limit resets after 60 seconds. The request method of the rate limiter is used to make API requests to Finch’s endpoints, ensuring that you stay within the rate limit “bucket” quota for each endpoint. Simply initialize a new RateLimiter instance for each endpoint being called.
class RateLimiter {
  constructor(limit) {
    this.limit = limit; // Maximum requests allowed per rolling 60-second window
    this.requests = []; // Timestamps of requests made within the current window
  }

  async request(fn) {
    const now = Date.now();

    // Drop timestamps that have aged out of the 60-second window.
    this.requests = this.requests.filter((timestamp) => now - timestamp < 60000);

    if (this.requests.length >= this.limit) {
      // Wait until the oldest request in the window expires, then free its slot.
      const delay = this.requests[0] + 60000 - now;
      await new Promise((resolve) => setTimeout(resolve, delay));
      this.requests.shift();
    }

    // Record this request's timestamp and execute it.
    this.requests.push(Date.now());
    return fn();
  }
}
const directoryRateLimiter = new RateLimiter(20); // 20 requests per minute
const url = 'https://api.tryfinch.com/employer/directory'; // Replace with the desired endpoint
const accessToken = '<your_access_token>';
const fetchIndividualData = () =>
  fetch(url, {
    method: 'GET',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
  });

// Use the rate limiter to make API requests
directoryRateLimiter
  .request(fetchIndividualData)
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => console.error('Error:', error));
Checkpoint + Next Step
After completing this step, you should be equipped to recognize some of Finch’s common error scenarios and address them in your application, leading to a more resilient integration. It is easier to mitigate errors if your application has an adequate way to monitor API requests, which is covered in the next section, Monitor API Usage.