Integration guidelines

To ensure a consistent and robust service for our shared clients, we require our technical partners to adhere to the following guidelines when integrating their services and solutions with Agillic.

If you have any doubts or technical questions regarding these guidelines, please contact our technical support before proceeding.

Rate limiting
The Agillic REST API is rate limited. This means we impose certain limits on the API, per production instance, per day. We reserve the right to adjust the rate limit for the given endpoints in order to provide a high quality of service for all customers.

The rate limit enforces the following restrictions on production instances:

  1. Daily cap on API calls: five times as many API calls as Active Recipients.
  2. Concurrent call limit: a maximum of 20 concurrent API calls.
  3. Maximum payload size: capped at 20 MB.

Exceeding these limits will result in Agillic applying backpressure to inform callers to regulate the flow of requests. Backpressure is indicated by the API returning an HTTP status code 429. When this status code is returned, it signifies that no changes have been made on the Agillic side.
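Because a 429 guarantees that nothing changed on the Agillic side, it can always be replayed safely. A minimal sketch of that decision (the helper name is ours, not part of the Agillic API):

```python
def is_safe_to_retry(status_code: int) -> bool:
    """Return True when a failed call can be replayed without risking
    duplicate changes on the server side.

    Hypothetical helper for illustration only.
    """
    # 429 Too Many Requests: Agillic applied backpressure and made no
    # changes, so the exact same request can always be sent again.
    if status_code == 429:
        return True
    # 5xx responses from transient outages are often retryable too, but
    # only when the request itself is idempotent.
    return 500 <= status_code < 600
```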

API Requests
To avoid backpressure and to build a robust solution in case of network problems, temporary unavailability, upgrades, and timeouts, you must implement measures to handle failed requests. This can be achieved by one or more of the following mechanisms:

Retry Policies: Implementing strategies to retry requests that fail due to transient issues. A retry policy includes setting appropriate intervals between retries, limiting the number of retry attempts, and using exponential backoff to gradually increase the delay between retries.
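A retry policy with exponential backoff could be sketched like this. All names here (`TransientError`, `retry_with_backoff`, the default delays) are illustrative assumptions, not part of any Agillic SDK:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


class TransientError(Exception):
    """Raised by the wrapped call for failures worth retrying, e.g. HTTP 429."""


def retry_with_backoff(
    call: Callable[[], T],
    max_attempts: int = 5,
    base_delay: float = 0.5,
    max_delay: float = 30.0,
) -> T:
    """Retry `call` on TransientError, doubling the delay between attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # retry budget exhausted; surface the failure
            # Exponential backoff: 0.5s, 1s, 2s, 4s, ... capped at max_delay.
            time.sleep(min(base_delay * (2 ** attempt), max_delay))
    raise RuntimeError("unreachable")
```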

Throttling: Slowing down the request rate to avoid hitting the limit. This can be implemented by introducing delays between consecutive requests or by adjusting the rate of requests dynamically based on the API’s response.
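The simplest form of throttling enforces a minimum interval between consecutive requests. A minimal sketch (a real integration might additionally slow down dynamically after 429 responses):

```python
import time


class Throttler:
    """Enforce a minimum interval between consecutive requests."""

    def __init__(self, min_interval: float) -> None:
        self.min_interval = min_interval
        self._last_call = 0.0

    def wait(self) -> None:
        """Sleep just long enough to keep the configured request pace."""
        elapsed = time.monotonic() - self._last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_call = time.monotonic()
```

Calling `wait()` before each API request then caps the effective request rate at `1 / min_interval` per second.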

Rate Limiting: Enforcing a fixed number of requests within a specified period. Once this limit is reached, additional requests are held back or rejected until the next time period begins.
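A fixed-window counter is one way to implement this client-side. The class and parameter names below are our own sketch, not Agillic's enforcement mechanism:

```python
import time


class FixedWindowRateLimiter:
    """Allow at most `limit` requests per `window` seconds."""

    def __init__(self, limit: int, window: float) -> None:
        self.limit = limit
        self.window = window
        self._window_start = time.monotonic()
        self._count = 0

    def try_acquire(self) -> bool:
        """Return True if a request may be sent now, False to hold it back."""
        now = time.monotonic()
        if now - self._window_start >= self.window:
            # A new time period begins: reset the counter.
            self._window_start = now
            self._count = 0
        if self._count < self.limit:
            self._count += 1
            return True
        return False
```

Requests rejected by `try_acquire()` can either be dropped or handed to a queue for later delivery.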

Queue Management: Implementing a system to manage undelivered requests effectively. When requests cannot be processed immediately due to system limits or temporary issues, they should be queued for later processing to ensure that no requests are lost.
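A minimal in-memory sketch of such a queue follows; a production integration would typically use a durable store (a database table or message broker) instead, so queued requests survive restarts:

```python
from collections import deque
from typing import Callable


class RequestQueue:
    """Hold undeliverable requests for later processing so none are lost."""

    def __init__(self) -> None:
        self._pending: deque = deque()

    def enqueue(self, request: dict) -> None:
        """Park a request that could not be delivered right now."""
        self._pending.append(request)

    def drain(self, send: Callable[[dict], bool]) -> int:
        """Attempt delivery of all queued requests once.

        `send` returns True on success, False on failure. Failed requests
        are re-queued for the next drain. Returns the delivered count.
        """
        delivered = 0
        for _ in range(len(self._pending)):
            request = self._pending.popleft()
            if send(request):
                delivered += 1
            else:
                self._pending.append(request)  # keep for a later attempt
        return delivered
```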

Batch Processing
Batch resource handling is designed for fast and efficient execution of the same operation over an entire set of resources. The efficiency of using batch operations can vary depending on the complexity and size of the batch.

Instead of calling an endpoint multiple times for every resource, you should call an endpoint once, with a set of resources. This way, the overhead of creating and executing multiple HTTP requests is mitigated.

A hard upper limit of 1000 entries per batch applies.
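Larger resource sets therefore need to be split into chunks before sending. A sketch of such chunking (the helper name is ours):

```python
from typing import Iterator, List, Sequence, TypeVar

T = TypeVar("T")

MAX_BATCH_SIZE = 1000  # Agillic's hard upper limit on entries per batch


def batches(resources: Sequence[T], size: int = MAX_BATCH_SIZE) -> Iterator[List[T]]:
    """Split a resource set into chunks no larger than the batch limit."""
    if not 0 < size <= MAX_BATCH_SIZE:
        raise ValueError(f"batch size must be between 1 and {MAX_BATCH_SIZE}")
    for start in range(0, len(resources), size):
        yield list(resources[start:start + size])
```

Each yielded chunk then becomes the body of one batch call instead of up to 1000 individual calls.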

Connection Pooling
To reduce protocol overhead and latency, you should use connection pooling and reuse persistent connections.
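The idea can be sketched generically: hand out connections from a bounded pool and return them after use, rather than opening a new one per request. Here `factory` stands in for whatever creates a real keep-alive HTTP connection; the class is illustrative, not a real library API:

```python
import queue
from typing import Callable, TypeVar

C = TypeVar("C")


class ConnectionPool:
    """Reuse a bounded set of persistent connections across requests."""

    def __init__(self, factory: Callable[[], C], size: int) -> None:
        self._pool: queue.Queue = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())  # connections are created once, up front

    def acquire(self) -> C:
        """Block until a pooled connection is free, then hand it out."""
        return self._pool.get()

    def release(self, conn: C) -> None:
        """Return a connection to the pool for reuse."""
        self._pool.put(conn)
```

In practice, an HTTP client that maintains persistent connections for you (for example `requests.Session` or urllib3's `PoolManager`) provides this behaviour out of the box.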

Asynchronous Calls
To improve performance and responsiveness, use asynchronous API calls whenever possible. Utilize callback options available for success, failure, or both.
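A sketch of the pattern with `asyncio`; `call_api` is a stand-in for a real asynchronous request to Agillic, and the callback signatures are our own assumptions:

```python
import asyncio
from typing import Callable


async def call_api(payload: dict) -> dict:
    """Stand-in for a real asynchronous API call to Agillic."""
    await asyncio.sleep(0)  # simulate network I/O
    return {"status": "accepted", "payload": payload}


async def send_with_callbacks(
    payload: dict,
    on_success: Callable[[dict], None],
    on_failure: Callable[[Exception], None],
) -> None:
    """Fire an async request and dispatch to a success or failure callback."""
    try:
        response = await call_api(payload)
    except Exception as exc:
        on_failure(exc)
    else:
        on_success(response)
```

Many such calls can then run concurrently with `asyncio.gather` instead of blocking one by one.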

Typical integration use cases include:

  • Uploading recipient data to Agillic from a CDP, CRM, POS, websites, etc.
  • Triggering a send-out of receipts and other transactional communication.
  • Exporting activity data for further analytics, scoring, or other use.
  • Using Agillic for logins, tickets, or voucher authentication.

Avoid the following patterns:

  • Data requests without retry, fallback, and failover handling.
  • API calls without throttling and retry handling.
  • Exposing Agillic public APIs to indirect traffic from public websites or apps without taking the limit on concurrent calls into account and without designing for backpressure.