Rate Limiting Events
Event Ingress previously applied a simple API Rate Limiting strategy of accepting a fixed number of events within a time window, similar to other fine-grained APIs in Fenergo SaaS. This worked for legacy Business Processes (e.g. CreateEntity, UpdateEntity, etc.) because each event operated on a single entity at a time with a fixed number of internal API calls.
With our newer Business Processes (DataImport and VerifiedImport), which bring expanded capabilities and batch operations, we have introduced a new approach to rate limiting that evaluates the compute required by each incoming event. An event that updates a single entity shouldn't be weighted the same as one that updates over 250 entities.
Why are we making this change?
As the table below shows, under the old system of limiting by API calls it was possible to submit huge numbers of entity updates in a short amount of time using the new DataImport event. These volumes could allow misuse and negatively impact other tenants.
With the new point-based system we still allow double the number of entity updates that the old system permitted via V1 events, while better protecting the platform and ensuring tenants cannot be impacted by 'noisy neighbors'.
| Interval | API Limit (calls) | V1 Events (entity updates) | DataImport under API Limit (entity updates) | Complexity Limit (points) | DataImport under Complexity Limit (entity updates) |
|---|---|---|---|---|---|
| Second | 6 | 6 | 1,800 | – | – |
| Minute | 75 | 75 | 22,500 | 3,000 | 150 |
| Hour | 750 | 750 | 225,000 | 30,000 | 1,500 |
Complexity Points
Incoming events will now be assigned a point value based on the complexity of the event. Complexity Points for a given Business Process are calculated as follows:
| Business Process | Complexity Points Calculation |
|---|---|
| TransactionScreening | 1 point per event |
| CreateAlert | 1 point per event |
| MigrateEntity | 1 point per event |
| CreateEntity | 20 points per event |
| UpdateEntity | 20 points per event |
| VerifiedImport | 1 point per event + 2 points per entity |
| DataImport | 8 points for each Journey to be created + 12 points for each Blocking Task to be completed (i.e. EventConfig.ImportBehaviors.SkipCompleteTask = False) |
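To illustrate the table above, the sketch below estimates an event's complexity cost before submission. It is a client-side approximation only, assuming you know the counts of entities, Journeys, and Blocking Tasks in your payload; the function name and parameters are illustrative and not part of the Fenergo API.

```python
# Client-side sketch: estimate an event's complexity cost from the table above.
# The function and its parameters are illustrative, not part of the Fenergo API.

FLAT_COST = {
    "TransactionScreening": 1,
    "CreateAlert": 1,
    "MigrateEntity": 1,
    "CreateEntity": 20,
    "UpdateEntity": 20,
}

def estimate_complexity_points(business_process: str,
                               entity_count: int = 0,
                               journeys_to_create: int = 0,
                               blocking_tasks_to_complete: int = 0) -> int:
    """Return the estimated complexity points for a single event."""
    if business_process in FLAT_COST:
        return FLAT_COST[business_process]
    if business_process == "VerifiedImport":
        # 1 point per event plus 2 points per entity in the batch.
        return 1 + 2 * entity_count
    if business_process == "DataImport":
        # 8 points per Journey created plus 12 points per Blocking Task
        # completed (i.e. EventConfig.ImportBehaviors.SkipCompleteTask = False).
        return 8 * journeys_to_create + 12 * blocking_tasks_to_complete
    raise ValueError(f"Unknown Business Process: {business_process}")

# A DataImport that creates 100 Journeys and completes 100 Blocking Tasks:
print(estimate_complexity_points(
    "DataImport", journeys_to_create=100, blocking_tasks_to_complete=100))  # 2000
```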
Complexity Limits
Here are the updated tenant limits (Limit Buckets) using the new complexity points system.
| Tenant Type | Minute | Hour |
|---|---|---|
| SDLC | 3,000 | 30,000 |
| Production | 3,000 | 30,000 |
For clients that exclusively use the newer Business Processes, we can remove the per-minute limits.
As with our rate limits, please reach out to Fenergo Support to update these limits as required for your specific use cases.
As each event is received, its calculated points are deducted from each tenant Limit Bucket (per-minute and per-hour). As long as at least 1 point remains in a limit bucket, the event will be accepted, regardless of how many points the incoming event requires.
Consider a scenario where a client sends two DataImport events, each updating 100 Entities. The resulting complexity cost of each event is 2,000 points.
Complexity Points for a DataImport event (one Journey and one Blocking Task per entity) =
[# of Journeys required] * ( [points per Journey created] + [points per Blocking Task completed] )
100 * ( 8 + 12 ) = 2,000 points
Assume for this example there are no currently open limit windows.
1. As the first event is received, we verify there are available points in the Limit Buckets.
3,000 [per-minute Limit] - 2,000 [Event #1 Points] = 1,000 points remaining for a new minute window set to expire 60 seconds from now.
30,000 [per-hour Limit] - 2,000 [Event #1 Points] = 28,000 points remaining for a new hourly window set to expire 60 minutes from now.
2. Assume the second event is sent back-to-back and will be received within the same minute window.
1,000 [per-minute points remaining] - 2,000 [Event #2 Points] = -1,000 points remaining for the current minute window.
28,000 [per-hour points remaining] - 2,000 [Event #2 Points] = 26,000 points remaining for the current hourly window.
At this point both events have been accepted, but there is now a negative balance of Complexity Points in the per-minute limit bucket, meaning the limit has been exceeded. If a third event is received, one of the following outcomes can occur (see the sketch after this list):
- Event received within the same window: because the per-minute limit has been exceeded, the client receives an HTTP 429 status code response indicating that requests are being throttled.
- Event received after the window has expired: the per-minute bucket resets to the default (3,000) as the incoming event is evaluated, and a new per-minute window begins at the current timestamp. As both the per-minute and per-hour buckets now have available points, the event is accepted.
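The walkthrough above can be modelled with a simple client-side simulation. This is a minimal sketch assuming the default SDLC/Production limits (3,000 per minute, 30,000 per hour); it is a way to reason about the limits, not the server implementation.

```python
# Minimal sketch of the Limit Bucket behavior described above: an event is
# accepted while at least 1 point remains in every bucket, and a bucket
# resets when its window expires. A model only, not the server implementation.
import time

class LimitBucket:
    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window_seconds = window_seconds
        self.remaining = limit
        self.window_expires_at = None  # no open window yet

    def _maybe_reset(self, now: float) -> None:
        # Start a new window if none is open or the current one has expired.
        if self.window_expires_at is None or now >= self.window_expires_at:
            self.remaining = self.limit
            self.window_expires_at = now + self.window_seconds

    def has_capacity(self, now: float) -> bool:
        self._maybe_reset(now)
        return self.remaining >= 1

    def consume(self, points: int) -> None:
        # The balance may go negative; that simply blocks later events.
        self.remaining -= points

minute = LimitBucket(limit=3_000, window_seconds=60)
hour = LimitBucket(limit=30_000, window_seconds=3_600)

def submit(points: int) -> str:
    now = time.monotonic()
    # Every bucket must have at least 1 point for the event to be accepted.
    if all(bucket.has_capacity(now) for bucket in (minute, hour)):
        for bucket in (minute, hour):
            bucket.consume(points)
        return "accepted"
    return "429 Too Many Requests"

print(submit(2_000))  # Event #1: accepted (minute: 1,000 left, hour: 28,000 left)
print(submit(2_000))  # Event #2: accepted (minute: -1,000 left, hour: 26,000 left)
print(submit(2_000))  # Event #3 in the same minute window: 429
```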
Monitoring Limits
Clients can monitor the current limits, available points, and window expiry using the following headers returned in the POST Event response. The headers returned depend on whether the request was accepted or rejected due to rate limiting.
Scenario 1: Request Accepted
When your event is successfully processed, the following headers are returned:
| Header | Description |
|---|---|
| Remaining-Quota | Remaining points in the bucket with the longest time window |
| Retry-After-Seconds | Remaining seconds until the bucket with the longest time window expires |
| X-Rate-Limit-Remaining | Remaining points converted to a count of remaining entity updates. Included for backwards compatibility with the previous API rate-limiting strategy |
| X-Rate-Limit-Reset | Timestamp when the bucket expires (similar to Retry-After-Seconds). Included for backwards compatibility with the previous API rate-limiting strategy |
Example Response Headers (Request Accepted):
Remaining-Quota: 29940
Retry-After-Seconds: 3600
X-Rate-Limit-Remaining: 1497
X-Rate-Limit-Reset: 2025-08-07T04:21:01.3529412Z
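For example, a client might read these headers after each accepted request to pace further submissions. The snippet below is a minimal sketch using the Python requests library; the endpoint URL, payload, and token are placeholders, not documented values.

```python
# Illustrative only: read the rate-limit headers from an accepted POST Event
# response. The URL, payload, and token below are placeholders.
import requests

response = requests.post(
    "https://<event-ingress-host>/events",        # placeholder URL
    json={"businessProcess": "DataImport"},       # placeholder payload
    headers={"Authorization": "Bearer <token>"},  # placeholder token
)

if response.status_code != 429:
    remaining_points = int(response.headers["Remaining-Quota"])
    resets_in_seconds = int(response.headers["Retry-After-Seconds"])
    print(f"{remaining_points} points left in the longest bucket; "
          f"window resets in {resets_in_seconds}s")
```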
Scenario 2: Request Rejected (Too Many Requests)
When your event is rejected due to rate limiting (HTTP 429), the following headers are returned:
| Header | Description |
|---|---|
| Remaining-Quota | Will be 0, as there is no remaining quota |
| Retry-After-Seconds | Remaining seconds until the bucket whose limit was exceeded expires |
| X-Rate-Limit-Retry-After-Seconds | Same value as Retry-After-Seconds. Included for backwards compatibility with the previous API rate-limiting strategy |
Example Response Headers (Request Rejected):
Remaining-Quota: 0
Retry-After-Seconds: 1800
X-Rate-Limit-Retry-After-Seconds: 1800
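A client can use Retry-After-Seconds to back off until the exceeded bucket expires before resubmitting. The helper below is a minimal sketch, not a prescribed retry policy; the URL, payload, and headers passed in are placeholders.

```python
# Illustrative only: retry a rejected event once the exceeded bucket expires,
# using the Retry-After-Seconds header. URL, payload, and token are placeholders.
import time
import requests

def post_event_with_retry(url: str, payload: dict, headers: dict,
                          max_attempts: int = 3) -> requests.Response:
    for _ in range(max_attempts):
        response = requests.post(url, json=payload, headers=headers)
        if response.status_code != 429:
            return response
        # Wait until the limit bucket that was exceeded expires.
        wait_seconds = int(response.headers.get("Retry-After-Seconds", "60"))
        time.sleep(wait_seconds)
    return response
```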
API Rate Limiting headers (X-Rate-Limit-Remaining, X-Rate-Limit-Reset, X-Rate-Limit-Retry-After-Seconds) have been included for backwards compatibility but will be marked as deprecated.
Obsoletion date to be determined; clients will receive notification once a date is planned.