Rate Limiting

The Lightspeed Retail (X-Series) API is rate-limited, to prevent a single API consumer from adversely affecting others using the platform.

Current limits

We currently only rate limit on a per-retailer basis. So, if your API integration or Lightspeed Retail (X-Series) add-on works with more than one retailer, you’ll have a separate rate limit for each. Requests from all apps count against the same limit, so even if your application doesn't make a lot of requests, it may hit the limit because of another application's activity.
Currently, the per-retailer limit is calculated based on how many registers the retailer has. We might adjust these limits in the future. The current limit is:

300 x <number of registers> + 50

For a store with a single register, that results in a limit of 350 requests. The rate limiter is currently based on a 5-minute (300-second) window. This represents slightly more than one request per second, per register.
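The documented formula can be expressed as a small helper. This is a sketch of the published calculation only; the function name is our own:

```python
def request_limit(registers: int) -> int:
    """Per-retailer request limit for one 5-minute window.

    Implements the documented formula: 300 x <number of registers> + 50.
    """
    return 300 * registers + 50

# A single-register store gets 350 requests per window.
print(request_limit(1))
```

Remember that the limits may be adjusted in the future, so avoid hard-coding the result.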

What happens when the limit is reached?

If you hit the rate limit, you’ll be unable to make any more API requests for that retailer until the limiter resets.
Below is an example of a response to a rate-limited request:

    "error": "Too Many Requests",
    "message": "Rate limiting enforced"

The “retry-after” information is provided via the Retry-After HTTP header.

For the header, Lightspeed Retail (X-Series) uses an 'HTTP date' format (RFC1123), for example:

Retry-After: Wed, 15 Jul 2020 15:04:05 GMT
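Since the header carries an HTTP date rather than a number of seconds, you need to convert it into a delay relative to the current time. A minimal sketch using Python's standard library (the function name is our own):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def seconds_until_retry(retry_after: str) -> float:
    """Return how long to wait, given an RFC 1123 Retry-After header value.

    Clamped to zero so a reset time already in the past means "retry now".
    """
    reset_at = parsedate_to_datetime(retry_after)
    return max(0.0, (reset_at - datetime.now(timezone.utc)).total_seconds())
```

`email.utils.parsedate_to_datetime` handles the RFC 1123 format shown above and returns a timezone-aware datetime, so the subtraction is safe.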

Coping with Rate Limiting

The best way to deal with rate limiting is to process your Lightspeed Retail (X-Series) API requests in a queuing system.
You can then defer jobs that access the API, upon the first 429 response, until after the limit has reset.
This approach has the advantage of using the API as fast as is possible, then pausing.

A less desirable solution is to pause/sleep after each API call;
this adapts poorly to changes in the limits, and to two processes using the API at once.
For these reasons, we strongly recommend a queue; most languages have many “deferred job” systems available.
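The queue-based approach can be sketched as follows. This is illustrative only: `send_request(job)` stands in for your HTTP client and returns a hypothetical `(status_code, headers)` pair, and the queue here is a plain `deque` rather than a real deferred-job system:

```python
import time
from collections import deque
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def drain(queue: deque, send_request) -> None:
    """Process queued API jobs, deferring on a 429 response.

    `send_request(job)` is a hypothetical callable returning
    (status_code, headers); swap in your HTTP client and job type.
    """
    while queue:
        status, headers = send_request(queue[0])
        if status == 429:
            # Sleep until the HTTP date in Retry-After, then retry the
            # same job; clamp to zero if the reset time is already past.
            reset_at = parsedate_to_datetime(headers["Retry-After"])
            delay = (reset_at - datetime.now(timezone.utc)).total_seconds()
            time.sleep(max(0.0, delay))
            continue
        queue.popleft()  # success: drop the job and move on
```

This keeps the "as fast as possible, then pause" behaviour described above: jobs run back-to-back until the first 429, and the rate-limited job itself is retried once the limiter resets rather than being lost.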