Balancing Egress Data Chunk Size

Last updated: September 23, 2025

Flatfile's GET Records endpoint returns data in chunks of 10,000 records by default. This default limit also applies to convenience methods like sheet.allData(), so pagination is required to retrieve complete datasets larger than 10,000 records. The endpoint allows you to request chunks of up to 50,000 records at a time; however, larger chunk sizes can mean longer processing times and can even lead to timeouts, depending on the size of each record. When deciding how to break up your data while sending it to your backend, there are a few factors to consider.
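One way to reason about chunk sizes is to plan the page requests up front. The sketch below is a minimal helper, assuming pagination parameters shaped like Flatfile's pageNumber/pageSize query parameters; the function name planPages is illustrative, not part of any SDK.

```typescript
// Hypothetical helper: split a dataset of `total` records into page
// requests of at most `pageSize` records each. The pageNumber/pageSize
// names mirror the GET Records pagination parameters, but treat the
// exact shape as an assumption to adapt to your client.
interface PageRequest {
  pageNumber: number;
  pageSize: number;
}

function planPages(total: number, pageSize: number): PageRequest[] {
  const pages: PageRequest[] = [];
  const pageCount = Math.ceil(total / pageSize);
  for (let i = 0; i < pageCount; i++) {
    // Page numbers are 1-based; the last page may come back short.
    pages.push({ pageNumber: i + 1, pageSize });
  }
  return pages;
}
```

For example, planPages(25000, 10000) yields three page requests, the last of which will return only 5,000 records.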

For any dataset larger than 10,000 records, you must implement pagination to retrieve all the data. For larger datasets, say 50,000 records or more, breaking them into smaller chunks is also advisable to prevent long processing times and possible timeouts. A common approach is to experiment with different chunk sizes and measure performance to find the sweet spot for your system. You can leverage the progress option on jobs to give your end user updates on what's happening.
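Putting pagination and progress reporting together might look like the sketch below. Here fetchPage and reportProgress are stand-ins for your actual API call and job progress update (for example, via a job's progress option), not real SDK method names; the structure is the point.

```typescript
// Sketch: fetch all records in fixed-size chunks, reporting percent
// complete after each chunk. `fetchPage` and `reportProgress` are
// injected so the loop itself stays independent of any SDK.
type Rec = { id: number };

async function exportAll(
  totalRecords: number,
  pageSize: number,
  fetchPage: (pageNumber: number, pageSize: number) => Promise<Rec[]>,
  reportProgress: (percent: number) => void,
): Promise<Rec[]> {
  const all: Rec[] = [];
  const pageCount = Math.ceil(totalRecords / pageSize);
  for (let page = 1; page <= pageCount; page++) {
    const chunk = await fetchPage(page, pageSize);
    all.push(...chunk);
    // Surface progress to the end user after each chunk completes.
    reportProgress(Math.round((page / pageCount) * 100));
  }
  return all;
}
```

Because the chunk size is a parameter, you can time exportAll with different values (10k, 25k, 50k) against your backend and keep whichever balances throughput against timeout risk.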

Important: Methods like sheet.allData() return only the first 10,000 records by default. To retrieve complete datasets larger than that, you must implement proper pagination against the GET Records endpoint with appropriate offset and limit parameters.

Ultimately, this can be a balancing act of network efficiency, your backend processing capability, and the user experience. Once you've played around with a few options, you'll find the right combination of factors to ensure a smooth user experience and speedy exports!

For an example of how to leverage this, see our guide on batching records during data egress.