Balancing Egress Data Chunk Size
Written by Elisa Dinsmore

Flatfile's GET Records endpoint returns data in chunks of 10,000 records by default and lets you request up to 50,000 records at a time. However, larger chunk sizes can mean longer processing times and can even lead to timeouts, depending on the size of each record. When deciding how to break up your data while sending it to your backend, there are a few factors to weigh.
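
As a rough sketch, here's one way to page through a sheet's records with the `@flatfile/api` client. The `pageSize` and `pageNumber` parameters control how many records come back per request; `sheetId` is a placeholder, and the exact response shape may vary slightly by SDK version:

```typescript
import api from "@flatfile/api";

// Fetch all records from a sheet in fixed-size pages.
// The default pageSize of 10,000 matches the endpoint's default.
async function fetchAllRecords(sheetId: string, pageSize = 10_000) {
  const allRecords = [];
  let pageNumber = 1;

  while (true) {
    const response = await api.records.get(sheetId, { pageSize, pageNumber });
    const records = response.data.records ?? [];
    allRecords.push(...records);

    // A short page means we've reached the end of the sheet.
    if (records.length < pageSize) break;
    pageNumber++;
  }

  return allRecords;
}
```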

For larger datasets of 50,000 records or more, breaking the data into smaller chunks is advisable to prevent long processing times and possible timeouts. A common approach is to experiment with different chunk sizes and measure performance to find the sweet spot for your system. You can also leverage the progress option on jobs to keep your end user informed about what's happening, as in the sketch below.
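
Here is a minimal sketch of that pattern, assuming a hypothetical `sendToBackend` function that posts one chunk to your own service; the job's `progress` field (0-100) is what surfaces status to the end user:

```typescript
import api from "@flatfile/api";

// Hypothetical: posts one chunk of records to your own backend.
declare function sendToBackend(records: unknown[]): Promise<void>;

// Export a sheet in chunks, updating the job's progress as each
// chunk is sent so the end user can see what's happening.
async function exportInChunks(
  jobId: string,
  sheetId: string,
  totalRecords: number,
  chunkSize = 5_000
) {
  const totalPages = Math.ceil(totalRecords / chunkSize);

  for (let page = 1; page <= totalPages; page++) {
    const { data } = await api.records.get(sheetId, {
      pageSize: chunkSize,
      pageNumber: page,
    });

    await sendToBackend(data.records ?? []);

    // Report progress as a percentage of chunks sent.
    await api.jobs.update(jobId, {
      progress: Math.round((page / totalPages) * 100),
    });
  }
}
```

Measuring how long each iteration takes at a few different `chunkSize` values is a quick way to find where your backend starts to slow down.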

Ultimately, this is a balancing act between network efficiency, your backend's processing capability, and the user experience. Once you've tried a few options, you'll find the combination that ensures a smooth user experience and speedy exports!

For an example of how to leverage this, see our guide on batching records during data egress.
