What is Parallel Processing?
Written by Elisa Dinsmore

This article is for Flatfile's Portal 3.0 and Workspaces. If you'd like to check out the latest version of the Flatfile Data Exchange Platform, click here!

Flatfile has invested in a new data ingestion and processing pipeline for Portal 3.0 and Workspaces, designed to improve reliability, reduce bottlenecks, and increase processing speed. The new parallel processing architecture handles larger files more quickly and with fewer errors.

To accomplish this, we now process data in parallel. Instead of processing your upload row by row, we split the raw file into discrete chunks, process each chunk simultaneously, and then recombine the results.
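As a rough illustration, here is a minimal TypeScript sketch of chunked parallel processing in general. Every name in it (Row, CHUNK_SIZE, processChunk, processFile) is a hypothetical stand-in, not Flatfile's actual code:

```ts
type Row = Record<string, string>;

// Hypothetical chunk size; in practice the chunking depends on file size.
const CHUNK_SIZE = 1000;

function splitIntoChunks(rows: Row[], size: number): Row[][] {
  const chunks: Row[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    chunks.push(rows.slice(i, i + size));
  }
  return chunks;
}

// Stand-in for whatever per-chunk work a pipeline might do
// (parsing, validation, transformation, etc.).
async function processChunk(chunk: Row[]): Promise<Row[]> {
  return chunk.map((row) => ({ ...row }));
}

async function processFile(rows: Row[]): Promise<Row[]> {
  const chunks = splitIntoChunks(rows, CHUNK_SIZE);
  // Promise.all runs the chunks concurrently and preserves their
  // input order, so the results recombine deterministically.
  const processed = await Promise.all(chunks.map(processChunk));
  return processed.flat();
}
```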

The number of chunks your data is split into depends on the size of the file you upload. Because the chunks are processed at the same time, large uploads finish more quickly. And because each chunk is processed independently, a failed processing step is easier to retry, as sketched below: instead of restarting processing on the entire file, which could take minutes, we restart only the chunk that encountered a problem, which takes only seconds. The result is greater reliability and fewer failed uploads.
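Here is a sketch of that per-chunk retry idea, reusing the hypothetical processChunk helper from the sketch above. Again, these names are illustrative, not Flatfile's implementation:

```ts
// Retry a single chunk up to maxAttempts times. Chunks that already
// succeeded are untouched, so a retry costs seconds, not minutes.
async function processChunkWithRetry(
  chunk: Row[],
  maxAttempts = 3,
): Promise<Row[]> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await processChunk(chunk);
    } catch (err) {
      lastError = err; // remember the failure and try this chunk again
    }
  }
  throw lastError; // only surfaces if every attempt for this chunk failed
}
```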
