This article is for the latest version of the Flatfile Data Exchange Platform.
Flatfile's intuitive interface lets you upload and map data from a variety of sources, including spreadsheets and CSV files, in just a few clicks. It saves you time and reduces the manual work needed to get your data into a state the destination system will accept. No technical expertise is required.
The import flow has three main stages:
Adding data for import
Mapping the data headers against the headers of the output template
Data validation and correction
Adding data for import
The first step is adding the input data. Over the course of the flow this data will be transformed into the requested output format, meaning that data added at this step does not need to be prepared or cleaned prior to import. Bring your data as it exists!
Data can be added using the following methods:
Upload a file (accepted file formats are: .csv, .xlsx) by dropping it into the upload area
Add data manually
If the added file has multiple sheets, you will be asked to select the relevant sheet to import. If you need to import multiple sheets, you can come back to the file and select the next sheet to upload after the first one is done.
Once you upload the file, the importer starts the extraction process - you may briefly see a truck icon in the top right corner while it runs. Depending on the size of your file, processing may take a few seconds.
⚠️ I uploaded my file, but nothing happened
If the extraction process does not start after you upload your file and you are not taken to the next step of the flow, you may be using an unsupported file format - please make sure the file has a .csv or .xlsx extension.
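If you are comfortable with a little scripting, you can check a file's extension programmatically before uploading. This is a minimal sketch, not part of Flatfile itself, and it assumes the default .csv/.xlsx set described above - adjust the accepted extensions if your import accepts additional formats:

```python
from pathlib import Path

# Extensions the importer can extract, per the note above.
# Adjust this set if your import accepts other formats.
ACCEPTED_EXTENSIONS = {".csv", ".xlsx"}

def is_supported(filename: str) -> bool:
    """Return True if the file's extension is one the importer accepts."""
    # suffix keeps the leading dot; lower() makes the check case-insensitive
    return Path(filename).suffix.lower() in ACCEPTED_EXTENSIONS

print(is_supported("customers.CSV"))   # True - the check is case-insensitive
print(is_supported("customers.xls"))   # False - legacy Excel: re-save as .xlsx first
```

If the check fails for a legacy spreadsheet format (such as .xls), re-saving the file as .xlsx or .csv in your spreadsheet application is usually the quickest fix.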
⚠️ I uploaded my file, but got an error
If your file is corrupted, Flatfile will not be able to extract the information - you will see the “Extraction failed” tag in the top right corner, and a pop-up message will appear indicating exactly what needs to be fixed. After addressing the issue, save the file as a new copy and try uploading it again.
Mapping the data headers against the headers of the output template
The Mapping step is central to the Flatfile import flow: here you confirm how the columns from your incoming file align with the fields in the destination schema. Any data in columns mapped at this step will be migrated and transformed into the requested output format.
Mapping is also where Flatfile brings the most magic. Using mapping choices recorded from 1.8 billion (and counting!) rows processed in Flatfile, we’ve trained a machine learning model that works alongside a memory of your and your colleagues’ past selections to accurately predict over 90% of matching actions. When available, these predicted mappings are applied automatically at the mapping step, reducing the number of mapping decisions we ask you to make.
When we need your input, the corresponding entry in the destination field list will remain blank. Use the dropdown to select the relevant field from the destination field list. To help you map those fields, hover over an incoming field title to see a brief data preview for that field on the right. If you hover over a destination field title, a tooltip will appear if that field has a description.
Once you’ve finished mapping all the necessary fields, click the Continue button in the top right corner.
If there’s an enum/category destination field, you will be taken to a separate mapping screen to map the individual values for that enum field:
⚠️ Any incoming fields that are not mapped to a destination field will be excluded from the import and will not show in any subsequent steps.
Data validation and correction
In the final step of Flatfile’s import process, you will see your data translated into the required output format. This step allows you to review and validate the data as well as correct or provide additional values that are required before the transformation is complete.
You may notice some things have changed: the table headers will now show those of the destination fields and some of the data in the fields may have been changed as well. These changes are all made according to rules set up on the destination schema and help reduce the amount of work required for you to transition your data into the requested output format. To make manual changes in your data, simply double-click into a specific cell and type!
You may also see the fields highlighted in three different ways:
Underlined - this field has been automatically transformed into the required format. Hover over the field to see the original value
Yellow background - this field has a warning - it will still be accepted by the destination system, but you may need to review it first. Hover over the field to see the warning message
Red background - this field has an error that requires your manual change. Hover over the field to see the error message and make changes accordingly
This review table is designed to help you fix any errors on large chunks of data easily. Here are some tips to help you go through the review process even quicker:
Pinning columns. If your file has many columns in it, you can pin up to three columns to keep them “frozen” in place when you scroll through the entire sheet
Sorting. You can sort your dataset easily by clicking on the sort button:
Searching and filtering. The toolbar above the table has multiple options for filtering your data - you can look at all valid or all invalid records, or you can filter based on the specific error, by clicking on “Filter by error”. You can also search for specific records by clicking on the magnifying glass icon. Aside from traditional search, you can use Flatfile Query Language to craft more complex queries on your data - see more here.
Find and replace. You have the ability to find and replace specific values in bulk, as well as replace empty values within a specific column. You can access these additional field options by clicking on the three dots menu in the column header: