Headless File Feeds
Written by Elisa Dinsmore

This article is for the latest version of the Flatfile Data Exchange Platform.

Flatfile's Platform now enables you to set up headless file feeds that make the import process as quick and easy as possible. We have an overview of how this can be accomplished in our Developer Documentation, but here are some code snippets and additional tips and tricks to get you rolling!

Create a Space:

You can automate the creation of a new space to handle file processing once a user initiates an import or a file arrives in your system. This can be done with a request to the Create a Space endpoint, with workbook creation following as soon as the space has been created. We have a full code example of how you could accomplish this in our documentation.

Upload Files:

Flatfile's API allows you to upload a file to a space, bypassing the need to drag and drop or select specific files to be imported. If you already have those files hosted somewhere, they can be fed into the importer directly.

await api.files.upload(file, {
  spaceId,
  environmentId,
  mode: "import",
});

If left unspecified, the mode property defaults to "import", but you can also upload files to the "Available Downloads" tab with mode: "export".

Extract Data from Files:

CSVs uploaded to Flatfile will extract automatically, with no additional work needed on your side. However, if you're uploading other file types, you may need to run some custom extraction jobs. Our Plugins library has a number of extractors for different file types that can be leveraged, or you can build your own if you need something else! Once installed and deployed, the Flatfile extractor plugins will run automatically on the file:created event.

listener.use(xlsxExtractorPlugin());

Automate Mapping:

When working with mapping, you may need to experiment with different setups to figure out how best to automate mapping your incoming files, especially if you expect column headers that vary wildly from the field names in your Blueprint. We have a plugin that makes the mapping process easier and runs as soon as a file has been extracted, or you can modify this plugin if you need more customized logic.

listener.use(
  automap({
    accuracy: "confident",
    defaultTargetSheet: "Contact",
    matchFilename: /test\.csv$/g,
    onFailure: (event: FlatfileEvent) => {
      // send an SMS, an email, post to an endpoint, etc.
      console.error(
        `Please visit https://spaces.flatfile.com/space/${event.context.spaceId}/files?mode=import to manually import file.`,
      );
    },
  }),
);

You can also use the debug property to help understand where Flatfile isn't confident in its matches, and further optimize your Blueprint or your incoming files to ensure the mapping runs smoothly.

Automate Validation and Transformation:

If you have common validations that will clean up all or most of the data on the incoming file, those can be configured in your listener code, running on the commit:created event. Our Record Hook plugin ensures these run automatically as soon as the records are mapped into the sheet.
