Using a simple CSV file, the “Duplicate Check” module provides a way to check your entire dataset for duplicates.
We take care to maintain backward compatibility when extending the CSV duplicate-check interface. This means you can always use the latest version without additional effort when integrating it into your ERP system.
To guarantee that the individual duplicate records can be matched to your master data, you can specify up to two unique keys in the import file.
The default separator between the individual elements of the duplicate check is the ‘|’ (pipe) character; it can be changed via the settings. Bold field names are mandatory fields.
Please note that all fields must be present in the import file, even if you do not use key1 and key2.
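As a minimal sketch of building a valid import file, the following uses Python's standard `csv` module. The 14 column names and the ‘;’ delimiter are assumptions taken from the example header further below; unused fields (such as key1 and key2) are still written as empty columns, since every field must be present.

```python
import csv

# Assumed import columns of the duplicate check, taken from the example header.
COLUMNS = [
    "key1", "key2", "firstname", "lastname", "name1", "name2", "name3",
    "name4", "street", "number", "postcode", "town", "department", "country",
]

def write_import_file(path, records):
    """Write duplicate-check records to a CSV import file.

    Every column must be present in each row, so fields missing from a
    record (e.g. unused key1/key2) are written as empty strings.
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS, delimiter=";")
        writer.writeheader()
        for rec in records:
            writer.writerow({col: rec.get(col, "") for col in COLUMNS})

# Hypothetical example record: no unique keys used, keys stay empty.
write_import_file("duplicate_check.csv", [
    {"firstname": "Anna", "lastname": "Schmidt", "town": "Berlin"},
])
```

The resulting data row still contains all 14 columns; the unused key fields simply appear as empty values between the separators.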
Example in the form of a CSV file:
key1;key2;firstname;lastname;name1;name2;name3;name4;street;number;postcode;town;department;country;val_key1;val_key2;val_firstname;val_lastname;val_name1;val_name2;val_name3;val_name4;val_street;val_number;val_postcode;val_town;val_department;val_country;
… (more duplicate checks)
When creating the CSV import file, pay attention to the correct number of columns (14 columns): a mismatched column count is a common cause of errors during CSV import. Alternatively, you can use the XLSX or JSON import format to eliminate this source of errors.
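To catch the column-count problem described above before uploading, a file can be checked locally. This is a minimal sketch, assuming the ‘;’ delimiter from the example and 14 expected columns; it is not part of the module itself.

```python
import csv

EXPECTED_COLUMNS = 14  # assumed: one column per field in the import header

def validate_csv(path):
    """Return (line number, column count) pairs for every row whose
    column count does not match EXPECTED_COLUMNS."""
    bad_rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for lineno, row in enumerate(csv.reader(f, delimiter=";"), start=1):
            if len(row) != EXPECTED_COLUMNS:
                bad_rows.append((lineno, len(row)))
    return bad_rows
```

Running this against the import file before every upload gives an early, precise error message (line number and actual column count) instead of a failed import.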
The CSV export file of the duplicate check contains the transferred values as well as the cleaned values and the records marked as duplicates.
- cleaned data
- applied cleaners
- applied duplicates