During a data migration, dataload.py updates the following log files in real time:
dataload.log provides a running commentary on pretty much everything that happens during a data migration. For example:
The preceding log entries recap the migration of a single record, a record that contains fewer than 10 fields. As you can see, dataload.log is the very definition of “comprehensive,” and is obviously your go-to tool if you need to do some serious data migration debugging.
Of course, that also means that dataload.log has the potential to grow quite large, especially if you are migrating millions of records (something many organizations need to do). Make sure that you have ample disk space available before you begin your data import.
And just how much disk space is “ample” disk space? That’s a difficult question to answer: it depends on the number of records you need to migrate, the number of fields you need to migrate, the number of data transforms you need to perform, etc. As a general rule, you should take a look at the size of your datafile, multiply that value by 3, then make sure you have at least that much free disk space.
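The multiply-by-3 rule of thumb can be checked programmatically before starting a migration. This is a sketch, not part of dataload.py itself; the function name `has_ample_disk_space` is hypothetical:

```python
import shutil

def has_ample_disk_space(datafile_size_bytes, path="."):
    """Return True if free disk space at `path` is at least
    3x the size of the datafile, per the rule of thumb above."""
    free = shutil.disk_usage(path).free
    return free >= datafile_size_bytes * 3
```

For example, before migrating you could pass in `os.path.getsize("yourdatafile.csv")` and abort the run if the function returns False.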
Incidentally, dataload.log is a “rotating log” with a maximum size of 500 MB. That means that, when the log file reaches 500 MB (524288000 bytes), two things will happen. First, the current log file will be closed and renamed dataload.log.1. Second, a new log file will be opened, picking up where the first file left off. This will continue as long as there is data that needs to be logged.
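The rotation behavior described above matches what Python's standard `RotatingFileHandler` does. The following is a minimal sketch of that mechanism; it is an illustration of the concept, not necessarily the handler dataload.py actually uses:

```python
import logging
import logging.handlers

# When dataload.log reaches maxBytes, the handler renames it to
# dataload.log.1 and opens a fresh dataload.log in its place.
handler = logging.handlers.RotatingFileHandler(
    "dataload.log",
    maxBytes=524288000,  # 500 MB
    backupCount=1,       # keep one rotated file: dataload.log.1
)
logger = logging.getLogger("dataload")
logger.addHandler(handler)
```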
This behavior, including the maximum file size, is configurable: you just need to edit the logging_config.json file. For example, here we’ve set the maximum file size to 200 MB (209715200 bytes):
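The exact schema of logging_config.json isn’t shown here; assuming it follows the standard Python `dictConfig`-style layout, a 200 MB limit might look something like this (the key names are an assumption):

```json
{
    "handlers": {
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "filename": "dataload.log",
            "maxBytes": 209715200,
            "backupCount": 1
        }
    }
}
```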
Another thing to keep in mind is that dataload.log is a cumulative (i.e., an “append”) file: it does not automatically reset each time you run dataload.py. That means that data from any previous data migrations will always be available; it also means that your log files have the ability to grow very large. If you want to reset the file, you can always open dataload.log in a text editor and then delete all the data.
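If the log file is too large to open comfortably in a text editor, you can also empty it from Python. Opening a file in write mode truncates it to zero bytes:

```python
# Reset dataload.log by truncating it to zero bytes.
# Opening in "w" mode and immediately closing empties the file.
open("dataload.log", "w").close()
```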
dataload_info.log provides an abridged listing of events as the script runs. This log is also configurable in logging_config.json:
success.csv keeps a running tally of the records that were successfully migrated to the user profile store. For example, this excerpt shows that information was migrated, and a new user profile created, for the user firstname.lastname@example.org. That user has also been assigned the UUID bacfa66d-16e2-492e-a019-da6220df2ae9:
Unlike dataload.log, success.csv is not cumulative: each time dataload.py is executed, a new success CSV file is created with a timestamp appended to the name. For example:
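A timestamped filename of this kind is typically built from the current date and time. The exact format dataload.py uses isn’t shown here, so the suffix below (e.g. success_20240101-120000.csv) is an assumption for illustration:

```python
from datetime import datetime

# Build a per-run CSV filename by appending a timestamp,
# e.g. success_20240101-120000.csv (format is an assumption).
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
filename = f"success_{stamp}.csv"
```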
fail.csv keeps a running tally of the records that were not successfully migrated to the user profile store. For example, here we see that the first record in the datafile (line 2; line 1 is the data header) failed because of a duplicate value. In this case, that means there’s already a user profile using the email address referenced in the datafile:
Like success.csv, fail.csv is not cumulative: each time dataload.py is executed, a new fail CSV file is created with a timestamp appended to the name:
In addition to logging all of its activity, dataload.py continually reports its progress on the command line, letting you know how many records have been processed, along with other useful real-time measurements. For example:
| Total Success (S) | Total Fails (F) | Success Rate (SR) | Average Records per Minute (AVG) |
| --- | --- | --- | --- |
| S:990 | F:30 | SR:97.06% | AVG:12090rec/m : 50% |
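The success rate shown in that progress line is simply successes divided by total records processed. A small sketch (the function name is hypothetical) reproducing the figure from the example row:

```python
def success_rate(successes, fails):
    """Return the success rate as a percentage, rounded to
    two decimal places, matching the SR field above."""
    total = successes + fails
    return round(successes / total * 100, 2)

# The numbers from the example row: 990 successes, 30 failures.
print(success_rate(990, 30))  # 97.06
```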