dataload.py includes a parameter (-x or --dry-run, depending on how you prefer to call it) that lets you run the script without actually copying data to the user profile store. But why would you even want to do that?
Well, to be honest, we’ve already answered that: the dry-run option lets you run dataload.py without actually copying data to the user profile store. That means you can test your datafile and your data transformations as many times as you want, without creating any actual user profiles. In turn, that means there’s nothing to roll back if anything goes wrong.
The official Akamai data migration process calls for doing a full-scale dry run before you try to actually migrate data. (By “full-scale,” we mean doing a dry run using the exact same data that will be used when doing the actual migration.) However, as you design your schema, and as you create your data transformations, you might want to do a few iterative dry runs, just to make sure everything is working. For example, you might start off with an extremely simple datafile:
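The original example datafile isn’t reproduced here, but a minimal sketch might look something like the following. (The attribute names email, givenName, and familyName are illustrative assumptions; use whatever attributes your schema actually defines.)

```csv
email,givenName,familyName
karim.nafir@mail.com,Karim,Nafir
```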
There’s not much to that datafile, but it’s worth doing a dry run (which takes just a few seconds to complete) just to make sure that the file is formatted correctly, that the attribute names are spelled (and letter-cased) correctly, and that your computer has everything it needs to run dataload.py. If everything succeeds, try adding a few more attributes and do another dry run. For example, here we’ve added an attribute – and an attribute value – that requires double quote marks:
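As a sketch of what that might look like (again, the attribute names are illustrative assumptions): because the displayName value below contains a comma, the value must be wrapped in double quote marks so it isn’t parsed as two separate fields.

```csv
email,givenName,familyName,displayName
karim.nafir@mail.com,Karim,Nafir,"Nafir, Karim"
```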
If that succeeds, your next step might be to add in an attribute that requires a data transform:
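For instance, suppose your datafile stores birthdays as US-style MM/DD/YYYY strings but the target attribute expects ISO 8601 dates. The following is only a sketch of the kind of transform function involved – the function name, the date formats, and the idea that this is how you’d wire it into dataload.py are all assumptions for illustration, not dataload.py’s actual API:

```python
from datetime import datetime

def transform_birthday(value):
    """Hypothetical data transform: convert a MM/DD/YYYY date string
    to ISO 8601 (YYYY-MM-DD). The real transform must produce whatever
    format your schema attribute actually expects."""
    return datetime.strptime(value, "%m/%d/%Y").strftime("%Y-%m-%d")
```

A transform like this lets you keep the source datafile as exported and reshape values only as they are loaded.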
Note that the execution time of a dry run may not match the execution time of an actual migration: actual migration times depend on overall API response time.