The optimal approach to managing import limits when migrating millions of rows into Dataverse is to use ExecuteMultipleRequest with batching. ExecuteMultipleRequest handles bulk data operations efficiently by grouping multiple requests into a single call, which both reduces the total number of round trips to the server and helps the migration stay within the service limits that Dataverse imposes.
Moreover, this method gives better control over the process: batches can be processed sequentially or in parallel while respecting constraints on the number of operations allowed in a given timeframe. By monitoring the results of each batched request, failures can be caught and addressed on a per-batch basis, so problematic records do not halt the entire migration.
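A minimal sketch of this pattern using the .NET SDK (Microsoft.Xrm.Sdk) is shown below. The helper name ImportBatch and the parameters service and recordsToImport are placeholders for illustration; a real migration would first split the source rows into batches (each ExecuteMultipleRequest accepts at most 1,000 child requests) and call a method like this per batch.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

static ExecuteMultipleResponse ImportBatch(
    IOrganizationService service, IEnumerable<Entity> recordsToImport)
{
    // Group the creates for one batch into a single ExecuteMultipleRequest.
    var batchRequest = new ExecuteMultipleRequest
    {
        Settings = new ExecuteMultipleSettings
        {
            ContinueOnError = true,   // keep processing the batch when one record fails
            ReturnResponses = true    // return a response item for every child request
        },
        Requests = new OrganizationRequestCollection()
    };

    foreach (Entity record in recordsToImport)
    {
        batchRequest.Requests.Add(new CreateRequest { Target = record });
    }

    // One call to the server covers the whole batch.
    var batchResponse = (ExecuteMultipleResponse)service.Execute(batchRequest);

    // Failed items carry a Fault plus the index of the originating request,
    // so only the problem rows need follow-up; the rest of the batch succeeds.
    foreach (ExecuteMultipleResponseItem item in batchResponse.Responses)
    {
        if (item.Fault != null)
        {
            Console.WriteLine($"Row {item.RequestIndex} failed: {item.Fault.Message}");
        }
    }

    return batchResponse;
}
```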
Implementing retry loops in the code, while important for error handling and reliability, does not directly address the management of import limits. Instead, it focuses on recovering from transient issues that may arise during the migration.
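For completeness, a retry loop of this kind might look like the sketch below. This is an illustrative helper, not part of the batching approach itself: ExecuteWithRetry and maxRetries are hypothetical names, and the backoff policy is an assumption; a production implementation would inspect the fault's error code and any Retry-After hint before retrying.

```csharp
using System;
using System.ServiceModel;
using System.Threading;
using Microsoft.Xrm.Sdk;

// Hypothetical helper: retries a single request with exponential backoff
// when the call fails with an organization service fault, on the assumption
// that the fault is transient (e.g., a service protection limit was hit).
static OrganizationResponse ExecuteWithRetry(
    IOrganizationService service, OrganizationRequest request, int maxRetries = 5)
{
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            return service.Execute(request);
        }
        catch (FaultException<OrganizationServiceFault>) when (attempt < maxRetries)
        {
            // Back off before the next attempt: 1s, 2s, 4s, ...
            Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
        }
    }
}
```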
Using a data migration tool can certainly facilitate the migration by providing an interface for the operations, but it does not inherently offer the same level of control over import limits as batching does. Likewise, raising a service request with Microsoft to increase limits may not be feasible or timely, especially when efficient methods such as batching are already available.