18 May 2017

ODBC data migration from Dynamics AX to 365

5 best practices for a Microsoft Dynamics AX to Microsoft Dynamics 365 data migration using ODBC

A turnkey migration from Microsoft Dynamics AX 2012 to Microsoft Dynamics 365 for Operations is not yet supported by Microsoft. With To-Increase Data Migration Studio, you can load data over a direct ODBC connection, which makes it easy to connect to the source database and load the data. Moving data from one system to another can still be challenging, so here are a few best practices for getting it right.

#1: Use the process type option to optimize the result

When loading data from SQL, you can read from multiple tables. If you are dealing with fewer than 10,000 records, you can simplify the mapping by setting the process type option to query instead of direct.

With the query process type, the result is first stored in a staging table, and an internal query is then used to read the data. The result of that query can be manipulated using the fetch mode and join mode options.

With more than 10,000 records, it is better to process direct. Internally, one transaction is used to store the incoming data; the transaction ID defines the read set, which is why we cannot commit multiple times. Direct means that Data Migration Studio reads record by record, so you cannot easily combine fields in the mapping.
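
Outside Data Migration Studio, the same trade-off shows up in any ODBC client. Below is a minimal pyodbc sketch of the two read styles; the DSN, credentials, and the map_row() placeholder are illustrative, not part of the product:

```python
import pyodbc

def map_row(row):
    # Placeholder for the per-record field mapping.
    pass

# Hypothetical DSN pointing at the AX 2012 source database.
conn = pyodbc.connect("DSN=AX2012Source;UID=migration;PWD=***")
cursor = conn.cursor()

# "Query" style: pull a small result set (< 10,000 rows) into memory in
# one go, so it can be reshaped and combined before it is mapped.
small_rows = cursor.execute("SELECT * FROM CUSTTABLE").fetchall()

# "Direct" style: stream a large result set record by record, so memory
# stays flat but each row has to be mapped on its own.
cursor.execute("SELECT * FROM INVENTTRANS")
for row in cursor:
    map_row(row)
```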

#2: Keep the query as simple as possible

When you process the query direct, you cannot easily combine multiple tables: the query can contain only one sub-table.

  • Header
    • Line

The line table can be used in an exists join to limit the number of records. For example, when you only need the products in the current company, you can query them like this (see the sketch after the list):

  • Product
    • Item
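
In plain SQL terms, such a header with an exists join on the line could look like the sketch below. The pyodbc call and the company ID are illustrative, and the join field is assumed from the AX 2012 schema, where InventTable.PRODUCT references EcoResProduct.RECID:

```python
import pyodbc

conn = pyodbc.connect("DSN=AX2012Source;UID=migration;PWD=***")

# Exists join: only return products that have an item in the current company.
sql = """
SELECT P.*
FROM ECORESPRODUCT P
WHERE EXISTS (
    SELECT 1
    FROM INVENTTABLE I
    WHERE I.PRODUCT = P.RECID
      AND I.DATAAREAID = ?  -- current company
)
"""
products = conn.cursor().execute(sql, "usmf").fetchall()
```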

Joins that read other tables on the same level are, however, often useful in the mapping. An example is to query the addresses together with the zip code, city, and country or region, which can simplify the record ID conversion in the target system (a sketch follows the list):

  • Address
    • Zip code
    • City
    • Country region
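
As a rough sketch of such a same-level join (the AX 2012 address table and field names here are assumptions for illustration, so check them against your own schema):

```python
import pyodbc

conn = pyodbc.connect("DSN=AX2012Source;UID=migration;PWD=***")

# One flat result set with the address and its related reference data,
# so the mapping does not have to look up each related record itself.
sql = """
SELECT A.*, Z.ZIPCODE, C.NAME AS CITY, R.COUNTRYREGIONID
FROM LOGISTICSPOSTALADDRESS A
LEFT JOIN LOGISTICSADDRESSZIPCODE Z ON Z.RECID = A.ZIPCODERECID
LEFT JOIN LOGISTICSADDRESSCITY C ON C.RECID = A.CITYRECID
LEFT JOIN LOGISTICSADDRESSCOUNTRYREGION R
       ON R.COUNTRYREGIONID = A.COUNTRYREGIONID
"""
addresses = conn.cursor().execute(sql).fetchall()
```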

#3: Use the external ID conversion

External ID conversion can help to limit the number of tables in the query. Take the address as an example: we can run separate imports for the zip code, city, and country or region, and during those imports generate the external ID conversions, which store the old record ID together with the new record ID.

During the import of the address itself, you can then apply the external ID mapping per field. That avoids the lookup of the related table and simplifies the mapping, because you don't need to find the correct zip code, city, or country or region yourself.
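
Conceptually, an external ID conversion is just a lookup table from old record IDs to new ones. A minimal sketch of the idea; the dictionaries, sample RecIds, and field names are illustrative, not Data Migration Studio's internal format:

```python
# Built while importing the reference data (zip codes, cities, ...):
# old AX 2012 RecId -> new Dynamics 365 RecId.
zipcode_ids = {5637144576: 68719476736, 5637144577: 68719476737}
city_ids = {5637144580: 68719476740}

def convert_address(row):
    """Apply the external ID conversion per field of one address row."""
    return {
        "street": row["street"],
        "zipcode": zipcode_ids[row["zipcode_recid"]],  # no table lookup needed
        "city": city_ids[row["city_recid"]],
    }
```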

#4: Apply paging for multiple reads from the database

In Dynamics 365 for Operations, data imported over ODBC travels via the Azure Service Bus, which sends the result set back and converts it to text internally. There is a limit on this, so a single read action cannot return more than 100,000 records in one go.

To solve this, we can set the paging quantity in the ODBC document. The paging size should be at most 100,000 records.
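
Data Migration Studio handles the paging for you once the quantity is set. As a rough illustration of what paged reads look like at the SQL level (the table, ordering column, and load_page() placeholder are hypothetical):

```python
import pyodbc

PAGE_SIZE = 100_000  # stay at or below the Service Bus limit

def load_page(rows):
    # Placeholder for handing one page of records to the import.
    print(f"loaded {len(rows)} records")

conn = pyodbc.connect("DSN=AX2012Source;UID=migration;PWD=***")
cursor = conn.cursor()

offset = 0
while True:
    # OFFSET/FETCH paging needs a stable ORDER BY (SQL Server 2012 and later).
    rows = cursor.execute(
        "SELECT * FROM INVENTTRANS ORDER BY RECID "
        "OFFSET ? ROWS FETCH NEXT ? ROWS ONLY",
        offset, PAGE_SIZE,
    ).fetchall()
    if not rows:
        break
    load_page(rows)
    offset += PAGE_SIZE
```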

Improve performance

Paging results in multiple threads that are executed and processed in parallel, which means you can calculate per table what is most efficient. For example, when you need to load a lot of data and process 40 records per second per thread on average, you can calculate the throughput time as shown below.

When you use 8 threads on one batch server, you will see a result like this:

Total number of records                  100,000,000
Average records per second, per thread            40
Number of threads                                  8
Records per second, total                        320
Max. throughput time in hours                     87

For a faster migration, add more capacity to your batch server (more CPUs) or add more batch servers. You need at least one CPU per thread and, of course, some headroom for other tasks. With 40 threads available (5 batch servers), you can do the full migration in 17 hours:

Total number of records                  100,000,000
Average records per second, per thread            40
Number of threads                                 40
Records per second, total                      1,600
Max. throughput time in hours                     17
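
The throughput time follows directly from these numbers; a quick check of both scenarios:

```python
def throughput_hours(total_records, records_per_sec_per_thread, threads):
    """Max. throughput time in hours for a fully parallel load."""
    return total_records / (records_per_sec_per_thread * threads) / 3600

print(throughput_hours(100_000_000, 40, 8))   # ~86.8 -> about 87 hours
print(throughput_hours(100_000_000, 40, 40))  # ~17.4 -> about 17 hours
```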


#5: Parallel processing using tasks

The easiest win is using the task setup for parallel processing. Tasks help to run the migration in the correct sequence: per task you define the list of messages that share the same dependency, and each task can depend on one or more other tasks, so that it waits until those tasks are finished.

When the tasks are set up, you can run the project. The project runs tasks with the same dependency in parallel where possible; each task runs its linked messages in parallel, and the messages can use the paging functionality.
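
The scheduling pattern behind this is a classic dependency graph. A small sketch of the idea; the task names and run_task() are hypothetical, not Data Migration Studio's API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(name):
    # Placeholder: in reality this runs the task's linked messages.
    print(f"running {name}")
    return name

# Task -> the tasks it depends on (a hypothetical migration order).
tasks = {
    "zipcodes": [],
    "cities": [],
    "addresses": ["zipcodes", "cities"],  # needs the reference data first
    "customers": ["addresses"],
}

done = set()
with ThreadPoolExecutor(max_workers=8) as pool:
    while len(done) < len(tasks):
        # All tasks whose dependencies are finished can start in parallel.
        ready = [t for t, deps in tasks.items()
                 if t not in done and all(d in done for d in deps)]
        done.update(pool.map(run_task, ready))
```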

TIP: when you need to define the dependencies per message, you can derive them from the XRefTableRelation table.


During the import, all incoming data is validated, which slows the system down. But when you apply these settings correctly, you get a scalable and predictable way to migrate data.



Pieter de Jong,
Technical Solution Architect
