Common Data Challenges

When bringing data into Common Data Service for Apps environments there are a number of common challenges users may face during the process, and this step will guide us through handling some of these.

Duplicate Data

By far the most common issue faced during data import is bringing in records that may already exist in the target system.

A screenshot of a Failures report

Common Data Service for Apps has a duplicate detection feature that relies on rules that combine multiple fields to define a unique value per record. This value is used to detect if the record being imported is a duplicate. This setting can be separately controlled for the data import process.
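The idea of a duplicate detection rule can be illustrated with a short sketch: several fields are combined into one matching value per record, and an incoming record is flagged as a duplicate when its matching value already exists. The field names below are illustrative, not a real rule definition.

```python
# Hypothetical sketch of a duplicate detection rule: combine several fields
# into one matching value per record, as CDS for Apps rules do conceptually.

def match_code(record, rule_fields):
    """Build the combined matching value for one record (case-insensitive)."""
    return tuple((record.get(f) or "").strip().lower() for f in rule_fields)

def find_duplicates(incoming, existing, rule_fields):
    """Return incoming records whose matching value already exists."""
    seen = {match_code(r, rule_fields) for r in existing}
    return [r for r in incoming if match_code(r, rule_fields) in seen]

existing = [{"firstname": "Ana", "lastname": "Silva", "emailaddress1": "ana@contoso.com"}]
incoming = [
    {"firstname": "ANA", "lastname": "Silva", "emailaddress1": "ana@contoso.com"},  # duplicate
    {"firstname": "Ben", "lastname": "Ko", "emailaddress1": "ben@contoso.com"},     # new record
]
dupes = find_duplicates(incoming, existing, ["firstname", "lastname", "emailaddress1"])
```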

A screenshot of the Duplicate Detection Settings screen, with the ‘During data import’ option selected

If duplicate detection is disabled during the import, it can still be used to identify potential duplicate records later. To deal with the duplicates, CDS for Apps includes merge functionality, where two selected records are merged and one of them is deactivated.

The merge process has an interactive user interface for the contact, account, and lead entities, though developers can use the CDS for Apps Web API to perform a merge on other entities, including custom entities.
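As a rough illustration, a merge request to the Web API is a POST carrying the surviving (target) record, the record to deactivate (subordinate), and the fields to copy across. The sketch below only builds a candidate JSON payload; the GUIDs are placeholders, and you should consult the Web API reference for the exact contract before relying on this shape.

```python
import json

# Sketch only: builds (but does not send) a payload in the general shape of
# the Web API "Merge" action for two contact records. Field names, GUIDs,
# and the payload structure are assumptions for illustration.

def build_merge_request(target_id, subordinate_id, update_content=None):
    return {
        "Target": {"contactid": target_id,
                   "@odata.type": "Microsoft.Dynamics.CRM.contact"},
        "Subordinate": {"contactid": subordinate_id,
                        "@odata.type": "Microsoft.Dynamics.CRM.contact"},
        # Fields from the subordinate to keep on the surviving record.
        "UpdateContent": update_content or {"@odata.type": "Microsoft.Dynamics.CRM.contact"},
        "PerformParentingChecks": False,
    }

payload = build_merge_request("11111111-0000-0000-0000-000000000001",
                              "11111111-0000-0000-0000-000000000002")
body = json.dumps(payload)
```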

The other approach to stopping a duplicate from entering the system is to create alternate keys for an entity. An alternate key must have a unique value across all records within a single entity.

For example, an account number can be designated as unique, or you may have a business decision that all contacts within your database must have a unique email address. Unlike duplicate detection rules, an alternate key cannot be circumvented; records with a duplicate key value simply will not be imported.
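The difference in behaviour can be sketched as follows: with an alternate key, a row whose key value already exists (either in the target system or earlier in the same file) is rejected rather than imported. Using `accountnumber` as the alternate key is an assumption for illustration.

```python
# Sketch simulating alternate-key enforcement during an import:
# duplicate key values are rejected, everything else is imported.
# "accountnumber" as the key field is an illustrative assumption.

def import_with_alternate_key(rows, existing_keys, key_field):
    imported, rejected = [], []
    seen = set(existing_keys)
    for row in rows:
        key = row[key_field]
        if key in seen:
            rejected.append(row)   # duplicate key: record is not imported
        else:
            seen.add(key)
            imported.append(row)
    return imported, rejected

rows = [{"accountnumber": "A-100", "name": "Contoso"},
        {"accountnumber": "A-200", "name": "Fabrikam"},
        {"accountnumber": "A-100", "name": "Contoso (dup)"}]
ok, bad = import_with_alternate_key(rows, existing_keys={"A-200"},
                                    key_field="accountnumber")
```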


Related Data

Imported records are often related to other entities in the system. For example, contacts may have a parent account specified in the import data. The approach to handling these dependencies depends on how complex the relationships are.

In straightforward cases where one entity has a lookup to another (for example, a contact has a parent account), the Data Import Wizard provides the ability to import a .zip file containing data for both entities, and the import process will handle the relationship.

More complex scenarios may include circular dependencies, for example, when a child contact is also used as the primary contact for the parent account.

Another example is a self-referential relationship, where the entity is related to itself, e.g. an account can have sub-accounts. These more complex scenarios can be handled using:

  • The Configuration Migration tool, which is part of the CDS for Apps SDK.
  • Third-party data integration utilities.
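One common way such tools deal with a self-referential relationship is a two-pass import: create every record without its self-referencing lookup first, then patch the lookups once all IDs exist. The sketch below simulates that pattern locally; the fake ID generation and field names are illustrative, not CDS API calls.

```python
# Two-pass import sketch for a self-referential relationship
# (accounts with parent accounts). Purely local simulation.

def two_pass_import(rows):
    ids = {}
    created = []
    # Pass 1: create all records without their parent lookups.
    for row in rows:
        new_id = f"id-{len(ids) + 1}"   # stand-in for a server-assigned GUID
        ids[row["name"]] = new_id
        created.append({"id": new_id, "name": row["name"], "parent_id": None})
    # Pass 2: resolve parent lookups now that every record has an ID.
    for row, rec in zip(rows, created):
        if row.get("parent"):
            rec["parent_id"] = ids[row["parent"]]
    return created

rows = [{"name": "HQ", "parent": None},
        {"name": "EU Subsidiary", "parent": "HQ"}]
result = two_pass_import(rows)
```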

Required Data

Often, when importing data into the environment, the quality of the incoming data cannot be guaranteed. One of the most common challenges is missing values for fields that the business has configured as mandatory.

Import processes will not fail if a mandatory field is missing from the import data. To resolve missing values, you may want to consider cleansing your data prior to the import to ensure that the mandatory values are present; Power Query row filtering makes this a very straightforward task.
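The cleansing step is conceptually a row filter, like the one Power Query provides: split the incoming rows into those with every mandatory field populated and those with gaps. The field names below are illustrative.

```python
# Sketch of pre-import cleansing: filter out rows that are missing values
# for mandatory fields, analogous to a Power Query row filter.
# The required field names are illustrative assumptions.

REQUIRED = ["lastname", "emailaddress1"]

def split_on_required(rows, required):
    complete, incomplete = [], []
    for row in rows:
        if all((row.get(f) or "").strip() for f in required):
            complete.append(row)
        else:
            incomplete.append(row)   # held back for manual correction
    return complete, incomplete

rows = [{"lastname": "Silva", "emailaddress1": "ana@contoso.com"},
        {"lastname": "", "emailaddress1": "ben@contoso.com"}]
good, bad = split_on_required(rows, REQUIRED)
```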

Depending on business requirements, an alternative could be to let the data into your environment and then use the Advanced Find feature to report on records where the mandatory fields are missing, letting users handle them.

Data Sources

Comma-separated values (.csv) and Excel (.xlsx) files are among the most common data input formats and can be handled using the Data Import Wizard, the Configuration Migration tool, or Excel Online.

Often, data comes from other sources, for example, SharePoint lists or third-party databases. That's where Power Query can be used to connect directly to a variety of data sources and to extract, transform, and cleanse your data prior to the import.
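The kind of extract-and-transform step described above can be sketched in a few lines: read .csv input, trim stray whitespace, and normalise email casing before import. The column names and sample data are illustrative; Power Query performs the equivalent work without code.

```python
import csv
import io

# Sketch of an extract/transform/cleanse step prior to import:
# read .csv data, trim whitespace, normalise email casing.
# Column names and sample rows are illustrative assumptions.

raw = io.StringIO(
    "fullname,email\n"
    "  Ana Silva , ANA@CONTOSO.COM\n"
    "Ben Ko,ben@contoso.com\n"
)

cleaned = []
for row in csv.DictReader(raw):
    cleaned.append({
        "fullname": row["fullname"].strip(),
        "email": row["email"].strip().lower(),
    })
```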


Business Logic

As the complexity of your environment grows, entities may have real-time workflows, entity-scoped business rules, and plugins added. These components are typically used to execute business logic, augment the data, enforce data integrity rules, or communicate with other services, including third-party ones.

Often these operations have either a negligible or acceptable performance impact when individual records are updated via the user interface. However, when a large number of records are imported, the impact can become quite severe, introducing an unacceptable performance penalty or even failures.

It is strongly recommended to review the components registered on the entities being imported and, where applicable, take corrective action:

  • Convert real-time workflows into background workflows where possible.
  • Disable business rules, plugins, and real-time workflows for the duration of the import and re-enable them afterwards. You may need to work with developers to determine whether compensating processes, like a bulk execution of a workflow, need to be run.
  • Replace just-in-time validations with a batch process where applicable. For example, if you use a third-party web service to validate customers’ addresses, there is probably no pressing need to do that while customer information is being imported. A valid business decision could be to run a recurring batch job that validates the addresses in bulk.
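The last recommendation can be sketched as a recurring job that sweeps up records whose addresses have not yet been checked and validates them in one pass. `validate_address` below is a hypothetical stand-in for a third-party service call, and the validation rule is deliberately trivial.

```python
# Sketch of replacing just-in-time validation with a recurring batch job:
# collect records with unvalidated addresses and validate them in one pass.
# validate_address is a hypothetical stand-in for a third-party service.

def validate_address(address):
    # Placeholder rule: a "valid" address is non-empty and contains a comma.
    return bool(address) and "," in address

def run_validation_batch(records):
    for rec in records:
        if rec.get("address_validated") is None:   # not yet checked
            rec["address_validated"] = validate_address(rec.get("address1", ""))
    return records

records = [{"address1": "1 Main St, Springfield", "address_validated": None},
           {"address1": "unknown", "address_validated": None}]
run_validation_batch(records)
```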
This article is from the free online course Dynamics 365: Implementing Power Platform Integrations, created by FutureLearn.
