Dynamics 365 FastTrack Blog
Leveraging CI-J interaction data without Fabric
Peter Krause
Introduction
Integrating Customer Insights – Journeys with Microsoft Fabric is the recommended and most seamless way to access and work with interaction data. Fabric abstracts the complexity of the underlying data format, allowing you to focus on building insights and reports quickly and efficiently.
However, there are situations where customers cannot use Fabric and instead wish to use a third-party system to store and process their data. In this case, there is another option: the interaction data stored in the Managed data lake can be accessed and downloaded by leveraging Power Platform capabilities.
Please note: The method described in this article provides direct access to raw interaction data, which follows a specific structure and has unique characteristics. This solution is currently in preview and may evolve over time. Let's explore how this data is stored and how it can be accessed.
How interaction data is stored
A Power Platform environment that has Customer Insights – Journeys provisioned also contains a set of managed data lakes within Dataverse. Out of the box, there is no access to those storages. However, in the case of Customer Insights – Journeys interaction data, access from the outside is required for customers to create their own reports. Previously, this access required Microsoft Fabric, which could be used to reach the corresponding data via a shortcut. There is now an additional way to work with the CI-J interaction data: accessing the corresponding data lakes directly.
The interaction data is stored in the Delta Lake format. This is an open-source project that enables building a Lakehouse architecture on top of data lakes. Although it is not human-readable, it has many advantages, such as ACID transactions, scalable metadata handling, streaming and batch support, time travel, and upserts and deletes.
Steps to work with the data
Power Platform environments now allow customers to access and download the CI-J analytics data. To incorporate this data in custom reporting or a dashboard, additional steps need to be considered. The whole process could look like this:
Depending on the tooling, steps 1 and 2 could be merged. Let's look at the details of each of those steps.
Get data
If you want to create an analytics dashboard for your Customer Insights – Journeys instance, additional data from other Dataverse tables is probably required. A Fabric-less approach for this data would be to leverage Azure Synapse Link for Dataverse. This allows you to continuously export data from Dataverse to Azure Synapse Analytics or to Azure Data Lake Storage Gen2.
For CI-J interaction data, the corresponding data lakes can be accessed directly. The data is downloaded in Delta Lake format.
Important: You need a user with an administrator role to access the storage and download the data.
An automated process could look like this:
1. Discover the MDL container location for the specific Dataverse organization through "/api/data/v9.2/RetrieveAnalyticsStoreDetails".
2. From the Dataverse datalakefolders table, fetch the folder named "Customer Insights Journeys" to determine the path in the MDL container through "/api/data/v9.2/datalakefolders?$filter=name%20eq%20%27Customer%20Insights%20Journeys%27&$select=path".
3. Call the RetrieveAnalyticsStoreAccess OData function to retrieve the URL and access token for direct access to the MDL. The function is called as "/api/data/v9.2/RetrieveAnalyticsStoreAccess(Url=@p1,ResourceType='Folder',Permissions='Read,List',SasTokenValidityInMinutes=30,UseBlobProxy=true)?@p1=%27{urlEncode(container+path)}%27" with the following parameters:
   Url: MDL container URL (from step 1) + path (from step 2)
   ResourceType: 'Folder'
   Permissions: 'Read,List'
   SasTokenValidityInMinutes: Validity of the access token, in minutes. A maximum of 60 minutes is allowed.
   UseBlobProxy: true
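As a minimal sketch of step 3, the request URL can be assembled with the Python standard library alone. The organization URL, container URL, and path below are hypothetical placeholders, not real values:

```python
from urllib.parse import quote

def build_access_request_url(org_url: str, container_url: str, path: str,
                             sas_minutes: int = 30) -> str:
    """Build the RetrieveAnalyticsStoreAccess OData function URL.

    container_url comes from RetrieveAnalyticsStoreDetails (step 1),
    path from the datalakefolders query (step 2).
    """
    # The @p1 value must be URL-encoded, including ':' and '/'.
    target = quote(container_url + path, safe="")
    return (
        f"{org_url}/api/data/v9.2/RetrieveAnalyticsStoreAccess("
        f"Url=@p1,ResourceType='Folder',Permissions='Read,List',"
        f"SasTokenValidityInMinutes={sas_minutes},UseBlobProxy=true)"
        f"?@p1=%27{target}%27"
    )

# Hypothetical example values:
url = build_access_request_url(
    "https://contoso.crm.dynamics.com",
    "https://aeth-example.dfs.core.windows.net/container",
    "/CustomerInsightsJourneys",
)
```

The actual call would then be an authenticated GET against this URL using your Dataverse access token.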
The URL returned can be targeted via the ADLS Gen2 REST APIs as follows.
Download the default.manifest.cdm.json manifest file first by calling "https://{endpointID}.environment.api.powerplatformusercontent.com/storage/aeth-{accountID}/{folderName}/default.manifest.cdm.json?{SAStoken}" (endpointID, accountID, folderName and SAStoken need to be preserved from the URL obtained in step 3).
The default.manifest.cdm.json contains the list of interaction (entity) types in the entities array. Interaction types are identified by the entityName property, while the location property (within the dataPartitions array) denotes their relative folder path in the file system:
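The manifest lookup described above can be sketched in plain Python. The manifest fragment here is illustrative only: the entity names and paths are made up, and only the entityName/dataPartitions/location shape from this article is assumed:

```python
import json

# Illustrative fragment of a default.manifest.cdm.json; names and
# partition paths are invented for the example.
manifest_text = """
{
  "entities": [
    {
      "entityName": "msdynmkt_emailopened",
      "dataPartitions": [
        {"location": "msdynmkt_emailopened/part-00000.parquet"}
      ]
    },
    {
      "entityName": "msdynmkt_emailclicked",
      "dataPartitions": [
        {"location": "msdynmkt_emailclicked/part-00000.parquet"}
      ]
    }
  ]
}
"""

def interaction_locations(manifest: dict) -> dict:
    """Map each interaction type to the folder of its first data partition."""
    result = {}
    for entity in manifest.get("entities", []):
        partitions = entity.get("dataPartitions", [])
        if partitions:
            # Keep only the folder part of the partition's relative path.
            folder = partitions[0]["location"].rsplit("/", 1)[0]
            result[entity["entityName"]] = folder
    return result

locations = interaction_locations(json.loads(manifest_text))
```

The resulting mapping tells you which directory value to use when listing files for a given interaction type.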
List files for the chosen interaction type using "https://{endpointID}.environment.api.powerplatformusercontent.com/storage/aeth-{accountID}?resource=filesystem&recursive=true&directory={folderName}/{interactionLocation}&(remainingSASparameters)".
Please note: the folderName found in the URL from step 3 and the location found in the manifest need to be concatenated to determine the relative path required for the directory parameter.
Please note: The path listing API uses continuation tokens for paging results. Observe the x-ms-continuation response header and, if present, call the list API repeatedly, passing the token from the x-ms-continuation response header to the continuation query string parameter of the subsequent request. Further details on this API can be found in Path - List - REST API (Azure Storage Services) | Microsoft Learn.
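The paging behavior described above can be sketched as follows. Here list_page stands in for the actual HTTP call to the path-listing API and is injected as a function, so only the loop logic is shown:

```python
def list_all_paths(list_page):
    """Collect all paths from a paged listing API.

    list_page(continuation) must return (paths, next_continuation),
    mirroring the x-ms-continuation response header / continuation
    query parameter of the ADLS Gen2 Path - List API.
    """
    paths, continuation = [], None
    while True:
        page, continuation = list_page(continuation)
        paths.extend(page)
        if not continuation:  # no x-ms-continuation header: last page
            break
    return paths

# Fake three-page listing for illustration:
pages = {
    None: (["a.parquet"], "t1"),
    "t1": (["b.parquet"], "t2"),
    "t2": (["_delta_log/0.json"], None),
}
all_paths = list_all_paths(lambda tok: pages[tok])
# all_paths == ["a.parquet", "b.parquet", "_delta_log/0.json"]
```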
Use the response from the path listing API to determine where individual files are located and whether they were modified since the last data download.
Download individual files using "https://{endpointID}.environment.api.powerplatformusercontent.com/storage/aeth-{accountID}/{fileName}?{SAStoken}".
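The steps above reuse the endpointID, accountID, folderName, and SAStoken pieces of the URL obtained in step 3. A minimal sketch of splitting such a URL and rebuilding a file download URL, assuming the URL shape shown in this article (the example URL is hypothetical):

```python
from urllib.parse import urlsplit

def parse_storage_url(url: str):
    """Split a storage URL of the assumed shape
    https://{endpointID}.environment.api.powerplatformusercontent.com
        /storage/aeth-{accountID}/{folderName}?{SAStoken}
    into (endpointID, accountID, folderName, SAStoken)."""
    parts = urlsplit(url)
    endpoint_id = parts.hostname.split(".")[0]
    segments = parts.path.strip("/").split("/")  # ["storage", "aeth-...", ...]
    account_id = segments[1].removeprefix("aeth-")
    folder_name = "/".join(segments[2:])
    return endpoint_id, account_id, folder_name, parts.query

def file_download_url(endpoint_id, account_id, file_name, sas_token):
    """Rebuild the per-file download URL from the preserved components."""
    return (f"https://{endpoint_id}.environment.api.powerplatformusercontent.com"
            f"/storage/aeth-{account_id}/{file_name}?{sas_token}")

# Hypothetical example URL:
ep, acc, folder, sas = parse_storage_url(
    "https://ep123.environment.api.powerplatformusercontent.com"
    "/storage/aeth-acc456/CIJFolder?sv=2021&sig=abc")
```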
The result of this process is a set of files provided as a download. Implementing such a program and running it on a local computer would result in a file structure like this:
Please note that the files received by this procedure are not human-readable. In the next section, we look at how to process this data.
Process data
Since the interaction data is downloaded in the Delta Lake format, you'll need to use a tool that accepts Delta Lake format as input for further processing. One such technology is Apache Spark, which can be used from within Python and/or Jupyter notebooks.
Important note: The data needs to be consumed as Delta Lake format. Consuming just the Parquet files while disregarding the Delta log files will result in consuming duplicated and deleted records.
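To illustrate why the Delta log matters, here is a simplified sketch of how the JSON-lines commit files in _delta_log determine which Parquet files are live. Real Delta tables have more action types and checkpoint files, so treat this as an illustration rather than a full log reader:

```python
import json

def live_files(commit_lines):
    """Replay Delta log "add"/"remove" actions to find live Parquet files.

    Reading every Parquet file directly would also pick up files that a
    later commit removed (e.g. after a compaction, update, or delete),
    yielding duplicated or deleted records.
    """
    files = set()
    for line in commit_lines:
        action = json.loads(line)
        if "add" in action:
            files.add(action["add"]["path"])
        elif "remove" in action:
            files.discard(action["remove"]["path"])
    return files

# Illustrative commits: part-1 is rewritten into part-2 by a later commit.
log = [
    '{"add": {"path": "part-1.parquet"}}',
    '{"remove": {"path": "part-1.parquet"}}',
    '{"add": {"path": "part-2.parquet"}}',
]
# live_files(log) == {"part-2.parquet"}
```

A Delta-aware reader such as Spark performs this reconciliation for you, which is why the article recommends consuming the data as Delta Lake rather than as raw Parquet.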
During the processing stage, the data is read, optionally transformed, and stored so that the visualization technology can make use of it, for example as a CSV file, database tables, or any other format.
Processing steps could look like:
Read the Delta files using PySpark (the Delta Lake connector must be available in the Spark session):

from pyspark.sql import SparkSession

# Start (or reuse) a Spark session for the job
spark = SparkSession.builder.appName("APP_Name").getOrCreate()

# Load the downloaded Delta table and inspect it
df = spark.read.format("delta").load("path_to_downloaded_data")
df.show()
Transform the data (filter, aggregate, or join with other datasets from Dataverse).
Store the processed data in a more accessible format, such as:
CSV (for simple reporting)
Database tables (SQL, Azure Synapse)
Parquet files (for efficient storage and querying)
By following these steps, you can efficiently process Customer Insights - Journeys interaction data for downstream analysis. Using Apache Spark and Delta Lake enables scalable data transformations, ensuring the data is structured in a meaningful way for reporting. Once processed, the refined dataset can be stored in a suitable format (CSV, SQL database, or Parquet) for easy access and visualization.
Next, we will explore how to take this processed data and build insightful Power BI dashboards to drive customer engagement analysis and business decision-making.
Visualize data
Once the data has been extracted and processed, the next step is to create visualizations that provide insights into interactions with customers. Power BI is the recommended tool for this, as it can natively connect to Delta Lake files via Azure Synapse Analytics or Azure Data Lake Storage Gen2.
Power BI Integration Steps:
Connect to processed data:
- If using Azure Data Lake, connect via Azure Data Lake Storage Gen2.
- If stored in Azure Synapse Analytics, use the Azure Synapse connector.
- If in CSV/Parquet format, import directly into Power BI.
Build visualizations by creating custom reports:
- Display engagement rates, email opens, and click-throughs.
- Blend CI-J interaction data with Dataverse data (e.g., customer profiles, sales data) to provide deeper insights.
Automate and refresh: schedule automatic refresh in Power BI Service to keep dashboards updated.
By leveraging Power BI, you can transform the processed Customer Insights – Journeys interaction data into actionable insights. Using pre-built Power BI connectors or custom queries, you can visualize key engagement metrics, customer behavior trends, and campaign performance. With interactive dashboards, teams can monitor real-time data, drill down into specific segments, and make data-driven decisions to optimize marketing efforts.
Retrieving only data updates
The Customer Insights – Journeys interaction data is, in general, slow moving, and the vast majority of the historical data remains unchanged day to day. To reduce the download time and the data volume required for daily updates, we recommend that customers observe the following best practices:
After the initial data download, persist the data in durable storage (i.e., not deleted after each download run).
For subsequent downloads, retrieve only files that have changed since the previous iteration. Use the modified time properties on individual files (both parquet and delta log) to determine whether the file needs to be downloaded.
Download only the interaction (entity) types required for reporting. Each interaction type is stored in a separate folder, so we recommend selecting only those relevant to your reporting needs. You can find descriptions of individual interaction types in Overview of CustomerInsightsJourneys - Common Data Model - Common Data Model | Microsoft Learn.
Please note that data downloads should be run no more than once per day to ensure optimal performance and efficient use of the system.
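The incremental-download recommendation above can be sketched like this. The entries mimic the last-modified metadata returned by the path listing API; the field names and sample values are assumptions for illustration:

```python
from datetime import datetime, timezone

def files_to_download(listing, last_run):
    """Return only files modified after the previous download run.

    listing: iterable of {"name": str, "lastModified": datetime} entries,
    as one might build from the path-listing response. Both Parquet data
    files and _delta_log files must be considered.
    """
    return [f["name"] for f in listing if f["lastModified"] > last_run]

last_run = datetime(2025, 5, 1, tzinfo=timezone.utc)
listing = [
    {"name": "part-1.parquet",
     "lastModified": datetime(2025, 4, 20, tzinfo=timezone.utc)},
    {"name": "part-2.parquet",
     "lastModified": datetime(2025, 5, 2, tzinfo=timezone.utc)},
    {"name": "_delta_log/00000000000000000042.json",
     "lastModified": datetime(2025, 5, 2, tzinfo=timezone.utc)},
]
changed = files_to_download(listing, last_run)
# changed == ["part-2.parquet", "_delta_log/00000000000000000042.json"]
```

Persisting the timestamp of each successful run and reusing it as last_run on the next run keeps daily downloads limited to the small set of changed files.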
To ensure service availability and maintain fair usage for all customers, Microsoft reserves the right to limit or suspend access in cases of sustained excessive usage or patterns inconsistent with the intended use of the feature.
Summary
While it's possible to process Customer Insights – Journeys interaction data without using Microsoft Fabric, doing so requires additional development effort to access and incorporate the data into your own reporting strategy, as outlined in this article.
For customers seeking a streamlined and scalable solution,
integration with Microsoft Fabric
remains the recommended and most future-proof approach
. Once set up, using Fabric provides built-in capabilities for managing, transforming, and visualizing data with minimal overhead.