Create an Adobe Analytics source connection in the UI

This tutorial provides steps for creating an Adobe Analytics source connection in the UI to bring Adobe Analytics report suite data into Adobe Experience Platform.

Getting started

This tutorial requires a working understanding of the following components of Experience Platform:

  • Experience Data Model (XDM) System: The standardized framework by which Experience Platform organizes customer experience data.
  • Real-Time Customer Profile: Provides a unified, real-time consumer profile based on aggregated data from multiple sources.
  • Sandboxes: Experience Platform provides virtual sandboxes which partition a single Platform instance into separate virtual environments to help develop and evolve digital experience applications.

Key terminology

It is important to understand the following key terms used throughout this document:

  • Standard attribute: Standard attributes are any attributes that are pre-defined by Adobe. They carry the same meaning for all customers and are available in the Analytics source data and Analytics schema field groups.
  • Custom attribute: Custom attributes are any attributes in the custom variable hierarchy in Analytics. Custom attributes are used within an Adobe Analytics implementation to capture specific information into a report suite, and their use can differ from report suite to report suite. Custom attributes include eVars, props, and lists. See the Analytics documentation on conversion variables for more information on eVars. (The sketch after this list shows where standard and custom attributes land in an XDM event.)
  • Any attribute in Custom field groups: Attributes that originate from field groups created by customers are all user-defined and are considered to be neither standard nor custom attributes.
  • Friendly names: Friendly names are human-provided labels for custom variables in an Analytics implementation. See the following Analytics documentation on conversion variables for more information on friendly names.
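To make these terms concrete, the following is a minimal sketch of how a single Analytics hit might look once expressed as an XDM ExperienceEvent. The field paths follow the Adobe Analytics ExperienceEvent Template field group; the values and the "_tenant" namespace are hypothetical placeholders.

```python
# A hypothetical XDM ExperienceEvent produced from an Analytics hit.
# Standard attributes live under well-known paths; custom attributes
# (eVars, props, lists) live under _experience.analytics.customDimensions.
example_event = {
    "timestamp": "2023-01-01T12:00:00Z",
    "web": {
        "webPageDetails": {"name": "home"}  # standard attribute
    },
    "_experience": {
        "analytics": {
            "customDimensions": {
                "eVars": {"eVar1": "loyalty-member"},  # custom attribute
                "props": {"prop1": "en-US"},           # custom attribute
            }
        }
    },
    # Attributes from customer-created field groups sit under the tenant
    # namespace; "_tenant" is a placeholder for your actual tenant ID.
    "_tenant": {"reservation": {"confirmationNumber": "ABC123"}},
}

if __name__ == "__main__":
    import json
    print(json.dumps(example_event, indent=2))
```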

Create a source connection with Adobe Analytics

NOTE
When you create an Analytics source dataflow in a production sandbox, two dataflows are created:
  • A dataflow that does a 13-month backfill of historical report suite data into data lake. This dataflow ends when the backfill is complete.
  • A dataflow that sends live data to data lake and to Real-Time Customer Profile. This dataflow runs continuously.

In the Platform UI, select Sources from the left navigation to access the Sources workspace. The Catalog screen displays a variety of sources that you can create an account with.

You can select the appropriate category from the catalog on the left-hand side of your screen. You can also use the search bar to narrow down the displayed sources.

Under the Adobe applications category, select Adobe Analytics and then select Add data.

catalog

Select data

IMPORTANT
The report suites listed on the screen may come from various regions. You are responsible for understanding the limitations and obligations of your data and how you use that data across regions in Adobe Experience Platform. Please ensure that this is permitted by your company.

The Analytics source add data step provides you with a list of Analytics report suite data to create a source connection with.

A report suite is a container of data that forms the basis of Analytics reporting. An organization can have many report suites, each containing different datasets.

You can ingest report suites from any region (United States, United Kingdom, or Singapore) as long as they are mapped to the same organization as the Experience Platform sandbox instance in which the source connection is being created. A report suite can be ingested using only a single active dataflow. A report suite that is not selectable has already been ingested, either in the sandbox that you are using or in a different sandbox.

Multiple inbound connections can be made to bring multiple report suites into the same sandbox. If the report suites have differing schemas for variables (such as eVars or events), they should be mapped to specific fields in custom field groups using Data Prep to avoid data conflicts, as sketched below. Report suites can only be added to a single sandbox.
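For illustration only, the sketch below shows roughly what Data Prep mappings might look like when two report suites capture the same value in different eVars and both are mapped to one shared field in a custom field group. The mapping entry format is simplified and the tenant field name is a hypothetical placeholder; refer to the Data Prep documentation for the exact payloads.

```python
# Hypothetical Data Prep mappings: two report suites capture the same value
# in different eVars, so each dataflow maps its own eVar to one shared field
# in a custom field group, avoiding conflicts when both land in one sandbox.
mapping_report_suite_a = {
    "sourceType": "ATTRIBUTE",
    "source": "_experience.analytics.customDimensions.eVars.eVar5",
    "destination": "_tenant.reservation.confirmationNumber",
}

mapping_report_suite_b = {
    "sourceType": "ATTRIBUTE",
    "source": "_experience.analytics.customDimensions.eVars.eVar12",
    "destination": "_tenant.reservation.confirmationNumber",
}

for name, m in [("report suite A", mapping_report_suite_a),
                ("report suite B", mapping_report_suite_b)]:
    print(f"{name}: {m['source']} -> {m['destination']}")
```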

NOTE
Data from multiple report suites can be enabled for Real-Time Customer Profile only if there are no data conflicts, such as two custom properties (eVars, lists, and props) that have different meanings.

To create an Analytics source connection, select a report suite and then select Next to proceed.

Mapping

IMPORTANT
Data Prep transformations may add latency to the overall dataflow. The additional latency added varies based on the complexity of the transformation logic.

Before you can map your Analytics data to a target XDM schema, you must first select whether you are using a default schema or a custom schema.

A default schema creates a new schema on your behalf, containing the Adobe Analytics ExperienceEvent Template field group. To use a default schema, select Default schema.

default-schema

With a custom schema, you can choose any available schema for your Analytics data, as long as that schema has the Adobe Analytics ExperienceEvent Template field group. To use a custom schema, select Custom schema.

custom-schema

The Mapping page provides an interface to map source fields to their appropriate target schema fields. From here, you can map custom variables to new schema field groups and apply calculations as supported by Data Prep. Select a target schema to start the mapping process.

TIP
Only schemas that have the Adobe Analytics ExperienceEvent Template field group are displayed in the schema selection menu. Other schemas are omitted. If there are no appropriate schemas available for your Report Suite data, then you must create a new schema. For detailed steps on creating schemas, see the guide on creating and editing schemas in the UI.

select-schema
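If you do need to create a schema, it is conceptually a composition of the XDM ExperienceEvent class, the required Adobe Analytics ExperienceEvent Template field group, and any custom field groups you define. The sketch below is illustrative only; apart from the ExperienceEvent class identifier, the field group references are placeholders rather than real Schema Registry IDs.

```python
# Conceptual composition of a custom schema usable with the Analytics source.
# Only the ExperienceEvent class $ref is a real XDM identifier; the field
# group references are placeholders to be looked up in the Schema Registry.
custom_schema = {
    "title": "Hotel Reservations",
    "allOf": [
        {"$ref": "https://ns.adobe.com/xdm/context/experienceevent"},        # class
        {"$ref": "<adobe-analytics-experienceevent-template-field-group>"},  # required
        {"$ref": "<tenant-reservation-details-field-group>"},                # custom
    ],
}

if __name__ == "__main__":
    for part in custom_schema["allOf"]:
        print(part["$ref"])
```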

The Map standard fields section displays panels for Standard mappings applied, Non matching standard mappings, and Custom mappings. See the following list for specific information regarding each category:

  • Standard mappings applied: This panel displays the total number of mapped attributes. Standard mappings refer to mapping sets between all attributes in the source Analytics data and the corresponding attributes in the Analytics field group. Standard mappings are pre-mapped and cannot be edited.
  • Non matching standard mappings: This panel displays the number of mapped attributes that contain friendly name conflicts. These conflicts appear when you re-use a schema that already has a populated set of field descriptors from a different report suite. You can proceed with your Analytics dataflow even with friendly name conflicts.
  • Custom mappings: This panel displays the number of mapped custom attributes, including eVars, props, and lists. Custom mappings refer to mapping sets between custom attributes in the source Analytics data and attributes in custom field groups included in the selected schema.

map-standard-fields

To preview the Analytics ExperienceEvent template schema field group, select View in the Standard mappings applied panel.

view

The Adobe Analytics ExperienceEvent Template Schema Field Group page provides you with an interface to use for inspecting the structure of your schema. When finished, select Close.

field-group-preview

Platform automatically detects your mapping sets for any friendly name conflicts. If there are no conflicts with your mapping sets, select Next to proceed.

mapping

TIP
If there are friendly name conflicts between your source Report Suite and your selected schema, you can still continue with your Analytics dataflow, acknowledging that the field descriptors will not be changed. Alternatively, you can opt to create a new schema with a blank set of descriptors.

Custom mappings

You can use Data Prep functions to add new custom mappings or calculated fields for custom attributes. To add custom mappings, select Custom.

custom

Depending on your needs, you can select either Add new mapping or Add calculated field and proceed to create custom mappings for your custom attributes. For comprehensive steps on how to use Data Prep functions, please read the Data Prep UI guide.
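As a rough illustration, the sketch below expresses two sample custom mappings as Data Prep-style mapping entries: a pass-through mapping (eVar5 to a semantic confirmation-number field) and a calculated field that applies the trim and lower functions to eVar2, mirroring the examples in the video transcript later on this page. The tenant field names and the sourceType labels are simplified assumptions; check the Data Prep documentation for the exact mapping set format.

```python
# Hypothetical custom mappings: one pass-through mapping and one calculated
# field using Data Prep functions (trim, lower). Field names under "_tenant"
# and the sourceType labels are illustrative placeholders.
custom_mappings = [
    {
        # Pass-through mapping: copy eVar5 into a semantic tenant field.
        "sourceType": "ATTRIBUTE",
        "source": "_experience.analytics.customDimensions.eVars.eVar5",
        "destination": "_tenant.reservation.confirmationNumber",
    },
    {
        # Calculated field: normalize eVar2 before mapping it.
        "sourceType": "EXPRESSION",
        "source": "trim(lower(_experience.analytics.customDimensions.eVars.eVar2))",
        "destination": "_tenant.transaction.transactionID",
    },
]

for m in custom_mappings:
    print(f"{m['source']} -> {m['destination']}")
```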

For further resources on understanding Data Prep, calculated fields, and mapping functions, see the Data Prep documentation.

Filtering for Real-Time Customer Profile

Once you have completed mappings for your Analytics report suite data, you can apply filtering rules and conditions to selectively include or exclude data from ingestion to the Real-Time Customer Profile. Support for filtering is only available for Analytics data, and data is only filtered prior to entering Profile; all data is ingested into the data lake.


Additional information on Data Prep and filtering Analytics data for Real-Time Customer Profile

  • You can use the filtering functionality for data that is going to Profile, but not for data going to data lake.
  • You can use filtering for live data, but you cannot filter backfill data.
    • The Analytics source does not backfill data into Profile.
  • If you utilize Data Prep configurations during the initial setup of an Analytics flow, those changes are applied to the automatic 13-month backfill as well.
    • However, this is not the case for filtering because filtering is reserved only for live data.
  • Data Prep is applied to both streaming and batch ingestion paths. If you modify an existing Data Prep configuration, those changes are then applied to new incoming data across both streaming and batch ingestion pathways.
    • However, any Data Prep configurations do not apply to data that has already been ingested into Experience Platform, regardless of whether it is streaming or batch data.
  • Standard attributes from Analytics are always mapped automatically. Therefore, you cannot apply transformations to standard attributes.
    • However, you can filter out standard attributes as long as they are not required in Identity Service or Profile.
  • You cannot use column-level filtering to filter required fields and identity fields.
  • While you can filter out secondary identities, specifically AAID and AACustomID, you cannot filter out ECID.
  • When a transformation error occurs, the corresponding column results in NULL.

Row-level filtering

IMPORTANT
Use row-level filtering to apply conditions and dictate which data to include for Profile ingestion. Use column-level filtering to select the columns of data that you want to exclude for Profile ingestion.

You can filter data for Profile ingestion at the row level and the column level. Row-level filtering allows you to define criteria such as string contains, equals, begins with, or ends with. You can also use row-level filtering to join conditions using AND and OR, and to negate conditions using NOT.

To filter your Analytics data at the row-level, select Row filter.

row-filter

Use the left rail to navigate through the schema hierarchy and select the schema attribute of your choice to drill down further into the schema.

left-rail

Once you have identified the attribute that you want to configure, select and drag the attribute from the left rail to the filtering panel.

filtering-panel

To configure different conditions, select equals and then select a condition from the dropdown window that appears.

The list of configurable conditions includes:

  • equals
  • does not equal
  • starts with
  • ends with
  • does not end with
  • contains
  • does not contain
  • exists
  • does not exist

conditions

Next, enter the values that you want to include based on the attribute that you selected. In the example below, Apple and Google are selected for ingestion as part of the Manufacturer attribute.

include-manufacturer

To further specify your filtering conditions, add another attribute from the schema and then add values based on that attribute. In the example below, the Model attribute is added and models such as the iPhone 13 and Google Pixel 6 are filtered for ingestion.

include-model

To add a new container, select the ellipses (...) on the top right of the filtering interface and then select Add container.

add-container

Once a new container is added, select Include and then select Exclude from the dropdown window that appears.

exclude

Next, complete the same process by dragging schema attributes and adding the corresponding values that you want to exclude. In the example below, the iPhone 12, iPhone 12 mini, and Google Pixel 5 are excluded based on the Model attribute, landscape is excluded based on the Screen orientation attribute, and model number A1633 is excluded based on the Model number attribute.

When finished, select Next.

exclude-examples
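To make the include and exclude semantics above concrete, here is a small conceptual sketch in Python. It is not how Data Prep evaluates filters internally; the attribute names are simplified, and the AND/OR combinations mirror the example rather than a fixed rule (you choose the operators yourself in the UI).

```python
# Conceptual sketch only: a row is sent to Profile when it matches the
# Include container and does not match the Exclude container.
def matches_include(row: dict) -> bool:
    # Manufacturer in {Apple, Google} AND Model in {iPhone 13, Google Pixel 6}
    return (row.get("manufacturer") in {"Apple", "Google"}
            and row.get("model") in {"iPhone 13", "Google Pixel 6"})

def matches_exclude(row: dict) -> bool:
    # Any excluded model, landscape orientation, or model number A1633
    return (row.get("model") in {"iPhone 12", "iPhone 12 mini", "Google Pixel 5"}
            or row.get("screenOrientation") == "landscape"
            or row.get("modelNumber") == "A1633")

def send_to_profile(row: dict) -> bool:
    return matches_include(row) and not matches_exclude(row)

rows = [
    {"manufacturer": "Apple", "model": "iPhone 13", "screenOrientation": "portrait"},
    {"manufacturer": "Apple", "model": "iPhone 12", "screenOrientation": "portrait"},
    {"manufacturer": "Google", "model": "Google Pixel 6", "screenOrientation": "landscape"},
]
for row in rows:
    print(row["model"], "->", "Profile" if send_to_profile(row) else "filtered out")
```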

Column-level filtering

Select Column filter from the header to apply column-level filtering.

column-filter

The page updates into an interactive schema tree, displaying your schema attributes at the column-level. From here, you can select the columns of data that you would like to exclude from Profile ingestion. Alternatively, you can expand a column and select specific attributes for exclusion.

By default, all Analytics data goes to Profile, and this process allows branches of XDM data to be excluded from Profile ingestion.

When finished, select Next.

columns-selected
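Conceptually, excluding a column means that branch of the XDM record is dropped from the copy sent to Profile, while the data lake copy keeps every column. A minimal sketch, with a simplified event and path:

```python
import copy

# Conceptual sketch: column-level filtering removes a selected branch of the
# XDM record from the copy sent to Profile; the data lake copy is untouched.
def exclude_branch(event: dict, path: list) -> dict:
    trimmed = copy.deepcopy(event)
    node = trimmed
    for key in path[:-1]:
        node = node.get(key, {})
        if not isinstance(node, dict):
            return trimmed
    node.pop(path[-1], None)
    return trimmed

event = {"device": {"model": "iPhone 13", "screenOrientation": "portrait"}}
profile_copy = exclude_branch(event, ["device", "screenOrientation"])
print(profile_copy)  # {'device': {'model': 'iPhone 13'}}
```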

Filter secondary identities

Use a column filter to exclude secondary identities from Profile ingestion. To filter secondary identities, select Column filter and then select _identities.

The filter only applies when an identity is marked as secondary. If you select identities to filter, but an event arrives with one of those identities marked as primary, then that identity is not filtered out.

secondary-identities
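A conceptual sketch of the rule above: a column filter on a secondary identity (for example, AAID) only removes that identity when it is not the primary identity on the event. The identity structure shown is simplified.

```python
# Conceptual sketch only: filter out selected secondary identity namespaces,
# but keep any identity that arrives marked as primary on the event.
def filter_secondary(identities: list, filtered_namespaces: set) -> list:
    return [
        ident for ident in identities
        if ident["namespace"] not in filtered_namespaces or ident.get("primary")
    ]

identities = [
    {"namespace": "ECID", "id": "123", "primary": True},
    {"namespace": "AAID", "id": "456", "primary": False},
]
print(filter_secondary(identities, {"AAID"}))
# -> only the ECID identity remains, because AAID is secondary here
```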

Provide dataflow details

The Dataflow detail step appears, where you must provide a name and an optional description for the dataflow. Select Next when finished.

dataflow-detail

Review

The Review step appears, allowing you to review your new Analytics dataflow before it is created. Details of the connection are grouped by categories, including:

  • Connection: Displays the source platform of the connection.
  • Data type: Displays the selected Report Suite and its corresponding Report Suite ID.

review

Monitor your dataflow

Once your dataflow has been created, select Dataflows in the sources catalog to monitor the activity and status of your data.

The sources catalog with the dataflows tab selected.

A list of existing Analytics dataflows in your organization appears. From here, select a target dataset to view its respective ingestion activity.

A list of existing Adobe Analytics dataflows in your organization.

The Dataset activity page provides information on the progress of data that is being sent from Analytics to Experience Platform. The interface displays metrics such as the number of ingested records, number of ingested batches, and number of failed batches.

The source instantiates two dataset flows. One flow represents backfill data and the other is for live data. Backfill data is not configured for ingestion into Real-Time Customer Profile but is sent to the data lake for analytical and data-science use-cases.
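You can also check the same dataflows outside the UI with the Flow Service API; below is a rough Python sketch. It assumes you already have an access token, API key, and organization and sandbox values from the Adobe Developer Console, and it only prints identifiers so that you can inspect the full payloads yourself.

```python
import requests

# Placeholders: supply credentials generated through the Adobe Developer Console.
BASE = "https://platform.adobe.io/data/foundation/flowservice"
HEADERS = {
    "Authorization": "Bearer {ACCESS_TOKEN}",
    "x-api-key": "{API_KEY}",
    "x-gw-ims-org-id": "{ORG_ID}",
    "x-sandbox-name": "{SANDBOX_NAME}",
}

# List dataflows in the sandbox; the Analytics source creates a backfill
# dataflow and a live dataflow, so both should appear here.
flows = requests.get(f"{BASE}/flows", headers=HEADERS).json()
for flow in flows.get("items", []):
    print(flow.get("id"), flow.get("name"))

# Fetch the runs of one dataflow to follow its ingestion activity.
FLOW_ID = "{FLOW_ID}"  # copy an ID from the listing above
runs = requests.get(f"{BASE}/runs", headers=HEADERS,
                    params={"property": f"flowId=={FLOW_ID}"}).json()
for run in runs.get("items", []):
    # Inspect the full run payload for timing, metrics, and error details.
    print(run.get("id"))
```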

For more information on backfill, live data, and their respective latencies, read the Analytics source overview.

The dataset activity page for a given target dataset for Adobe Analytics data.

View individual batches using the legacy monitoring interface

The dataset activity page does not display a list of individual batches. To view a list of individual batches, select a chart in the dataset activity interface.

The dataset activity page with a chart selected.

You are taken to the Monitoring dashboard. Next, select ONLY INGEST FAILURES: YES to clear the filter and view a list of individual batches.

The monitoring dashboard with the failure filter selected.

The interface updates to a list of individual batches, including information on their respective metrics.

The legacy monitoring page for batch data.

  • Batch ID: The ID of a given batch. This value is generated internally.
  • Dataset name: The name of the dataset used for Analytics data.
  • Source: The source of the ingested data.
  • Updated: The date of the most recent flow run iteration.
  • Records in dataset: The total count of records in the dataset. Note: This parameter occasionally displays a status of in-progress, which indicates that record ingestion is not yet complete.
  • New profile fragments: The total count of new profile fragments that were ingested.
  • Existing profile fragments: The total count of existing profile fragments.
  • Identity records stitched: The total count of identity records that were stitched together after ingestion.
  • Records in Profile: The total count of records that were ingested to Real-Time Customer Profile.
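The batch list above can also be retrieved programmatically. The sketch below uses the Catalog Service batches endpoint; the dataset ID and credentials are placeholders, and the fields printed are examples rather than a complete list of batch properties.

```python
import requests

# Placeholders: credentials from the Adobe Developer Console and the ID of
# the target dataset that the Analytics dataflow writes to.
HEADERS = {
    "Authorization": "Bearer {ACCESS_TOKEN}",
    "x-api-key": "{API_KEY}",
    "x-gw-ims-org-id": "{ORG_ID}",
    "x-sandbox-name": "{SANDBOX_NAME}",
}
DATASET_ID = "{DATASET_ID}"

# List recent batches written to the dataset; the response is keyed by batch ID.
resp = requests.get(
    "https://platform.adobe.io/data/foundation/catalog/batches",
    headers=HEADERS,
    params={"dataSet": DATASET_ID, "limit": 10},
)
for batch_id, batch in resp.json().items():
    print(batch_id, batch.get("status"), batch.get("created"))
```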

Next steps and additional resources

Once the connection is created, a dataflow is automatically created to contain the incoming data and populate a dataset with your selected schema. Furthermore, data back-filling occurs and ingests up to 13 months of historical data. When the initial ingestion completes, Analytics data can be used by downstream Platform services such as Real-Time Customer Profile and Segmentation Service. See the Real-Time Customer Profile and Segmentation Service documentation for more details.

The following video is intended to support your understanding of ingesting data using the Adobe Analytics Source connector:

WARNING
The Platform UI shown in the following video is out-of-date. Please refer to the documentation above for the latest UI screenshots and functionality.
Transcript
In this video, I’ll explain how users can ingest their data from Adobe Analytics into Adobe Experience Platform and enable the data for platform’s Real-Time Customer Profile. Data ingestion is a fundamental step to getting value from the Experience Platform such as building robust Real-Time Customer Profiles and using them to provide meaningful experiences. If you’re using Adobe Analytics, you already know it’s a powerful analytical engine that helps you learn more about your customers, see how your digital properties are performing, and identify areas of improvement. The data connector lets you easily tap into this data to use in the Real-Time Customer Profile in the least amount of time compared to other methods. These are the areas I’ll cover in this video; ingesting analytics data using a standard or custom schema, why you’d want to use a custom schema which requires a bit more setup, data prep functions available in the custom schema workflow, how to enable this data for Real-Time Customer Profile where you can monitor your analytics data flow to make sure there’s no errors or gaps with the data coming through. Now, the Analytics source connector isn’t the only way to get your analytics data into platform but it is the fastest method requiring the least amount of effort from you or your resources. You may have use cases that require edge or streaming segmentation based on analytics attributes. That topic is out of scope for this video. I’ll show you a mapping feature in the user interface as well as calculations you can apply to your data. This is all done within the platform user interface. Now let’s look at some architecture. Analytics collects data from various digital channels using multiple data centers around the world. Once the data is collected, you can use processing or VISTA rules to modify the incoming data for better reporting. Once this lightweight processing happens, it’s ingested into platform and is ready for consumption by the Real-Time Customer Profile and associated services like segmentation and activation within a couple of minutes. A caveat to this is if you use analytics as a reporting source for target activities also known as A4T. In that case, it will add another 15 minutes to data availability. The same data is micro batched into the data lake with a latency of about 15 to 30 minutes for consumption by things like query service and intelligent services as well as Customer Journey Analytics. To set up the Adobe Analytics source connector, just log into Experience Platform and navigate to Sources and open the Sources catalog. Under Adobe applications, look for Adobe Analytics. Select the Adobe Analytics source connector to add data. In the Analytics source add data step, you can choose the source data from a classification file or a report suite. For this video, I’ll select the report suite data option. Now select the report suite you want to ingest data from from the list below. I’m going to select Hotel Reservations. Then at the top, select Next. Now at this point, I have two choices; default schema and custom schema. First, what is schema? Well, it’s a set of rules that validates the structure and format of data and is used by platform to ensure the consistency and quality of data coming in. Now I’m going to show you the Hotel Reservation schema, just so you have some exposure to it. When I expand the analytics structure and then select custom dimensions, notice the objects that you see underneath. 
You should be familiar with the things like eVars, listProps, and props, just to name a few. What we’re looking at right now is the default analytics field group. Conceptually, it’s important to understand that selecting the analytics default schema in the source connector workflow will automatically map your report suite data to the default schema without any additional effort on your end. You don’t need to create a new schema before invoking the workflow using the default schema. Everything comes over as is. The descriptors used for the analytics variables in the report suite selected will be copied over to platform after initial setup. I’ll be showing you the more flexible option in this video, which is based on the custom schema. Before we get into that, I’ll explain the difference between the default schema workflow versus the custom schema workflow. Here you see at the top that everything comes over as is using the default analytics schema. However, you can add additional field groups to the schema that you define. Okay, so what’s a field group? Well, a field group represents a building block of a schema that organizes a group of fields into an object that can be reused in multiple schemas of the same class. Now at the bottom, there are data prep features you can leverage as you map your standard attributes to new custom attributes. When you use a custom schema in the analytics data connector workflow, you need to plan and create your schema in advance. Let me show you what I mean. Here, I have the schema that I showed you earlier. There are two field groups that are part of the schema; the Adobe Analytics ExperienceEvent template, which I showed you earlier, and there’s a second field group called Reservation Details. This is a user-defined field group added to the schema. Here, we see fields that have descriptive names like transaction, cancellation, and confirmationNumber. I’m going to be mapping some analytics variables to a couple of these new attributes in the source connector workflow. Here are the main use cases for our custom analytics schema. First, it’s more user friendly to leverage semantic or descriptive attribute names in things like segmentation service and Customer Journey Analytics instead of referencing native analytics eVars like eVar1, for example. Second, if you want a more standardized way of referencing the same data that might be captured differently across report suites, using custom attributes is the way to go as you see illustrated in this table. Third, you may have data in analytics that is stored in a pipe-delimited format or maybe you want to join two values together in a single attribute. The custom schema workflow will help you accomplish both of these goals. Last, let’s say you want more flexibility for defining identities for your analytics data beyond the ECID, which is the Experience Cloud ID Service. You can do that by setting up a new attribute in your custom field group and marking it as an identity field. Okay, let’s put it all together again before we go back to the UI. To recap, use the custom schema and associated workflow in the source connector setup when you have use cases to extend the schema beyond the default analytics field group if you don’t use the default setup which has fewer steps and a lower level of effort. Also, if you have multiple report suites to bring into platform, you’ll need to set up a data connector workflow for each one. Next, I’m going to select the Hotel Reservation schema from the list. 
The map standard field section gives you details about the default mapping that occurs from your report suite to the Analytics ExperienceEvent field group in the schema. If there were any descriptor conflicts when mapping your report suite to a preexisting schema, you’d see them here. I’ll show you that for demonstration purposes now. You can see the conflict in the descriptor names; custom insight on the left versus new insight on the right. At this point, you could either accept the mapping discrepancy or create new mappings using the Custom tab. I’m going to select Hotel Reservations. Let’s look at some mapper functions. They allow pass-through mapping as well as calculated field mapping, which performs data manipulation operations for things like string, numeric, daytime, array, and data types. First, let’s work with pass-through mapping. I’m going to select add new mapping under Custom. In the source field, which is coming from my report suite, I’m going to select eVar5. My report suite doesn’t have a label or descriptor for this, but let’s say it contains the confirmationNumber. I want to map this to the semantic field created in the Reservation Details custom field group that’s part of the schema. Next, let’s work with a calculated field. I’m going to select add calculated field. Once this opens, I get a friendly editor to work with. It’s going to contain functions, fields, and operators on the left, a text editor and preview section in the middle, and, as we’ll see, a contextual help reference on the right. I can search for a specific function using the search box, which I’ll do now. I’ll type in trim, and I’m going to add it to the editor by clicking on the plus sign. Now I’ll type in lower and add that to the editor as well. Now I’m going to select the field link to search for the analytics field I want to work with, and I’m going to search for eVar2. Let’s say eVar2 contains a transactionID. And my goal here is that I want to trim any spaces and ensure the value is all lowercase. Last, I’m going to make sure to rearrange the formula so that the syntax is correct. Now I’ll click on Save in the upper right corner. This adds the calculated field in the mapping screen. I’m going to pop open the schema from the target field section by clicking on this icon on the right, and I’m going to select transactionID in the transaction object. Next, I’ll click on Select at the bottom. After all of the mappings are addressed, click on Next in the upper right corner. On the data flow detail page, I’ll provide a name. I can also select any of these alerts about ingestion status for this data flow. Now I’m going to click on Next in the upper right corner. This is going to let me review all the information about this data flow and when it looks good, click on the Finish button. Okay, so I just created that data flow, and I want to show you some other validations and configurations you can do once your analytics data has started to ingest. I’ll use a different analytics data flow to show you this. Next, I’ll select the third data flow item from the top. And when I do this, some properties display in the right panel. One of these is the dataset name used for ingesting this report suite data. I’m going to click on it, and this opens the dataset activity page. Under the dataset activity, there’s a quick summary of ingested batches and failed batches during a specific time window. I can scroll to see ingested batch IDs. Now each batch represents actual data ingested. 
Once everything looks good, enable the dataset for Real-Time Customer Profile. To do this, there are two objects that require configuration for profile: the schema and the dataset. On the properties for the dataset, I’ll open the schema in a new tab. In the properties for the schema, I can see it’s already enabled for profile. It’s important to note that once a schema is enabled for profile, it can’t be disabled or deleted. Moreover, fields can’t be removed from the schema after this point, but you can add new fields. So keep this in mind if you’re working in your production environment, and if you’re unsure, just use the sandbox until you’re ready. Now, I’m going to go back to the dataset tab. The profile flag would need to be set here as well in order for this data to be sent to the profile store. Now, remember the profile store is what is leveraged by the Real-Time Customer Profile. You’d simply click on it to toggle the configuration and then select Enable. However, I’m going to cancel out of this since I’m just demonstrating. Okay, you should now know how to ingest data from Adobe Analytics into platform using the sources connector. You should also feel comfortable using a custom schema as well as the mapper and data prep features. Also, you should be able to enable this data for the Real-Time Customer Profile. Good luck.