
Partial batch ingestion

Partial batch ingestion is the ability to ingest data containing errors, up to a certain threshold. With this capability, users can successfully ingest all their correct data into Adobe Experience Platform while all their incorrect data is batched separately, along with details as to why it is invalid.
This document provides a tutorial for managing partial batch ingestion.
In addition, the appendix to this tutorial provides a reference for partial batch ingestion error types.

Getting started

This tutorial requires a working knowledge of the various Adobe Experience Platform services involved with partial batch ingestion. Before beginning this tutorial, please review the documentation for the following services:
  • Batch ingestion: The method by which Platform ingests and stores data from data files, such as CSV and Parquet.
  • Experience Data Model (XDM): The standardized framework by which Platform organizes customer experience data.
The following sections provide additional information that you will need to know in order to successfully make calls to Platform APIs.

Reading sample API calls

This guide provides example API calls to demonstrate how to format your requests. These include paths, required headers, and properly formatted request payloads. Sample JSON returned in API responses is also provided. For information on the conventions used in documentation for sample API calls, see the section on how to read example API calls in the Experience Platform troubleshooting guide.

Gather values for required headers

In order to make calls to Platform APIs, you must first complete the authentication tutorial. Completing the authentication tutorial provides the values for each of the required headers in all Experience Platform API calls, as shown below:
  • Authorization: Bearer {ACCESS_TOKEN}
  • x-api-key: {API_KEY}
  • x-gw-ims-org-id: {IMS_ORG}
All resources in Experience Platform are isolated to specific virtual sandboxes. All requests to Platform APIs require a header that specifies the name of the sandbox the operation will take place in:
  • x-sandbox-name: {SANDBOX_NAME}
For more information on sandboxes in Platform, see the sandbox overview documentation.

Enable a batch for partial batch ingestion in the API

This section describes how to enable a batch for partial batch ingestion using the API. For instructions on using the UI, see the enable a batch for partial batch ingestion in the UI section below.
To create a new batch with partial ingestion enabled, follow the steps in the batch ingestion developer guide. Once you reach the Create batch step, add the following fields to the request body:
{
    "enableErrorDiagnostics": true,
    "partialIngestionPercentage": 5
}

  • enableErrorDiagnostics: A flag that allows Platform to generate detailed error messages about your batch.
  • partialIngestionPercentage: The percentage of acceptable errors before the entire batch fails. In this example, up to 5% of the batch's records can be errors before the batch fails.
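The threshold behavior can be sketched as follows. This is an illustrative Python sketch, not Platform's internal implementation; the exact rounding and boundary behavior at the threshold is an assumption:

```python
def batch_fails(failed_records: int, input_records: int, threshold_percent: float = 5.0) -> bool:
    """Return True if the share of failed records exceeds the error threshold."""
    if input_records == 0:
        return False
    error_percent = (failed_records / input_records) * 100
    return error_percent > threshold_percent

# With a 5% threshold, 22 failures out of 519 records (~4.2%) would be accepted,
# while 30 failures (~5.8%) would fail the whole batch.
print(batch_fails(22, 519))
print(batch_fails(30, 519))
```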

Enable a batch for partial batch ingestion in the UI

This section describes enabling a batch for partial batch ingestion using the UI. If you have already enabled a batch for partial batch ingestion using the API, you can skip ahead to the next section.
To enable a batch for partial ingestion through the Platform UI, you can create a new batch through source connections, create a new batch in an existing dataset, or create a new batch through the "Map CSV to XDM schema" flow.

Create a new source connection

To create a new source connection, follow the listed steps in the Sources overview. Once you reach the Dataflow detail step, take note of the Partial ingestion and Error diagnostics fields.
The Partial ingestion toggle allows you to enable or disable the use of partial batch ingestion.
The Error diagnostics toggle only appears when the Partial ingestion toggle is off. This feature allows Platform to generate detailed error messages about your ingested batches. If the Partial ingestion toggle is turned on, enhanced error diagnostics are automatically enforced.
The Error threshold allows you to set the percentage of acceptable errors before the entire batch will fail. By default, this value is set to 5%.

Use an existing dataset

To use an existing dataset, start by selecting a dataset. The sidebar on the right populates with information about the dataset.
The Partial ingestion toggle allows you to enable or disable the use of partial batch ingestion.
The Error diagnostics toggle only appears when the Partial ingestion toggle is off. This feature allows Platform to generate detailed error messages about your ingested batches. If the Partial ingestion toggle is turned on, enhanced error diagnostics are automatically enforced.
The Error threshold allows you to set the percentage of acceptable errors before the entire batch will fail. By default, this value is set to 5%.
Now, you can upload data using the Add data button, and it will be ingested using partial ingestion.

Use the "Map CSV to XDM schema" flow

To use the "Map CSV to XDM schema" flow, follow the listed steps in the Map a CSV file tutorial. Once you reach the Add data step, take note of the Partial ingestion and Error diagnostics fields.
The Partial ingestion toggle allows you to enable or disable the use of partial batch ingestion.
The Error diagnostics toggle only appears when the Partial ingestion toggle is off. This feature allows Platform to generate detailed error messages about your ingested batches. If the Partial ingestion toggle is turned on, enhanced error diagnostics are automatically enforced.
The Error threshold allows you to set the percentage of acceptable errors before the entire batch will fail. By default, this value is set to 5%.

Downloading file-level metadata

Adobe Experience Platform allows users to download the metadata of the input files. The metadata will be retained within Platform for up to 30 days.

List input files

The following request returns a list of all the files provided in a finalized batch.
Request
curl -X GET https://platform.adobe.io/data/foundation/export/batches/{BATCH_ID}/meta?path=input_files \
  -H 'Authorization: Bearer {ACCESS_TOKEN}' \
  -H 'x-api-key: {API_KEY}' \
  -H 'x-gw-ims-org-id: {IMS_ORG}' \
  -H 'x-sandbox-name: {SANDBOX_NAME}'

Response
A successful response returns HTTP status 200 with a JSON object listing each metadata file and a link to where it is saved.
{
    "_page": {
        "count": 1,
        "limit": 100
    },
    "data": [
        {
            "_links": {
                "self": {
                    "href": "https://platform.adobe.io/data/foundation/export/batches/{BATCH_ID}/meta?path=input_files/fileMetaData1.json"
                }
            },
            "length": "1337",
            "name": "fileMetaData1.json"
        },
        {
            "_links": {
                "self": {
                    "href": "https://platform.adobe.io/data/foundation/export/batches/{BATCH_ID}/meta?path=input_files/fileMetaData2.json"
                }
            },
            "length": "1042",
            "name": "fileMetaData2.json"
        }
    ]
}
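A client consuming this listing only needs the `name` and `_links.self.href` fields to fetch each metadata file. The following is a minimal Python sketch that pulls those fields out of the sample response above; the response body is pasted inline (abridged to the fields used) rather than fetched over HTTP:

```python
import json

# Abridged sample response body from the input_files listing above.
response_body = """
{
    "_page": {"count": 1, "limit": 100},
    "data": [
        {"_links": {"self": {"href": "https://platform.adobe.io/data/foundation/export/batches/{BATCH_ID}/meta?path=input_files/fileMetaData1.json"}},
         "length": "1337", "name": "fileMetaData1.json"},
        {"_links": {"self": {"href": "https://platform.adobe.io/data/foundation/export/batches/{BATCH_ID}/meta?path=input_files/fileMetaData2.json"}},
         "length": "1042", "name": "fileMetaData2.json"}
    ]
}
"""

listing = json.loads(response_body)
names = [entry["name"] for entry in listing["data"]]
links = [entry["_links"]["self"]["href"] for entry in listing["data"]]
print(names)
```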

Retrieve input file metadata

Once you have retrieved a list of all the input files, you can retrieve the metadata of an individual file by using the following endpoint.
Request
curl -X GET https://platform.adobe.io/data/foundation/export/batches/{BATCH_ID}/meta?path=input_files/fileMetaData1.json \
  -H 'Authorization: Bearer {ACCESS_TOKEN}' \
  -H 'x-api-key: {API_KEY}' \
  -H 'x-gw-ims-org-id: {IMS_ORG}' \
  -H 'x-sandbox-name: {SANDBOX_NAME}'

Response
A successful response returns HTTP status 200 with the file's metadata, listing the path of each input file in newline-delimited JSON.
{"path": "F1.json"}
{"path": "etc/F2.json"}

Retrieve partial batch ingestion errors

If batches contain failures, you will need to retrieve error information about these failures so you can re-ingest the data.

Check status

To check the status of the ingested batch, you must supply the batch's ID in the path of a GET request.
API format
GET /catalog/batches/{BATCH_ID}

  • {BATCH_ID}: The id value of the batch whose status you want to check.
Request
curl -X GET https://platform.adobe.io/data/foundation/catalog/batches/{BATCH_ID} \
  -H 'Authorization: Bearer {ACCESS_TOKEN}' \
  -H 'x-api-key: {API_KEY}' \
  -H 'x-gw-ims-org-id: {IMS_ORG}' \
  -H 'x-sandbox-name: {SANDBOX_NAME}'

Response without errors
A successful response returns HTTP status 200 with detailed information about the batch's status.
{
    "af838510-2233-11ea-acf0-f3edfcded2d2": {
        "status": "success",
        "tags": {
            "acp_enableErrorDiagnostics": true,
            "acp_partialIngestionPercent": 5
        },
        "relatedObjects": [
            {
                "type": "dataSet",
                "id": "5deac2648a19d218a888d2b1"
            }
        ],
        "id": "af838510-2233-11ea-acf0-f3edfcded2d2",
        "externalId": "af838510-2233-11ea-acf0-f3edfcded2d2",
        "inputFormat": {
            "format": "parquet"
        },
        "imsOrg": "{IMS_ORG}",
        "started": 1576741718543,
        "metrics": {
            "inputByteSize": 568,
            "inputFileCount": 4,
            "inputRecordCount": 519,
            "outputRecordCount": 497,
            "failedRecordCount": 0
        },
        "completed": 1576741722026,
        "created": 1576741597205,
        "createdClient": "{API_KEY}",
        "createdUser": "{USER_ID}",
        "updatedUser": "{USER_ID}",
        "updated": 1576741722644,
        "version": "1.0.5"
    }    
}
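Note that the response is keyed by the batch ID rather than returned as a bare object. A minimal Python sketch for reading the status and metrics out of that shape, using an abridged copy of the sample response above as an inline string:

```python
import json

# Abridged sample response, keyed by batch ID as in the payload above.
response_body = """
{
    "af838510-2233-11ea-acf0-f3edfcded2d2": {
        "status": "success",
        "metrics": {
            "inputRecordCount": 519,
            "outputRecordCount": 497,
            "failedRecordCount": 0
        }
    }
}
"""

batches = json.loads(response_body)
for batch_id, batch in batches.items():
    metrics = batch["metrics"]
    print(batch_id, batch["status"], metrics["failedRecordCount"])
```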

  • metrics.failedRecordCount: The number of rows that could not be processed due to parsing, conversion, or validation errors. This value can be derived by subtracting the outputRecordCount from the inputRecordCount. It is generated on all batches, regardless of whether errorDiagnostics is enabled.
Response with errors
If the batch contains one or more errors and has error diagnostics enabled, the status will still be success, with more information about the errors provided both within the response and in a downloadable error file.
{
    "01E8043CY305K2MTV5ANH9G1GC": {
        "status": "success",
        "tags": {
            "acp_enableErrorDiagnostics": true,
            "acp_partialIngestionPercent": 5
        },
        "relatedObjects": [
            {
                "type": "dataSet",
                "id": "5deac2648a19d218a888d2b1"
            }
        ],
        "id": "01E8043CY305K2MTV5ANH9G1GC",
        "externalId": "01E8043CY305K2MTV5ANH9G1GC",
        "inputFormat": {
            "format": "parquet"
        },
        "imsOrg": "{IMS_ORG}",
        "started": 1576741718543,
        "metrics": {
            "inputByteSize": 568,
            "inputFileCount": 4,
            "inputRecordCount": 519,
            "outputRecordCount": 514,
            "failedRecordCount": 5
        },
        "completed": 1576741722026,
        "created": 1576741597205,
        "createdClient": "{API_KEY}",
        "createdUser": "{USER_ID}",
        "updatedUser": "{USER_ID}",
        "updated": 1576741722644,
        "version": "1.0.5",
        "errors": [
           {
             "code": "INGEST-1212-400",
             "description": "Encountered 5 errors in the data. Successfully ingested 514 rows. Please review the associated diagnostic files for more details."
           },
           {
             "code": "INGEST-1401-400",
             "description": "The row has corrupted data and cannot be read or parsed. Fix the corrupted data and try again.",
             "recordCount": 2
           },
           {
             "code": "INGEST-1555-400",
             "description": "A required field is either missing or has a value of null. Add the required field to the input row and try again.",
             "recordCount": 3
           }
        ]
    }
}

  • metrics.failedRecordCount: The number of rows that could not be processed due to parsing, conversion, or validation errors. This value can be derived by subtracting the outputRecordCount from the inputRecordCount. It is generated on all batches, regardless of whether errorDiagnostics is enabled.
  • errors.recordCount: The number of rows that failed for the specified error code. This value is only generated if errorDiagnostics is enabled.
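These metrics can be cross-checked directly: the failedRecordCount should equal the inputRecordCount minus the outputRecordCount, and the per-code recordCount values should sum to it. A sketch using the sample values from the response with errors above:

```python
# Sample values from the "Response with errors" payload above.
metrics = {"inputRecordCount": 519, "outputRecordCount": 514, "failedRecordCount": 5}
errors = [
    {"code": "INGEST-1401-400", "recordCount": 2},
    {"code": "INGEST-1555-400", "recordCount": 3},
]

derived_failures = metrics["inputRecordCount"] - metrics["outputRecordCount"]
# Summary entries (like INGEST-1212-400 above) carry no recordCount, so use .get().
per_code_total = sum(e.get("recordCount", 0) for e in errors)
print(derived_failures, per_code_total, metrics["failedRecordCount"])
```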
If error diagnostics are not available, the following error message will appear instead:
{
    "errors": [{
        "code": "INGEST-1211-400",
        "description": "Encountered errors while parsing, converting or otherwise validating the data. Please resend the data with error diagnostics enabled to collect additional information on failure types"
    }]
}

Next steps

This tutorial covered how to create or modify a dataset to enable partial batch ingestion. For more information on batch ingestion, please read the batch ingestion developer guide.

Partial batch ingestion error types

Partial batch ingestion can produce three different types of errors when ingesting data.

Unreadable files

If the ingested batch has unreadable files, the batch's errors will be attached to the batch itself. More information on retrieving the failed batch can be found in the retrieving failed batches guide.

Invalid schemas or headers

If the ingested batch has an invalid schema or invalid headers, the batch's errors will be attached to the batch itself. More information on retrieving the failed batch can be found in the retrieving failed batches guide.

Unparsable rows

If the ingested batch has unparsable rows, the batch's errors will be stored in a file that can be accessed by using the endpoint outlined below.
API format
GET /export/batches/{BATCH_ID}/meta?path=row_errors

  • {BATCH_ID}: The id value of the batch you are retrieving error information from.
Request
curl -X GET https://platform.adobe.io/data/foundation/export/batches/{BATCH_ID}/meta?path=row_errors \
  -H 'Authorization: Bearer {ACCESS_TOKEN}' \
  -H 'x-api-key: {API_KEY}' \
  -H 'x-gw-ims-org-id: {IMS_ORG}' \
  -H 'x-sandbox-name: {SANDBOX_NAME}'

Response
A successful response returns HTTP status 200 with details of the unparsable rows.
{
    "_corrupt_record": "{missingQuotes:"v1"}",
    "_errors": [{
         "code": "1401",
         "message": "Row is corrupted and cannot be read, please fix and resend."
    }],
    "_filename": "a1.json"
}
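When preparing data for re-ingestion, it can help to tally the row errors by code. The error file is also newline-delimited JSON, one object per failed row. A minimal sketch over sample rows modeled on the response above; the second record and its filename are invented for illustration:

```python
import json
from collections import Counter

# Two sample row-error records in JSON Lines form; the second is hypothetical.
raw_rows = '\n'.join([
    '{"_corrupt_record": "{missingQuotes:\\"v1\\"}", "_errors": [{"code": "1401", '
    '"message": "Row is corrupted and cannot be read, please fix and resend."}], "_filename": "a1.json"}',
    '{"_corrupt_record": "{missingBrace", "_errors": [{"code": "1401", '
    '"message": "Row is corrupted and cannot be read, please fix and resend."}], "_filename": "a2.json"}',
])

counts = Counter()
for line in raw_rows.splitlines():
    record = json.loads(line)
    for err in record["_errors"]:
        counts[err["code"]] += 1
print(dict(counts))
```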