Train Smart Tag service and tag your images

Organizations that deal with digital assets increasingly use taxonomy-controlled vocabulary in asset metadata. Such a vocabulary is essentially a list of keywords that employees, partners, and customers commonly use to refer to and search for digital assets. Tagging assets with a taxonomy-controlled vocabulary ensures that the assets can be easily identified and retrieved in tag-based searches.
Compared to natural language vocabularies, tagging based on a business taxonomy helps align assets with a company's business and ensures that the most relevant assets appear in searches. For example, a car manufacturer can tag car images with model names so that only relevant images are displayed when someone searches for images to design a promotion campaign.
In the background, the Smart Tags service uses the artificial intelligence framework of Adobe Sensei to train its image recognition algorithm on your tag structure and business taxonomy. This content intelligence is then used to apply relevant tags to a different set of assets.
To use smart tagging, complete the tasks described in the following sections.
Smart Tags are applicable only for Adobe Experience Manager Assets customers. The Smart Tags service is available for purchase as an add-on to Experience Manager.

Integrate Experience Manager with Adobe Developer Console

New Experience Manager Assets deployments are integrated with Adobe Developer Console by default, which makes configuring the smart tags functionality faster. On existing deployments, administrators can manually configure the smart tags integration.
You can integrate Adobe Experience Manager with the Smart Tags service using Adobe Developer Console. Use this configuration to access the Smart Tags service from within Experience Manager. See configure Experience Manager for smart tagging of assets for the tasks to configure the Smart Tags service. At the back end, the Experience Manager server authenticates your service credentials with the Adobe Developer Console gateway before forwarding your request to the Smart Tags service.
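For context, the following is a minimal Java sketch of such a credential exchange, assuming OAuth server-to-server credentials from a Developer Console project. Experience Manager performs an equivalent exchange for you automatically; the client ID, client secret, and scopes shown here are placeholders that would come from your own Developer Console project.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Illustrative sketch of an Adobe IMS OAuth server-to-server token
 * exchange, similar to what the Experience Manager server performs
 * internally when authenticating with the Adobe Developer Console
 * gateway. All credential values are placeholders.
 */
public class ImsTokenSketch {
    public static void main(String[] args) throws Exception {
        String clientId = "YOUR_CLIENT_ID";         // from Developer Console
        String clientSecret = "YOUR_CLIENT_SECRET"; // from Developer Console
        String scope = "openid,AdobeID";            // example scopes; project-specific

        String body = "grant_type=client_credentials"
                + "&client_id=" + clientId
                + "&client_secret=" + clientSecret
                + "&scope=" + scope;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://ims-na1.adobelogin.com/ims/token/v3"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // On success, the JSON response contains an access_token that the
        // integration uses when calling Adobe services.
        System.out.println(response.body());
    }
}
```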

Understand tag models and guidelines

A tag model is a group of related tags that describe a visual aspect of images. For example, a shoe collection can have different tags, but all the tags relate to shoes and can belong to the same tag model. Tags can relate only to distinctly different visual aspects of images. To understand the content representation of a training model in Experience Manager, visualize a training model as a top-level entity comprising a group of manually added tags and example images for each tag. Each tag can be exclusively applied to an image.
Tags that cannot realistically be handled are related to:
  • Non-visual, abstract aspects such as the year or season of release of a product, mood of or emotion evoked by an image.
  • Fine visual differences in products such as shirts with and without collars or small product logos embedded on products.
Before you create a tag model and train the service, identify a set of unique tags that best describe the objects in the images in the context of your business. Ensure that the assets in your curated set conform to the training guidelines.
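As an illustration of what such a tag structure can look like, the following hypothetical Java sketch creates a small shoes taxonomy with AEM's TagManager API. The namespace and tag names are examples only; use the IDs that match your own business taxonomy.

```java
import com.day.cq.tagging.TagManager;
import org.apache.sling.api.resource.ResourceResolver;

/**
 * Hypothetical sketch: creating a small tag taxonomy for a "shoes"
 * tag model with AEM's TagManager API. The namespace and tag names
 * are illustrative only.
 */
public class ShoeTaxonomySketch {

    public void createShoeTags(ResourceResolver resolver) throws Exception {
        TagManager tagManager = resolver.adaptTo(TagManager.class);

        // Namespace (top-level container) for the shoe-related tags.
        tagManager.createTag("shoes:", "Shoes", "Tags describing shoe types");

        // Distinct, visually separable tags that can belong to one tag model.
        tagManager.createTag("shoes:sneaker", "Sneaker", null);
        tagManager.createTag("shoes:casual-shoe", "Casual shoe", null);
        tagManager.createTag("shoes:formal-shoe", "Formal shoe", null);

        resolver.commit(); // persist the new tags
    }
}
```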

Training guidelines

The images in your training set should conform to the following guidelines:
Quantity and size: Minimum 10 images and maximum 50 images per tag.
Coherence: Images for a tag should be visually similar. It is best to group the tags about the same visual aspects (such as the same type of objects in an image) into a single tag model. For example, it is not a good idea to apply a single tag such as my-party to a set of visually dissimilar party photos for training, because they are not visually similar.
Coverage: There should be sufficient variety in the images in the training set. The idea is to supply a few but reasonably diverse examples so that AEM learns to focus on the right things. If you're applying the same tag to visually dissimilar images, include at least five examples of each kind. For example, for the tag model-down-pose, include more training images of that particular pose so that the service identifies similar images more accurately during tagging.
Distraction/obstruction: The service trains better on images that have less distraction (prominent backgrounds or unrelated accompaniments, such as objects or persons alongside the main subject). For example, for the tag casual-shoe, an image in which the shoe competes with other prominent objects is not a good training candidate.
Completeness: If an image qualifies for more than one tag, add all applicable tags before including the image for training. For example, for tags such as raincoat and model-side-view, add both tags to the eligible asset before including it for training.
Number of tags: Adobe recommends that you train a model using at least two distinct tags and at least 10 different images for each tag. In a single tag model, do not add more than 50 tags.
Number of examples: For each tag, add at least 10 examples. However, Adobe recommends about 30 examples. A maximum of 50 examples per tag is supported.
Prevent false positives and conflicts: Adobe recommends creating a single tag model for a single visual aspect. Structure the tag models in a way that avoids overlapping tags between the models. For example, do not use a common tag such as sneakers in two different tag models named shoes and footwear. The training process overwrites one trained tag model with the other for a common keyword.
Examples: Some more examples for guidance:
  • Create a tag model that includes:
    • only the tags related to car models.
    • only the tags related to colors of shirts.
    • only the tags related to jackets for women and men.
  • Do not create:
    • a tag model that includes car models released in 2019 and 2020.
    • multiple tag models that include the same few car models.
Images used to train: You can use the same images to train different tag models. However, do not associate an image with more than one tag in a tag model. That said, it is possible to tag the same image with different tags belonging to different tag models.
You cannot undo the training, so the above guidelines should help you choose good images to train with. An illustrative pre-flight check of the quantitative guidelines follows.
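The following Java sketch is not part of the product; it simply applies the quantitative guidelines above (at least two tags, 10-50 example images per tag, at most 50 tags per model) as a check on a candidate tag model before you start a training run.

```java
import java.util.List;
import java.util.Map;

/**
 * Illustrative pre-flight check (not part of the product) that applies
 * the quantitative training guidelines to a candidate tag model before
 * a training run: at least 2 tags, 10-50 example images per tag, and
 * no more than 50 tags per model.
 */
public class TrainingSetValidator {

    /** Maps each tag name to the DAM paths of its example images. */
    public static void validate(Map<String, List<String>> tagToImages) {
        if (tagToImages.size() < 2) {
            throw new IllegalArgumentException("Train with at least two distinct tags.");
        }
        if (tagToImages.size() > 50) {
            throw new IllegalArgumentException("Do not add more than 50 tags to one tag model.");
        }
        tagToImages.forEach((tag, images) -> {
            if (images.size() < 10 || images.size() > 50) {
                throw new IllegalArgumentException(
                        "Tag '" + tag + "' needs 10-50 example images, has " + images.size());
            }
        });
    }
}
```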

Train the model for your custom tags

To create and train a model for your business-specific tags, follow these steps:
  1. Create the necessary tags and the appropriate tag structure. Upload the relevant images to the DAM repository. (For a programmatic option, see the sketch after these steps.)
  2. In the Experience Manager user interface, access Assets > Smart Tag Training.
  3. Click Create. Provide a Title and a Description.
  4. Browse and select the tags from the existing tags in cq:tags that you want to train the model for. Click Next.
  5. In the Select Assets dialog, click Add Assets against each tag. Search in the DAM repository or browse the repository to select at least 10 and at most 50 images. Select assets, not a folder. Once you've selected the images, click Select.
  6. To preview the thumbnails of the selected images, click the accordion in front of a tag. You can modify your selection by clicking Add Assets. Once satisfied with the selection, click Submit. The user interface displays a notification at the bottom of the page indicating that the training is initiated.
  7. Check the status of the training in the Status column for each tag model. Possible statuses are Pending, Trained, and Failed.
Figure: Steps of the training workflow to train a tagging model.
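For step 1, training images can also be uploaded programmatically. The following hedged Java sketch uses AEM's AssetManager API; the target folder and file name are hypothetical examples.

```java
import com.day.cq.dam.api.Asset;
import com.day.cq.dam.api.AssetManager;
import org.apache.sling.api.resource.ResourceResolver;

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Optional sketch for step 1: uploading a training image into the DAM
 * repository with AEM's AssetManager API instead of the web interface.
 * The target folder path is a hypothetical example.
 */
public class TrainingImageUpload {

    public Asset uploadTrainingImage(ResourceResolver resolver, Path localFile) throws Exception {
        AssetManager assetManager = resolver.adaptTo(AssetManager.class);
        try (InputStream in = Files.newInputStream(localFile)) {
            // Creates (or updates) the asset and saves the session in one call.
            return assetManager.createAsset(
                    "/content/dam/training/shoes/" + localFile.getFileName(),
                    in, "image/jpeg", true);
        }
    }
}
```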

View training status and report

To check whether the Smart Tags service is trained on your tags in the training set of assets, review the training workflow report from the Reports console.
  1. In the Experience Manager interface, go to Tools > Assets > Reports.
  2. In the Asset Reports page, click Create.
  3. Select the Smart Tags Training report, and then click Next from the toolbar.
  4. Specify a title and description for the report. Under Schedule Report, leave the Now option selected. If you want to schedule the report for later, select Later and specify a date and time. Then, click Create from the toolbar.
  5. In the Asset Reports page, select the report you generated. To view the report, click View from the toolbar.
  6. Review the details of the report. The report displays the training status for the tags you trained. Green in the Training Status column indicates that the Smart Tags service is trained for the tag. Yellow indicates that the service is not completely trained for a particular tag. In this case, add more images with the particular tag and run the training workflow again to train the service completely on the tag. If you do not see your tags in this report, run the training workflow again for these tags.
  7. To download the report, select it from the list, and click Download from the toolbar. The report downloads as a Microsoft Excel spreadsheet.

Tag assets

After you have trained the Smart Tags service, you can trigger the tagging workflow to automatically apply appropriate tags to a different set of similar assets. You can run the tagging workflow periodically or whenever required. The tagging workflow applies to both assets and folders.

Tag assets from the workflow console

  1. In the Experience Manager interface, go to Tools > Workflow > Models.
  2. From the Workflow Models page, select the DAM Smart Tags Assets workflow and then click Start Workflow from the toolbar.
  3. In the Run Workflow dialog, browse to the payload folder containing the assets on which you want to apply your tags automatically.
  4. Specify a title for the workflow and an optional comment. Click Run.
    Navigate to the asset folder and review the tags to verify whether your assets are tagged properly. For details, see manage smart tags. (A programmatic alternative is sketched after these steps.)
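The same tagging workflow can also be started programmatically. The following Java sketch uses the Granite workflow API; the workflow model path is an assumption, so look up the actual ID of the DAM Smart Tags Assets model on your instance.

```java
import com.adobe.granite.workflow.WorkflowSession;
import com.adobe.granite.workflow.exec.WorkflowData;
import com.adobe.granite.workflow.model.WorkflowModel;
import org.apache.sling.api.resource.ResourceResolver;

/**
 * Sketch equivalent of the console steps above: starting the smart
 * tagging workflow on a payload folder with the Granite workflow API.
 * The model path below is an assumption; verify the actual ID of the
 * "DAM Smart Tags Assets" model on your instance.
 */
public class StartSmartTagWorkflow {

    public void tagFolder(ResourceResolver resolver, String folderPath) throws Exception {
        WorkflowSession wfSession = resolver.adaptTo(WorkflowSession.class);

        // Hypothetical model path; check /var/workflow/models on your instance.
        WorkflowModel model = wfSession.getModel("/var/workflow/models/dam-smart-tag-assets");

        // The payload is the folder whose assets should be tagged.
        WorkflowData data = wfSession.newWorkflowData("JCR_PATH", folderPath);
        wfSession.startWorkflow(model, data);
    }
}
```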

Tag assets from the timeline

  1. From the Assets user interface, select the folder containing the assets, or the specific assets, to which you want to apply smart tags.
  2. From the upper-left corner, open the Timeline.
  3. Open the actions from the bottom of the left sidebar and click Start Workflow.
  4. Select the DAM Smart Tag Assets workflow, and specify a title for the workflow.
  5. Click Start. The workflow applies your tags to the assets. Navigate to the asset folder and review the tags to verify whether your assets are tagged properly. For details, see manage smart tags.
In subsequent tagging cycles, only the modified assets are tagged again with newly trained tags. However, even unaltered assets are tagged if the gap between the last and current tagging cycles exceeds 24 hours. For periodic tagging workflows, unaltered assets are tagged when the time gap exceeds six months.

Tag uploaded assets

Experience Manager can automatically tag the assets that users upload to the DAM. To do so, administrators configure an upload workflow to include the available smart tagging step. See how to enable smart tagging for uploaded assets.

Manage smart tags and image searches

You can curate smart tags to remove any inaccurate tags that may have been assigned to your brand images, so that only the most relevant tags are displayed.
Moderating smart tags also helps refine tag-based searches for images by ensuring that your image appears in search results for the most relevant tags. Essentially, it helps prevent unrelated images from showing up in search results.
You can also assign a higher rank to a tag to increase its relevance with respect to an image. Promoting a tag for an image increases the chances of the image appearing in search results when a search is performed based on the particular tag.
  1. In the Omnisearch box, search for assets based on a tag.
  2. Inspect the search results to identify an image that you don't find relevant to your search.
  3. Select the image, and then click the Manage Tags icon from the toolbar.
  4. From the Manage Tags page, inspect the tags. If you don't want the image to be searched based on a specific tag, select the tag and then click the delete icon from the toolbar. Alternatively, click the X symbol that appears beside the label.
  5. To assign a higher rank to a tag, select the tag and click the promote icon from the toolbar. The tag you promote is moved to the Tags section.
  6. Click Save, and then click OK to close the Success dialog.
  7. Navigate to the properties page for the image. Observe that the tag you promoted is assigned a high relevance and, therefore, appears higher in the search results.
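To audit the smart tags on assets in bulk, you can also read them from the repository. On typical setups, predicted tags are stored under the asset's jcr:content/metadata/predictedTags node, each carrying a tag name and a confidence score; the following Java sketch assumes that structure, so verify it on your instance.

```java
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ValueMap;

/**
 * Sketch of inspecting the smart tags stored on an asset. Assumes the
 * common structure where predicted tags live under the asset's
 * jcr:content/metadata/predictedTags node; verify this on your instance.
 */
public class PredictedTagReader {

    public void printSmartTags(Resource asset) {
        Resource predicted = asset.getChild("jcr:content/metadata/predictedTags");
        if (predicted == null) {
            return; // asset has not been smart tagged yet
        }
        for (Resource tag : predicted.getChildren()) {
            ValueMap props = tag.getValueMap();
            System.out.printf("%s (confidence %.2f)%n",
                    props.get("name", String.class),
                    props.get("confidence", Double.class));
        }
    }
}
```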

Understand AEM search results with smart tags

By default, AEM search combines search terms with an AND clause. Using smart tags does not change this default behavior. Using smart tags adds an additional OR clause to find any of the search terms in the applied smart tags. For example, consider searching for woman running. Assets with just the woman keyword or just the running keyword in the metadata do not appear in the search results by default. However, an asset tagged with either woman or running using smart tags appears in such a search query. So the search results are a combination of:
  • assets with woman and running keywords in the metadata.
  • assets smart tagged with either of the keywords.
The search results that match all search terms in metadata fields are displayed first, followed by the search results that match any of the search terms in the smart tags. In the above example, the approximate order of display of search results is:
  1. matches of woman running in the various metadata fields.
  2. matches of woman running in smart tags.
  3. matches of woman or of running in smart tags.
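The AND/OR combination described above happens inside AEM's search implementation; you do not construct it yourself. For context, the following hedged Java sketch shows how such a full-text asset query can be issued with the QueryBuilder API, using the woman running example.

```java
import com.day.cq.search.PredicateGroup;
import com.day.cq.search.Query;
import com.day.cq.search.QueryBuilder;
import com.day.cq.search.result.Hit;

import javax.jcr.Session;
import java.util.HashMap;
import java.util.Map;

/**
 * Hedged sketch of a full-text asset query like the "woman running"
 * example. The metadata (AND) and smart tag (OR) combination is handled
 * inside AEM's search implementation; this sketch only shows how the
 * full-text query itself is issued with QueryBuilder.
 */
public class SmartTagSearchSketch {

    public void search(QueryBuilder queryBuilder, Session session) throws Exception {
        Map<String, String> predicates = new HashMap<>();
        predicates.put("path", "/content/dam");
        predicates.put("type", "dam:Asset");
        predicates.put("fulltext", "woman running"); // terms from the example above

        Query query = queryBuilder.createQuery(PredicateGroup.create(predicates), session);
        for (Hit hit : query.getResult().getHits()) {
            System.out.println(hit.getPath()); // results ordered by relevance
        }
    }
}
```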

Tagging limitations

Enhanced smart tags are based on learning models of brand images and their tags. These models are not always perfect at identifying tags. The current version of the Smart Tags service has the following limitations:
  • Inability to recognize subtle differences in images. For example, slim versus regular fitted shirts.
  • Inability to identify tags based on tiny patterns/parts of an image. For example, logos on T-shirts.
  • Tagging is supported in the locales that AEM is supported in. For a list of languages, see the Smart Tags release notes.
To search for assets with smart tags (regular or enhanced), use the Assets Omnisearch (full-text search). There is no separate search predicate for smart tags.
The ability of the Smart Tags service to train on your tags and apply them to other images depends on the quality of the images you use for training. For best results, Adobe recommends that you use visually similar images to train the service for each tag.