Understand your Test Results

During the Pipeline process, a number of metrics are captured and compared to either the Key Performance Indicators (KPIs) defined by the business owner, or standards set by Adobe Managed Services.
These are reported using the three-tier gating system as defined in this section.

Three-Tier Gates while Running a Pipeline

There are three gates in the pipeline:
  • Code Quality
  • Performance Testing
  • Security Testing
For each of these gates, there is a three-tier structure for issues identified by the gate.
  • Critical - These are issues identified by the gate which cause an immediate failure of the pipeline.
  • Important - These are issues identified by the gate which cause the pipeline to enter a paused state. A deployment manager, project manager, or business owner can either override the issues, in which case the pipeline proceeds, or they can accept the issues, in which case the pipeline stops with a failure.
  • Info - These are issues identified by the gate which are provided purely for informational purposes and have no impact on the pipeline execution.
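The way these tiers affect a pipeline run can be summarized as a small decision sketch (conceptual only; the class, enum, and method names below are illustrative and not part of Cloud Manager):

// Conceptual sketch of the three-tier gating behavior described above.
// Not Cloud Manager code; the names are illustrative only.
public class GatingSketch {

    enum Severity { CRITICAL, IMPORTANT, INFO }

    enum PipelineAction { FAIL_IMMEDIATELY, PAUSE_FOR_DECISION, CONTINUE }

    static PipelineAction actionFor(Severity worstIssueFound) {
        switch (worstIssueFound) {
            case CRITICAL:
                return PipelineAction.FAIL_IMMEDIATELY;   // pipeline fails right away
            case IMPORTANT:
                return PipelineAction.PAUSE_FOR_DECISION; // override -> proceed, accept -> fail
            default:
                return PipelineAction.CONTINUE;           // Info issues never block execution
        }
    }

    public static void main(String[] args) {
        System.out.println(actionFor(Severity.IMPORTANT)); // prints PAUSE_FOR_DECISION
    }
}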

Code Quality Testing

As part of the pipeline, the source code is scanned to ensure that deployments meet certain quality criteria. This is currently implemented by SonarQube, using over 100 rules that combine generic Java rules and AEM-specific rules. The testing criteria are summarized below:
  • Security Rating
    • Definition: A = 0 Vulnerabilities; B = at least 1 Minor Vulnerability; C = at least 1 Major Vulnerability; D = at least 1 Critical Vulnerability; E = at least 1 Blocker Vulnerability
    • Category: Critical
    • Failure Threshold: < B
  • Reliability Rating
    • Definition: A = 0 Bugs; B = at least 1 Minor Bug; C = at least 1 Major Bug; D = at least 1 Critical Bug; E = at least 1 Blocker Bug
    • Category: Important
    • Failure Threshold: < C
  • Maintainability Rating
    • Definition: Based on the outstanding remediation cost for code smells, expressed as a percentage of the time already invested in the application: at most 5% is an A, 6 to 10% is a B, 11 to 20% is a C, 21 to 50% is a D, and anything over 50% is an E
    • Category: Important
    • Failure Threshold: < A
  • Coverage
    • Definition: A mix of unit test line coverage and condition coverage using the formula Coverage = (CT + CF + LC) / (2*B + EL), where CT = conditions that have been evaluated to 'true' at least once while running unit tests, CF = conditions that have been evaluated to 'false' at least once while running unit tests, LC = covered lines (lines_to_cover - uncovered_lines), B = total number of conditions, and EL = total number of executable lines (lines_to_cover). See the worked example below.
    • Category: Important
    • Failure Threshold: < 50%
  • Skipped Unit Tests
    • Definition: Number of skipped unit tests
    • Category: Info
    • Failure Threshold: > 1
  • Open Issues
    • Definition: Overall issue types - Vulnerabilities, Bugs, and Code Smells
    • Category: Info
    • Failure Threshold: > 1
  • Duplicated Lines
    • Definition: Number of lines involved in duplicated blocks. For a block of code to be considered duplicated, non-Java projects must contain at least 100 successive, duplicated tokens spread over at least 30 lines of code for COBOL, 20 lines of code for ABAP, or 10 lines of code for other languages; Java projects must contain at least 10 successive, duplicated statements, whatever the number of tokens and lines. Differences in indentation as well as in string literals are ignored while detecting duplications.
    • Category: Info
    • Failure Threshold: > 1%
Refer to Metric Definitions for more detailed definitions.
You can download the list of rules here: sonarqube-rules.xlsx
To learn more about the custom SonarQube rules executed by Cloud Manager, please refer to Custom Code Quality Rules.
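As a worked example of the Coverage formula above, the following sketch (illustrative only; the numbers are made up and this is not SonarQube code) plugs sample values into Coverage = (CT + CF + LC) / (2*B + EL):

public class CoverageExample {

    // Coverage = (CT + CF + LC) / (2*B + EL)
    static double coverage(int ct, int cf, int lc, int b, int el) {
        return (ct + cf + lc) / (double) (2 * b + el);
    }

    public static void main(String[] args) {
        int ct = 8;   // conditions evaluated to 'true' at least once by the unit tests
        int cf = 6;   // conditions evaluated to 'false' at least once by the unit tests
        int lc = 120; // covered lines = lines_to_cover - uncovered_lines
        int b  = 10;  // total number of conditions
        int el = 150; // total number of executable lines (lines_to_cover)

        // (8 + 6 + 120) / (2*10 + 150) = 134 / 170, about 78.8%, above the 50% failure threshold.
        System.out.printf("Coverage: %.1f%%%n", coverage(ct, cf, lc, b, el) * 100);
    }
}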

Dealing with False Positives

The quality scanning process is not perfect and will sometimes incorrectly identify issues which are not actually problematic. This is referred to as a "false positive".
In these cases, the source code can be annotated with the standard Java @SuppressWarnings annotation specifying the rule ID as the annotation attribute. For example, one common problem is that the SonarQube rule to detect hardcoded passwords can be aggressive about how a hardcoded password is identified.
To look at a specific example, this code would be fairly common in an AEM project which has code to connect to some external service:
@Property(label = "Service Password")
private static final String PROP_SERVICE_PASSWORD = "password";
SonarQube will then raise a Blocker Vulnerability. After reviewing the code, you determine that it is not actually a vulnerability and can annotate it with the appropriate rule ID:
@SuppressWarnings("squid:S2068")
@Property(label = "Service Password")
private static final String PROP_SERVICE_PASSWORD = "password";
If, on the other hand, the code were actually this:
@Property(label = "Service Password", value = "mysecretpassword")
private static final String PROP_SERVICE_PASSWORD = "password";
Then the correct solution is to remove the hardcoded password.
While it is a best practice to make the @SuppressWarnings annotation as specific as possible, that is, to annotate only the specific statement or block causing the issue, it is also possible to annotate at the class level, as sketched below.
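For example, a class-level suppression might look like the following (the class name and the second property are illustrative additions; only the @SuppressWarnings value and the password property come from the snippet above):

// Class-level suppression: the rule is silenced for every member of the class,
// including code added later, so prefer the narrower field-level form shown earlier.
// Assumes the same org.apache.felix.scr.annotations.Property used in the snippets above.
@SuppressWarnings("squid:S2068")
public class ExternalServiceConfig {

    @Property(label = "Service Password")
    private static final String PROP_SERVICE_PASSWORD = "password";

    @Property(label = "Service User")
    private static final String PROP_SERVICE_USER = "user";
}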

Security Testing

Cloud Manager runs the existing AEM Security Health Checks on the stage environment following the deployment and reports the status through the UI. The results are aggregated from all AEM instances in the environment.
If any of the instances report a failure for a given health check, the entire environment fails that health check. As with Code Quality and Performance Testing, these health checks are organized into categories and reported using the three-tier gating system. The only difference is that there is no threshold in the case of security testing: each health check simply passes or fails.
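This aggregation rule can be expressed as a short sketch (conceptual only, not Cloud Manager code): an environment passes a given health check only when every one of its instances passes it.

import java.util.List;

// Conceptual sketch of the aggregation rule described above; not Cloud Manager code.
public class HealthCheckAggregation {

    static boolean environmentPasses(List<Boolean> perInstanceResults) {
        // A single failing instance fails the health check for the whole environment.
        return perInstanceResults.stream().allMatch(passed -> passed);
    }

    public static void main(String[] args) {
        // Example: three instances, one of which fails the check.
        System.out.println(environmentPasses(List.of(true, false, true))); // prints false
    }
}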
The current checks are listed below:
  • Deserialization firewall Attach API Readiness is in an acceptable state
    • Health Check Implementation: Deserialization Firewall Attach API Readiness
    • Category: Critical
  • Deserialization firewall is functional
    • Health Check Implementation: Deserialization Firewall Functional
    • Category: Critical
  • Deserialization firewall is loaded
    • Health Check Implementation: Deserialization Firewall Loaded
    • Category: Critical
  • AuthorizableNodeName implementation does not expose the authorizable ID in the node name/path
    • Health Check Implementation: Authorizable Node Name Generation
    • Category: Critical
  • Default passwords have been changed
    • Health Check Implementation: Default Login Accounts
    • Category: Critical
  • Sling default GET servlet is protected from DoS attacks
    • Health Check Implementation: Sling Get Servlet
    • Category: Critical
  • Dispatcher is properly filtering requests
    • Health Check Implementation: CQ Dispatcher Configuration
    • Category: Critical
  • The Adobe Granite HTML Library Manager is configured appropriately
    • Health Check Implementation: CQ HTML Library Manager Config
    • Category: Critical
  • The Sling Java Script Handler is configured appropriately
    • Health Check Implementation: Sling Java Script Handler
    • Category: Critical
  • The Sling JSP Script Handler is configured appropriately
    • Health Check Implementation: Sling JSP Script Handler
    • Category: Critical
  • The Sling Referrer Filter is configured in order to prevent CSRF attacks
    • Health Check Implementation: Sling Referrer Filter
    • Category: Critical
  • SSL is configured correctly
    • Health Check Implementation: SSL Configuration
    • Category: Critical
  • No obviously insecure user profile policies found
    • Health Check Implementation: User Profile Default Access
    • Category: Critical
  • CRXDE Support bundle is disabled
    • Health Check Implementation: CRXDE Support
    • Category: Important
  • Sling DavEx bundle and servlet are disabled
    • Health Check Implementation: DavEx Health Check
    • Category: Important
  • Sample content is not installed
    • Health Check Implementation: Example Content Packages
    • Category: Important
  • Both the WCM Request Filter and the WCM Debug Filter are disabled
    • Health Check Implementation: WCM Filters Configuration
    • Category: Important
  • Sling WebDAV bundle and servlet are configured appropriately
    • Health Check Implementation: WebDAV Health Check
    • Category: Important
  • The web server is configured to prevent clickjacking
    • Health Check Implementation: Web Server Configuration
    • Category: Important
  • Replication is not using the 'admin' user
    • Health Check Implementation: Replication and Transport Users
    • Category: Info

Performance Testing

Performance testing in Cloud Manager is implemented using a 30-minute test.
During pipeline setup, the deployment manager can decide how much traffic to direct to each bucket.
You can learn more about bucket controls from Configure your CI/CD Pipeline.
To setup your program and define your KPIs, see Setup your Program.
The performance test matrix using the three-tier gating system is summarized below:
  • Page Request Error Rate %
    • Category: Critical
    • Failure Threshold: >= 2%
  • CPU Utilization Rate
    • Category: Critical
    • Failure Threshold: >= 80%
  • Disk I/O Wait Time
    • Category: Critical
    • Failure Threshold: >= 50%
  • 95th Percentile Response Time (see the sketch below)
    • Category: Important
    • Failure Threshold: >= Program-level KPI
  • Peak Response Time
    • Category: Important
    • Failure Threshold: >= 18 seconds
  • Page Views Per Minute
    • Category: Important
    • Failure Threshold: < Program-level KPI
  • Disk Bandwidth Utilization
    • Category: Important
    • Failure Threshold: >= 90%
  • Network Bandwidth Utilization
    • Category: Important
    • Failure Threshold: >= 90%
  • Requests Per Minute
    • Category: Info
    • Failure Threshold: < 6000
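The response-time metrics above (Peak Response Time and 95th Percentile Response Time) are derived from per-request response times collected during the 30-minute test. As a rough illustration of how a 95th percentile can be computed (a generic nearest-rank method; not necessarily Cloud Manager's exact calculation):

import java.util.Arrays;

// Generic nearest-rank 95th percentile over response times in milliseconds.
// Illustrative only; not necessarily how Cloud Manager computes the metric.
public class PercentileExample {

    static long percentile(long[] responseTimesMillis, double pct) {
        long[] sorted = responseTimesMillis.clone();
        Arrays.sort(sorted);
        // Nearest-rank method: take the value at position ceil(pct/100 * n), 1-based.
        int rank = (int) Math.ceil((pct / 100.0) * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        long[] responseTimes = {120, 135, 150, 180, 200, 240, 300, 450, 800, 2500};
        // ceil(0.95 * 10) = 10, so the 95th percentile is the 10th smallest value: 2500 ms.
        System.out.println(percentile(responseTimes, 95) + " ms");
    }
}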

Performance Testing Results Graphs

New graphs and download options have been added to the Performance Test Results dialog.
When you open the Performance Test dialog, the metric panels can be expanded to display a graph, provide a link to a download, or both.
For Cloud Manager Release 2018.7.0, this functionality is available for the following metrics:
  • CPU Utilization
    • A graph of CPU Utilization during the test period.
  • Disk I/O Wait Time
    • A graph of Disk I/O Wait Time during the test period.
  • Page Error Rate
    • A graph of page errors per minute during the test period.
    • A CSV file listing pages which have produced an error during the test.
  • Disk Bandwidth Utilization
    • A graph of Disk Bandwidth Utilization during the test period.
  • Network Bandwidth Utilization
    • A graph of Network Bandwidth Utilization during the test period.
  • Peak Response Time
    • A graph of peak response time per minute during the test period.
  • 95th Percentile Response Time
    • A graph of 95th percentile response time per minute during the test period.
    • A CSV file listing pages whose 95th percentile response time has exceeded the defined KPI.
The following images display the performance test graphs: