
Risk Gadgets

Risk-related gadgets are available on Enterprise Tester Dashboards.  See the Creating Dashboards article for how to create a dashboard and add gadgets.


There are two gadgets available for Risk.  Risk must be enabled for the related project for these gadgets to be meaningful.

Risk Monitor (%)

The Risk Monitor gadget displays the actual level of Risk (blue line) versus the acceptable level of Risk expressed as a percentage (green line).

In configuration for this gadget:

  • Select a package or a query containing execution data.
  • Select the date range.
  • Add the % of Acceptable Risk.

As tests are executed the actual Risk (blue line) will decrease.  Testing can stop once the blue line converges with the green line.

Risk Monitor (Score)

Risk Monitor (Score) can be used from project inception – before Test Scripts have even been developed, estimates can be entered to assist in developing a Risk Profile.

In configuration for this gadget:

  • Select a package or a query containing execution data.
  • Select the date range.
  • Add the % of Acceptable Risk.
  • Add the Estimated Number of Test Scripts.
  • Add the Estimated Risk Score.

Blue Line: Risk Score (Actual + Estimated)

This is the sum of the Actual Risk Score for all execution data that is selected, PLUS the total Estimated Risk Score, LESS the number of Script Assignments actually executed.

Green Line: Acceptable Risk

This is the % of Acceptable Risk as entered in the configuration screen.  

Orange Line: Risk Score (Actual)

Sum of Actual Risk Score for all execution data selected.  Only Script Assignments with the latest execution date are included.

Purple Line: Risk Score (Estimated)

Used in planning before Test Scripts have been developed.  The Estimated Number of Test Scripts TIMES the Estimated Risk Score LESS the number of Script Assignments actually executed.

Red Line: Risk Profile

Sum of Actual Risk Score for all execution data selected PLUS the total Estimated Risk Score.
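
The sketch below restates those line definitions as a calculation. It is an illustration only: the variable names are not Enterprise Tester fields, it follows the wording above literally, and the Acceptable Risk line is assumed to be a percentage of the Risk Profile.

# Illustration of the Risk Monitor (Score) lines described above; not product code.
# Variable names are assumptions, and the Acceptable Risk line is assumed to be a
# percentage of the Risk Profile.

def risk_monitor_lines(actual_scores, estimated_script_count, estimated_risk_score,
                       executed_assignment_count, acceptable_risk_percent):
    # actual_scores: latest Risk Score of each executed Script Assignment selected.
    actual = sum(actual_scores)                                      # Orange: Risk Score (Actual)
    total_estimate = estimated_script_count * estimated_risk_score
    estimated = total_estimate - executed_assignment_count          # Purple: Risk Score (Estimated)
    combined = actual + total_estimate - executed_assignment_count  # Blue: Risk Score (Actual + Estimated)
    profile = actual + total_estimate                               # Red: Risk Profile
    acceptable = profile * acceptable_risk_percent / 100.0          # Green: % of Acceptable Risk (assumed base)
    return {"actual": actual, "estimated": estimated, "combined": combined,
            "profile": profile, "acceptable": acceptable}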


Risk Profiler User Guide

Before getting started, please review the commonly used terms in Risk Profiler below.

Project Configuration

  • Risk Configuration tab: Location of information used to calculate risk, exists independently for each project.

  • Requirement Risk Configuration: Section in the Risk Configuration tab, displays Requirement Factors to be used in calculations.

  • TestScript Risk Configuration: Section in the Risk Configuration tab, displays Test Script Factors to be used in calculations.

  • Risk Score Calculation: Section in the Risk Configuration tab, displays the calculation used to determine the Risk Score to apply to Test Execution data.

Calculated Fields 

  • Requirements Calculated Risk: A read-only field on a Requirement, displays the calculated risk.

  • Test Script Calculated Risk: A read-only field on a Test Script, displays the calculated risk.

  • Risk Score: A read-only field on a Test Script Assignment, displays the calculated risk score.

Setting up Project Risk Configuration

1) Under the Admin tab in Enterprise Tester, navigate to your project.

2) Double click on the project to edit it, then click on the Risk Configuration tab.

     Risk is turned off by default.
     Two default tables are displayed: Requirement Risk Configuration and TestScript Risk Configuration.

The Priority columns for the tables are populated from the Priority Picklist (Admin > Project > Edit Pick Lists). The Calculated Risk columns for the tables are populated from the values in the Sort Order field, in reverse order.
If a Priority has a Sort Order value of 0, it is not included in the Calculated Risk values.
These are just default values and can be changed at any time.
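
For illustration only (this is not Enterprise Tester code), the sketch below shows one way the default Calculated Risk values could be derived from the Priority picklist: Sort Order values of 0 are excluded and the remaining values are reversed. The picklist values and the direction of the reversal are assumptions.

# Illustration of the default mapping described above; not Enterprise Tester code.
# Priorities with a Sort Order of 0 are excluded; the remaining Sort Order values
# are reversed to produce default Calculated Risk values (direction assumed).

priorities = {"Critical": 1, "High": 2, "Medium": 3, "Low": 4, "Unset": 0}  # name -> Sort Order

ranked = sorted((p for p in priorities.items() if p[1] > 0), key=lambda kv: kv[1])
reversed_values = [sort_order for _, sort_order in reversed(ranked)]

default_calculated_risk = {name: risk for (name, _), risk in zip(ranked, reversed_values)}
print(default_calculated_risk)  # {'Critical': 4, 'High': 3, 'Medium': 2, 'Low': 1}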

Requirement Risk Factors

Many factors may be required to understand the risk for each Requirement.   Combining these factors will result in the Requirements Calculated Risk field being updated on each Requirement.

Any Inbuilt or Custom Combo Box fields can be added to the table.

Add Requirement Risk Factor

  1. In the Requirements Risk Configuration table, click the Add Risk Factor button.
  2. Select a Risk Factor from the drop down list.
  3. Select the column information for each Risk Factor selected from the drop down list.
  4. Update the value in the Calculated Risk field for each row (numeric only).

Test Script Risk Factors

Many factors may be required to understand the risk for each Test Script.   Combining these factors will result in the Test Script Calculated Risk field being updated on each Test Script.

Any Inbuilt or Custom Combo Box fields can be added to the table. 

Add TestScript Risk Factor

  1. In the TestScript Risk Configuration table, click the Add Risk Factor button.
  2. Select a Risk Factor from the drop down list.
  3. Select the column information for each Risk Factor selected from the drop down list.
  4. Update the value in the Calculated Risk field for each row (numeric only).

Risk Configuration is unique to each project.

To copy Risk Configuration between projects, simply copy the rows from each section and paste them into the same section in the other project's Edit screen.

Not all drop down values have to be mapped.

If values are not mapped, the Calculated Risk field on a Requirement or Test Script will be blank.

Saving the settings will not turn on the Risk Profiler; it must be enabled explicitly.

Risk Score Calculation

The Risk Score Calculation determines the Risk Score applied to Test Execution data (Script Assignments).

Field Information

  • Constant: Fixed value to be used in calculating the Risk Score.  Numeric field between 0 – 99.99, up to 2 decimal places.

  • Operator: Multiply or Add

  • Requirements Calculated Risk: Calculated Risk field value from each Requirement

  • Script Calculated Risk: Calculated Risk field value from each Test Script

  • Default Requirement Risk Value:  Default value to be used if a Requirement does not have a Calculated Risk value or if there is no relationship with a Test Script.

  • Default Script Risk Value: Default value to be used if a Test Script does not have a Calculated Risk value.

In addition to the values seen here, the calculation takes into account Relationships between Requirements and Test Scripts.

Enable Risk Calculations

Once all settings are configured, click the Enable button and select the Save button.
Note: Risk will not be enabled until you click on the Save button. 

Intensive processing is required for updates to Risk Calculations and Risk Scores for Requirements, Test Scripts and Test Execution data.

There may be a short delay in values being assigned.

Example Calculations

Using this Risk Configuration:

A Requirement in the relevant project with the following attributes would have a Calculated Risk of 9:

  • Requirement Complexity: High
  • Bug Prone Area: High

A Test Script in the relevant project with the following attributes would have a Calculated Risk of 2:

Tester Experience: Intermediate

Risk Score Scenario: If the Test Script is not related to this Requirement, the Test Assignment Risk Score is 6 (uses the Default Requirement Risk Value).

Calculation:   Requirements Calculation (1 * 3) * Script Calculation (1 * 2)  = Risk Score (6)

Risk Score Scenario: If the Test Script is related to the above Requirement, the Test Assignment Risk Score is 18

Calculation:  Requirements Calculation (1 * 9) * Script Calculation (1 * 2)  = Risk Score (18)

Risk Score Scenario: If the Test Script is related to the above Requirement (Calculated Risk value = 9) and another Requirement (Calculated Risk value = 8), the Test Assignment Risk Score is 34.

Calculation:  [(1 * 9) * (1 * 2)] + [(1 * 8) * (1 * 2)]

Requirement 1 – Requirements Calculation (1 * 9) * Script Calculation (1 * 2) = 18
Requirement 2 – Requirements Calculation (1 * 8) * Script Calculation (1 * 2) = 16
Risk Score = 18 + 16 = 34


Risk Score Scenario: If a Test Script has no Calculated Risk value and is not related to a Requirement, the Test Assignment Risk Score is 9 (uses Default Risk values).

Calculation:  (1 * 3) * (1 * 3) 

Risk Score Scenario:  If a Test Script has no Calculated Risk value and is related to a Requirement with no Calculated Risk value, the Test Assignment Risk Score is 9 (uses Default Risk values).

Calculation:  (1 * 3) * (1 * 3)

Risk Score Scenario: If multiple Test Scripts are related to a single Requirement, the Test Assignment Risk Score will still be unique for each Test Assignment.  

This type of relationship does not affect the Risk Score for each Test Assignment.

Calculation:  (1 * Requirement Risk Value) * (1 * Test Script Risk Value) 
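
The scenarios above can be summarised with a short sketch. It mirrors the example configuration (constants of 1, multiplication as the operator and default risk values of 3) and sums one term per related Requirement; treat it as an illustration of the calculation rather than the product's implementation.

# Sketch of the Risk Score calculation used in the examples above.
# Constants, operator and defaults follow the example configuration
# (constant = 1, operator = multiply, default risk values = 3); not product code.

DEFAULT_REQUIREMENT_RISK = 3
DEFAULT_SCRIPT_RISK = 3
CONSTANT = 1

def risk_score(script_risk, related_requirement_risks):
    # script_risk: the Test Script's Calculated Risk (None if blank).
    # related_requirement_risks: Calculated Risk of each related Requirement (None if blank).
    script_term = CONSTANT * (script_risk if script_risk is not None else DEFAULT_SCRIPT_RISK)
    if not related_requirement_risks:        # no related Requirement: use the default value once
        related_requirement_risks = [None]
    return sum(
        CONSTANT * (r if r is not None else DEFAULT_REQUIREMENT_RISK) * script_term
        for r in related_requirement_risks
    )

print(risk_score(2, []))        # 6  - script risk 2, no related Requirement
print(risk_score(2, [9]))       # 18 - related to a Requirement with Calculated Risk 9
print(risk_score(2, [9, 8]))    # 34 - related to two Requirements (18 + 16)
print(risk_score(None, []))     # 9  - no Calculated Risk values, defaults used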

Viewing Calculated Risk Values in Enterprise Tester

Once Risk Configuration is enabled for the project, it is applied to the following entities:

Requirements: (Calculated Risk)

Test Scripts: (Calculated Risk)

Test Executions: (Risk Score)

The Requirement and Test Script Calculated Risk fields can be moved anywhere on the screen using Field Configuration; see Screen Field Configuration.

Viewing Calculated Risk fields in the Grids

You can also view the Calculated Risk and Risk Score fields in the Requirement, Test Script and Test Execution Grids.

You can place the field anywhere in the Grid. By default it is at the end of the grid.

  1. Go to the grid and select the Column button
  2. On the right-hand side under Visible Columns, Calculated Risk is displayed at the bottom. Move it to where you want it to appear in the grid.

Requirements Grid:

Test Script Grid:

Test Executions Grid:


Installing Risk Profiler

Getting Started

Please contact our Customer Team for a Risk Profiler License.

Risk Profiler is only available with Enterprise Tester 6.2 and above.

Once you have your Risk Profiler license text, enable Risk Profiler by performing the following:

From the Admin tab, expand the Extensions folder and expand the Plugins folder.

  1. Double click on Risk Profiler; the License details screen will open.

  2. Paste your Risk Profiler license text to the screen and click Save. A message will now display indicating you must restart Enterprise Tester for the changes to take place.

  3. Click the Restart link and wait for Enterprise Tester to restart.

  4. Head over to the user guide for detailed instructions on using Risk Profiler.

Setting up Automated Tests in the Script Library

If you have Duette installed, automated tests can easily be added to the script library. To add an automated test to your library:

Right click on your script package and select Add Automated Test.

You can also simply click on the automated test icon on the navigator tool bar.

You can also create a script for the automated test from the Script Library summary screens:

The Edit Automated Test Screen will open.  Here you can add the Name of the script, assign it to a user and select the type of automated test.  Once these fields are completed, the Default Path field will open.  In this field, you can specify the path to the automated test tool's results in your file system.

Additionally, if you select Unit Test Results, a Sub-Type field will be displayed where you can select the type of Unit Test results to import.  If you select a Custom Sub-Type, you must provide the path to a valid XSLT file.

As with manual test scripts, you can view all relationships associated with the script from the Relationships tab, which displays all associated requirements and execution sets.

When all details have been completed, click Save and Close.  The Automated Test Script will now be listed in the Script Library.

JUnit Format


<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
   <testsuite name="JUnitXmlReporter" errors="0" tests="0" failures="0" time="0" timestamp="2013-05-24T10:23:58" />
   <testsuite name="JUnitXmlReporter.constructor" errors="0" skipped="1" tests="3" failures="1" time="0.006" timestamp="2013-05-24T10:23:58">
      <properties>
         <property name="java.vendor" value="Sun Microsystems Inc." />
         <property name="compiler.debug" value="on" />
         <property name="project.jdk.classpath" value="jdk.classpath.1.6" />
      </properties>
      <testcase classname="JUnitXmlReporter.constructor" name="should default path to an empty string" time="0.006">
         <failure message="test failure">Assertion failed</failure>
      </testcase>
      <testcase classname="JUnitXmlReporter.constructor" name="should default consolidate to true" time="0">
         <skipped />
      </testcase>
      <testcase classname="JUnitXmlReporter.constructor" name="should default useDotNotation to true" time="0" />
   </testsuite>
</testsuites>

Below is the documented structure of a typical JUnit XML report. Notice that a report can contain one or more test suites. Each test suite has a set of properties (recording environment information). Each test suite also contains one or more test cases, and each test case will contain a skipped, failure or error node if the test did not pass. If the test case passed, it will not contain any of these nodes. For more details on which attributes are valid for each node, consult the “Schema” section below.

<testsuites>        => the aggregated result of all junit testfiles
  <testsuite>       => the output from a single TestSuite
    <properties>    => the defined properties at test execution
      <property>    => name/value pair for a single property
      ...
    </properties>
    <error></error> => optional information, in place of a test case - normally if the tests in the suite could not be found etc.
    <testcase>      => the results from executing a test method
      <system-out>  => data written to System.out during the test run
      <system-err>  => data written to System.err during the test run
      <skipped/>    => test was skipped
      <failure>     => test failed
      <error>       => test encountered an error
    </testcase>
    ...
  </testsuite>
  ...
</testsuites>
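
As a quick illustration of how this structure can be consumed, the sketch below walks a report using Python's standard xml.etree module and classifies each test case by the presence of a failure, error or skipped child node. The file name is a placeholder.

# Minimal walk of a JUnit XML report based on the structure above.
# Uses only the Python standard library; "results.xml" is a placeholder file name.
import xml.etree.ElementTree as ET

root = ET.parse("results.xml").getroot()   # root may be <testsuites> or a single <testsuite>
suites = root.findall("testsuite") if root.tag == "testsuites" else [root]

for suite in suites:
    print(f"Suite: {suite.get('name')} (tests={suite.get('tests')}, "
          f"failures={suite.get('failures')}, errors={suite.get('errors')})")
    for case in suite.findall("testcase"):
        # A passed test case has no child nodes; otherwise the child indicates the outcome.
        if case.find("failure") is not None:
            status = "failed"
        elif case.find("error") is not None:
            status = "error"
        elif case.find("skipped") is not None:
            status = "skipped"
        else:
            status = "passed"
        print(f"  {case.get('classname')}.{case.get('name')}: {status}")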

Schema

The JUnit XML report output comes from the Ant build tool, as opposed to the JUnit project itself, so it can be a little tricky to nail down an official spec for the format, even though it is widely adopted and used.  There have been a number of attempts to codify the schema; first, there is an XSD for JUnit:

XSD

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified" attributeFormDefault="unqualified">
   <xs:annotation>
      <xs:documentation xml:lang="en">JUnit test result schema for the Apache Ant JUnit and JUnitReport tasks
Copyright © 2011, Windy Road Technology Pty. Limited
The Apache Ant JUnit XML Schema is distributed under the terms of the GNU Lesser General Public License (LGPL) http://www.gnu.org/licenses/lgpl.html
Permission to waive conditions of this license may be requested from Windy Road Support (http://windyroad.org/support).</xs:documentation>
   </xs:annotation>
   <xs:element name="testsuite" type="testsuite" />
   <xs:simpleType name="ISO8601_DATETIME_PATTERN">
      <xs:restriction base="xs:dateTime">
         <xs:pattern value="[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}" />
      </xs:restriction>
   </xs:simpleType>
   <xs:element name="testsuites">
      <xs:annotation>
         <xs:documentation xml:lang="en">Contains an aggregation of testsuite results</xs:documentation>
      </xs:annotation>
      <xs:complexType>
         <xs:sequence>
            <xs:element name="testsuite" minOccurs="0" maxOccurs="unbounded">
               <xs:complexType>
                  <xs:complexContent>
                     <xs:extension base="testsuite">
                        <xs:attribute name="package" type="xs:token" use="required">
                           <xs:annotation>
                              <xs:documentation xml:lang="en">Derived from testsuite/@name in the non-aggregated documents</xs:documentation>
                           </xs:annotation>
                        </xs:attribute>
                        <xs:attribute name="id" type="xs:int" use="required">
                           <xs:annotation>
                              <xs:documentation xml:lang="en">Starts at '0' for the first testsuite and is incremented by 1 for each following testsuite</xs:documentation>
                           </xs:annotation>
                        </xs:attribute>
                     </xs:extension>
                  </xs:complexContent>
               </xs:complexType>
            </xs:element>
         </xs:sequence>
      </xs:complexType>
   </xs:element>
   <xs:complexType name="testsuite">
      <xs:annotation>
         <xs:documentation xml:lang="en">Contains the results of exexuting a testsuite</xs:documentation>
      </xs:annotation>
      <xs:sequence>
         <xs:element name="properties">
            <xs:annotation>
               <xs:documentation xml:lang="en">Properties (e.g., environment settings) set during test execution</xs:documentation>
            </xs:annotation>
            <xs:complexType>
               <xs:sequence>
                  <xs:element name="property" minOccurs="0" maxOccurs="unbounded">
                     <xs:complexType>
                        <xs:attribute name="name" use="required">
                           <xs:simpleType>
                              <xs:restriction base="xs:token">
                                 <xs:minLength value="1" />
                              </xs:restriction>
                           </xs:simpleType>
                        </xs:attribute>
                        <xs:attribute name="value" type="xs:string" use="required" />
                     </xs:complexType>
                  </xs:element>
               </xs:sequence>
            </xs:complexType>
         </xs:element>
         <xs:element name="testcase" minOccurs="0" maxOccurs="unbounded">
            <xs:complexType>
               <xs:choice minOccurs="0">
                  <xs:element name="error">
                     <xs:annotation>
                        <xs:documentation xml:lang="en">Indicates that the test errored.  An errored test is one that had an unanticipated problem. e.g., an unchecked throwable; or a problem with the implementation of the test. Contains as a text node relevant data for the error, e.g., a stack trace</xs:documentation>
                     </xs:annotation>
                     <xs:complexType>
                        <xs:simpleContent>
                           <xs:extension base="pre-string">
                              <xs:attribute name="message" type="xs:string">
                                 <xs:annotation>
                                    <xs:documentation xml:lang="en">The error message. e.g., if a java exception is thrown, the return value of getMessage()</xs:documentation>
                                 </xs:annotation>
                              </xs:attribute>
                              <xs:attribute name="type" type="xs:string" use="required">
                                 <xs:annotation>
                                    <xs:documentation xml:lang="en">The type of error that occured. e.g., if a java execption is thrown the full class name of the exception.</xs:documentation>
                                 </xs:annotation>
                              </xs:attribute>
                           </xs:extension>
                        </xs:simpleContent>
                     </xs:complexType>
                  </xs:element>
                  <xs:element name="failure">
                     <xs:annotation>
                        <xs:documentation xml:lang="en">Indicates that the test failed. A failure is a test which the code has explicitly failed by using the mechanisms for that purpose. e.g., via an assertEquals. Contains as a text node relevant data for the failure, e.g., a stack trace</xs:documentation>
                     </xs:annotation>
                     <xs:complexType>
                        <xs:simpleContent>
                           <xs:extension base="pre-string">
                              <xs:attribute name="message" type="xs:string">
                                 <xs:annotation>
                                    <xs:documentation xml:lang="en">The message specified in the assert</xs:documentation>
                                 </xs:annotation>
                              </xs:attribute>
                              <xs:attribute name="type" type="xs:string" use="required">
                                 <xs:annotation>
                                    <xs:documentation xml:lang="en">The type of the assert.</xs:documentation>
                                 </xs:annotation>
                              </xs:attribute>
                           </xs:extension>
                        </xs:simpleContent>
                     </xs:complexType>
                  </xs:element>
               </xs:choice>
               <xs:attribute name="name" type="xs:token" use="required">
                  <xs:annotation>
                     <xs:documentation xml:lang="en">Name of the test method</xs:documentation>
                  </xs:annotation>
               </xs:attribute>
               <xs:attribute name="classname" type="xs:token" use="required">
                  <xs:annotation>
                     <xs:documentation xml:lang="en">Full class name for the class the test method is in.</xs:documentation>
                  </xs:annotation>
               </xs:attribute>
               <xs:attribute name="time" type="xs:decimal" use="required">
                  <xs:annotation>
                     <xs:documentation xml:lang="en">Time taken (in seconds) to execute the test</xs:documentation>
                  </xs:annotation>
               </xs:attribute>
            </xs:complexType>
         </xs:element>
         <xs:element name="system-out">
            <xs:annotation>
               <xs:documentation xml:lang="en">Data that was written to standard out while the test was executed</xs:documentation>
            </xs:annotation>
            <xs:simpleType>
               <xs:restriction base="pre-string">
                  <xs:whiteSpace value="preserve" />
               </xs:restriction>
            </xs:simpleType>
         </xs:element>
         <xs:element name="system-err">
            <xs:annotation>
               <xs:documentation xml:lang="en">Data that was written to standard error while the test was executed</xs:documentation>
            </xs:annotation>
            <xs:simpleType>
               <xs:restriction base="pre-string">
                  <xs:whiteSpace value="preserve" />
               </xs:restriction>
            </xs:simpleType>
         </xs:element>
      </xs:sequence>
      <xs:attribute name="name" use="required">
         <xs:annotation>
            <xs:documentation xml:lang="en">Full class name of the test for non-aggregated testsuite documents. Class name without the package for aggregated testsuites documents</xs:documentation>
         </xs:annotation>
         <xs:simpleType>
            <xs:restriction base="xs:token">
               <xs:minLength value="1" />
            </xs:restriction>
         </xs:simpleType>
      </xs:attribute>
      <xs:attribute name="timestamp" type="ISO8601_DATETIME_PATTERN" use="required">
         <xs:annotation>
            <xs:documentation xml:lang="en">when the test was executed. Timezone may not be specified.</xs:documentation>
         </xs:annotation>
      </xs:attribute>
      <xs:attribute name="hostname" use="required">
         <xs:annotation>
            <xs:documentation xml:lang="en">Host on which the tests were executed. 'localhost' should be used if the hostname cannot be determined.</xs:documentation>
         </xs:annotation>
         <xs:simpleType>
            <xs:restriction base="xs:token">
               <xs:minLength value="1" />
            </xs:restriction>
         </xs:simpleType>
      </xs:attribute>
      <xs:attribute name="tests" type="xs:int" use="required">
         <xs:annotation>
            <xs:documentation xml:lang="en">The total number of tests in the suite</xs:documentation>
         </xs:annotation>
      </xs:attribute>
      <xs:attribute name="failures" type="xs:int" use="required">
         <xs:annotation>
            <xs:documentation xml:lang="en">The total number of tests in the suite that failed. A failure is a test which the code has explicitly failed by using the mechanisms for that purpose. e.g., via an assertEquals</xs:documentation>
         </xs:annotation>
      </xs:attribute>
      <xs:attribute name="errors" type="xs:int" use="required">
         <xs:annotation>
            <xs:documentation xml:lang="en">The total number of tests in the suite that errorrd. An errored test is one that had an unanticipated problem. e.g., an unchecked throwable; or a problem with the implementation of the test.</xs:documentation>
         </xs:annotation>
      </xs:attribute>
      <xs:attribute name="time" type="xs:decimal" use="required">
         <xs:annotation>
            <xs:documentation xml:lang="en">Time taken (in seconds) to execute the tests in the suite</xs:documentation>
         </xs:annotation>
      </xs:attribute>
   </xs:complexType>
   <xs:simpleType name="pre-string">
      <xs:restriction base="xs:string">
         <xs:whiteSpace value="preserve" />
      </xs:restriction>
   </xs:simpleType>
</xs:schema>
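
If you need to check a report against this schema programmatically, one option is a small validation script. The sketch below uses the third-party lxml library and placeholder file names; it is an illustration, not part of Enterprise Tester.

# Sketch: validate a JUnit XML report against the XSD above using lxml.
# lxml is a third-party dependency; "junit.xsd" and "results.xml" are placeholders.
from lxml import etree

schema = etree.XMLSchema(etree.parse("junit.xsd"))
report = etree.parse("results.xml")

if schema.validate(report):
    print("Report is valid against the JUnit XSD")
else:
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")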

Relax NG Compact Syntax

There is also a Relax NG Compact Syntax Schema:

junit.rnc: 
#---------------------------------------------------------------------------------- 
start = testsuite 
property = element property { 
   attribute name {text}, 
   attribute value {text} 
} 
properties = element properties { 
   property* 
} 
failure = element failure { 
   attribute message {text}, 
   attribute type {text}, 
   text 
} 
testcase = element testcase { 
   attribute classname {text}, 
   attribute name {text}, 
   attribute time {text}, 
   failure? 
} 
testsuite = element testsuite { 
   attribute errors {xsd:integer}, 
   attribute failures {xsd:integer}, 
   attribute hostname {text}, 
   attribute name {text}, 
   attribute tests {xsd:integer}, 
   attribute time {xsd:double}, 
   attribute timestamp {xsd:dateTime}, 
   properties, 
   testcase*, 
   element system-out {text}, 
   element system-err {text} 
} 
#---------------------------------------------------------------------------------- 

and junitreport.rnc 
#---------------------------------------------------------------------------------- 
include "junit.rnc" { 
   start = testsuites 
   testsuite = element testsuite { 
      attribute errors {xsd:integer}, 
      attribute failures {xsd:integer}, 
      attribute hostname {text}, 
      attribute name {text}, 
      attribute tests {xsd:integer}, 
      attribute time {xsd:double}, 
      attribute timestamp {xsd:dateTime}, 
      attribute id {text}, 
      attribute package {text}, 
      properties, 
      testcase*, 
      element system-out {text}, 
      element system-err {text} 
   } 
} 
testsuites = element testsuites { 
   testsuite* 
}

Sourcecode

The JUnit XML report format originates from the JUnit Ant task, which is the definitive source for the JUnit report XML format; the source code can be found in the Apache SVN repository here:

http://svn.apache.org/repos/asf/ant/core/trunk/src/main/org/apache/tools/ant/taskdefs/optional/junit/XMLJUnitResultFormatter.java

This can be useful if you are attempting to do anything “tricky” with JUnit output and need to confirm it is compliant.

Stack Overflow Discussions

There are also a number of topics on Stack Overflow that discuss the JUnit XML format. These can be useful for learning about the schema for the JUnit XML results format, as well as for hints about the minimum subset that is normally suitable for most tools such as Jenkins (the same rules generally apply to Enterprise Tester as well).
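
To get a feel for what a minimal report looks like, the sketch below writes a small JUnit-style XML file using Python's standard library. The attribute set shown is a reasonable, commonly accepted subset rather than an official minimum, so verify it against the tool you are targeting.

# Sketch: write a small JUnit-style XML report using the Python standard library.
# The attribute set shown is a commonly accepted subset, not an official minimum.
import xml.etree.ElementTree as ET

suites = ET.Element("testsuites")
suite = ET.SubElement(suites, "testsuite", name="example.Suite",
                      tests="2", failures="1", errors="0", time="0.12")

ET.SubElement(suite, "testcase", classname="example.Suite", name="passes", time="0.05")
failing = ET.SubElement(suite, "testcase", classname="example.Suite", name="fails", time="0.07")
failure = ET.SubElement(failing, "failure", message="expected 1 but was 2", type="AssertionError")
failure.text = "stack trace goes here"

ET.ElementTree(suites).write("results.xml", encoding="utf-8", xml_declaration=True)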

Execution Status Screen

Automated tests are also included in the following execution status screen, displayed when users double click on an execution package:

  • Execution Status Grid

The Execution Status on the execution set package now contains all automated and manual test execution results.

  • Filter By Entity Type

You can filter by entity type in the assignments grid view.

  • Assign Automated Tests

You can assign automated tests to specific Enterprise Tester users if required.

Viewing Automated Run History

Right click on the automated script in the execution set

Select Run History

The Run History screen provides a summary of all runs:

You can select a run in the grid to view the full results.

You can select runs from the grid to change the runs represented in the charts.

Viewing Imported Results

To view the automation results that were most recently imported, either double click on the run in the execution set grid, or right click on the automated script in the execution set (on the Explorer tab in the tree view navigator) and select View Automated Test Assignment.

The Automated Test Results screen will open and default to the Summary tab. 

The following tabs are available:

  • Summary
  • Results
  • Test Data
  • Attachments
  • Relationships

Summary Tab

The Summary tab lists all import information. This information is sourced from the imported automated test results and from the information the user entered when importing these scripts into Enterprise Tester.
The Summary tab provides two charting widgets showing Run Status Over Time and Status Group by Day (the widget is configurable: you can select the field or date parameter to chart against).

The Summary tab also provides a summary grid of all run results for this automated test script assignment.  Similar to other summary screen grids, the run history grid is configurable.  You can select the columns to display or filter the results using TQL.

Results Tab

The Results tab provides a tree view of the results of your automated test run.  You can drill down to the detail in each node.
In the top right corner is a status filter where you can select the statuses you want displayed, e.g. only failures and warnings. This allows you to quickly filter your result set to only the relevant statuses for review.

Each node has an icon indicating results:

Users can add attachments, add and link incidents and view the node details via the screen.

If a user clicks node details, a Result Node screen is displayed with the following tabs. All of this information is sourced from the automated test tool itself.

Details – high-level information relating to the specific results node. You can add notes to your results node, which will display in the Notes column of the Results tree view.

Parameters – automated tool parameters. 

Metadata – any metadata utilized in the automated scripts.

Test Data Tab

The Test Data tab displays table-driven variable information from the automated test tool.  Currently this tab is only displayed for imported QTP data.

Attachments and Relationships Tabs

The Attachments and Relationships tabs have been implemented in a consistent way with other areas of Enterprise Tester:

Attachments can be added when required from the Attachments link.

Any relationships to the automated test, such as incidents, can also be viewed.

Importing Automated Test Results – Manual Import

To import automated test results, as with manual tests, you first need to add an automated test script in the Script Library.  This is not the script that your automated tool will run; it is a placeholder in the Script Library for your Automated Test that specifies the type of results and the path to the results to import.

To do this,  from the Explorer tab in the tree view navigator, click to select the package you wish to add your automated test script to.  

Then either click on the Add Automated Test icon from the menu bar.

Or right click on the Package and select Add Automated Test.

The New Automated Test screen will appear.  Complete the details:

  • Name: Enter a name for your test.
  • Assigned To: Enter the user name of the person assigned to the Automated Test.
  • Type: Select the test type from the drop down list: HP Quick Test Professional, IBM Rational Functional Tester, Selenium or Unit Test Results.
  • Sub-Type: Only applicable for Unit Test Results. Select the type of results you are importing from the dropdown list.
  • Default Path: Enter the path to a specific results file or folder. This is not required; when importing results you can manually select the file to upload.

Once the Automated Test Script is created, you then need to add it to your execution set by dragging and dropping from the script library to the execution set package or by using the Create Execution button on the grid tool bar.

Import Results

To import the results, right click on the Automated Test Assignment in the execution set and select Import Result from the menu or double click on the Automated Test Assignment in the execution set grid to open the script assignment.

A dialogue box will appear that displays the following two tabs:

  • Import from path – If you set up a default path in the Automated Test setup in the Script Library, it will be displayed in the Results Path field. If the default path is still correct, enter your automated test results file name and click Import.
  • Upload file – You can choose to upload your file instead.

Progress information will be displayed followed by the Success message.

Importing Automated Results – Scheduled Import

Duette Schedules allow you to create schedules where individual or multiple sets of result files can be imported into Enterprise Tester, with the associated automated test, automated test assignment and runs being created automatically.

Features

Duette Schedules supports the following features:

  • Ability to schedule importing on an ad-hoc, periodic or daily basis.
  • Support for multiple schedules.
  • Support for record summary and details history of imports.
  • Ability to manually trigger an import schedule.
  • Ability to import multiple files using a file name “capture” within the source path of the tests.
  • Ability to import multiple files using a folder name “capture” within the source path of the tests (can not be combined with filename capture).
  • Ability to combine multiple source files into a single automated test result.
  • Ability to either always import results on scheduled run, or only import results if the source files have changed since last time.
  • Ability to duplicate an existing import configuration.
  • Ability to enable or disable import configurations and schedules.
  • Ability to automatically maintain only a certain number of result files (automatically purging the oldest runs over the maximum number of retained results).

Creating an Import Schedule

You can configure Duette Schedules from the Resources tab of the tree view navigator.  When the Duette Plugin is enabled, a section called “Duette Schedules” is available.

If Duette Plugin is enabled and Duette Schedules is not available in the Resources tab, contact your Admin to ensure you have the correct permissions for Duette Plugin.

To create a new schedule, expand Duette Schedules to list all projects. 

Right click on the project you wish to create the schedule for.
Select Add Schedule from the menu. 
The Edit Automated Test Schedule screen will be displayed, where you can add one or more configurations and set schedules for imports (adhoc, periodic or daily intervals).

On the Configurations tab, select the Add icon.
Complete the details:

  • Configuration Name: Enter a name for your configuration.
  • Type: Select the test type from the drop down list: HP Quick Test Professional, IBM Rational Functional Tester, Selenium or Unit Test Results.
  • Sub-Type: Only applicable for Unit Test Results. Select the type of results you are importing from the dropdown list.
  • Default Path: If you also plan on manually importing results, enter the path. Otherwise leave this field blank.
  • Source Path: Enter the path to the results file to upload, e.g. c:\testdata\{Name}.xml
  • Name Template: Enter a name for the results files, e.g. {name}
  • Combine Results: Check to combine all results in the path into a single run. This can be useful for unit tests where many XML files may make up a single run.
  • Skip files if unchanged: Check to skip importing files if they have not changed since the last import, e.g. the tests have not run since the last scheduled import.
  • Root Script Package: Select the package/folder in the Script Library where you would like the Automated Test Script created.
  • Root Execution Package: Select the execution package to import the run results to.
  • Retained Max # Results: Enter the maximum number of runs to retain (in the run history of the automated test assignment). Runs over this maximum will be removed, oldest first.

Using the Source Path and Name Template 


Creating a Single Automated Test from a Single File
In its simplest form a schedule can be created to import a single file as a result into an automated test assignment. 


Source Path: c:\testdata\results.xml
Name Template: my result

Creating an Automated Test from a Folder of Results


Source Path: c:\testdata\
Name Template: my results

These both behave in the same manner as manually importing an automated test result and filling in the “Results Path” field on screen. In addition, zip files are also supported; they effectively work like configuring a folder (with the contents of the zip file treated the same as the contents of the folder).


Capturing part of the path into the “name” variable
It is also possible to “capture” part of the path into a special variable called “name” which can then be referenced within the “Name template” to generate a unique name for each test. 
Below is an example using files:

Examples – Capturing part of the path into the “name” variable

Source Path: c:\testdata\{name}.xml
Name Template: {name}

For a directory of files, here is what would be generated (File → Generated Name):

  • c:\testdata\report-results.xml → report-results
  • c:\testdata\report.png → skipped, as it doesn’t match the source path
  • c:\testdata\ui-results.xml → ui-results

Here is an example using folders:

Source Path: c:\testdata\{name}\

Name Template: {name}

For a directory, here is what would be generated (File → Generated Name):

  • c:\testdata\build123\ → build123
  • c:\testdata\build456\ → build456
  • c:\testdata\test.xml → skipped, as the file is not a folder

Wildcards are also supported.  In the Source Path, you can replace {name} with “*”. Wildcards are not supported in the Name Template field.

Example – Using wildcards

Source Path: c:\testdata\*\
Name Template: {name}

The following would be generated (File → Generated Name):

  • c:\testdata\build123\ → build123
  • c:\testdata\build456\ → build456
  • c:\testdata\test.xml → skipped, as the file is not a folder
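
Conceptually, the {name} capture behaves like a single path-segment wildcard. The sketch below illustrates that idea in Python; it is not Duette's implementation, and the matching rules shown (case-insensitive, one segment per capture) are assumptions.

# Illustration of the {name} capture described above; not Duette's implementation.
# The capture is treated as a single path segment matched by a wildcard.
import re

def capture_names(source_path, candidates):
    """Return {captured name: path} for candidates matching the Source Path pattern."""
    # Turn e.g. c:\testdata\{name}.xml or c:\testdata\*\ into a regex with one capture group.
    pattern = re.escape(source_path).replace(r"\{name\}", r"([^\\]+)").replace(r"\*", r"([^\\]+)")
    regex = re.compile("^" + pattern + "$", re.IGNORECASE)
    return {m.group(1): path for path in candidates if (m := regex.match(path))}

files = [r"c:\testdata\report-results.xml", r"c:\testdata\report.png", r"c:\testdata\ui-results.xml"]
print(capture_names(r"c:\testdata\{name}.xml", files))
# {'report-results': 'c:\\testdata\\report-results.xml', 'ui-results': 'c:\\testdata\\ui-results.xml'}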

Limitations

There are some limitations to the Source Path and Name Template fields.  These include the following:

  • A folder name followed by a file name cannot be captured, e.g. “c:\testdata\{name}\input.xml” is not supported.
  • The combine results option is not supported for zip file captures, e.g. “c:\testdata\{name}.zip”.
  • The combine results option is not supported when capturing folders, e.g. “c:\testdata\{name}\”.
  • If the captured files include both zip and non-zip files, the zip files will not be automatically expanded/traversed (but you can use the “combine results” option).  In this case zip files are treated like any other file making up a single result.

Configuring the Schedule

Now that your configuration is complete you are ready to set up your Import Schedule.  The import frequency can be configured from the Schedules tab. There are three options that can be configured:

  1. Adhoc
  2. Periodic
  3. Daily

  • Adhoc: no Period or Time settings apply.
  • Periodic: specify the synchronization frequency in minutes.
  • Daily: specify the time (24-hour clock) at which the synchronization will occur each day.

Once you have configured your import schedule, a summary of the configured import schedules is available.  You can see the time of the Last Run, the Next Run (if applicable), whether the schedule is enabled or not and the current import status.

You can use the tool bar to add new import schedules, delete an existing configuration, enable or disable an existing schedule, configure an existing schedule or manually initiate an import.

Import History

You can view the import history from the Import History tab.
From the Import History screen you can do the following:

  • View all synchronization events
  • Select to only view errors
  • Export the synchronization events to a CSV file
  • Clear the history