Saturday, April 22, 2017

Adding some automated testing via TMS

A simple automation scenario is to execute test cases in sequence via an external tool such as Maven or QTP.

The main trigger in this sample scenario is test execution plans assigned to a dedicated user, "agent1". Once plans are assigned to that user, a Python script can search for the issues, execute the test steps, and transition each execution to Passed or Failed. This allows you to distribute execution across multiple servers.

Let's get started with the agent preparation: we need a Python installation and the jira pip package (pip install jira).
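To verify that the agent can actually reach Jira before wiring up the full script, a quick sanity check might look like this (the server address and credentials are the example values used in the script below):

from jira import JIRA

# Authenticate as the agent and print the logged-in user name.
jira = JIRA(basic_auth=('agent1', 'agent1'), server='http://192.168.0.1:8080')
print(jira.myself()['name'])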

The following script is responsible for finding and executing the tests (it can also be scheduled via cron or systemd, for example):

from jira import JIRA
import json
import logging
import subprocess
import re

FORMAT = '%(asctime)-15s %(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger()
logger.setLevel(logging.INFO)

jira = JIRA(basic_auth=('agent1', 'agent1'), server='http://192.168.0.1:8080')

# Map field names to field ids so custom fields can be looked up by name.
allfields = jira.fields()
nameMap = {field['name']: field['id'] for field in allfields}

# Find the open test execution plans assigned to this agent.
my_test_plans = jira.search_issues(
    'assignee=currentUser() and issueType="Test Execution Plan" and status=Open')

for issue in my_test_plans:
    logger.info ( "Test execution plan: " + str(issue) )
    transitions = jira.transitions(issue)
    inprogid = jira.find_transitionid_by_name(issue, 'In Progress')

    jira.transition_issue(issue, str(inprogid))

    subtask_issues = issue.fields.subtasks
    for subtask in subtask_issues:
        passid = jira.find_transitionid_by_name(subtask, 'Passed')
        failid = jira.find_transitionid_by_name(subtask, 'Failed')
        subtask_issue = jira.issue(subtask.key)
        # "Execution Steps" is a custom field holding one JSON document per step.
        steps = getattr(subtask_issue.fields, nameMap['Execution Steps'])
        count = 0
        stop_execution = False
        for step in steps:
            count = count + 1
            logger.info(str(count) + ": " + str(step))
            jsonsteps = json.loads(step)
            if str(jsonsteps['step']).startswith("execute"):
                try:
                    # Strip the leading "execute" keyword and run the rest as a shell command.
                    ret = subprocess.check_output(
                        str(jsonsteps['step']).replace("execute", "", 1),
                        stderr=subprocess.STDOUT,
                        shell=True).decode('utf-8')
                    logger.info("Execution finished ok ... checking output: " + ret)
                    # The expected result is a regex matched against the output.
                    pattern = re.compile(str(jsonsteps['expected']))
                    if pattern.match(ret):
                        logger.info("Output looks ok!")
                    else:
                        logger.info("Output looks BAD!")
                        stop_execution = True
                        jira.transition_issue(subtask_issue, str(failid))
                        break
                except Exception:
                    # A failing command raises CalledProcessError, so a
                    # non-zero exit code also fails the test case.
                    logger.info("Exception occurred")
                    stop_execution = True
                    jira.transition_issue(subtask_issue, str(failid))
                    break
        else:
            # The step loop completed without a break: every step passed.
            jira.transition_issue(subtask_issue, str(passid))

        if stop_execution:
            break

So how can something like this be used? The whole script relies on two conventions: when a step is automated, its text starts with the word "execute", and the expected results field holds a regular expression that must match the output of the command.
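For illustration, a test step following these two conventions might look like the entry below; the command and pattern are made-up examples, but the 'step' and 'expected' keys are the ones the script reads:

import json

# The step text starts with "execute"; "expected" is a regex that
# should match the command's output for the step to pass.
sample_step = json.dumps({
    'step': 'execute echo hello-world',
    'expected': 'hello-.*'
})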


Most applications on the market also return a non-zero exit code when they encounter an error; the tests are failed in that case as well.
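This works because check_output raises CalledProcessError on a non-zero exit code, which the script's except branch turns into a Failed transition. A minimal demonstration:

import subprocess

try:
    # "exit 1" simulates a tool signalling an error via its exit code.
    subprocess.check_output('exit 1', stderr=subprocess.STDOUT, shell=True)
except subprocess.CalledProcessError as err:
    print('Command failed with exit code', err.returncode)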

The sample script above has quite limited functionality and error handling; there is room for improvement, and it is not intended for production use in its current form. It is only meant to demonstrate the capabilities.

Monday, April 17, 2017

Adding new features to the TMS addon for Jira: improved reporting

Adding and running test cases are not the only activities a test team needs; easy exploration of stories, test plans, and executions is often required as well.

The issue explorer uses Jira's issue linking feature to indicate the relationships between stories/requirements and test plans, and between a plan and its executions.

Most links are created automatically when executions are scheduled; the only missing link is story (or epic) to test plan. As soon as a Jira issue is linked to a test plan using an "Is verified by" link, the report provides a comprehensive view of the linked tests and their progress.
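That link can also be created from a script via the jira package's create_issue_link call. A sketch, where the issue keys are hypothetical and the link type name must match how "Is verified by" is defined on your instance:

from jira import JIRA

jira = JIRA(basic_auth=('agent1', 'agent1'), server='http://192.168.0.1:8080')

# Link a story to the test plan that verifies it (keys are examples;
# swap inwardIssue/outwardIssue if your link type points the other way).
jira.create_issue_link(type='Is verified by',
                       inwardIssue='STORY-1',
                       outwardIssue='PLAN-1')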


Quick reference of the links used:

Story "is verified by" test plan which has "subtasks" - test cases which are "related" to executions.


Monday, April 10, 2017

A new JIRA addon for the test teams out there

Check out the new test management inside JIRA:

https://marketplace.atlassian.com/plugins/com.valens.testmanagement.tmsframework/server/overview