
Table of Contents

General Introduction

This page is meant to provide a general overview of how the OLE and OLETS Jira Projects are used in the OLE QA Testing process. Wherever possible, this page will include links to both other pages in this wiki and external resources for more detailed information on a given subject.

Return to Top

What is Jira?

Issue Tracking Software

Jira is a web-based Issue Tracking Software application. Issue Tracking Software (ITS) is used primarily in software development to track the progress of issues related to the software's function. The issues tracked could be tasks to complete in order to establish a given function, requests to repair bugs in existing functions, requests from users to add new features, and a number of other development-related items.

Jira is highly configurable, and each installation can be customized to track a number of issue types. Kuali's Jira installation uses all of the issue types above, as well as a number of others, largely tailored to the development process of a given software project.

Jira Projects

In a single installation, Jira can host a number of projects. Each project roughly represents a single software development effort, with its issues tracked independently and divided by type. Each Kuali application has its own Jira project, and some applications have more than one. Issues can be linked across Jira projects, which is helpful for a community like Kuali, where our software often depends on other Kuali components, such as Rice (for document routing and workflow control) or the Kuali Financial System (for purchasing and invoicing functionality).

Return to Top


Key Terms

OLE (Jira Project)

The OLE Jira project is the main project for tracking the tasks we have marked out to define how the OLE application should function. The core team uses the OLE project to track functional specifications, and the development team uses it to track coding progress.

...

Return to Section
Return to Top

OLETS (Jira Project)

The OLE Test Scenarios (OLETS) Jira project is used for tracking issues from the main Jira project which have been designated as ready for testing. Software testing is tracked both for progress and posterity: we run reports on the completeness of overall software testing, and we keep testing materials in the OLETS project so that we can access them again later.

Return to Section
Return to Top

Story

The Story is the OLE Jira issue type that serves as the master task for a given piece of functionality. The Kuali design philosophy means that any major function requested in our software, like the ability to create a purchase order, starts out as a story that a user has told us; for example, "I work in acquisitions, and I need the software to enable me to place an order with a vendor." The sum of the Story issues in the OLE Jira project describes the full functionality that we want to offer with OLE.

Return to Section
Return to Top

Task

The Task is an OLE Jira issue type that serves as a "to do" item in developing a piece of functionality. The development team reviews the Stories and the functional specifications attached to them, and then determines what Tasks need to be accomplished in order to establish the requested functionality. Breaking up Stories into Tasks allows for faster and more incremental development.

Return to Section
Return to Top

Bug/Defect

Like the Task, the Bug/Defect issue type also serves as a "to do" for developers. In this case, the "to do" is to notify programmers that an already-established, previously tested piece of functionality is no longer working.

Return to Section
Return to Top

Test Case

The Test Case is the basic OLETS issue type. Test Cases tend to represent one item from the Acceptance Criteria of a Functional Specification document. Alternately, a Test Case can represent one Bug/Defect item which is ready for testing.

...

Return to Section
Return to Top

Review of Issue Types

Issue Type | Jira Project | Purpose
Story | OLE | The master task for a given piece of functionality
Task | OLE | A "to do" item in developing a piece of functionality
Bug/Defect | OLE | A "to do" to repair previously tested functionality that has stopped working
Test Case | OLETS | A test of one Acceptance Criteria item or one Bug/Defect
The table above provides a brief summary of all the information provided in this section. See the Linking section below for an expanded version of this table with a review of the above issue types and their relationship to Test Cases.

Return to Section
Return to Top


Linking in Jira

Jira is able to link issues both within the same project and across different projects. The link referred to here is essentially a normal Internet hyperlink, with an added custom field, "link type," to explain the logical relationship between the linked issues. Any time a Jira issue ID is entered into a text field or a comment on a Jira issue, a link will automatically be generated.

Generally only two link types are used in the OLE testing process: the "Parent Jira" link and the "Tests/Tested By" link, explained below in more detail.
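
For illustration only, the sketch below creates such a link through Jira's standard REST API using Python. The base URL, credentials, issue keys, and the link type name are placeholder assumptions, not confirmed OLE values; in practice, these links are created through the Jira web interface.

```python
# Minimal sketch: creating an issue link via Jira's REST API.
# The base URL, credentials, issue keys, and link type name below are
# placeholders; check your installation's configured link types first.
import requests

JIRA_URL = "https://jira.example.org"   # hypothetical Jira base URL
AUTH = ("username", "password")         # HTTP basic auth, for this sketch only

payload = {
    "type": {"name": "Tests"},            # assumed name of the Tests/Tested By link type
    "inwardIssue": {"key": "OLE-1234"},   # hypothetical Task key
    "outwardIssue": {"key": "OLETS-567"}, # hypothetical Test Case key
}
# Which side counts as inward vs. outward depends on how the link type
# is configured in the installation.
response = requests.post(f"{JIRA_URL}/rest/api/2/issueLink",
                         json=payload, auth=AUTH)
response.raise_for_status()  # Jira returns 201 Created on success
```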

Parent Jira

The link used between Stories and Test Cases is the "Parent Jira" link. This is a one-way link declaring that the Story is the parent to the Test Case. The link will only be shown on the Test Case, not the Story. One Story may be the parent to many Test Cases, but a Test Case will not have more than one parent.

...

Return to Section
Return to Top

Tests/Tested By

The link type used between Tasks (or Bug/Defects) and Test Cases is the "Tests/Tested By" link. This is a two-way link, declaring that the Test Case tests the Task. On the OLE Jira project, the Task will show a "Tested By" link to the Test Case, and on the OLETS Jira project, the Test Case will show a "Tests" link to the Task. This two-way link is automatically generated on both issues when it is created on one, so there is no need to duplicate the link on the Task if it has been declared on the Test Case. Generally, one Test Case will test one Task or Bug/Defect.
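
To give a sense of how the two sides of this link appear from the API, the sketch below reads the issuelinks field of a hypothetical Test Case. Each link carries both an inward and an outward description, so the same link reads differently from each end. The URL, credentials, and issue key are placeholders.

```python
# Sketch: inspecting the links on an issue to see a two-way relationship.
# Base URL, credentials, and the issue key are placeholders.
import requests

JIRA_URL = "https://jira.example.org"
AUTH = ("username", "password")

response = requests.get(
    f"{JIRA_URL}/rest/api/2/issue/OLETS-567",
    params={"fields": "issuelinks"},
    auth=AUTH,
)
response.raise_for_status()

for link in response.json()["fields"]["issuelinks"]:
    # Each link carries either an outwardIssue or an inwardIssue,
    # depending on which side of the two-way link this issue is on.
    if "outwardIssue" in link:
        print(link["type"]["outward"], link["outwardIssue"]["key"])
    else:
        print(link["type"]["inward"], link["inwardIssue"]["key"])
```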

...


Review of Jira Link Types

Issue Type | Jira Project | Link to Test Case
Story | OLE | Parent Jira (one-way; shown only on the Test Case)
Task | OLE | Tests/Tested By (two-way)
Bug/Defect | OLE | Tests/Tested By (two-way)
The table above provides a brief summary of all the information provided in this section: a brief review of Jira issue types and their respective projects, as well as their link relationship to Test Cases.

Return to Section
Return to Top

The Testing Process in Jira

Below is a brief overview of the OLE testing process. It is more a summary than a detailed exploration, meant to give a general picture of the OLE testing workflow and how it fits into the larger scheme of the OLE development cycle.

Story Development

  • A Story issue is handed off for testing, with the Functional Specification document attached and complete Acceptance Criteria in place.
  • The Story is broken down into Tasks by the development team.
  • Code is written and submitted one Task at a time by the developers.
  • The code for each Task is checked individually by the development team.
  • Completed Tasks are moved to "Testing" status, and the related code is imported into the OLE Test Environment in bi-weekly updates.
  • The testers review the Release Documentation on the Kuali wiki to determine if a Task belonging to one of their Stories is ready for testing.
    • OLE Releases are divided first into Milestones. A Milestone Release is the release of a new numbered version of the OLE software package, and represents the implementation of a large bundle of new functions and features.
      • The Demo Environment and official download link are updated only on Milestone releases.
      • The current Milestone Release is OLE-0.8.0.
    • OLE Milestone Releases are subdivided into Iterations. Iterations are marked with a letter, and are bi-weekly, internal updates to the OLE application for testing purposes. Each Iteration marks the implementation of a large grouping of Tasks and newly resolved Bug/Defect issues.
      • The Test Environment is updated with each Iteration.
      • The current Iteration is OLE 0.8.0-I. (8/20/2012)
  • The testers revisit the OLETS Test Cases relevant to the promoted Tasks or Bug/Defects. The Test Case may need to be revised at this point, especially for Task testing.
    • If the functionality to be tested was not previously available, this is the best time for testers to review the process necessary to accomplish the main function described by the Test Case.
    • The Test Case must have the following information to be ready for testing:
      • A description stating the purpose of the Test Case
      • Steps describing the method for executing the test
    • A Selenium test script is recorded while the test is being executed (a rough WebDriver-style sketch of such a script follows this list).
  • Results are gathered from testing, and a determination is made as to whether the Test Case should pass or fail.
    • Pass
      • If the tester is able to successfully execute all testing steps necessary to fulfill the purpose of the Test Case, the test can be considered passed.
    • Fail
      • If the tester is unable to successfully execute all testing steps required by the Test Case, the test can be considered failed.
      • If the tester is able to execute all necessary steps, but the outcome does not satisfy the Acceptance Criteria statement on which the Test Case is based, then the test can be considered failed.
  • If the test passes, it is automatically assigned to the QA Analyst (Jain).
  • If the test fails, it is automatically assigned to the QA Manager (Rich).
    • In case of failure, the Task or Bug/Defect associated with the Test Case will be returned to an "in development" status.
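
As referenced above, OLE test scripts are recorded as Selenium IDE HTML files. For a rough sense of what such a recording contains, here is an equivalent sketch written against Selenium's Python WebDriver bindings; the URL, locators, and checked element are hypothetical.

```python
# Rough sketch of a recorded Test Case as a Python WebDriver test.
# The URL, locators, and expected elements are hypothetical; OLE's actual
# scripts are Selenium IDE HTML recordings of the documented testing steps.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://tst.ole.example.org/portal.do")  # hypothetical Test Environment URL
    driver.find_element(By.LINK_TEXT, "Purchase Order").click()
    # Each documented testing step becomes a command; any step that cannot
    # be completed (e.g. a missing element) fails the script.
    driver.find_element(By.ID, "document-header")
finally:
    driver.quit()
```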

Return to Section
Return to Top


Documenting Your Testing Results

This section covers how to document the results of the testing process in the OLETS Jira project. In some cases, the results you need to document will depend upon which of two testing methods you employ.

...

"Selenium Testing" is generally only used for Bug/Defect issues, and involves selecting a pre-populated OLETS Test Case and executing the pre-recorded Selenium script to determine success or failure.

User Testing

  • User Testing consists of the following steps:
    • Revising the Test Case
      • You will need to revisit the Test Case and ensure that the steps, if they exist, adequately match the current functionality of the OLE system.
      • If testing steps do not yet exist, you will need to review the steps necessary to fulfill the purpose statement of the Test Case.
    • Executing the Steps
      • You will need to execute the testing steps by hand. Selenium will need to be running so that it can record your actions.
    • Recording Your Results
      • Once you have determined whether the test should pass or fail, you will need to advance the Test Case through the workflow, and comment with your results and any other feedback you would like to include (a REST-flavored sketch of this step follows the list).
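
As referenced above, the final step can also be performed through Jira's standard REST API. The sketch below advances a hypothetical Test Case and posts a result comment; the base URL, credentials, and issue key are placeholders, and the transition names are taken from the workflow buttons described on this page.

```python
# Sketch: advancing a Test Case through the OLETS workflow and recording
# the result as a comment, using Jira's standard REST API. URL, credentials,
# and issue key are placeholders; transition ids vary per installation and
# are read from the issue's own transitions endpoint.
import requests

JIRA_URL = "https://jira.example.org"
AUTH = ("username", "password")
ISSUE = "OLETS-567"

# Look up the available workflow transitions (e.g. "Test Passed" / "Test Failed").
transitions = requests.get(
    f"{JIRA_URL}/rest/api/2/issue/{ISSUE}/transitions", auth=AUTH
).json()["transitions"]
passed = next(t for t in transitions if t["name"] == "Test Passed")

# Advance the workflow...
requests.post(
    f"{JIRA_URL}/rest/api/2/issue/{ISSUE}/transitions",
    json={"transition": {"id": passed["id"]}},
    auth=AUTH,
).raise_for_status()

# ...and document the outcome explicitly in a comment.
requests.post(
    f"{JIRA_URL}/rest/api/2/issue/{ISSUE}/comment",
    json={"body": "PASSED: all testing steps executed; Acceptance Criteria satisfied."},
    auth=AUTH,
).raise_for_status()
```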

Return to Section
Return to Top

Selenium Testing

  • Selenium Testing consists of the following steps:
    • Executing the Selenium Test Script
      • You will need to open the Test Case and find the Selenium test script in the "Attachments" section. It will be included as an HTML file, with a name like "OLETS-### - Title of Test Case," where ### is the OLETS Test Case ID. (A sketch of replaying such scripts in batch follows this list.)
    • Recording Your Results
      • In Selenium testing, your comments will largely focus on whether the script passed or failed. If the script failed, you will need to include additional documentation, such as a screenshot or a copy of the Selenium log.
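
As noted above, these HTML scripts are normally replayed interactively in Selenium IDE. The classic Selenium server also provides an -htmlSuite runner that can replay them in batch; the sketch below invokes it from Python. The jar name, browser string, base URL, and file paths are placeholders, and the runner expects a suite file that lists the individual test case files rather than a single attachment.

```python
# Hedged sketch: replaying recorded HTML scripts in batch with the classic
# Selenium server's -htmlSuite runner. Jar name, browser string, base URL,
# and file paths are placeholders.
import subprocess

subprocess.run(
    [
        "java", "-jar", "selenium-server.jar",
        "-htmlSuite", "*firefox",
        "https://tst.ole.example.org",  # hypothetical Test Environment base URL
        "ole_suite.html",               # suite file listing the OLETS-### scripts
        "results.html",                 # pass/fail report is written here
    ],
    check=True,  # raise if the runner exits with a non-zero status
)
```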

Return to Section
Return to Top

Determining Success or Failure

User Testing

The main point on which success or failure hinges during user testing is the fulfillment of the Acceptance Criteria statement, written out as a statement of purpose on the Test Case. The AC statement is meant to give the QA Team and testers a clear, well-defined goal to meet in testing.

...

Return to Section
Return to Top

Selenium Testing

In Selenium testing, success or failure depends upon the outcome of the test script. If the Selenium script finishes and reports 0 failures, the test was successful, and the Test Case should be passed. If the Selenium script reports a failure, the test has failed, and the Test Case should be failed.

...

Return to Section
Return to Top

Recording Your Results

This section details a number of methods for providing feedback on the results of your test. The first and most important thing is to advance the Test Case through its workflow by selecting "Test Passed" or "Test Failed" from the menu bar at the top of the screen in OLETS. This will assign the Test Case to the QA Team, notifying us that you've finished testing the issue.

...

Return to Section
Return to Top

Advance the Test Case in the Workflow

This should always be the first step you take in providing feedback on the outcome of your Test Case. You will click either "Test Passed" or "Test Failed" in the menu bar at the top of the Test Case (pictured below).

...

Return to Section
Return to Top

Document Your Findings with a Comment

There are two things to include when writing up a comment to document your test results. The first, and most important, is to be explicit about whether the test passed or failed: once posted, your comment will appear in a long list of comments on the issue, with no particular tie to the result you selected from the Test Results dialog box.

...

Return to Section
Return to Top

Attach a Screenshot of OLE

Sometimes you may want to show a particular error message you have received, or show where an element is missing from the layout of a screen. In such cases, include a screenshot of the OLE application along with your test results.

...

Return to Section
Return to Top

Attach a Screenshot of Selenium

Selenium screenshots are the easiest way of providing feedback during Selenium testing. If a Selenium script fails and only one error is reported, a screenshot of the Selenium window showing both the failed command and the error message it generates will help the QA Team to determine the cause of the failure.

...

Return to Section
Return to Top

Attach a Selenium Log File

If you notice multiple failed commands during the course of Selenium testing, the most helpful feedback option will be a copy of the Selenium log. You can take multiple screenshots of the failures, but adding the log file is the more efficient choice.
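
For illustration, the sketch below attaches a log file to a hypothetical Test Case through Jira's standard REST API; in practice, attachments are added through the Jira web interface. The base URL, credentials, file name, and issue key are placeholders.

```python
# Hedged sketch: attaching a log file to a Test Case via Jira's REST API.
# The URL, credentials, issue key, and file name here are placeholders.
import requests

JIRA_URL = "https://jira.example.org"
AUTH = ("username", "password")

with open("selenium.log", "rb") as log_file:
    response = requests.post(
        f"{JIRA_URL}/rest/api/2/issue/OLETS-567/attachments",
        headers={"X-Atlassian-Token": "no-check"},  # required to bypass the XSRF check
        files={"file": log_file},
        auth=AUTH,
    )
response.raise_for_status()
```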

...

Return to Section
Return to Top

How to Contact the QA Team

Rich Slabach, QA Manager

Jain Waldrip, QA Analyst

...