
General Introduction

This page is meant to provide a general overview of how the OLE and OLETS Jira Projects are used in the OLE QA Testing process. Wherever possible, this page will include links to both other pages in this wiki and external resources for more detailed information on a given subject.

...

Linking in Jira

Jira can link issues both within the same project and across different projects. The link referred to here is essentially an ordinary hyperlink with an added custom field, "link type," that explains the logical relationship between the linked issues. Any time a Jira issue ID is entered into a text field or a comment on a Jira issue, a link will be generated automatically.

Generally only two link types are used in the OLE testing process: the "Parent Jira" link and the "Tests/Tested By" link, explained below in more detail.

Parent Jira

The link used between Stories and Test Cases is the "Parent Jira" link. This is a one-way link declaring that the Story is the parent to the Test Case. The link will only be shown on the Test Case, not the Story. One Story may be the parent to many Test Cases, but a Test Case will not have more than one parent.

The Parent Jira link is declared when the OLE Test Case is written, or while it is open for editing. For more information on how to declare a Parent Jira link on an OLE Test Case, please refer to the wiki page "Creating Test Case Content."
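Because the Parent Jira link is shown only on the Test Case, collecting every Test Case under a given Story by hand can be tedious. For those comfortable with scripting, Jira's REST search endpoint accepts a JQL query for this. The following is a minimal sketch, not part of the official OLE procedure: the host, credentials, and issue key are placeholders, and it assumes a Jira version whose JQL supports the linkedIssues() function and a link-type string matching your instance's "Parent Jira" configuration.

# Hypothetical sketch: list the OLETS Test Cases whose parent is the
# Story OLE-1234, via Jira's REST search endpoint. The host,
# credentials, and issue key are placeholders.
import requests

JIRA = "https://jira.example.org"
jql = 'project = OLETS AND issue in linkedIssues("OLE-1234", "Parent Jira")'

resp = requests.get(
    f"{JIRA}/rest/api/2/search",
    params={"jql": jql, "fields": "key,summary"},
    auth=("username", "password"),
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    print(issue["key"], "-", issue["fields"]["summary"])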


Tests/Tested By

The link type used between Tasks (or Bug/Defects) and Test Cases is the "Tests/Tested By" link. This is a two-way link declaring that the Test Case tests the Task: on the OLE Jira project, the Task will show a "Tested By" link to the Test Case, and on the OLETS Jira project, the Test Case will show a "Tests" link to the Task. The two-way link is generated automatically on both issues when it is created on one, so there is no need to duplicate on the Task a link already declared on the Test Case. Generally, one Test Case will test one Task or Bug/Defect.

The "Tests/Tested By" link is usually declared after a Test Case has been written, and cannot be added while the Test Case is open for editing. For more on the relationship between Tasks and Test Cases, please refer to the wiki page "Creating Test Case Content." For specific instructions on how to create a link of this type, please refer to the "Linking Tasks" section of the "Creating Test Case Content" page.


...

Review of Jira Link Types

...


...

Documenting Your Testing Results

This section covers how to document the results of the testing process in the OLETS Jira project. In some cases, the results you need to document will depend upon which of two testing methods you employ.

"User Testing" is the most common testing method, and refers to the process of testing an issue by hand, linking it to the appropriate OLE issues, and recording a Selenium script.

"Selenium Testing" is generally only used for Bug/Defect issues, and involves selecting a pre-populated OLETS Test Case and executing the pre-recorded Selenium script to determine success or failure.

User Testing

  • User Testing consists of the following steps:
    • Revising the Test Case
      • You will need to revisit the Test Case and ensure that the steps, if they exist, adequately match the current functionality of the OLE system.
      • If testing steps do not yet exist, you will need to work out the steps necessary to fulfill the purpose statement of the Test Case.
    • Executing the Steps
      • You will need to execute the testing steps by hand. Selenium will need to be running so that it can record your actions.
    • Recording Your Results
      • Once you have determined whether the test should pass or fail, you will need to advance the Test Case through the workflow, and comment with your results and any other feedback you would like to include.


Selenium Testing

  • Selenium Testing consists of the following steps:
    • Executing the Selenium Test Script
      • You will need to open the Test Case and find the Selenium test script in the "Attachments" section. It will be included as an HTML file, with a name like "OLETS-### - Title of Test Case," where ### is the OLETS Test Case ID (a filename-matching sketch follows this list).
    • Recording Your Results
      • In Selenium testing, your comments will largely focus on whether the script passed or failed. If the script failed, you will need to include additional documentation, such as a screenshot or a copy of the Selenium log.
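The naming convention above also makes the script easy to pick out mechanically. The following is a minimal sketch; the attachment names are invented for illustration.

# Hypothetical sketch: find the Selenium script among a Test Case's
# attachments by matching the "OLETS-### - Title of Test Case" naming
# convention. The filenames below are invented examples.
import re

attachments = [
    "error-screenshot.png",
    "OLETS-142 - Loan a Circulating Item.html",
    "selenium.log",
]

pattern = re.compile(r"^OLETS-\d+ - .+\.html$")
scripts = [name for name in attachments if pattern.match(name)]
print(scripts)  # -> ['OLETS-142 - Loan a Circulating Item.html']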

Determining Success or Failure

User Testing

The main point on which success or failure hinges during user testing is the fulfillment of the Acceptance Criteria statement, written out as a statement of purpose on the Test Case. The AC statement is meant to give the QA Team and testers a clear, well-defined goal to meet in testing.

There may be some cases in which the function is fulfilled, but not in exactly the way the Functional Specification requested. Such details are secondary, but it is sometimes necessary to fail a Test Case on the finer points of functionality. If, for example, information needs to be displayed, but is not displayed in a way that is clear and useful to the users, a Test Case might fulfill its stated purpose yet fail on those finer points.

Failing a Test Case over such points of nuance is a difficult call. The QA Team encourages you to think in terms of the big picture: does the software accomplish its task in a reasonably useful way? If so, the best course of action may be to pass the Test Case, but share your insight in a comment so that we can address the issue in a later release of the software.


Selenium Testing

In Selenium testing, success or failure depends upon the outcome of the test script. If the Selenium script finishes and reports 0 failures, the test was successful, and the Test Case should be passed. If the Selenium script reports a failure, the test has failed, and the Test Case should be failed.

There may be times when a non-essential function causes a test script to fail. There may also be cases in which a test script fails simply because the OLE system now refers to a function in a slightly different way than it did previously. The QA Team will review all failures, and if a failure in Selenium testing is determined to be the result of a non-essential function or a faulty test script, the script will be revised and you will be asked to re-test the issue.

Some commands indicate that a true failure has occurred during test script execution: if a command that begins with "assert" or "verify" fails, it will be highlighted in dark red in the command window, and an error message will appear in the log window in a bold red font.

It is also worth noting that warnings may appear frequently in the Selenium log file, especially if you happen to be testing a function that involves pop-up windows or items opening in new tabs. These warnings will appear in red, but they will not be bolded, and they will be prefixed with "warn" rather than "error" in the log window. These are just informational messages, and do not signify an impending failure.
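Because failures and warnings are distinguished by their prefixes, a saved copy of the log can be triaged mechanically. The sketch below rests on assumptions: that the log has been saved as plain text, and that each entry leads with a severity tag such as "[error]" or "[warn]". Adjust it to the actual format of your log file.

# Hypothetical sketch: separate true failures from informational
# warnings in a saved Selenium log. Assumes each entry begins with a
# severity tag such as "[error]" or "[warn]".
def summarize_log(path):
    errors, warnings = [], []
    with open(path) as log:
        for line in log:
            tag = line.lstrip().lstrip("[").lower()
            if tag.startswith("error"):
                errors.append(line.rstrip())
            elif tag.startswith("warn"):
                warnings.append(line.rstrip())
    return errors, warnings

errors, warnings = summarize_log("selenium.log")
print(f"{len(errors)} error(s), {len(warnings)} warning(s)")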


Recording Your Results

This section details a number of methods for providing feedback on the results of your test. The first and most important thing is to advance the Test Case through its workflow by selecting "Test Passed" or "Test Failed" from the menu bar at the top of the screen in OLETS. This will assign the Test Case to the QA Team, notifying us that you've finished testing the issue.

The second feedback method to emphasize is the use of comments in OLETS. While the QA Team can determine whether an issue has passed or failed by reviewing its workflow, putting this information in a comment allows you to make your reasons explicit, especially in the case of failure. We always need to know why a Test Case has failed, and in the case of Bug/Defect testing, we may also need to know why it passed.


Advance the Test Case in the Workflow

This should always be the first step you take in providing feedback on the outcome of your Test Case. You will click either "Test Passed" or "Test Failed" in the menu bar at the top of the Test Case (pictured below).

[Screenshot: the "Test Passed" and "Test Failed" options in the menu bar at the top of a Test Case]

A dialog box will open, allowing you to enter more specific information. You'll need to select "Pass" or "Fail" from the "Test Results" selection on the dialog box (pictured below), enter the most recent date on which you ran the test, and then enter your feedback in the "Comment" section.

[Screenshot: the Test Results dialog box]

Note that there is an "Attachment" option. If you want to add an attachment, you can do so through this dialog box. The different kinds of attachments most used in OLE testing will be discussed in detail a bit later on in this document, with links to more specific instructions on how to create and attach them.


Document Your Findings with a Comment

There are two things to include when writing up a comment to document your test results. The first, and most important, is to be explicit about whether the test passed or failed: once your comment has been attached, it will appear in a long list of comments on the issue, with no particular connection to the result you selected from the Test Results dialog box. The second is the feedback itself, discussed a little further below.

If you need to add further information or rectify a mistake you made in the comment, you can do so by scrolling down to your comment at the bottom of the screen, then clicking the pencil icon that appears when you hover your mouse over the comment. If you need to add an additional comment at any time – to reply to a discussion that develops on a particular Test Case, for example – you can use the "Comment" button at the bottom of the screen.


[Screenshot: a comment on a Test Case]

Comments are where you will want to document any feedback you have to give on a particular Test Case. If, for example, you feel that a Test Case should pass because it does accomplish exactly what the Acceptance Criteria specified, but you feel that it does not do so in a particularly satisfactory way, we really want to hear your insights on the matter. Kuali software is meant to be designed by the users, and we want to craft the application so that it can be used efficiently and effectively.


Attach a Screenshot of OLE

Sometimes you will want to show a particular error message you have received, or show where an element is missing from the layout of a particular screen. In such cases, include a screenshot of the OLE application along with your test results.

Detailed instructions are available for taking screenshots and attaching files to Jira issues.

...

...

Attach a Screenshot of Selenium

Selenium screenshots are the easiest way of providing feedback during Selenium testing. If a Selenium script fails and only one error is reported, a screenshot of the Selenium window showing both the failed command and the error message it generates will help the QA Team to determine the cause of the failure.

Detailed instructions are available for taking screenshots, capturing the right information in a Selenium screenshot, and attaching files to Jira issues.


Attach a Selenium Log File

If you notice multiple failed commands during the course of Selenium testing, the most helpful feedback option will be a copy of the Selenium log. You can take multiple screenshots of the failures, but adding the log file is the more efficient choice.

Detailed instructions are available for copying the Selenium log and attaching files to Jira issues.
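If you prefer scripting to clicking, Jira's REST API also accepts attachments. The sketch below is a minimal example with a placeholder host, credentials, and issue key; it is not part of the documented OLE procedure.

# Hypothetical sketch: attach a saved Selenium log to an OLETS Test
# Case through Jira's REST attachments endpoint. All identifiers are
# placeholders.
import requests

JIRA = "https://jira.example.org"
issue_key = "OLETS-567"

with open("selenium.log", "rb") as log_file:
    resp = requests.post(
        f"{JIRA}/rest/api/2/issue/{issue_key}/attachments",
        headers={"X-Atlassian-Token": "no-check"},  # required to pass Jira's XSRF check
        files={"file": log_file},
        auth=("username", "password"),
    )
resp.raise_for_status()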


How to Contact the QA Team

Rich Slabach, QA Manager

Jain Waldrip, QA Analyst
