
Testing

This article covers how to execute and document the results of your tests in the OLETS Jira project. In some cases, the results you need to document will depend upon which of two testing methods you employ.

User Testing is the most common testing method, and refers to the process of testing an issue by hand, linking it to the appropriate OLE issues, and recording a Selenium script.

Selenium Testing is generally only used for Bug/Defect issues, and involves selecting a pre-populated OLETS Test Case and executing the pre-recorded Selenium script to determine success or failure.

User Testing

  • User Testing consists of the following steps:
    • Revising the Test Case
      • You will need to revisit the Test Case and ensure that its steps, if they exist, adequately match the current functionality of the OLE system.
      • If testing steps do not yet exist, you will need to work out the steps necessary to fulfill the purpose statement of the Test Case.
    • Executing the Steps
      • You will need to execute the testing steps by hand. Selenium will need to be running so that it can record your actions (a sketch of the kind of steps a recording captures appears after this list).
    • Recording Your Results
      • Once you have determined whether the test should pass or fail, you will need to advance the Test Case through the workflow and comment with your results and any other feedback you would like to include.
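
Below is a minimal sketch, in Python with the Selenium WebDriver bindings, of the kind of steps a recording captures while you test by hand. The URL, element locators, and expected text are hypothetical placeholders, not values taken from an actual OLETS Test Case.

    # A hand-executed step sequence of the sort Selenium records.
    # All URLs and locators below are hypothetical placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    try:
        # Step 1: open the OLE screen named in the Test Case (placeholder URL)
        driver.get("https://ole.example.edu/portal.do")

        # Step 2: perform the action the Test Case steps describe
        driver.find_element(By.ID, "searchButton").click()

        # Step 3: confirm the expected result before passing the Test Case
        results = driver.find_element(By.ID, "resultsPanel")
        assert "Search Results" in results.text, "expected results panel missing"
    finally:
        driver.quit()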

Selenium Testing

  • Selenium Testing consists of the following steps:
    • Executing the Selenium Test Script
      • You will need to open the Test Case and find the Selenium test script in the "Attachments" section. It will be included as an HTML file with a name like "OLETS-### - Title of Test Case," where ### is the OLETS Test Case ID (a sketch of retrieving this attachment programmatically appears after this list).
    • Recording Your Results
      • In Selenium testing, your comments will largely focus on whether the script passed or failed. If the script failed, you will need to include additional documentation, such as a screenshot or a copy of the Selenium log.
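
If you prefer to pull the script down without browsing the issue, the sketch below shows one way to fetch it through Jira's standard REST API, assuming the OLETS instance exposes the usual /rest/api/2 endpoints. The host, credentials, and issue key are placeholders.

    # Fetch the attached Selenium test script from a Test Case via Jira's REST API.
    # The host, credentials, and issue key are hypothetical placeholders.
    import requests

    JIRA = "https://jira.example.org"
    AUTH = ("your.username", "your.password")
    issue_key = "OLETS-123"

    resp = requests.get(
        f"{JIRA}/rest/api/2/issue/{issue_key}",
        params={"fields": "attachment"},
        auth=AUTH,
    )
    resp.raise_for_status()

    # Save any attached HTML script, e.g. "OLETS-123 - Title of Test Case.html"
    for att in resp.json()["fields"]["attachment"]:
        if att["filename"].endswith(".html"):
            with open(att["filename"], "wb") as f:
                f.write(requests.get(att["content"], auth=AUTH).content)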

Determining Success or Failure

The methods by which you determine success or failure of a given test depend upon the testing procedure used. Below, you will find separate instructions for User Testing and Selenium Testing.

User Testing

The main point on which success or failure hinges during user testing is the fulfillment of the Acceptance Criteria statement, written out as a statement of purpose on the Test Case. The AC statement is meant to give the QA Team and testers a clear, well-defined goal to meet in testing.

There may be cases in which the function is fulfilled, but not in the exact way requested by the Functional Specification. Such details are a secondary point, but it is sometimes necessary to fail a Test Case because of them. If, for example, information needs to be displayed, but is not displayed in a way that is clear and useful to users, a Test Case might fulfill its stated purpose yet still fail on the finer details.

Failing a test over such issues of nuance is a difficult call. The QA Team encourages you to think in terms of the big picture: does the software accomplish its task in a reasonably useful way? If so, the best course of action may be to pass the Test Case but share your insight in a comment, so that we can address the issue in a later release of the software.

Selenium Testing

In Selenium testing, success or failure depends upon the outcome of the test script. If the Selenium script finishes and reports 0 failures, the test was successful, and the Test Case should be passed. If the Selenium script reports a failure, the test has failed, and the Test Case should be failed.

There may be times when a non-essential function causes a test script to fail. There may also be cases in which a test script fails simply because the OLE system refers to a function in a slightly different way than it did previously. The QA Team will review all failures, and if a failure in Selenium testing is determined to be the result of a non-essential function or a faulty test script, the script will be revised and you will be asked to re-test the issue.

Some command failures indicate that a true failure has occurred during test script execution. If a command that begins with "assert" or "verify" fails, it will be highlighted in dark red in the command window, and an error message will appear in the log window in a bold red font.

It is also worth noting that warnings may appear frequently in the Selenium log file, especially if you happen to be testing a function that involves pop-up windows or items opening in new tabs. These warnings appear in red, but they are not bolded, and they are prefixed with "warn" rather than "error" in the log window. They are just informational messages and do not signify a failure.
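
The sketch below shows one rough way to separate true errors from informational warnings when reviewing a saved copy of the log. It assumes each line carries a level prefix such as "[error]" or "[warn]"; the exact format may differ between Selenium versions, so treat it as illustrative.

    # Classify saved Selenium log lines into errors (real failures) and
    # warnings (informational messages). The "[error]"/"[warn]" prefixes
    # are an assumption about the log format.
    def summarize_log(path):
        errors, warnings = [], []
        with open(path) as log:
            for line in log:
                lowered = line.lower()
                if "[error]" in lowered:
                    errors.append(line.strip())    # the bold red entries
                elif "[warn]" in lowered:
                    warnings.append(line.strip())  # plain red, informational
        return errors, warnings

    errors, warnings = summarize_log("selenium.log")
    print(f"{len(errors)} error(s), {len(warnings)} warning(s)")
    # Fail the Test Case only when at least one true error is present.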

Recording Your Results

This section details a number of methods for providing feedback on the results of your test. The first and most important thing is to advance the Test Case through its workflow by selecting "Test Passed" or "Test Failed" from the menu bar at the top of the screen in OLETS. This will assign the Test Case to the QA Team, notifying us that you've finished testing the issue.

The second most important feedback method to emphasize is the use of comments in OLETS. While the QA Team can determine whether an issue has been passed or failed by reviewing its workflow, putting this information in a comment allows you to make your reasons explicit, especially in the case of failure. We always need to know why a Test Case has failed, and in the case of Bug/Defect testing, we may also need to know why it passed.

Advance the Test Case in the Workflow

This should always be the first step you take in providing feedback on the outcome of your Test Case. You will click either "Test Passed" or "Test Failed" in the menu bar at the top of the Test Case (pictured below).

A dialog box will open, allowing you to enter more specific information. You'll need to select "Pass" or "Fail" from the "Test Results" selection on the dialog box (pictured below), enter the most recent date on which you ran the test, and then enter your feedback in the "Comment" section.

Note that there is an "Attachment" option. If you want to add an attachment, you can do so through this dialog box. The different kinds of attachments most used in OLE testing will be discussed in detail a bit later on in this document, with links to more specific instructions on how to create and attach them.
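
For testers who script their reporting, the sketch below shows a REST equivalent of clicking "Test Passed", assuming the standard Jira transitions endpoint. The host, credentials, and issue key are placeholders, and your instance's transition names may differ, which is why the sketch looks the transition id up rather than hard-coding it.

    # Advance a Test Case through the workflow and add a comment in one request.
    # Host, credentials, and issue key are hypothetical placeholders.
    import requests

    JIRA = "https://jira.example.org"
    AUTH = ("your.username", "your.password")
    issue_key = "OLETS-123"

    # Look up the id of the "Test Passed" transition on this issue.
    transitions = requests.get(
        f"{JIRA}/rest/api/2/issue/{issue_key}/transitions", auth=AUTH
    ).json()["transitions"]
    target = next(t for t in transitions if t["name"] == "Test Passed")

    # Perform the transition and attach the explanatory comment.
    requests.post(
        f"{JIRA}/rest/api/2/issue/{issue_key}/transitions",
        json={
            "transition": {"id": target["id"]},
            "update": {"comment": [{"add": {"body": "PASSED: all steps matched the AC."}}]},
        },
        auth=AUTH,
    ).raise_for_status()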

Document Your Findings with a Comment

There are two things to include when writing up a comment to document your test results. The first, and most important, is to be explicit about whether the test passed or failed. The reason for this is that once your comment has been attached, it will appear in a long list of comments on the issue, and will not be attached in any particular way to the results you selected from the test results dialog box.

If you need to add further information or rectify a mistake you made in the comment, you can do so by scrolling down to your comment at the bottom of the screen, then clicking the pencil icon that appears when you hover your mouse over the comment. If you need to add an additional comment at any time – to reply to a discussion that develops on a particular Test Case, for example – you can use the "Comment" button at the bottom of the screen.

The second thing to include is any feedback you have on a particular Test Case. If, for example, you feel that a Test Case should pass because it accomplishes exactly what the acceptance criteria specified, but that it does not do so in a particularly satisfactory way, we really want to hear your insights on the matter. Kuali software is meant to be designed by its users, and we want to craft the application so that it can be used efficiently and effectively.
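
As a hedged illustration, the sketch below posts a results comment through Jira's standard REST comment endpoint and then edits it in place, the scripted counterpart of the pencil icon described above. The host, credentials, and issue key are placeholders.

    # Post a results comment, then correct it, via Jira's REST API.
    # Host, credentials, and issue key are hypothetical placeholders.
    import requests

    JIRA = "https://jira.example.org"
    AUTH = ("your.username", "your.password")
    issue_key = "OLETS-123"

    # Lead with an explicit verdict so it stands out in a long comment list.
    body = "FAILED: error message appears when saving the record (screenshot attached)."
    created = requests.post(
        f"{JIRA}/rest/api/2/issue/{issue_key}/comment",
        json={"body": body},
        auth=AUTH,
    ).json()

    # Edit the comment later if you need to add or correct information.
    requests.put(
        f"{JIRA}/rest/api/2/issue/{issue_key}/comment/{created['id']}",
        json={"body": body + " Reproduced on a second run as well."},
        auth=AUTH,
    ).raise_for_status()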

Attach a Screenshot of OLE

You may sometimes want to show a particular error message you have received, or show where an element is missing from the layout of a particular screen. In such cases, include a screenshot of the OLE application along with your test results.

Detailed instructions are available for taking screenshots and attaching files to Jira issues.
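
If you want to capture the screen from a script rather than with your operating system's screenshot tool, the snippet below is one minimal option using the Pillow imaging library; this is an assumption of convenience, not the method the detailed instructions describe.

    # Grab the full screen showing the OLE error and save it for attachment.
    # Requires the Pillow library; works on Windows and macOS.
    from PIL import ImageGrab

    ImageGrab.grab().save("ole-error-message.png")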

Attach a Screenshot of Selenium

Selenium screenshots are the easiest way of providing feedback during Selenium testing. If a Selenium script fails and only one error is reported, a screenshot of the Selenium window showing both the failed command and the error message it generates will help the QA Team to determine the cause of the failure.

Detailed instructions are available for taking screenshots, capturing the right information in a Selenium screenshot, and attaching files to Jira issues.

Attach a Selenium Log File

If you notice multiple failed commands during the course of Selenium testing, the most helpful feedback option will be a copy of the Selenium log. You can take multiple screenshots of the failures, but adding the log file is the more efficient choice.

Detailed instructions are available for copying the Selenium log and attaching files to Jira issues.
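
Once you have saved the log (or a screenshot), the sketch below shows one way to attach it to the Test Case through Jira's standard attachments endpoint. The host, credentials, and issue key are placeholders; the X-Atlassian-Token header is required to pass Jira's XSRF check.

    # Attach a saved Selenium log file to the Test Case via Jira's REST API.
    # Host, credentials, and issue key are hypothetical placeholders.
    import requests

    JIRA = "https://jira.example.org"
    AUTH = ("your.username", "your.password")
    issue_key = "OLETS-123"

    with open("selenium.log", "rb") as f:
        resp = requests.post(
            f"{JIRA}/rest/api/2/issue/{issue_key}/attachments",
            headers={"X-Atlassian-Token": "no-check"},
            files={"file": f},
            auth=AUTH,
        )
    resp.raise_for_status()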
