...
Selenium Testing
Selenium Testing consists of the following steps:
- Executing the Selenium Test Script
  - You will need to open the Test Case and find the Selenium test script in the "Attachments" section. It is attached as an HTML file named "OLETS-### - Title of Test Case," where ### is the OLETS Test Case ID.
- Recording Your Results
  - In Selenium testing, your comments will largely focus on whether the script passed or failed. If the script failed, you will need to include additional documentation, such as a screenshot or a copy of the Selenium log.
- Executing the Selenium Test Script
...
Return to Section
Return to Top
Selenium Testing
In Selenium testing, success or failure depends upon the outcome of the test script. If the Selenium script finishes and reports 0 failures, the test was successful, and the Test Case should be passed. If the Selenium script reports a failure, the test has failed, and the Test Case should be failed.
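If you have saved a copy of the Selenium log to a file (see "Attach a Selenium Log File" below), you can double-check the outcome by tallying the log entries by level. The following is only a minimal sketch, not part of the official OLE instructions: it assumes the log has been copied into a plain-text file and that each entry begins with a level keyword such as "info", "warn", or "error"; only "error" entries indicate failed commands, and the file name is hypothetical.

```python
# Minimal sketch (assumption: the Selenium log was copied into a plain-text
# file, and each entry begins with a level keyword such as "info", "warn",
# or "error"). Only "error" entries count as failed commands.
from collections import Counter
from pathlib import Path


def summarize_log(log_path: str) -> Counter:
    """Tally Selenium log entries by their leading level keyword."""
    counts = Counter()
    for raw in Path(log_path).read_text(encoding="utf-8").splitlines():
        line = raw.strip().lstrip("[").lower()  # tolerate "[error] ..." as well as "error: ..."
        for level in ("error", "warn", "info"):
            if line.startswith(level):
                counts[level] += 1
                break
    return counts


if __name__ == "__main__":
    counts = summarize_log("OLETS-123-selenium.log")  # hypothetical file name
    print(counts)
    # The Test Case should be passed only if no commands failed.
    print("PASS" if counts["error"] == 0 else "FAIL")
```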
...
Note that warnings may appear frequently in the Selenium log file, especially if you are testing a function that involves pop-up windows or items opening in new tabs. These warnings appear in red, but they are not bolded, and they are prefixed with "warn" rather than "error" in the log window. They are purely informational messages and do not signify an impending failure.
Return to Section
Return to Top
Recording Your Results
This section details several methods for providing feedback on the results of your test. The first and most important step is to advance the Test Case through its workflow by selecting "Test Passed" or "Test Failed" from the menu bar at the top of the screen in OLETS. This assigns the Test Case to the QA Team, notifying us that you have finished testing the issue.
The second most important feedback method is the use of comments in OLETS. While the QA Team can determine whether an issue has passed or failed by reviewing its workflow, putting this information in a comment makes your reasons explicit, especially in the case of failure. We always need to know why a Test Case has failed, and in the case of Bug/Defect testing, we may also need to know why it passed.
Advance the Test Case in the Workflow
...
Note that there is an "Attachment" option in this dialog box; if you want to add an attachment, you can do so here. The kinds of attachments most commonly used in OLE testing are discussed in detail later in this document, with links to more specific instructions on how to create and attach them.
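If you ever need to advance a Test Case from a script rather than the menu bar, Jira's REST API exposes the same workflow transitions. The sketch below is optional and not part of the official OLE instructions; the base URL, credentials, and issue key are placeholders, and it assumes the transition names match the "Test Passed" / "Test Failed" options described above.

```python
# Hedged sketch: applying a workflow transition through Jira's REST API (v2).
# JIRA_BASE, the credentials, and the issue key are placeholders, not real
# OLETS values; the web interface remains the normal way to do this.
import requests

JIRA_BASE = "https://jira.example.org"  # placeholder; substitute your OLETS URL


def advance_test_case(issue_key: str, outcome: str, auth: tuple) -> None:
    """Find the available transition whose name matches `outcome` and apply it."""
    url = f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/transitions"
    transitions = requests.get(url, auth=auth).json()["transitions"]
    match = next(t for t in transitions if t["name"] == outcome)
    resp = requests.post(url, json={"transition": {"id": match["id"]}}, auth=auth)
    resp.raise_for_status()


if __name__ == "__main__":
    advance_test_case("OLETS-123", "Test Passed", auth=("your.username", "your.password"))
```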
Document Your Findings with a Comment
...
Comments are where you should document any feedback you have on a particular Test Case. If, for example, you feel that a Test Case should pass because it accomplishes exactly what the Acceptance Criteria specify, but that it does not do so in a particularly satisfactory way, we want to hear your insights. Kuali software is meant to be designed by its users, and we want to craft the application so that it can be used efficiently and effectively.
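For testers who script parts of their workflow, a comment can also be posted through Jira's REST API. This is a hedged sketch only, not the recommended route: the base URL, credentials, and issue key are placeholders, and the comment option in the OLETS web interface works just as well.

```python
# Hedged sketch: posting a plain-text comment to a Test Case via Jira's
# REST API (v2). JIRA_BASE, the credentials, and the issue key are
# placeholders, not real OLETS values.
import requests

JIRA_BASE = "https://jira.example.org"  # placeholder; substitute your OLETS URL


def add_comment(issue_key: str, body: str, auth: tuple) -> None:
    """POST a comment to the given Jira issue."""
    resp = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/comment",
        json={"body": body},
        auth=auth,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    add_comment(
        "OLETS-123",  # hypothetical Test Case key
        "Test Failed: the script reported an error on the final verify step; log attached.",
        auth=("your.username", "your.password"),
    )
```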
Attach a Screenshot of OLE
...
Detailed instructions are available for taking screenshots and attaching files to Jira issues.
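If you happen to drive the browser with Selenium WebDriver rather than the Selenium IDE, the browser window can also be captured programmatically. This is only a sketch under that assumption; the OLE URL and file name below are placeholders, and the linked screenshot instructions remain the standard approach.

```python
# Optional sketch: capturing the OLE page with Selenium WebDriver.
# Assumes you are running WebDriver from Python; the URL and file name
# are placeholders, not real OLE values.
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("https://ole.example.org")        # placeholder OLE URL
    driver.save_screenshot("OLETS-123-ole.png")  # hypothetical file name
finally:
    driver.quit()
```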
Attach a Screenshot of Selenium
...
Detailed instructions are available for taking screenshots, capturing the right information in a Selenium screenshot, and attaching files to Jira issues.
Return to Section
Return to Top
Attach a Selenium Log File
If you notice multiple failed commands during Selenium testing, the most helpful feedback you can give is a copy of the Selenium log. You could take multiple screenshots of the failures, but attaching the log file is the more efficient choice.
Detailed instructions are available for copying the Selenium log and attaching files to Jira issues.
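If you prefer to attach the log from the command line, Jira's REST API also accepts file attachments. A minimal sketch follows; the base URL, credentials, issue key, and file name are placeholders, and attaching the file through the OLETS web interface works just as well.

```python
# Hedged sketch: uploading a saved Selenium log as an attachment via Jira's
# REST API (v2). JIRA_BASE, the credentials, the issue key, and the file
# name are placeholders, not real OLETS values.
import requests

JIRA_BASE = "https://jira.example.org"  # placeholder; substitute your OLETS URL


def attach_file(issue_key: str, path: str, auth: tuple) -> None:
    """Upload one file as an attachment on the given Jira issue."""
    with open(path, "rb") as fh:
        resp = requests.post(
            f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/attachments",
            headers={"X-Atlassian-Token": "no-check"},  # Jira's XSRF check (some versions expect "nocheck")
            files={"file": fh},
            auth=auth,
        )
    resp.raise_for_status()


if __name__ == "__main__":
    attach_file("OLETS-123", "OLETS-123-selenium.log", auth=("your.username", "your.password"))
```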
Return to Section
Return to Top