
...

Outlined below is a brief overview of the OLE testing process. It is a summary rather than a detailed exploration, meant to give a general picture of the OLE testing workflow and of how it fits into the larger OLE development cycle.


...

Feature Development

  • An Enhancement (PP) issue is handed off for testing, with the Functional Specification document attached and a complete Acceptance Criteria section in place.
  • Testers create Test Cases based on the acceptance criteria in the document.

...

  • Code is written and submitted one Enhancement (PP) at a time by the developers.
  • The code for each Enhancement (PP) is given an initial review by the development team.
  • Completed Enhancements (PP) are moved to "QA Review" status, and the related code is imported into the OLE Test Environment as needed.


Code Promotion

  • The testers review the Release Documentation on the Kuali wiki to determine if an Enhancement or Bug/Defect belonging to one of their Test Cases is ready for testing.
    • OLE Releases are divided into release versions.  A major release is a new numbered version of the OLE software package, and represents the implementation of a large bundle of new functions and features.  A minor release is a bundle of updates and bug fixes released as needed, and marked with a tag rather than a major version number.
      • The Demo Environment and official download link are updated only for major release versions (e.g., OLE 0.8, OLE 1.0, OLE 1.5, OLE 2.0, and so on).
      • The Test Environment is updated with minor release versions (e.g., r16094, r16528).  Updates to the Test Environment occur as needed, when there is a working version of the code stable enough that functional testing can begin.
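The release naming convention described above can be illustrated with a small sketch. The function and the classification rules here are hypothetical, written only to mirror the examples given (numbered versions like OLE 1.0 for major releases, revision tags like r16094 for minor ones); nothing in this sketch comes from the OLE codebase:

```python
# Hypothetical sketch: classify an OLE release identifier by the naming
# convention described above. Illustrative only; not part of OLE itself.
import re

def classify_release(tag: str) -> str:
    """Return 'major' for numbered versions (e.g. 'OLE 1.0'),
    'minor' for revision tags (e.g. 'r16094'), else 'unknown'."""
    if re.fullmatch(r"OLE \d+(\.\d+)*", tag):
        return "major"
    if re.fullmatch(r"r\d+", tag):
        return "minor"
    return "unknown"

print(classify_release("OLE 1.0"))  # major
print(classify_release("r16094"))   # minor
```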


Review of Existing Test Cases

  • The testers revisit the OLETS Test Cases relevant to the promoted Enhancements (PP) and Defect fixes.  Test Cases will need to be revised at this point, either to add the descriptive testing steps if a Test Case is new, or to modify them if any part of the workflow changes, as is sometimes necessary in the case of bug fixes.
  • A Test Case must have the following information to be ready for testing:
    • A description stating the purpose of the Test Case
    • Steps describing the method for executing the test
    • A Selenium test script, recorded while the test is being executed.
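The three readiness requirements above can be sketched as a simple record with a completeness check. All names here (the `TestCase` class and its fields) are hypothetical illustrations, not part of OLETS:

```python
# Hypothetical sketch of the Test Case readiness check described above.
# Field and class names are illustrative; they do not come from OLETS.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    description: str = ""                            # purpose of the Test Case
    steps: List[str] = field(default_factory=list)   # method for executing the test
    selenium_script: Optional[str] = None            # recorded Selenium test script

    def ready_for_testing(self) -> bool:
        """Ready only when all three pieces of information are present."""
        return bool(self.description and self.steps and self.selenium_script)

tc = TestCase("Create a new patron record",
              ["Log in", "Open the Patron screen", "Save a new record"],
              "patron_create_test.html")
print(tc.ready_for_testing())  # True
```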


Testing Outcome

  • Results are gathered from testing, and a determination is made as to whether the Test Case should pass or fail.
    • Pass
      • If the tester is able to successfully execute all testing steps necessary to fulfill the purpose of the Test Case, the test can be considered passed.
    • Fail
      • If the tester is unable to successfully execute all testing steps required by the Test Case, the test can be considered failed.
      • If the tester is able to execute all necessary steps, but the outcome does not satisfy the Acceptance Criteria statement on which the Test Case is based, then the test can be considered failed.
    • Feedback
      • A comment indicating that a test has passed is helpful when reviewing the Test Case's history later.
      • If a Test Case fails, a comment explaining the reasons for failure is necessary for creating an adequate Bug/Defect, or for giving an adequate explanation for the developers in the case that an existing Bug/Defect requires additional work.
  • If the test passes, it is automatically assigned to the QA Analyst (Jain).
  • If the test fails, it is automatically assigned to the QA Manager (Rich).
    • In the case of failure, a Bug/Defect associated with the failing Test Case will be returned to the developers.

