Who does Testing?
Before we jump into the main topic, “How to Identify When to Start and Stop Software Testing”, let’s first look at who does the testing. The answer depends on the project’s process as well as the various stakeholders involved in it. Large businesses in the information technology sector typically have a team tasked with evaluating newly developed software against the specified requirements. Developers also engage in testing, more specifically known as “Unit Testing.” In most cases, the following professionals are involved in testing a system in their respective capacities:
- Software Tester
- Software Developer
- Project Lead/Manager
- End User
People who test software can be given a variety of titles depending on the company they work for, including Software Tester, Software Quality Assurance Engineer, QA Analyst, and so on; these titles reflect the individual’s level of experience and expertise. Testing, however, cannot simply happen at any arbitrary point in the software’s life cycle. In this section, we will discuss how to identify when to start and when to stop software testing.
How and When Testing Starts
It is always recommended that the testing team be involved as early in requirement analysis as possible. This ensures that the information system in question has better quality, reliability, and performance. Because early, active involvement gives the testing team a clearer vision of the system’s functionality, we can expect a product of higher quality that contains fewer errors.
Following the completion of the requirements analysis, the leader of the development team prepares the system requirement specification as well as the requirement traceability matrix. The lead then arranges to meet with the members of the testing team (the test lead and the testers chosen for that project) and provides an overview of the project, including the overall schedule for modules, deliverables, and versions.
This is the point at which the testing team becomes involved. The test lead prepares the Test Strategy and Test Plan, which together constitute the schedule for the entire testing procedure, laying out when each phase of testing, such as unit testing, integration testing, system testing, and user acceptance testing, will be carried out.

In most cases, businesses use the V-model for their product development and testing procedures. Before moving on to the design phase of the software development life cycle, the development team analyzes the requirements and produces a system requirement specification, a requirement traceability matrix, a software project plan, a software configuration management plan, a software measurements and metrics plan, and a software quality assurance plan. During design, the teams prepare several important documents, including the Detailed Design Document, an updated requirement traceability matrix, a Unit Test Cases document (prepared by the developers if there are no separate white-box testers), an Integration Test Cases document, a System Test Plan document, and review and SQA audit reports for every test case.
Once the test plan is complete, the test lead assigns work to the individual testers (white-box testers and black-box testers). The testers’ work begins at this stage: using an automation tool or a standard template, they prepare test cases based on the software requirement specification or the functional requirement document, then send them to the test lead for review. After the test lead’s approval, the testers set up the testing environment, also known as the test bed, which in most cases mimics the configuration of the client-side system. The team is now ready to begin testing. While the testing team works on the test strategy, test plan, and test cases, the development team works on its individual modules. The developers provide the testing team with an interim build three or four days before the initial release; the testers install that software on the testing machine and begin the actual testing. Responsibility for build configuration management lies with the testing team.
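As a rough illustration of the "standard template" mentioned above, here is a minimal sketch of how a test case traceable to a requirement might be captured. The field names (id, requirement, steps, expected, status) are illustrative assumptions, not tied to any specific tool:

```python
# Minimal test-case template, modeled after a typical SRS-driven format.
# Field names are illustrative assumptions, not from any specific tool.

def make_test_case(tc_id, requirement, steps, expected):
    """Build one test-case record traceable back to its requirement."""
    return {
        "id": tc_id,
        "requirement": requirement,   # links back to the SRS / RTM
        "steps": steps,               # ordered actions the tester performs
        "expected": expected,         # behaviour defined by the spec
        "status": "Not Run",          # updated to Pass/Fail during a cycle
    }

# Example: a test case derived from a hypothetical login requirement.
tc = make_test_case(
    "TC-001",
    "REQ-4.2: valid credentials grant access",
    ["Open login page", "Enter valid user/password", "Submit"],
    "User lands on the dashboard",
)
print(tc["status"])  # "Not Run" until Cycle #1 executes it
```

Keeping a requirement reference on every test case is what makes the requirement traceability matrix possible later.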
The testing team then executes the prepared test cases and reports any bugs they find using an automation tool or a bug report template (depending on the organization). To keep track of the bugs, they update each bug’s status after every stage. Once testing for Cycle #1 is complete, they send the bug report to the test lead, who discusses the issues with the development lead. The developers then work on the bugs and attempt to fix them, releasing the next build as soon as the fixes are in. At this point, testing for Cycle #2 begins, and all of the test cases must be run again to determine whether the errors discovered in Cycle #1 have actually been fixed.
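The bug-status updates described above form a simple lifecycle. This is a sketch only; the status names below are a common convention, not a standard, and real trackers enforce richer workflows:

```python
# Sketch of a bug record whose status is updated after each stage.
# Status names are a common convention (assumed), not a fixed standard.

ALLOWED = ["New", "Open", "Fixed", "Retest", "Closed", "Reopened"]

class Bug:
    def __init__(self, bug_id, summary, severity):
        self.bug_id = bug_id
        self.summary = summary
        self.severity = severity
        self.status = "New"
        self.history = ["New"]

    def move_to(self, status):
        """Advance the bug to a new status, keeping an audit trail."""
        if status not in ALLOWED:
            raise ValueError(f"unknown status: {status}")
        self.status = status
        self.history.append(status)

# Cycle #1 finds a bug; development fixes it; Cycle #2 retests and closes it.
bug = Bug("BUG-17", "Crash on empty input", "High")
bug.move_to("Open")     # test lead triages it with the dev lead
bug.move_to("Fixed")    # developer fixes it in the next build
bug.move_to("Retest")   # tester re-runs the failing test case
bug.move_to("Closed")   # fix confirmed in Cycle #2
print(bug.history)
```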
Next comes regression testing, which essentially means checking whether the code change has any unintended consequences for the code that has already been tested. This same process continues until delivery. In most cases, the test case document will record information for four cycles. When the product is released, it is expected that there will be no bugs of high priority or high severity; a few insignificant defects may remain, but they will be fixed before the following release or iteration (these are generally called “deferred bugs”). Once the delivery testing phase is complete, the test lead and the individual testers prepare summary reports. There are also occasions when testers take part in code reviews, which are a form of static testing: they validate the code against a checklist of common logical errors and check that it has appropriate indentation and commenting. To deliver a high-quality product that is free of errors, the testing team is also responsible for monitoring and maintaining the change management system.
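The regression idea, re-running previously passing cases after a fix and flagging any new failures, can be sketched as follows. The `run_case` callable is an assumption standing in for real test execution:

```python
# Regression check sketch: after a code change, re-run every previously
# passed test case and flag any that now fail (an unintended side effect).
# run_case is a placeholder for actually executing a test case.

def regression_failures(test_cases, run_case):
    """Return IDs of cases that passed last cycle but fail now."""
    return [
        tc["id"]
        for tc in test_cases
        if tc["last_result"] == "Pass" and not run_case(tc)
    ]

cases = [
    {"id": "TC-001", "last_result": "Pass"},
    {"id": "TC-002", "last_result": "Pass"},
    {"id": "TC-003", "last_result": "Fail"},  # already a known failure
]

# Pretend the latest fix broke whatever TC-002 covered.
broken = {"TC-002"}
result = regression_failures(cases, lambda tc: tc["id"] not in broken)
print(result)  # ['TC-002'] — a regression introduced by the change
```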
In simple words: when should testing start?
Starting testing earlier in the development process makes it cheaper and faster to deliver error-free software to the client. That said, testing can begin at any point during the Software Development Life Cycle (SDLC), from the “Requirements Gathering” phase right up until the software is deployed. When it begins also depends on the development model in use. For instance, in the Waterfall model, formal testing is carried out during the testing phase, while in the incremental model, testing is carried out at the end of each increment or iteration, and the application as a whole is tested at the conclusion of the process. At each stage of the SDLC, various types of testing are carried out:
- Analyzing and verifying requirements, even while they are still being gathered during the requirements phase, counts as testing.
- Reviewing the design during the design phase, with the goal of improving it, also counts as testing.
- The testing carried out by a programmer after completing the code is testing as well.
What should be present in Exit Criteria?
Ideally, the Exit or Stop Criteria are defined by combining a number of different factors, so each project has its own set of criteria. Because the criteria depend on the requirements of the project, they need to be defined during the test planning phase at the start of the project, and the parameters they define should be quantified to the greatest extent possible. If you are performing functional or system testing, the following are a few factors to consider when defining exit criteria; when deciding where to stop testing, you can use any combination of them based on the requirements of your project.
When to Stop Testing? Exit Criteria
One of the most common concerns is what it takes to successfully complete the testing phase. Let’s take a look at the most crucial considerations that go into deciding when testing is complete. When asked this question, novice testers frequently respond with something along the lines of “I will test until I find all of the bugs.” Is that possible? No: even if a number of knowledgeable testers have examined an application, nobody can state with certainty that it is bug-free.
These are the five most typical exit criteria.

1) Testing deadlines are reached

When the committed or planned testing deadlines are about to pass, testing should be halted. Because a product or new feature is supposed to be delivered by a certain date, the project manager and the team lead must decide, based on the priority and severity of the bugs, which ones need to be fixed and which can be postponed until the next release in order to meet the delivery deadline. In other words, testing finishes after a certain amount of time.
2) Complete Testing Budget is exhausted
It is no secret that when there is no money left in the testing budget, all work comes to a halt. On a freelance exchange, for example, the client sometimes pays only for the time the outsourced tester puts in; when the planned testing does not fit the budget, the client reviews the written test cases and discards some of them.
3) Test Coverage reaches a specified point
In most cases, testers will try to provide the broadest possible test coverage. Ideally, we would achieve a test coverage of 95%, but the time and budget allotted for testing are frequently inadequate, and the scope of the tests covers a tremendous area. In such a scenario, a predetermined percentage of test coverage is used instead, for example 92% (the number of test cases covered divided by the total number of test cases).
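The coverage ratio described here is straightforward arithmetic, which makes it easy to turn into an automated exit check. A minimal sketch, with the 92% target taken from the example above:

```python
# Coverage as described in the text: test cases covered divided by the
# total number of test cases, expressed as a percentage.

def coverage_percent(covered, total):
    if total == 0:
        raise ValueError("no test cases defined")
    return 100.0 * covered / total

def coverage_met(covered, total, target=92.0):
    """Exit-criterion check against a predetermined coverage target."""
    return coverage_percent(covered, total) >= target

print(coverage_percent(230, 250))         # 92.0
print(coverage_met(230, 250, target=92))  # True: criterion satisfied
print(coverage_met(230, 250, target=95))  # False: keep testing
```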
4) Minimum accepted bug rate
Testing can stop when the number of defects has decreased below a predetermined threshold and no major defects remain: the defect density is within acceptable limits, the code coverage achieved in accordance with the test plan is adequate, and both the number and severity of open bugs are low. The overarching objective is to lessen the likelihood of catastrophic errors occurring once the product is made available to consumers.
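Defect density is commonly measured as defects per thousand lines of code (KLOC); the threshold values in the sketch below are illustrative assumptions, not industry constants:

```python
# Defect density per KLOC plus a combined bug-rate exit check.
# max_density and max_critical thresholds are illustrative assumptions.

def defect_density(defect_count, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defect_count / (lines_of_code / 1000.0)

def bug_rate_acceptable(open_bugs, lines_of_code,
                        max_density=2.0, max_critical=0):
    """True when density is under the limit and no critical bugs remain."""
    critical = sum(1 for b in open_bugs if b["severity"] == "Critical")
    density = defect_density(len(open_bugs), lines_of_code)
    return density <= max_density and critical <= max_critical

bugs = [{"severity": "Minor"}, {"severity": "Minor"}, {"severity": "Major"}]
print(defect_density(len(bugs), 50_000))  # 0.06 defects per KLOC
print(bug_rate_acceptable(bugs, 50_000))  # True: under threshold, no criticals
print(bug_rate_acceptable(bugs + [{"severity": "Critical"}], 50_000))  # False
```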
5) All test cases passed, found bugs fixed and re-checked, requirements are fulfilled
To test the application, the tester first needs to be familiar with its requirements and functional specifications, if there are any; alternatively, the tester can learn from the customer what the expected behaviour is for various use cases, applications, or features. The next step is to complete the test documentation: writing test cases, writing a test plan if one is required, and covering all of the application’s functionality and requirements. The team should also discuss and decide whether non-functional testing is necessary, such as performance and load testing, usability testing, and so on. After that, begin executing the test cases; once all of the test cases have been run, and the bugs found have been fixed and rechecked, you can declare the testing complete.
Stopping when all defects are found: Is it possible?
Most software is complex and has an extensive testing scope. Finding every flaw in the software is not strictly impossible, but doing so would take an impractically long time.
Even after a large number of errors have been discovered, no one can guarantee that the software is now free of defects. We can never say with absolute certainty that we have finished testing, discovered every flaw, and that no more bugs remain. Nor is the objective of testing to discover each and every flaw that may exist. The purpose of software testing is to expose failures, by breaking the software or by locating inconsistencies between the way it currently behaves and the way it is supposed to behave.
Because the number of potential bugs in software is effectively unbounded, it is impractical to test until each and every bug is discovered; there is no way to tell which bug will be the last one. Truthfully, we cannot count on finding all of the bugs before declaring our testing complete. Testing never really ends, and testing cycles will go on as long as no decision is made about when and where to stop them. That makes the decision about whether to continue testing even more difficult. If “stopping when all defects are found” is not the criterion for when testing is complete, then on what basis should the decision be made?
How to Track Testing Progress to Meet Exit Criteria?
Developing and agreeing on exit criteria is a significant amount of work, but the next step is to perform consistent checks to ensure that all of the exit criteria have been satisfied. As testing teams, we need to be aware that if the product does not ship on time (after meeting its exit criteria), the company may experience severe negative repercussions, such as decreased sales and significant advantages gained by the competition. It is therefore of the utmost importance to consistently monitor testing progress against the exit criteria. The following is a list of some suggested methods for doing so:
1. Establish a Dashboard that is able to effectively summarise all of the variables in relation to the exit criteria.
2. Establish on-going sync-ups with the various stakeholders to discuss risks and contingency plans, as well as to unblock people who have been blocked.
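The dashboard suggestion above can be as simple as one status line per criterion. A minimal sketch, where the criteria names, targets, and output format are all illustrative assumptions:

```python
# Sketch of an exit-criteria dashboard: one line per criterion showing
# current value vs. target and whether it is met. All names and targets
# below are illustrative assumptions.

def exit_dashboard(metrics):
    """Return one status line per criterion: name, value vs. target, met?"""
    lines = []
    for name, value, target, higher_is_better in metrics:
        met = value >= target if higher_is_better else value <= target
        mark = "MET" if met else "PENDING"
        lines.append(f"{name:<22} {value:>6} / {target:<6} [{mark}]")
    return lines

metrics = [
    ("Test coverage (%)",     92,  92, True),
    ("Open critical bugs",     0,   0, False),
    ("Test cases passed (%)", 97, 100, True),
]
for line in exit_dashboard(metrics):
    print(line)
```

A plain-text summary like this is enough to drive the stakeholder sync-ups mentioned in the second point.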
Although there is no simple answer to the question of when testing should be stopped, as was mentioned earlier, we hope that this blog post provides you with enough food for thought to determine your exit criteria and the points at which testing should be stopped.
In terms of entry and exit criteria considerations, the following are some of the current industry-leading best practices:
- It is important, before beginning any process, to define the entry and exit criteria for each type of test in a clear and concise manner.
- Represent the conditions or measures being examined quantitatively rather than qualitatively.
- In the event that entry or exit criteria are not met, the appropriate corrective action must be assigned, or the entire process must be restarted along with any necessary changes.
- During the time that the process is being created and reviewed, there must be constant vigilance coupled with follow-up from the moderator.
To summarise, it is absolutely necessary to define entry and exit criteria in testing. The criteria covered earlier will help testing teams plan and drive testing tasks within the allotted time frames, without sacrificing the quality, functionality, effectiveness, or efficiency of the software being developed.