Lecture 6 - Classifying the Most Common Types of Testing: Lecture Recap
1. We’ve learned about the most common types of software testing. We used classifications to organize the knowledge.
2. Black box, grey box, and white box testing are three approaches distinguished by how much the tester knows about the internals of the system.
3. A black box tester
– usually has no idea about the internals of the back end
– implements ideas for testing that come from expected patterns of user behavior.
4. Black box testing has the advantage of an unaffiliated opinion on the one hand and the disadvantage of blind exploring on the other.
5. During black box testing, the tester’s actions are NOT limited to actions that can be performed by the users. For example, testers can use helpers for semi-automated testing.
6. Expected patterns of user behavior can be figured out with the help of:
– spec
– exploring
– black box methodology
– intuition
– communication
– other sources
7. Actual patterns of user behavior can be found out with the help of:
– reporting data
– data from customer support
– info from forums, blogs, etc.
8. White box testing (also known as “glass box testing” or “open box testing”) encompasses a number of testing techniques that require a comprehensive understanding of the software code. White box testing is usually done by a programmer against his own code.
9. If a test case checks a legitimate scenario, that doesn’t mean it’s well-geared to find a bug. The real power in finding bugs is invoked when we use the professional body of knowledge known as the black box testing methodology.
10. Black box testing and white box testing are a great combination that helps to find bugs by improving:
– Test comprehensiveness – i.e., checking the software from different angles
– Test coverage
11. Depending on context, “test coverage” means:
– either the coverage of possible scenarios
– or test case execution coverage
12. In the case of grey box testing:
– On the one hand, the tester uses black box methodology to come up with test scenarios.
– On the other hand, the tester possesses some knowledge about the back end, AND he actively uses that knowledge.
13. During black box testing, the tester’s actions are NOT limited to actions that can be performed by the users.
14. One of the most critical things to keep in mind while doing black, white, or grey box testing is to come up with expected results that serve as true indicators of whether the software works or not.
15. We also learned that:
Functional testing is needed to find logical bugs in Web site functionalities.
UI testing is needed to find bugs in the presentation of the Web site user interface.
Usability testing is the evaluation of a user’s experience when he uses our software.
Localization testing is required to find bugs in the adaptation of our software for users from different countries.
Security testing refers to testing the protection against security breaches.
16. A problem called “incompatibility” may take place when certain hardware and/or software on the client side interacts with our software, i.e., a Web site.
17. Load/performance testing is a set of testing techniques designed to load the system or its component(s) and then measure how the system or its component(s) react.
18. The usual purpose of load/performance testing is to find a bottleneck; i.e., a part of the system or its component(s) that slows down response time.
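As an illustration, here is a minimal sketch of a crude load/performance check in Python; the URL is a hypothetical stand-in for the system under test, and real load testing would use a dedicated tool:

import time
import urllib.request

URL = "http://example.com/"  # hypothetical endpoint standing in for the system under test
timings = []

for _ in range(10):
    start = time.perf_counter()
    urllib.request.urlopen(URL).read()           # load the system with a request
    timings.append(time.perf_counter() - start)  # measure how it reacts

print(f"min={min(timings):.3f}s  max={max(timings):.3f}s  avg={sum(timings)/len(timings):.3f}s")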
19. Functionally correct code – e.g., certain SQL statements – can cause performance problems.
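For example, a sketch assuming SQLite and a hypothetical indexed “orders” table: both queries below return the same rows, but the first wraps the indexed column in a function and forces a full table scan, while the second lets the index do the work.

import sqlite3

conn = sqlite3.connect("shop.db")  # hypothetical test database

# Functionally correct, but the expression on the indexed column defeats the index.
slow_sql = "SELECT id FROM orders WHERE strftime('%Y', created_at) = '2024'"

# Same result set; a plain range condition on the column can use the index.
fast_sql = ("SELECT id FROM orders "
            "WHERE created_at >= '2024-01-01' AND created_at < '2025-01-01'")

for sql in (slow_sql, fast_sql):
    # EXPLAIN QUERY PLAN reveals whether SQLite scans the whole table or uses the index.
    for row in conn.execute("EXPLAIN QUERY PLAN " + sql):
        print(sql, "->", row)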
20. Alpha testing is done before any type of release. When you hear “alpha testing,” it refers to the time when the testing is done, not how the testing is done.
21. Beta testing is done after a beta release.
22. Positive testing checks situations where:
– The software is used in a normal, error-free way and/or
– The system is assumed to be sound
23. Negative testing checks situations involving:
– User error and/or
– System failure
24. Positive test cases must be executed before negative test cases.
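A minimal pytest sketch of items 22-24, using a hypothetical divide() function:

import pytest

def divide(a, b):
    # Hypothetical function under test.
    if b == 0:
        raise ValueError("division by zero is not allowed")
    return a / b

def test_divide_positive():
    # Positive case: normal, error-free usage; the system is assumed to be sound.
    assert divide(10, 2) == 5

def test_divide_negative():
    # Negative case: user error; we expect graceful error handling, not a crash.
    with pytest.raises(ValueError):
        divide(10, 0)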
25. Error handling is:
a. How the system responds to errors made by users
OR
b. How the system reacts to errors that happen when the software is running.
26. An error message is a message that provides information about error(s).
27. An error message is an important measure that:
– guides users in case of mistakes.
– gives debugging info to developers.
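A minimal sketch (hypothetical age validation) of error handling whose messages both guide the user and leave debugging info for developers:

import logging

logger = logging.getLogger(__name__)

def parse_age(raw_value):
    try:
        age = int(raw_value)
    except ValueError:
        logger.debug("parse_age got invalid input: %r", raw_value)  # debugging info for developers
        raise ValueError("Please enter your age as a whole number, e.g., 34.")  # guidance for the user
    if not 0 <= age <= 130:
        raise ValueError("Age must be between 0 and 130.")
    return age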
28. Component testing is the functional testing of a logical component.
29. Integration testing is the functional testing of the interaction between two or more integrated components.
30. System (end-to-end) testing is the functional testing of a logically complete path consisting of two or more integrated components.
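A minimal sketch (hypothetical discount and cart components) showing the first two levels; a system/end-to-end test would additionally walk a complete path, e.g., from adding items through checkout:

def apply_discount(price, percent):              # component 1 (hypothetical)
    return round(price * (1 - percent / 100), 2)

def cart_total(prices, percent):                 # component 2, built on top of component 1
    return sum(apply_discount(p, percent) for p in prices)

def test_apply_discount_component():
    # Component testing: one logical component in isolation.
    assert apply_discount(100.0, 10) == 90.0

def test_cart_total_integration():
    # Integration testing: the interaction between cart_total and apply_discount.
    assert cart_total([100.0, 50.0], 10) == 135.0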
31. You have to really know what you are doing when you hack into the DB or change some global setting, e.g., server time.
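As an illustration, here is a sketch of the kind of DB “hack” meant here (hypothetical SQLite database and “subscriptions” table): ageing a record to simulate an expired subscription instead of waiting for real time to pass. Do it only on a test database, and restore the data afterwards.

import sqlite3

conn = sqlite3.connect("app.db")  # hypothetical test database - never production
conn.execute(
    "UPDATE subscriptions SET expires_at = '2020-01-01' WHERE user_id = ?",
    (42,),
)
conn.commit()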
32. The comma-delimited format is a popular format for data files. Blocks of data are separated by commas.
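For example, a minimal sketch of reading a comma-delimited file with Python’s csv module (the file name and columns are hypothetical):

import csv

with open("users.csv", newline="") as f:
    for row in csv.reader(f):
        # A line such as "jsmith,John,Smith" becomes ["jsmith", "John", "Smith"].
        print(row)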
33. Manual testing is done without the help of any test automation programs.
34. Semi-automated testing is manual testing done with partial use of test automation, usually in the form of helpers.
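For instance, a sketch of a simple helper for semi-automated testing (the naming scheme is made up): the script generates unique test e-mail addresses, while the tester still performs the sign-up steps by hand.

import time

def make_test_email(prefix="qa"):
    # The timestamp keeps every generated address unique across test runs.
    return f"{prefix}+{int(time.time())}@example.com"

if __name__ == "__main__":
    print(make_test_email())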
35. Automated testing is completely done by running test automation tools.
36. Formal/documented testing is a planned activity that requires the usage of test documentation – e.g., test cases.
37. Ad hoc testing is done without any preparation. Ad hoc testing relies on knowledge of the software and/or common sense and/or intuition.