CSE 565 Midterm Exam
Terms in this set (87)
Are we building the product right?
Are we building the right product?
The probability that a software program operates for some given time period without a software error
Red: Write a minimal test on the behavior needed, Green: Write only enough code to make the failing test pass, Refactor: Improve code while keeping tests green
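The cycle above can be sketched in Python; `is_even` is a hypothetical function under development, and the test uses plain asserts rather than a real test runner.

```python
# Red: write a minimal failing test for the behavior we need.
def test_is_even():
    assert is_even(4) is True
    assert is_even(7) is False

# Green: write only enough code to make the failing test pass.
def is_even(n):
    return n % 2 == 0

# Refactor: improve the code, rerunning the test to stay green.
test_is_even()
```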
Mistake made by a human
Defect / Fault
Result of an error from the code
Software doesn't do what it is supposed to do
Unit / Component Testing
Does the code do what it is supposed to do?
Hardware/software integration (functional & non-functional testing on security, usability, and safety)
Testing the way the user will be using the system. Both verification and validation. Applicable at every testing level.
Is it usable and safe? Applicable at higher levels of testing.
Look at data flow and code coverage. Applicable at unit testing
Absence of errors fallacy
Even if you prove the software is right, it doesn't mean the customer's needs are met
When you can stop testing
When beta testing can start
ISTQB Code of Ethics (Public)
Act consistent with public interest
ISTQB Code of Ethics (Client & Employer)
Act consistent with employer's interest (but keep in mind public interest)
ISTQB Code of Ethics (Product)
Ensure deliverables meet highest professional standard
ISTQB Code of Ethics (Judgement)
Don't say it's ready when it's not
ISTQB Code of Ethics (Management)
Managers should promote an ethical approach
ISTQB Code of Ethics (Profession)
Advance the integrity and reputation of the profession
ISTQB Code of Ethics (Colleagues)
Be cooperative with the devs
ISTQB Code of Ethics (Self)
Participate in lifelong learning regarding the practice of the profession
T/F: The goal of QA is to find bugs
False. It's to find important bugs
T/F: You should always stick to the test plan
False. Be flexible.
T/F: You shouldn't automate everything.
True. Not everything is worth automating.
Why do defects cluster?
Due to complexity of the code and/or programmer skill
When should you stop testing?
When test objectives have been met
T/F: If all test cases pass, it means the program is defect free.
False. Testing can show the presence of defects, but not their absence.
What does each of these waterfall steps correspond to with the testing process?
1. Requirements phase
2. Design phase
3. Code phase
4. Test phase
5. Maintenance phase
1. Requirements phase === Test objectives phase
2. Design phase === Test design phase (select your sampling strategy)
3. Code phase === Write test cases
4. Test phase === Execute tests
5. Maintenance phase === Maintain tests
T/F: 90% of all problems in safety critical systems are due to requirements not being specific enough.
True.
T/F: The goal should be to exhaustively test the system.
False. It's impossible most of the time.
Black box testing where you evaluate inputs and outputs but do not look inside the code
Traditional, requirements driven testing. Mainly performs verification
Scenario-based testing where all functional requirements are captured in use cases. Performs validation & verification
Use cases construction steps
1. Identify the actors (can be humans or subsystems) to determine the scope of the system.
2. For each actor, identify how the actor will use the system to accomplish its functions
3. Detail each use-case identifying each of the flows as a scenario.
T/F: Use cases can find cracks in the requirements.
True.
T/F: Use cases should be motivating, encapsulate every scenario, complex, and be easy to evaluate
False. They should be motivating, credible (likely to happen), complex, and easy to evaluate.
Equivalence partitioning steps
1. For each input, identify set of equivalence partitions & label them
2. Write test cases to cover as many uncovered valid equivalence partitions as possible
3. For each invalid equivalence partition, write ONE test case for each uncovered partition
Technique for dividing the input domain of a program into a finite number of equivalence partitions (both valid and invalid partitions are considered)
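A minimal sketch of the steps above, assuming a hypothetical age field whose valid range is 18 to 65 (`accepts_age` stands in for the system under test):

```python
# Step 1: identify and label the equivalence partitions for the input.
partitions = {
    "invalid_low":  range(-5, 18),    # below the valid range
    "valid":        range(18, 66),    # the single valid partition
    "invalid_high": range(66, 120),   # above the valid range
}

def accepts_age(age):
    # Hypothetical system under test: accepts only ages 18..65.
    return 18 <= age <= 65

# Steps 2-3: one representative test case per partition.
cases = {name: list(vals)[len(vals) // 2] for name, vals in partitions.items()}
for name, age in cases.items():
    expected = (name == "valid")
    assert accepts_age(age) == expected, (name, age)
```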
Weak Robust Equivalence Testing
Tests each error separately
Strong Robust Equivalence Testing
Tests combination of error values as well as each error
Boundary Value Testing
Test boundary conditions on, above, and below the edges of both input and output equivalence partitions
Testing boundary values separately
Testing a combination of boundary values
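The edge values can be generated mechanically; a sketch assuming a partition of 18 to 65:

```python
def boundary_values(lo, hi):
    # On, just below, and just above each edge of the partition.
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```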
Cause Effect Analysis
Testing a combination of inputs.
Can use decision trees and tables, where each path / column becomes a test case. Can result in combinatorial explosion
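A sketch of a full decision table for a hypothetical discount rule with two causes; each of the 2^2 columns becomes one test case, which is where the combinatorial explosion comes from as causes are added.

```python
import itertools

def discount(member, over_100):
    # Hypothetical effect: 10% off for members, plus 5% for orders over $100.
    return (10 if member else 0) + (5 if over_100 else 0)

# Every cause combination is one column of the decision table.
table = list(itertools.product([False, True], repeat=2))
for member, over in table:
    print(member, over, "->", discount(member, over))
```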
T/F: Decision trees and tables will come up with the same number of test cases.
False. Trees can have less than tables.
T/F: Timelines perform verification
False. Timelines model async events and identify significant use cases and place them on the timeline. This performs validation.
State Based Testing
State diagram corresponding to states the system can be in and corresponding events to go to new states
State diagrams should be inspected for 4 things. What are they?
Completeness: Every state & event pair must be identified and conditional transitions should be correct
Contradiction: Two transitions from the same state should not contain the same event. Danger occurs with nested state charts
Unreachable States: States that cannot be entered
Dead States: States that cannot be exited.
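The unreachable-state and dead-state inspections can be automated over a transition table; a sketch with a hypothetical state chart (the flagged `done` state may be a legitimate final state, which is why these checks only raise items for review):

```python
# transitions maps (state, event) -> next state.
transitions = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "resume"): "running",
    ("running", "stop"): "done",
}

states = {"idle", "running", "paused", "done", "ghost"}
start = "idle"

# Unreachable: never the target of any transition and not the start state.
targets = set(transitions.values())
unreachable = states - targets - {start}

# Dead: no outgoing transition (may be an intended final state).
sources = {s for s, _ in transitions}
dead = states - sources

print(sorted(unreachable))  # ['ghost']
print(sorted(dead))         # ['done', 'ghost']
```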
Design of Experiments (DOE)
A systematic approach for evaluating a system or process that achieves good coverage. The goal is to minimize the number of test cases needed.
Full Factorial Design
test for every factor value combination
Fractional Factorial Design
Only a fraction of combinations are addressed.
Steps for designing DOE pairwise testing
1. Identify parameters that define each input
2. For each parameter, find the partition
3. Specify constraints prohibiting combinations of configuration partitions
4. Specify configurations to test to cover all pairwise combos
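Step 4 can be checked mechanically. A sketch with three hypothetical two-valued parameters: four configurations cover every pairwise combination that the 2x2x2 = 8 full-factorial cases would.

```python
import itertools

params = {"os": ["linux", "win"], "db": ["pg", "mysql"], "ui": ["web", "cli"]}

# A hand-picked suite of 4 configurations.
suite = [
    ("linux", "pg", "web"),
    ("linux", "mysql", "cli"),
    ("win", "pg", "cli"),
    ("win", "mysql", "web"),
]

def covered_pairs(cases):
    # Every (parameter-position, value) pair covered by the cases.
    pairs = set()
    for case in cases:
        for (i, a), (j, b) in itertools.combinations(enumerate(case), 2):
            pairs.add((i, a, j, b))
    return pairs

required = covered_pairs(itertools.product(*params.values()))
print(covered_pairs(suite) == required)  # True: 4 cases cover all pairs
```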
Developing multiple versions of the code with mutations (seeded faults). Typically syntactical modifications of source code.
Competent Programmer Hypothesis
Programmers generally create code that is close to being correct, reflecting only minor errors.
Two assumptions made by mutation testing
Competent Programmer Hypothesis and Coupling Effect
Detecting small errors also detects complex errors
Steps to mutation testing
1. Generate mutants
2. Execute the original system and mutants
3. Analyze results: mutation score from 0 to 100, where 100 (all mutants killed) indicates a high-quality test suite
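A toy sketch of the three steps, assuming a hypothetical max-of-two function; each mutant is one small syntactic change, and the score is the fraction of mutants killed, scaled to 0 to 100. Note the second mutant is an equivalent mutant for this function, so no test can kill it.

```python
def original(a, b):
    return a if a > b else b   # system under test: max of two values

mutants = [
    lambda a, b: a if a < b else b,   # relational operator replaced
    lambda a, b: a if a >= b else b,  # equivalent mutant: behavior unchanged
    lambda a, b: b,                   # expression replaced
]

tests = [((3, 1), 3), ((1, 3), 3), ((2, 2), 2)]

# A mutant is killed if any test distinguishes it from the expected output.
killed = sum(
    any(m(*args) != expected for args, expected in tests) for m in mutants
)
score = 100 * killed / len(mutants)
print(round(score, 1))  # 66.7
```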
Test Oracle Problem
Hard to determine expected results
Identify vulnerabilities in the software by feeding it random or malformed inputs
Two types of fuzz testing
1. Mutation Based: Generates tests by randomly modifying valid test data. No knowledge of the input format needed.
2. Generation Based: Generates tests from input specifications. Anomalies are added to each input. Better results than mutation based.
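A minimal mutation-based fuzzer sketch: it randomly corrupts bytes of a valid input and checks that a hypothetical parser only ever fails with its expected rejection error, never anything worse.

```python
import random

def parse_int(data):
    # Hypothetical system under test.
    return int(data)

random.seed(0)          # reproducible run
valid = b"12345"
findings = 0
for _ in range(100):
    fuzzed = bytearray(valid)
    fuzzed[random.randrange(len(fuzzed))] = random.randrange(256)
    try:
        parse_int(bytes(fuzzed))
    except ValueError:
        pass            # expected rejection of malformed input
    except Exception:
        findings += 1   # anything else is a finding worth reporting
print(findings)
```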
Confidence that software is free from vulnerabilities
Given an input and its output, predict the outputs of related inputs. Assumes a metamorphic relation exists.
T/F: Metamorphic testing is good for machine learning and big data
T/F: Keeping metamorphic testing at a general system level increases effectiveness.
False. Going down to the feature level increases effectiveness by 170%.
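A sketch of a metamorphic relation for a hypothetical search function: adding one more matching record must never shrink the result count, so related inputs can be checked against each other without knowing the exact expected output (sidestepping the test oracle problem).

```python
def search(records, term):
    # Hypothetical system under test.
    return [r for r in records if term in r]

records = ["apple pie", "banana", "apple tart"]
base = len(search(records, "apple"))
followup = len(search(records + ["apple cake"], "apple"))

# The metamorphic relation: no exact oracle needed, only the relation.
assert followup >= base
print(base, followup)  # 2 3
```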
Defect Based Testing
Targeting particular defects utilizing a defect taxonomy - creating test cases for each type of defect.
T/F: Exploratory testing is ad-hoc
False. A systematic strategy is needed.
White Box / Glass Box Testing
Looks at the internal structure of the code to write test cases.
Primarily at the unit and service level of testing
Levels of control flow testing
1. Statement Coverage: Every statement is executed once. Use a control flow graph (CFG)
2. Decision Coverage: Each branch in the CFG is traversed at least once
3. Decision / Condition Coverage: Each condition in a decision takes on all possible outcomes at least once, and each decision takes each outcome at least once.
4. Multiple Condition Coverage: All combinations in a decision are covered at least once.
T/F: Statement coverage ensures decision coverage.
False, but decision coverage ensures statement coverage.
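A sketch of why the statement is false, using a hypothetical function with an `if` and no `else`: one test reaches every statement yet exercises only one branch of the decision.

```python
def clamp_negative(x):
    if x < 0:
        x = 0
    return x

# x = -5 alone gives 100% statement coverage (every line runs)...
assert clamp_negative(-5) == 0
# ...but decision coverage also needs the False branch of the if.
assert clamp_negative(5) == 5
```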
What is the formula for cyclomatic complexity?
# of test predicates + 1 (equivalently, E - N + 2 for a control flow graph with E edges and N nodes)
Steps to identifying basis paths
1. Select an arbitrary path through the graph
2. Flip first decision but keep others constant
3. Reset first decision and flip second
4. Continue until all have been flipped
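A sketch tying the formula to the steps, for a hypothetical function with two test predicates: complexity 2 + 1 = 3 means three basis paths, found by flipping one decision at a time.

```python
def classify(x):
    if x < 0:          # predicate 1
        return "neg"
    if x == 0:         # predicate 2
        return "zero"
    return "pos"

predicates = 2
complexity = predicates + 1   # three basis paths to cover
print(complexity)  # 3

# One test per basis path, flipping each decision in turn.
assert classify(-1) == "neg"   # predicate 1 True
assert classify(0) == "zero"   # predicate 1 False, predicate 2 True
assert classify(5) == "pos"    # both predicates False
```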
Steps to data flow testing
1. Annotate control flow graph with 3 sets for each node:
Def(i): set of variables defined in node i
C-use(i): set of variables used in computation in node i
P-use(i): set of variables used in a test predicate
2. Test each variable in the 3 different ways it can be used
Definition Clear Path
Path from node i to node j for a variable x where x is defined in node i and used in node j, but not changed in between
For each definition of a variable, develop test cases to execute all DU paths
Definition-Use Path (DU path)
Start with definition of variable and end with c-use or p-use along definition clear path
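A sketch of the annotations for a tiny hypothetical function, with the definition-clear DU paths for `s` listed by node number:

```python
# def total(n):
#     s = 0            # node 1: Def = {s}
#     if n > 0:        # node 2: P-use = {n}
#         s = s + n    # node 3: Def = {s}, C-use = {s, n}
#     return s         # node 4: C-use = {s}

annotations = {
    1: {"def": {"s"}, "c_use": set(), "p_use": set()},
    2: {"def": set(), "c_use": set(), "p_use": {"n"}},
    3: {"def": {"s"}, "c_use": {"s", "n"}, "p_use": set()},
    4: {"def": set(), "c_use": {"s"}, "p_use": set()},
}

# DU paths for s: 1 -> 3 (via 2) and 1 -> 4 (the n <= 0 path skipping 3)
# are definition-clear, as is 3 -> 4 since s is not redefined in between.
du_paths_s = [(1, 3), (1, 4), (3, 4)]
print(du_paths_s)
```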
Doesn't require execution of program to create & evaluate. Uses data flow analysis and looks for anomalies (like variables defined & redefined before being used)
Executing a program with symbolic values instead of concrete inputs. Determines if a path is infeasible
T/F: DevOps is a software methodology.
False. It is a software development culture that stresses collaboration between everyone on the software team (operations, QA, developers, etc)
T/F: DevOps increases software failure but decrease the time to recover from failures.
False. It decreases both the number of failures and the time needed to recover from failures.
Customer reported unique defects per 1,000 lines of code
Mean Time Between Failures
You only have to loop through code twice to see how the data will flow.