CSE 565: Final Exam Review
Performance testing objective
Verify requirements are met for specific load conditions including stress and volume scenarios
What are the 4 entry criteria for performance testing?
1. Measurable performance requirements
2. Reasonably stable system
3. Test environment similar to the customer's site
4. Tools (such as load generator and resource monitor)
What all should be covered by load testing?
Different volumes of activity and different mixes of activity
T/F: For all use cases, the load should be varied and response times should be tracked
False. This should only be done for relevant use cases.
What can resource usage help identify?
Bottlenecks or sources of performance problems
Stress testing objective
Verify the requirements are met when resources are saturated and pushed beyond their limits
Stress testing steps
1. Identify stress points
2. Develop a strategy to stress the parts of the system identified in step 1
3. Verify the intended stress is actually generated
4. Observe behavior
When observing behavior during stress testing, what should you be looking for?
Stress related requirements are met and functional correctness is achieved
What are some reasons stress may not be actually generated during stress testing?
1. The testing strategy may be ineffective
2. The system performance may be better than expected
Volume testing objective
Verify requirements are met when there is a large amount of activity over an extended period of time
What errors can be detected through volume testing?
1. Memory leaks
2. Counter overflows
3. Resource depletion
Configuration testing objective
Verify functional and performance requirements are met on different configurations the application runs on
Configuration testing steps
1. Identify parameters that define each configuration that may have an impact on the system's ability to function (e.g. CPU, memory, OS, database, etc.)
2. Group similar parameters together to reduce number of configurations
3. Identify configuration combinations to test (boundaries, risk based, DoE)
Why might modifications to code introduce errors?
1. Code ripple effects
2. Unintended feature interactions
3. Changes in performance, synchronization, resource sharing, etc.
T/F: Regression testing is only performed at the unit level
False. Can also be at the integration and system test level
T/F: Usually just a subset of regression tests will be run based on the modification.
True. It is often impractical to run all tests.
What are the two strategies to choosing the subset of regression tests to run?
1. Testing of code deltas
2. Ripple effect analysis
What does testing of code deltas require?
A configuration management tool to identify code change deltas
What does ripple effect analysis require?
Developers must identify how the changes could impact other requirements or features
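The "testing of code deltas" strategy above can be sketched as a simple lookup: given a (hypothetical) mapping from source files to the tests that exercise them, select the regression subset for a change set. In practice the mapping would come from coverage data and the change set from a configuration management tool; the file and test names here are made up.

```python
# Hypothetical mapping from source files to the tests that exercise them.
TESTS_BY_FILE = {
    "billing.c": {"test_invoice", "test_refund"},
    "auth.c":    {"test_login", "test_logout"},
    "report.c":  {"test_summary"},
}

def select_regression_tests(changed_files):
    """Union of all tests covering any changed file."""
    selected = set()
    for f in changed_files:
        selected |= TESTS_BY_FILE.get(f, set())
    return selected

print(sorted(select_regression_tests(["auth.c", "report.c"])))
# ['test_login', 'test_logout', 'test_summary']
```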
What do the tests in the regression testing confidence test suite address?
1. High frequency use-cases
2. Critical functionality
3. Functional breadth
What is the usual testing approach for error detection and recovery?
Injecting the error into the system
What is subjective satisfaction?
The user's overall feeling about the product
What is efficiency in terms of usability?
The speed at which a user can perform a task
What do "errors" stand for in terms of usability testing?
Measuring the number of incorrect actions a user makes when trying to accomplish a task.
What does "memorability" stand for in terms of usability testing?
The ability to retain skills in using a product once it's learned
What does "learnability" stand for in terms of usability testing?
The type and amount of training a user has to have to achieve a desired level of performance
T/F: As a part of usability, users should be protected from the consequences of their actions
True. Interfaces should make mistakes recoverable (e.g. undo, confirmation prompts).
Reliability (of a usability test)
Would you get the same results if the tests were repeated?
Validity (of a usability test)
Does the test measure something of relevance?
Formative evaluation
Learn what aspects of the interface are good and bad and how it can be improved
Summative evaluation
Assesses the overall quality of the interface.
T/F: Only novice users should be test users.
False. Test users should be at all levels.
T/F: You should begin with easy tasks when doing usability testing.
True - it is important to build their confidence first.
Stages of usability testing
1. Preparation (set up environment)
2. Introduction (welcome, purpose, overview)
3. Run the test
Availability formula
(MTTF / (MTTF + MTTR)) x 100%
MTTF = mean time to failure
MTTR = mean time to repair
How many minutes can a system be down in a year to have 5 NINEs (99.999%) of availability?
About 5.26 minutes (0.001% of the 525,600 minutes in a year).
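The availability formula and the five-nines downtime figure can be checked with a short sketch (the MTTF/MTTR numbers are made up to produce exactly 99.999%):

```python
def availability(mttf: float, mttr: float) -> float:
    """Availability as a fraction: MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)

def annual_downtime_minutes(avail: float) -> float:
    """Minutes of downtime allowed per year at a given availability."""
    minutes_per_year = 365 * 24 * 60  # 525,600
    return (1 - avail) * minutes_per_year

# Example: MTTF = 999.99 hours, MTTR = 0.01 hours -> five nines
a = availability(999.99, 0.01)
print(f"{a * 100:.3f}%")                                  # 99.999%
print(f"{annual_downtime_minutes(0.99999):.2f} min/yr")   # ~5.26
```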
What can operational profiles be used for?
Allocating testing effort according to expected field usage, so the most frequently used functions are tested the most.
What is an operational profile?
A set of major functions performed by specific sets of users and their occurrence probabilities.
Steps to construct an operational profile
1. Identify the major functions performed by the system (what types of users and external entities use the system? what are the use-cases?)
2. Identify the occurrence rates (using historical data or marketing)
3. Calculate the occurrence probability
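Step 3 above (turning occurrence rates into occurrence probabilities) is just normalization. A minimal sketch, with hypothetical function names and per-hour rates:

```python
def occurrence_probabilities(rates: dict) -> dict:
    """Normalize per-function occurrence rates into probabilities."""
    total = sum(rates.values())
    return {fn: r / total for fn, r in rates.items()}

# Hypothetical occurrence rates (uses per hour) from historical data.
rates = {"search": 600, "browse": 300, "checkout": 100}
profile = occurrence_probabilities(rates)
print(profile)  # {'search': 0.6, 'browse': 0.3, 'checkout': 0.1}
```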
What is the goal in developmental testing?
To remove faults that cause failures
What is the goal in certification testing?
To determine whether a software component or system should be accepted or rejected.
T/F: Software being correct implies that the software is secure.
False. Software correctness and security are not the same.
What are the three security fundamentals?
1. Confidentiality (no one sees data they aren't supposed to)
2. Integrity (no one modifies the data)
3. Availability (no denial of service)
T/F: Security testing only requires testing the product.
False. You also need to test all of its interactions with its environment (OS, GUI, file system, etc.)
What question can reliability growth models help answer?
When to stop testing
Failure intensity
Number of failures per natural time unit
How should you select a reliability model?
There is no universal one. Try many and use the best fit.
What should system test plans address?
1. System test objectives
2. Dependencies and assumptions (resources available, software completed on time, etc)
3. Adopted test strategy (DoE, risk based, etc)
4. Specification of test environment
5. Specification of system test entry and exit criteria (could use a system test readiness assessment)
6. Schedule (tasks, their dependencies, estimated effort, etc)
7. Risk management
Program Evaluation and Review Technique:
Chart of nodes where the nodes are tasks and their edges are dependencies
Critical Path Analysis
Used on a PERT chart to find the critical path in the testing plan (the path with no slack time)
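Critical path analysis on a PERT-style graph can be sketched as a longest-path computation over the dependency DAG. The tasks, durations, and dependencies below are made up for illustration:

```python
# Hypothetical tasks: durations (in days) and dependency edges.
durations = {"A": 3, "B": 2, "C": 4, "D": 1}
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def critical_path(durations, deps):
    """Return the longest-duration chain through the DAG and its length."""
    memo = {}
    def finish(task):  # earliest finish time of a task
        if task not in memo:
            start = max((finish(d) for d in deps[task]), default=0)
            memo[task] = start + durations[task]
        return memo[task]
    end = max(durations, key=finish)   # task that finishes last
    path = [end]
    while deps[path[-1]]:              # walk back along zero-slack predecessors
        path.append(max(deps[path[-1]], key=finish))
    return list(reversed(path)), finish(end)

path, length = critical_path(durations, deps)
print(path, length)  # ['A', 'C', 'D'] 8
```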
What does overestimating the time for tests lead to?
Inefficient testing and delayed product release
What does underestimating the time for tests lead to?
Lots of overtime, high stress, and likely inefficient testing
T/F: The manager should come up with the schedule and show to the team when finalized.
False. The team should be included in the process so they are committed to it too.
Gantt chart
Identifies the duration of tasks along with their starting and ending dates. It also identifies parallel tasks.
Contingency time
Buffer time built into the schedule in case something goes wrong. The confidence of the schedule is directly linked to the amount of contingency time.
What are factors in testing effort?
Size, complexity, scope, technologies used, desired quality, process, will customers be available to talk to, quality of the code, etc
What are causes of inaccurate testing estimation?
1. Misunderstanding of requirements
2. Overlooked tasks
3. Insufficient analysis when developing estimates due to pressure
4. Lack of guidelines for estimating
5. Lack of historical data
6. Pressure to reduce estimates
Steps to estimating testing effort
1. Determine estimation responsibilities
2. Review & clarify testing objectives, deliverables, milestones, and constraints
3. Identify testing tasks (some tasks include: understand requirements, train on tools, test case development)
4. Select appropriate size measure (number of requirements, number of use cases, lines of code)
5. Select size estimation method: (top down or bottom up)
6. Estimate and document size
7. Estimate and document effort
Top down size estimation
Experts develop estimate based on previous data. This may fail for new types of projects
Bottom up size estimation
Break the testing into parts and estimate each part
Pareto principle (80/20 rule)
80% of effort will focus on 20% of the code
Why is risk based testing needed?
Companies often are limited on time and resources and need to prioritize testing
What two things factor into determining the risk exposure?
Likelihood of a failure and severity of a failure
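Risk exposure (likelihood x severity) can be used to rank what to test first. A minimal sketch; the 1-5 scales and feature names are assumptions:

```python
# feature: (likelihood of failure, severity of failure), both on a 1-5 scale
features = {
    "payment": (3, 5),
    "search":  (4, 2),
    "profile": (2, 2),
}

def rank_by_risk(features):
    """Highest risk exposure (likelihood * severity) first."""
    return sorted(features,
                  key=lambda f: features[f][0] * features[f][1],
                  reverse=True)

print(rank_by_risk(features))  # ['payment', 'search', 'profile']
```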
T/F: To assess the consequences of a failure, all you have to do is talk to the customers and developers.
False. Talking to customers and developers is necessary, but other factors must be considered as well.
Although the best criteria for when to stop testing is when the objectives have been met, what are other typical ways this is decided?
1. Measuring defect density
2. Defect pooling
3. Defect seeding
4. Trend analysis
5. Reliability modeling
Defect pooling
Sampling technique to predict how many defects remain, using two testers who test the system independently and then comparing what they found
Unique defects formula (for defect pooling)
(Defects found by person A) + (Defects found by person B) - (Defects found by both)
Estimated total defects formula (for defect pooling)
((Defects found by person A) x (Defects found by person B)) / (Defects found by both)
Estimated defects remaining formula (for defect pooling)
Estimated total defects - Unique defects
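The three defect-pooling formulas above, expressed as code (the defect counts in the example are made up):

```python
def unique_defects(a: int, b: int, both: int) -> int:
    """Defects found by A + defects found by B - defects found by both."""
    return a + b - both

def estimated_total(a: int, b: int, both: int) -> float:
    """(Defects found by A * defects found by B) / defects found by both."""
    return (a * b) / both

def estimated_remaining(a: int, b: int, both: int) -> float:
    """Estimated total defects - unique defects found so far."""
    return estimated_total(a, b, both) - unique_defects(a, b, both)

# Tester A found 20 defects, tester B found 30, and 10 were found by both.
print(unique_defects(20, 30, 10))       # 40
print(estimated_total(20, 30, 10))      # 60.0
print(estimated_remaining(20, 30, 10))  # 20.0
```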
Defect seeding
Put defects into the system and see how many the testers catch
Estimated total defects (for defect seeding)
(seeded defects planted / seeded defects found) x normal defects found
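The defect-seeding estimate as code; the counts are illustrative:

```python
def estimated_total_defects(seeded_planted, seeded_found, normal_found):
    """(seeded defects planted / seeded defects found) x normal defects found."""
    return (seeded_planted / seeded_found) * normal_found

# 50 seeded defects planted; testers found 40 of them plus 120 real defects,
# so the same 80% detection rate suggests about 150 real defects in total.
print(estimated_total_defects(50, 40, 120))  # 150.0
```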
Defect density
Number of defects per 1,000 lines of code
What are reasons for a sudden increase in reliability?
Changing test effort
What should a test case document include?
2. Test items (references to docs used to produce the test)
5. Environment used
What should test incident reports include?
4. Description (input, expected output/ actual output, date & time, environment, attempts to repeat)
Who is the target audience for test incident reports?
Target audience could be programmers, testers, managers, the change control board -> provide enough information for each group of people to do their job
What determines the severity of a problem?
Customer impact from the error
T/F: All high severity problems are high priority, but not the other way around.
False. Neither implies the other
What questions determine the level of documentation?
Will the document support testing?
Is documentation a deliverable?
Are there regulatory concerns?
Will documentation support tracking activities?
Will it support regression testing?
Earned value tracking
Technique for tracking both schedule and cost progress. Establishes a relative value for every task and credits that value when complete
BCWS: Budgeted Cost of Work Scheduled
BCWP: Budgeted Cost of Work Performed
ACWP: Actual Cost of Work Performed
What does it mean if BCWP > BCWS?
We are ahead of schedule
What does it mean if BCWP > ACWP?
We are below our budget
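The two earned-value comparisons above can be written as schedule and cost variances; the dollar figures below are made up:

```python
def schedule_variance(bcwp, bcws):
    """Positive -> ahead of schedule (BCWP > BCWS)."""
    return bcwp - bcws

def cost_variance(bcwp, acwp):
    """Positive -> under budget (BCWP > ACWP)."""
    return bcwp - acwp

bcws, bcwp, acwp = 100_000, 110_000, 95_000
print(schedule_variance(bcwp, bcws))  # 10000 -> ahead of schedule
print(cost_variance(bcwp, acwp))      # 15000 -> under budget
```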
PORT (Prioritization of Requirements for Test)
A way to prioritize tests based upon 4 factors:
1. Customer assigned priority
2. Developer perceived implementation complexity
3. Fault proneness of the requirement
4. Requirement volatility (if it changes a ton, it's likely defect prone)
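One simple way to combine the four factors above is a weighted score per requirement. The weights, 1-10 scales, and requirement names below are assumptions for illustration, not a published weighting scheme:

```python
# Assumed weights for the four prioritization factors (sum to 1.0).
WEIGHTS = {"customer": 0.4, "complexity": 0.2, "faults": 0.25, "volatility": 0.15}

def priority_score(factors: dict) -> float:
    """Weighted sum of the four factor scores (each on a 1-10 scale)."""
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

reqs = {
    "login":  {"customer": 9, "complexity": 3, "faults": 4, "volatility": 2},
    "report": {"customer": 5, "complexity": 8, "faults": 7, "volatility": 6},
}
ranked = sorted(reqs, key=lambda r: priority_score(reqs[r]), reverse=True)
print(ranked)  # ['report', 'login']
```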
Steps for process improvement
1. Characterize the current process
2. Analyze the current process
3. Characterize the target process
4. Process redesign
During the characterizing of the current process, you have to distinguish between:
The perceived process
The official process
The actual process
What are the differences between these three?
The perceived process = what you think you do
The official process = what you are supposed to do
The actual process = what you actually do
Goal question metric:
Derive questions that must be answered
Develop metrics to answer that question
In what phase(s) should metrics be used in the process improvement phases?
When analyzing the current process and when implementing the new process
What are some of the goals when redesigning a process?
- Eliminating, simplifying, or combining activities
- Eliminating rework
- Reducing task variance
What are some factors when deciding to outsource testing?
Needing specialized technology, maintenance support, strategic value of the system, cost, strategic alliance between companies, speed of development, desired level of staffing
What are the activities associated with outsourcing testing?
1. Define work to be subcontracted (maximize effectiveness and minimize communication efforts, dependencies and risk)
2. Develop a subcontractor management plan (technical spec, statement of work, risk management, estimate of effort)
3. Select a subcontractor
4. Create a contract
5. Oversee the subcontractors (reviews, metrics, risk management, approve invoices)
6. Acceptance of work
T/F: The subcontractor should create the estimate of effort.
False. You should know before selecting a subcontractor what your estimate is.
What should a statement of work include?
1. All tasks to be performed
2. Maintenance responsibilities
3. Relevant processes to be followed
Who are the two groups of people a test lead needs to negotiate with?
Project management (on schedule and tasks) and developers (on entry criteria)
What 3 things determine success of a team/product?
People, process, and technology
What do inspections require?
1. Advance preparation
2. Utilization of rules and checklists
3. Metrics gathering and analysis to facilitate process improvement
What should system testers inspect?
- Test plan
- Test case
- Test incident report
Root cause analysis
Seeks to identify the cause of the defects missed by testing in order to eliminate future occurrences
What are the different defect cause categories?
1. Communication failure: lack of information
2. Oversight: Failure to consider all combinations
3. Education: Lack of training of the tools
4. Transcription: Simple mistake
How can you avoid communication failures?
Improve documentation, have liaisons to other groups, have better processes, add in tracking systems
How can you avoid oversight failures?
Use checklists, automate, use templates, and review
How can you avoid education failures?
Just-in-time training, tutorials, proper staffing
How can you avoid transcription failures?
Tools to automate and review
Capability Maturity Model Integration:
1. Initial (ad hoc)
2. Repeatable (expertise lies in the individual)
3. Defined (processes are defined and documented)
4. Managed (metrics are used extensively to guide the process)
5. Optimizing (emphasis on continuous improvement)
Testing Maturity Model integration (TMMi): like CMMI, but for testing
1. Ad hoc (no specific goals, testing begins after code is written)
2. Phase definition (well defined testing phases, goals, and basic methods used)
3. Integration (trained testers, risk managed, integrated with software devs)
4. Management & measurements (metrics are used to review for efficiency and effectiveness)
5. Optimization, defect prevention, and quality control (root cause analysis, test process improved, statistical quality control)
Which activities are normally a part of verifying a serviceability requirement?
problem reporting, isolation, correction, verification and fix release