
Long Answers (20 points each)





11. Scenario

Explain how to develop     2 pts each good idea; -2 if not for the set; max 8
Describe                   Max 5 (well described, good scenario)
Explain why good           Tie facts of the test to the stated elements, 2 per element; max 8
Total                      20

Imagine that you were testing the Save With Password feature in the OpenOffice word processor.

  • Explain how you would develop a set of scenario tests that test this feature.

  • Describe a scenario test that you would use to test this feature.

  • Explain why this is a particularly good scenario test.



Grading Notes

Scenario tests:

  • Explain how you would develop a set of scenario tests for this feature. I expect a range of possible ideas:

    • Research

      • Customers

      • Competitors

      • In-house documentation for tasks

    • Implementation

Note that we’re talking about a SET, not just one. If the explanation is appropriate only for a single scenario test (not for a set), deduct points.

  • Describe a scenario test you would use. I evaluate it against

    • Realistic

    • Complex

    • Unambiguous

    • Persuasive / credible to stakeholder

  • Why is THIS a particularly good test?

    • Tie the facts to the elements



Name                         How many
Combs                        2
What is all-pairs table      4
Create table                 10
Why correct                  4
Total                        20

We are going to do some configuration testing on the OpenOffice word processor. We want to test it on

    • Windows 95, 98, and 2000 (the latest service pack level of each)

    • Printing to an HP inkjet, a Lexmark inkjet, and a Xerox laser printer

    • Connected to the web with a dial-up modem (28k), a DSL modem, and a cable modem

    • With a 640x480 display and a 1024x768 display

  • How many combinations are there of these variables?

  • Explain what an all-pairs combinations table is.

  • Create an all-pairs combinations table.

  • Explain why you think this table is correct.



Grading notes:

  • Combinations: 3 x 3 x 3 x 2 = 54

  • "explain" and "why correct?" are essentially the same question. The second one is an opportunity for the student to look back and check the work as she starts writing her criterion.

  • Students who blow the combinations table (by failing to cover all the pairs) are capped at 50%. This looks harsh, but this is a very easy table and the students have had plenty of time with it. They shouldn’t get this wrong.
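
For concreteness, here is one possible 9-row all-pairs table for these four variables, sketched as a small Python script that also verifies the pair coverage. The variable names and the checking code are illustrative, not a required answer format; note that 9 rows is the minimum, since OS x printer alone requires 3 x 3 = 9 distinct pairs.

    # Hypothetical checking aid: verify that a candidate table covers every
    # pair of values across the four configuration variables.
    from itertools import combinations, product

    VALUES = {
        "os":      ["Win95", "Win98", "Win2000"],
        "printer": ["HP inkjet", "Lexmark inkjet", "Xerox laser"],
        "net":     ["dial-up", "DSL", "cable"],
        "display": ["640x480", "1024x768"],
    }

    # One candidate all-pairs table (9 rows).
    TABLE = [
        ("Win95",   "HP inkjet",      "dial-up", "1024x768"),
        ("Win95",   "Lexmark inkjet", "DSL",     "1024x768"),
        ("Win95",   "Xerox laser",    "cable",   "640x480"),
        ("Win98",   "HP inkjet",      "DSL",     "640x480"),
        ("Win98",   "Lexmark inkjet", "cable",   "1024x768"),
        ("Win98",   "Xerox laser",    "dial-up", "640x480"),
        ("Win2000", "HP inkjet",      "cable",   "640x480"),
        ("Win2000", "Lexmark inkjet", "dial-up", "640x480"),
        ("Win2000", "Xerox laser",    "DSL",     "1024x768"),
    ]

    names = list(VALUES)
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        needed  = set(product(VALUES[a], VALUES[b]))
        covered = {(row[i], row[j]) for row in TABLE}
        missing = needed - covered
        assert not missing, f"{a} x {b}: missing pairs {missing}"
    print(f"All pairs covered in {len(TABLE)} rows "
          f"(the full cross product has 3x3x3x2 = 54).")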



Test Plan

Strategy
Q1         Guide 1
Q2         Guide 2
Q3         Guide 3
Q4         Guide 4
Q5         Guide 5
Q6         Guide 6
Total      / 20

Imagine that you are an external test lab, and Sun comes to you with OpenOffice. They want you to test the product. How will you decide what test documentation to give them? (Suppose that when you ask them what test documentation they want, they say that they want something appropriate but they are relying on your expertise.) To decide what to give them, what questions would you ask (4 to 6 questions) and how would the answers to those questions guide you?


Grading notes:

[Note: I have refined the wording of this question since grading the exam in which this question appeared. This analysis will be different next time because you won’t be asked for an overall strategy. “How will you decide what test documentation to give them?” is deleted.]

With “typical” points and 4 questions, the student gets 16, plus up to 3 for strategy.

With “typical” points and 6 questions offered, the student can get 24, plus up to 3 for strategy.

Maximum points possible are 39 (1 per question, 5 per guide across 6 questions, plus up to 3 for strategy).

I reserve discretion over “A” and may slightly raise or lower an answer if it is in the 18-20 total range.


Strategy: How will you decide what test documentation to give them? An answer that says “I'll ask them questions” is worth 0 points, because I've already said to ask questions. On the other hand, if they add extra research ideas beyond asking questions, they can have 1 (typical) to 3 (professional-level) points.

Questions:

  • The question alone gets one point for itself. They got a list of questions in the course; there is nothing original here, just very simple memory work.

  • The question alone gets no guidance points, zero. Guidance should take the form of a specific statement of impact (of the answer) on the content or structure of the test documentation. An exceptionally insightful and useful guidance answer can earn 5 points. An adequate (the typical) answer earns 3 points. A weak answer earns 0-2. For examples of impact discussions, see the test documentation chapter in Lessons Learned in Software Testing. This was required reading in the course.

  • Some people misread the question as calling for an aggregate judgment on the value of the answers. I added back up to 6 points for the aggregate evaluation.

See course slides on requirements questions:

  • Is test documentation a product or tool?

  • Is software quality driven by legal issues or by market forces?

  • How quickly is the design changing?

  • How quickly does the specification change to reflect design change?

  • Is testing approach oriented toward proving conformance to specs or nonconformance with customer expectations?

  • Does your testing style rely more on already-defined tests or on exploration?

  • Should test docs focus on what to test (objectives) or on how to test for it (procedures)?

  • Should control of the project by the test docs come early, late, or never?

  • Who are the primary readers of these test documents and how important are they?

  • How much traceability do you need? What docs are you tracing back to and who controls them?

  • To what extent should test docs support tracking and reporting of project status and testing progress?

  • How well should docs support delegation of work to new testers?

  • What are your assumptions about the skills and knowledge of new testers?

  • Is the test doc set a process model, a product model, or a defect finder?

  • A test suite should provide prevention, detection, and prediction. Which is the most important for this project?

  • How maintainable are the test docs (and their test cases)? And, how well do they ensure that test changes will follow code changes?

  • Will the test docs help us identify (and revise/restructure in the face of) a permanent shift in the risk profile of the program?

  • Should docs be automatically created as a byproduct of the test automation code?



Additionally, we might see the Phoenix questions (these are listed in Thinkertoys) or other context-free questions (see Gause & Weinberg, Exploring Requirements).



Oracle

Hyphenation    8
Footnotes      8
Compare        6
Total          20

You are using a high-volume random testing strategy for the OpenOffice word processing program. You will evaluate results by using an oracle.

    • Consider testing the hyphenation feature using oracles. How would you create an oracle (or group of oracles)? What would the oracle(s) do?

    • Now consider the placement of footnotes at the bottom of the page. How would you create an oracle (or group of oracles) for this? What would the oracle(s) do?

    • Which oracle would be more challenging to create or use, and why?

Note: If you don’t understand hyphenation, substitute “spell checking” for “hyphenation” in this question.


Grading notes:

In 2002, I applied gentle grading because we didn’t spend enough time on this in class for a detailed answer. Additionally, just before the exam, it became clear that several foreign students were befuddled about hyphenation. So, I sent out a note allowing spell checking instead, and included this as one of the easy questions on the exam.

Hyphenation:

  • How would you create an oracle?

    • Use of prior version is worth 4 points (of 8). The problem is that there’s no reason to believe the prior version works. Add points for discussion of this issue.

    • Compare to a competitor is worth 4-8 points, depending on whether the answer deals with the question of how we know the competitor works.

    • We could run both WordPerfect and Word in parallel and raise a flag if they disagree with our result.

    • We could build random sentences from a small vocabulary of words that have known hyphenation characteristics, then check whether they were hyphenated properly against the list (see the sketch after this list).

    • We might run only a partial oracle that looks at hyphenation under some simple rules.

  • What would it do?
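
A minimal sketch of the known-vocabulary oracle idea from the list above. Everything here is a stand-in: the vocabulary, the hyphenation points, and especially get_hyphenation, which would have to drive the word processor under test through some scripting interface.

    import random

    # Small vocabulary with known-correct hyphenation points (illustrative).
    KNOWN = {
        "hyphenation": "hy-phen-ation",
        "document":    "doc-u-ment",
        "dictionary":  "dic-tio-nary",
        "testing":     "test-ing",
    }

    def get_hyphenation(word):
        # Stand-in: drive the word processor under test (e.g., through a macro
        # or scripting interface) and read back the hyphenation it applied.
        raise NotImplementedError

    def run_hyphenation_oracle(trials=10_000):
        failures = []
        for _ in range(trials):
            # Build a random word combination from the known vocabulary.
            for word in random.sample(list(KNOWN), k=3):
                actual = get_hyphenation(word)
                if actual != KNOWN[word]:
                    failures.append((word, KNOWN[word], actual))
        return failures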

Spell checking

  • How

  • What would it do

Footnotes

  • There are several things to check here: placement on the page, agreement between the reference mark (e.g., footnote number) in the body and the one in the footnote, formatting of the footnote, formatting of the reference mark, breaking of long footnotes across pages, formatting of tables in footnotes, etc.

  • How

    • Use of prior version is worth 3 points (of 7). The problem is that there’s no reason to believe the prior version works. Add points for discussion of this issue.

    • Compare to a competitor is worth 3-7 points (of 7), depending on whether the answer deals with the question of how we know the competitor works.

    • We could run both WordPerfect and Word in parallel and raise a flag if they disagree with our result.

    • We might run a partial oracle that looks only at some of the footnoting issues (one such check is sketched below).

    • If you write your own, how do you know it works, and how do you decide what to include? Are you designing it in a way that makes it likely to be suitable for high-volume work?

  • What would it do
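
A minimal sketch of one such partial check, covering only the numbering property. The Page structure is a hypothetical stand-in for whatever the harness can extract from the document under test.

    from dataclasses import dataclass

    @dataclass
    class Page:
        body_marks: list       # reference numbers in the body text, in order
        footnote_marks: list   # numbers of the footnotes rendered on this page

    def check_footnote_numbering(pages):
        # Partial oracle: flags mismatched or non-sequential footnote numbers.
        # Says nothing about placement, formatting, or page breaks.
        problems = []
        body  = [m for p in pages for m in p.body_marks]
        notes = [m for p in pages for m in p.footnote_marks]
        if body != notes:
            problems.append(f"body marks {body} != footnote marks {notes}")
        if body != list(range(1, len(body) + 1)):
            problems.append(f"body marks not sequential from 1: {body}")
        return problems

    # Example: two pages with one numbering defect (footnote 4 has no body mark).
    for problem in check_footnote_numbering([Page([1, 2], [1, 2]), Page([3], [4])]):
        print("FAIL:", problem)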

Which oracle is more challenging and why

  • Footnoting is much more challenging because there are so many variables in play. Consider the problem of placement at the bottom of the page, carrying long notes across pages, formatting of tables and pictures inside footnotes, etc.

  • However, a reasoned argument in favor of the other (hyphenation or spellcheck) will be accepted to the extent that it is credibly argued.



Follow-up testing

Steps         3
Options       3
Configs       3
Generality    (3)
Other         (3)
Example 1     4
Example 2     4
Example 3     4
Total         20

Suppose that you find a reproducible failure that doesn’t look very serious.

    • Describe three tactics for testing whether the defect is more serious than it first appeared.

    • As a particular example, suppose that the display got a little corrupted (stray dots on the screen, an unexpected font change, that kind of stuff) in OpenOffice’s word processor when you drag the mouse across the screen. Describe three follow-up tests that you would run, one for each of the tactics that you listed above.



Grading Notes

Describe three tactics for testing: if you list an item without describing it, it earns only 1 point.

Each description is worth 3 points and each example 4 points. That totals 21, but whether an answer lands at 19 or 20 is at my discretion.

My examples of follow-up tests

  • Tests related to my steps

    • Enter more data into the table

    • Enter data into the table lots of times (repeat same entries)

  • Tests related to the structure of the situation

    • Vary the size of the table

    • Vary the contents of the table

    • Vary the color, alignment, font, line width, etc of the table entries

  • Tests related to the failure

    • ?? what else causes mouse droppings ??

    • Print-preview the screen

  • Tests related to the persistent variables / options

    • Location of the program within the window

    • Whether the program is maximized

    • Default cell format, such as alignment within the cells, font, line width, etc.

  • Tests related to the hardware

    • Different video resolution

    • Different monitors

    • Different video cards

    • Different OS (check one of the ports: is this unique? If not, then anything specific to Windows might be irrelevant)

    • Different mouse / mouse driver

    • Different memory



