
Assignment 2: Replicate and Edit Bugs


The purpose of this assignment is to give you experience editing bugs written by other people. This task will give you practice thinking about what a professional report should be, before you start entering your own reports into this public system.

  • Work with OpenOffice Writer, the word processor.

  • Read the instructions at http://qa.OpenOffice.org/helping.html, and make sure to use the oooqa keyword appropriately. Read the bug entry guidelines at http://www.OpenOffice.org/bugs/bug_writing_guidelines.html.

  • Find 5 bug reports in IssueZilla about problems with OpenOffice Writer that appear to have not yet been independently verified. These are listed in the database as “unconfirmed” or “new”. As of 9/1/2002, there are 927 such reports associated with the “word processor” component. To find lots of bugs, use the search at http://www.OpenOffice.org/issues/query.cgi rather than at http://qa.OpenOffice.org/issuelinks.html.

  • For each report, review and replicate the bug, and add comments as appropriate to the report in IssueZilla.

  • Send me an email listing the bug numbers and, for each bug, comments on what was done well, what was done poorly, and what was missing that should have been in the report.

Assignment Procedure

For each bug:

  • Review the report for clarity and tone (see “first impressions”, next slide).

    • Send comments on clarity and tone in the notes you send me (but don’t make these comments on the bug report itself).

  • Attempt to replicate the bug.

    • Send me comments on the replication steps (were the steps in the report clear and accurate?), your overall impressions of the bug report as a procedure description, and any follow-up tests that you would recommend.

  • You may edit the bug report yourself, primarily in the following ways.

    • Add a comment indicating that you successfully replicated the bug on XXX configuration in YYY build.

    • Add a comment describing a simpler set of replication steps (if you have a simpler set). Make sure these are clear and accurate.

    • Add a comment describing why this bug would be important to customers (this is only needed if the bug looks minor or unlikely to be fixed, and it is only useful if you clearly know what you are talking about and your tone is respectful).

    • Your comments should NEVER appear critical or disrespectful of the original report or of the person who wrote it. You are adding information, not criticizing what was there.

  • If you edit the report in the database, never change what the reporter has actually written. You are not changing the reporter’s work; you are adding comments at the end of the report.

  • Your comments should have your name and the comment date, usually at the start of the comment, for example: “(Cem Kaner, 12/14/01) Here is an alternative set of replication steps:”

  • Send me an email, telling me that you have reviewed the report and made changes.

A Checklist for Editing Bugs

The bug editor should check the bug report for the following characteristics:

A. First impressions—when you first read the report:

    1. Is the summary short (about 50-70 characters) and descriptive? (see the slide: Important Parts of the Report: Problem Summaries)

    2. Can you understand the report? As you read the description, do you understand what the reporter did? Can you envision what the program did in response? Do you understand what the failure was?

    3. Is it obvious where to start (what state to bring the program to, to replicate the bug)?

    4. Is it obvious what files to use (if any)? Is it obvious what you would type?

    5. Is the replication sequence provided as a numbered set of steps, which tell you exactly what to do and, when useful, what you will see?

    6. Does the report include unnecessary information, personal opinions or anecdotes that seem out of place?

    7. Is the tone of the report insulting? Are any words in the report potentially insulting?

    8. Does the report seem too long? Too short? Does it seem to have a lot of unnecessary steps? (This is your first impression—you might be mistaken. After all, you haven’t replicated it yet. But does it LOOK like there’s a lot of excess in the report?)

    9. Does the report seem overly general (“Insert a file and you will see” – what file? What kind of file? Is there an example, like “Insert a file like blah.foo or blah2.fee”?)

B. When you replicate the report:

    1. Can you replicate the bug?

    2. Did you need additional information or steps?

    3. Did you get lost or wonder whether you had done a step correctly? Would additional feedback (like, “the program will respond like this...”) have helped?

    4. Did you have to guess about what to do next?

    5. Did you have to change your configuration or environment in any way that wasn’t specified in the report?

    6. Did some steps appear unnecessary? Were they unnecessary?

    7. Did the description accurately describe the failure?

    8. Did the summary accurately describe the failure?

    9. Does the description include non-factual information (such as the tester’s guesses about the underlying fault) and if so, does this information seem credible and useful or not?

C. Closing impressions:

    1. Does the description include non-factual information (such as the tester’s guesses about the underlying fault) and if so, does this information seem credible and useful or not? (The report need not include information like this. But it should not include non-credible or non-useful speculation.)

    2. Does the description include statements about why this bug would be important to the customer or to someone else? (The report need not include such information, but if it does, it should be credible, accurate, and useful.)

D. Follow-up tests:

    1. Are there follow-up tests that you would run on this report if you had the time? (There are notes on follow-up testing in course slides 105-117.)

    2. What would you hope to learn from these tests?

    3. How important would these tests be?

    4. You will probably NOT have time to run many follow-up tests yourself. Don’t take the time to run more than 2 or 3 such tests.

    5. Are some tests so obvious that you feel the reporter should run them before resubmitting the bug? Can you briefly describe them to the reporter?

    6. Some style issues obviously call for follow-up tests: the report describes a corner case without apparently having checked non-extreme values; the report relies on other specific values, with no indication of whether the program fails only on those or on anything in the same class (and what is the class?); or the report is so general that you doubt that it is accurate (“Insert any file at this point” – really? Any file? Any type of file? Any size? Maybe this is accurate, but are there examples or other reasons for you to believe this generalization is credible?).

GRADING NOTES FOR THE BUG EDITING ASSIGNMENT

Two components for grading the papers:

  1. Comments at Issuezilla (the OpenOffice database)

  2. Editor’s report submitted to us.

I allocated up to 14 possible points for each bug, but capped the recorded score at 10. That is, a 3/14 for a bug was recorded as 3/10, while anything from 10/14 through 14/14 became 10/10. In effect, there were 10 points available for each bug.


COMMENTS ON THE BUG REPORTS THEMSELVES, FILED IN ISSUEZILLA

[NOTE: This was prepared as feedback for students but can be easily turned into a rubric.]


The content of your comments has to vary depending on the problem. The key thing is that the follow-up report has to be useful to the reader.

For example, a simple failure to replicate might be sufficient (though it is rarely useful unless it includes a discussion of what was attempted.) Sometimes, detailed follow-up steps that simplify or extend the report are valuable.

This is worth up to 7 points out of 10







Subcomponents of the comments at IssueZilla (points possible):

  1. Report states configuration and build: + up to 1
  2. If the report is disrespectful in tone, zero the grade for the report: 0 for the report
  3. If you clearly report a simpler set of replication steps: + up to 5
  4. If you clearly report a good follow-up test: + up to 5
  5. A follow-up test or discussion that indicates that you don't understand the bug is not worth much: + up to 1
  6. If there is enough effort and enough usable information in the follow-up test: + up to 3
  7. If you make a good argument regarding importance (pro or con): + up to 5
  8. If the bug is in fact not reproducible, and the report demonstrates that you credibly tested for reproducibility: + up to 5
  9. Nonreproducible bug on an alternate configuration, without discussion: - 1
  10. Nonreproducible bug on an alternate configuration that was already dismissed: - 2



REPORT TO US
This is worth up to 7 points out of 10
Here, you evaluated the report rather than trying to improve it. I wanted to see details that suggested that you had insight into what makes bug reports good or bad, effective or ineffective. I did not expect you to walk through every item in the checklist and tell me something for each item (too much work, most of it would have wasted your time). Instead, I expected that you would raise a few issues of interest and handle them reasonably well. For different reports, you might raise very different issues.

  1) I was interested in comments on:

  1. What was done well.

  2. What was done poorly.

  3. What was missing that should have been there.

2) In the assignment, the checklist suggested a wide range of possible comments, on

  1. First impressions

  2. Replication

  3. Closing impressions

  4. Follow-up tests

The comments did not have to be ordered in any particular way but they should have addressed the issues raised in the assignment checklist in a sensible order. We credited them as follows:

    Individual issue discussions are worth up to 3 points, but are normally worth 0.5 or 1 point (typically 1 point if well done). An exceptional discussion that goes to the heart of the quality of the report, or suggests clearly and accurately what should have been done, is worth 2 points. An exceptional and extended (long) discussion that goes to the heart of the quality of the report AND includes follow-up test discussion, or suggests well what should have been done, is worth 3 points.

  1. The primary basis of the evaluation in this section is insight into the quality of the bug report. If the student has mechanically gone through the list of questions, without showing any insight, the max point count is 5. If we see insight, the max point count is 7.
    A discussion that shows that the tester did not understand the bug under consideration is worth at most 5, and was often worth less.

Bug number    Comments at IssueZilla    Editor’s report    Total points

1             7                         7                  14 ≡ 10
2             7                         7                  14 ≡ 10
3             7                         7                  14 ≡ 10
4             7                         7                  14 ≡ 10
5             7                         7                  14 ≡ 10

GRAND TOTAL                                                50



Assignment 3: Domain Testing & Bug Reporting

  1. Create between 10 and 20 domain tests. You can stop at 10 if you find (and write up) 2 bugs. You can stop at 15 tests if you find (and write up) 1 bug.

  2. Work in the Word Processing part of OpenOffice.

  3. Pick a function associated with Word Processing. Please run all of your tests on the same function. (If several students are working together, you can pick one function per student.)

  4. Pick one (1) input, output, or intermediate result variable.

    • Identify the variable. Stick with that one variable throughout testing.

    • Run a mainstream test (a test that is designed to exercise the function without stressing it). You do tests like this first in order to learn more about the function and the variable’s role in that function.

  5. Identify risks associated with that variable

  6. For each risk, design a test specifically for that risk that is designed to maximize the chance of finding a defect associated with this risk.

  7. Explain what makes this a powerful test. It might help you to compare it to a less powerful alternative.

  8. What is the expected result for the high-power test?

  9. What result did you get?

  10. Report your results in a table format that has the following columns:

      1. Feature or function

      2. Variable name or description

      3. Risk

      4. Test

      5. What makes this test powerful

      6. Expected result

      7. Obtained result

  11. If you find bugs, write up bug reports and enter them into Issuezilla.

  12. I strongly recommend that you pair up with someone and have them replicate your bug and evaluate a draft version of your report before you submit it to Issuezilla. I will evaluate your report against a professional standard of quality (essentially, the same evaluation that you just did in Assignment 2).

  13. Write a summary report that explains what you believe you now know and don’t know about the function, based on your testing. (If your group tested several functions, write up a summary report for each.)
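As a sketch of the reporting format, the following Python fragment shows how the seven-column table that step 10 asks for might be assembled for a single variable. The feature (“Insert Table”), the variable (“number of columns”), and the risks are invented here for illustration; they are not taken from OpenOffice or from the assignment itself.

```python
# Hypothetical domain-test log for one variable, in the 7-column
# format the assignment asks for. The feature, variable, bounds,
# and risks below are invented for illustration.
from dataclasses import dataclass, asdict

@dataclass
class DomainTest:
    feature: str        # 1. Feature or function
    variable: str       # 2. Variable name or description
    risk: str           # 3. Risk
    test: str           # 4. Test (the input you will try)
    power: str          # 5. What makes this test powerful
    expected: str       # 6. Expected result
    obtained: str = ""  # 7. Obtained result (filled in after running)

tests = [
    DomainTest("Insert Table", "number of columns",
               "off-by-one error at the lower bound",
               "create a table with 1 column",
               "smallest legal value; fails if the check is > 1 instead of >= 1",
               "a one-column table is created"),
    DomainTest("Insert Table", "number of columns",
               "out-of-range input accepted",
               "create a table with 0 columns",
               "just below the lower bound; strongest test of the error path",
               "input rejected with an error message"),
]

# Print the log as one row per test, pipe-separated.
for t in tests:
    row = asdict(t)
    assert len(row) == 7   # the report format requires 7 columns
    print(" | ".join(row.values()))
```

Each test pairs one specific risk with the single input most likely to expose it, which is what distinguishes a domain test from an arbitrary valid input.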

Notes (that I’ll use for grading) on Assignment 3

  • It’s important to answer every section:

    • The table needs 7 columns

    • There should be 10-20 tests and 0-2 bug reports

    • There should be a summary report that explains what you know about the function under test.

  • It’s important to show the domain analysis (or its results)

    • Use boundary values

    • Identify them as bounds and equivalence classes or identify the different sections of the space as you partition it. You might find it useful to start with a boundary analysis (and table).

    • Be specific about risk

    • Be specific about power (compare to others of the same equivalence class)
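The boundary analysis these notes call for can be made mechanical: partition the input space into equivalence classes, then take the edge values of each class as the boundary tests. A minimal Python sketch, assuming an invented numeric variable with a valid range of 1-99 (the range is hypothetical, not taken from any OpenOffice function):

```python
# Sketch: derive boundary-value tests from an equivalence-class
# partition of a numeric variable. The valid range 1..99 is
# invented for illustration.
LO, HI = 1, 99

# Partition of the input space into equivalence classes.
classes = {
    "below range (invalid)": range(LO - 10, LO),
    "in range (valid)":      range(LO, HI + 1),
    "above range (invalid)": range(HI + 1, HI + 10),
}

# For each class, test the values at its edges: those are the
# points where off-by-one errors in the program's checks live.
boundary_tests = []
for name, cls in classes.items():
    for v in sorted({cls[0], cls[-1]}):
        boundary_tests.append((name, v))

for name, v in boundary_tests:
    print(f"{v:4d}  <- {name}")
```

Deriving the test values from the partition keeps the table and the analysis consistent: if you revise a class boundary, the boundary tests change with it.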



