[Harald] The first short paper is about verification of bug fixes,
by people from Federal University of Bahia, Brazil,
and the speaker is Rodrigo Souza. Go ahead.
[Rodrigo] First of all, this is how a bug report works.
First, someone reports a new bug.
After some discussion, a developer submits a bug fix
and marks the bug as FIXED.
Then, someone else verifies that the bug fix is appropriate
and marks the bug as VERIFIED.
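The lifecycle Rodrigo describes could be sketched as a tiny state machine; the transition table and the `advance` helper below are illustrative, not part of the paper:

```python
# Hypothetical sketch of the Bugzilla-style lifecycle described above:
# NEW -> FIXED (a developer submits a fix) -> VERIFIED (someone else confirms it).
TRANSITIONS = {
    "NEW": {"FIXED"},
    "FIXED": {"VERIFIED"},
}

def advance(status, new_status):
    """Move a bug to `new_status`, rejecting transitions the process forbids."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status
```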
In this exploratory work, we intend to characterize the process of
verification of bug fixes by mining bug repositories.
We investigate three questions:
"When are bug fixes verified?"
"Who verifies the bug fixes?"
and "How are them verified?"
We used data from the previous MSR Challenge
containing about 10 years of bug reports from two open source IDEs,
NetBeans and Eclipse,
and we have analyzed two subprojects for each
So, first, when are bug fixes verified in these projects?
We plotted the accumulated number of verifications
over time for the projects, and in the case of NetBeans,
the verification rate is almost constant,
meaning that bug fixes are verified all the time.
For Eclipse/Platform, however,
verifications are much more frequent just before a release,
which suggests that there is a
verification phase in Eclipse's process.
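The cumulative curves behind this comparison can be computed by sorting verification timestamps and pairing each with its running total; the sample data below is hypothetical:

```python
from datetime import datetime

def cumulative_verifications(timestamps):
    """Sort verification timestamps and pair each with its running total.

    A constant slope in the resulting curve means verification happens all
    the time (NetBeans); steps just before releases suggest a verification
    phase (Eclipse/Platform).
    """
    ordered = sorted(timestamps)
    return [(t, i + 1) for i, t in enumerate(ordered)]

# Hypothetical sample: three verifications, two clustered near a release.
sample = [datetime(2010, 5, 1), datetime(2010, 4, 28), datetime(2010, 1, 10)]
curve = cumulative_verifications(sample)
```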
Next, who verifies the bug fixes?
In some projects, there is a team dedicated to verifications:
the Quality Assurance team, or QA team.
We defined the QA team as all developers
who perform at least ten times more verifications than bug fixes.
Using this definition, we discovered that, in NetBeans,
the QA team is formed by 20% of its developers,
who perform more than 80% of the verifications.
In the case of Eclipse, no QA team was found.
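The QA-team definition above is easy to operationalize; a minimal sketch, assuming per-developer verification and fix counts have already been mined into dictionaries (the input shape and names are assumptions, not the paper's code):

```python
def qa_team(verifications, fixes, ratio=10):
    """Developers with at least `ratio` times more verifications than fixes.

    verifications, fixes: dicts mapping developer name -> count.
    A developer with zero fixes qualifies as long as they verified anything.
    """
    return {dev for dev, v in verifications.items()
            if v > 0 and v >= ratio * fixes.get(dev, 0)}
```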
Finally, how are bug fixes verified?
What techniques are used to verify each bug fix?
We have looked at the comments developers write
when they mark a bug as VERIFIED.
But it appears that most comments just state the obvious:
that the bug fix was verified using some version of the software.
Using regular expressions, we discovered that
less than 4% of the comments refer to some technique,
such as automated testing or code inspection,
though further research is needed in this part.
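A comment classifier along these lines could look as follows; the patterns are illustrative stand-ins, since the talk does not give the actual expressions used:

```python
import re

# Hypothetical patterns for verification techniques; the paper's actual
# regular expressions are not given in the talk.
TECHNIQUE_PATTERNS = {
    "automated testing": re.compile(r"\b(unit|automated|regression)\s+test", re.I),
    "code inspection": re.compile(r"\binspect(ion|ed)\b", re.I),
}

def techniques_mentioned(comment):
    """Return the technique names whose pattern matches the comment."""
    return [name for name, pat in TECHNIQUE_PATTERNS.items()
            if pat.search(comment)]
```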
We'd also like to share some pitfalls
we've found during this research.
By plotting the accumulated number of verifications
for NetBeans/Platform over its lifetime,
we see what appears to be a huge verification effort,
represented by this big rise in the graph.
By looking at the data, however,
we found that this big rise represents
more than 2,000 bugs
that were verified in just
5 hours
by only one guy.
[laughs]
Of course, no human being can do that, ok?
The developer was not actually verifying the bug fixes.
It turns out, Superman was just doing some cleanup
by marking old bugs as VERIFIED.
So, be careful about such mass verifications,
because they may represent a large part
of the verifications in the project,
but they are not real verifications and they may bias your analyses.
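One way to guard against this pitfall is to flag bursts of verifications by a single developer inside a short time window; the sliding-window sketch below is an assumption about how such a filter might work, and the thresholds (inspired by the 2,000-bugs-in-5-hours case) are illustrative:

```python
from datetime import datetime, timedelta

def mass_verifications(events, max_count=100, window=timedelta(hours=5)):
    """Flag events belonging to a burst: more than `max_count` verifications
    by one developer inside `window`.

    events: list of (developer, datetime) tuples; thresholds are illustrative.
    Returns the set of flagged (developer, datetime) pairs, which an analysis
    can then exclude as likely cleanup rather than real verification.
    """
    by_dev = {}
    for dev, t in events:
        by_dev.setdefault(dev, []).append(t)
    flagged = set()
    for dev, times in by_dev.items():
        times.sort()
        lo = 0
        for hi, t in enumerate(times):
            # Shrink the window from the left until it spans at most `window`.
            while t - times[lo] > window:
                lo += 1
            if hi - lo + 1 > max_count:
                flagged.update((dev, x) for x in times[lo:hi + 1])
    return flagged
```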
Also, by reading a few verification comments,
we discovered that, in some projects,
marking a bug as VERIFIED has a special meaning:
for example, in the Eclipse Modeling Framework,
it just means that the bug fix was made available in a build
on the website. No actual verification.
Future work? Well,
people say that, if you control your process,
you can control the quality of your product.
So, we want to investigate what features of the verification process
influence, directly or indirectly, the reopening of bugs.
For example, is it more effective to have a verification phase
or to verify bugs all the time?
We intend to build a causal network
to investigate these and other questions,
so we are looking for variables that influence the verification process
or the reopening.
Thank you very much!
[applause]