Resampling methods in software quality classification
by W. Afzal, R. Torkar and R. Feldt
Background: Given the number of algorithms available for classification and prediction in software engineering, there is a need for a systematic way of assessing their performance. Performance assessment is typically done by some form of partitioning or resampling of the original data to alleviate biased estimation. For predictive and classification studies in software engineering, there is a lack of definitive advice on the most appropriate resampling method to use. This is seen as one of the contributing factors for not being able to draw general conclusions on which modeling technique or set of predictor variables is most appropriate. Furthermore, the use of a variety of resampling methods makes it impossible to perform any formal meta-analysis of the primary study results. It is therefore desirable to examine the influence of various resampling methods and to quantify possible differences.
Objective and method: This study empirically compares common resampling methods (hold-out validation, repeated random sub-sampling, 10-fold cross-validation, leave-one-out cross-validation and non-parametric bootstrapping) using 8 publicly available data sets, with genetic programming (GP) and multiple linear regression (MLR) as software quality classification approaches. The location of (PF, PD) pairs (probability of false alarm, probability of detection) in the ROC (receiver operating characteristic) space and the area under the ROC curve (AUC) are used as accuracy indicators.
Results: In terms of the location of (PF, PD) pairs in the ROC space, bootstrapping results lie in the preferred region for 3 of the 8 data sets for GP and for 4 of the 8 data sets for MLR. Based on the AUC measure, there are no significant differences between the resampling methods for either GP or MLR.
Conclusion: Certain data set properties may be responsible for the insignificant differences between the resampling methods based on AUC; these include imbalanced data sets, insignificant predictor variables and high-dimensional data sets. With the current selection of data sets and classification techniques, bootstrapping is the preferred method based on the location of (PF, PD) pairs in the ROC space. Hold-out validation is not a good choice for comparatively smaller data sets, where leave-one-out cross-validation (LOOCV) performs better. For comparatively larger data sets, 10-fold cross-validation performs better than LOOCV.
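The sketch below illustrates, under stated assumptions, the kind of comparison the abstract describes: the five resampling methods evaluated with AUC, plus a (PF, PD) computation for the hold-out split. It is not the study's actual experimental code; scikit-learn's LogisticRegression stands in for the GP and MLR classifiers, and a synthetic data set stands in for the eight public data sets used in the paper.

```python
# Minimal sketch (assumptions: LogisticRegression instead of GP/MLR,
# synthetic data instead of the paper's eight data sets).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import (
    train_test_split, ShuffleSplit, KFold, LeaveOneOut, cross_val_predict
)
from sklearn.utils import resample

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000)

def pf_pd(y_true, y_pred):
    """Probability of false alarm (PF) and probability of detection (PD)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return fp / (fp + tn), tp / (tp + fn)

# Hold-out validation: a single train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
clf.fit(X_tr, y_tr)
print("hold-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("hold-out (PF, PD):", pf_pd(y_te, clf.predict(X_te)))

# Repeated random sub-sampling: several independent random splits.
aucs = []
for tr, te in ShuffleSplit(n_splits=10, test_size=0.33, random_state=0).split(X):
    clf.fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
print("repeated sub-sampling AUC:", np.mean(aucs))

# 10-fold cross-validation and LOOCV: pool out-of-fold predictions, then score.
for name, cv in [("10-fold CV", KFold(10, shuffle=True, random_state=0)),
                 ("LOOCV", LeaveOneOut())]:
    proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
    print(name, "AUC:", roc_auc_score(y, proba))

# Non-parametric bootstrap: train on a sample drawn with replacement,
# evaluate on the out-of-bag instances.
aucs = []
rng = np.random.RandomState(0)
for _ in range(100):
    idx = resample(np.arange(len(y)), random_state=rng)
    oob = np.setdiff1d(np.arange(len(y)), idx)
    clf.fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y[oob], clf.predict_proba(X[oob])[:, 1]))
print("bootstrap (out-of-bag) AUC:", np.mean(aucs))
```

In this sketch the bootstrap is scored on the out-of-bag instances of each replicate; other bootstrap estimators (e.g. the 0.632 variant) weight in-sample and out-of-bag error differently and are not shown here.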
BibTeX
@Article{Afzal2012IJSEKE,
author = "Wasif Afzal and Richard Torkar and Robert Feldt",
title = "Resampling methods in software quality classification",
year = "2012",
month = "",
journal = "International Journal of Software Engineering and Knowledge Engineering",
volume = "22",
issue = "2",
pages = "203-223",
publisher = "World Scientific Publishing Company",
keywords = "Software quality; Prediction; Resampling; Empirical study",
doi = "",
url = "",
url = "http://www.cse.chalmers.se/~feldt/publications/afzal_2012_resampling_methods.html",
}