Reproducible Research prizes: we have winners!
We received an excellent set of entries to our first Reproducible Research Prize, and evaluating them and choosing winners proved harder than we had expected.
With the help of two panels of outside reviewers (thanks to the Software Sustainability Institute for reviewing the entrants' sustainability planning, and to our willing external reviewers for judging each entry's potential to enable further research), we drew up a shortlist and then narrowed the entries down to our eventual winners by averaging their scores across all criteria. See our page about the evaluation for more details of how we scored the submissions.
The winners are:
Conference paper (published)
Majdak, P., Iwaya, Y., Carpentier, T., Nicol, R., Parmentier, M., Roginska, A., Suzuki, Y., Watanabe, K., Wierstorf, H., Ziegelwanger, H., and Noisternig, M., Spatially Oriented Format for Acoustics: A Data Exchange Format Representing Head-Related Transfer Functions (paper, site, code)
Submitted by Piotr Majdak.
This data exchange format, provided as a specification with software bindings and example data, was described by reviewers as having "potential to have a major positive impact on future research involving HRTFs, both in the UK and internationally" and praised by the sustainability panel for its engagement with standard formats and standards bodies.
Proutskova, P., Rhodes, C., Wiggins, G., and Crawford, T., Breathy or Resonant – A Controlled and Curated Dataset for Phonation Mode Detection in Singing (paper, dataset)
Submitted by Polina Proutskova.
A newly recorded dataset of sung notes in different phonation modes. Reviewers were "pleased to see that the data collection was careful, employing a single singer with a recording environment that does not change" and appreciated the use of Creative Commons licensing.
This work takes an algorithm known to the radar tracking community and adapts it to musical applications. Reviewers appreciated the open publication of the source code and the high level of replicability of the results in the paper.
Sturm, B. L. and Gouyon, F., Comments on “Automatic Classification of Musical Genres Using Inter-Genre Similarity” (paper in review; code), submitted with Sturm, B. L., On music genre classification via compressive sampling (paper, code) and Sturm, B. L. and Noorzad, P., On Automatic Music Genre Recognition by Sparse Representation Classification using Auditory Temporal Modulations (paper, code)
Submitted by Bob L. Sturm.
These three papers all address limitations of earlier work whose results appeared to be (and were) "too good to be true". Reviewers called this "a strong contribution towards reproducible research", and the panel found that the figures in the papers could be replicated with respectable ease.
Giannoulis, D., Stowell, D., Benetos, E., Rossignol, M., Lagrange, M., and Plumbley, M. D., A Database and Challenge for Acoustic Scene Classification and Event Detection (paper in review; site, data, metric functions, baseline systems)
Submitted by Dimitrios Giannoulis.
This project sets up a challenge in computational auditory scene analysis and openly publishes its baseline and metrics code. Reviewers appreciated that "this project is a massive undertaking [that] will motivate a lot of work", and the panel noted that "the authors have taken great care in their sustainability planning".
Special prize for a Technical Report
This report aims to reproduce the results of another research group from scratch and, in doing so, to understand and clarify some unstated assumptions in the original paper. Reviewers thought "that this is valuable work which is likely to have produced a set of software tools that will be of much interest and usefulness to UK researchers in this field", and the panel found that the software "worked as described and reproduced the figures in the paper".