SoundSoftware 2014: Third Workshop on Software and Data for Audio and Music Research

The third SoundSoftware.ac.uk one-day workshop on “Software and Data for Audio and Music Research” will include talks on issues such as robust software development for audio and music research, reproducible research in general, management of research data, and open access.

Details

Programme

10:30 Doors open, registration and refreshments
11:00 Welcome and Opening Remarks
Mark D. Plumbley
11:15 Oral Session 1

The Digital Music Lab - Developing Methods and Tools for Analysing Big Music Data
Tillman Weyde (City University, London)

How reproducibility tipped the scale toward article acceptance
Bob L. Sturm (Aalborg University, Denmark)

TimeSide, an open web audio processing framework
Guillaume Pellerin (Parisson / Telemeta)

SoundSoftware: Are We Nearly There Yet?
Mark Plumbley and Chris Cannam (Queen Mary University of London)

12:45 Lunch
13:30 Oral Session 2

High Performance Computing for Audio and Music Research
Wadud Miah (Queen Mary University of London)

Learning Web Audio API with G.Hack
Nela Brown and Katerina Kosta (Queen Mary University of London)

Maintaining continuity: Csound6
John ffitch (University of Bath / National University of Ireland, Maynooth)

14:45 Tea
15:15 Oral Session 3

Balancing artistic and scientific concerns in the evaluation of musically metacreative systems
Oliver Bown (Sydney University, Australia) and Toby Gifford (Queensland Conservatorium at Griffith University, Australia)

The Human Harp: Making Music with Bridges
Di Mainstone (Artist) and Alessia Milo (Queen Mary University of London)

Using mobile devices for music research
Dimitrios Bountouridis and Jan Van Balen (Utrecht University, Netherlands)

16:30 Close

Oral session 1

The Digital Music Lab - Developing Methods and Tools for Analysing Big Music Data

Tillman Weyde, City University, London
[ Video ]

Big data poses specific technical and legal challenges to music data analysis. Music data collections are usually of heterogeneous nature, consisting of audio from different sources, MIDI, scores and other symbolic representations, as well as metadata and lyrics. There are methodological, legal and technical issues that need to be addressed, including copyright, distributed processing and statistical approaches to music analysis.

We will describe the AHRC project Digital Music Lab (DML), run by City University, Queen Mary University, UCL, and the British Library with support from I Like Music, which works on a scalable open source software infrastructure, methods and datasets for analysing big music data.

How reproducibility tipped the scale toward article acceptance

Bob L. Sturm, Aalborg University, Denmark
[ Video ]

I discuss a recent episode in which our submission of a negative result article — contradicting previously published work — was favorably reviewed, and eventually published. The review process, and the persuasion of the reviewers, were greatly aided by our efforts at reproducibility. We won a reproducibility prize last year for this work (http://soundsoftware.ac.uk/rr-prize-winner-announcement).

TimeSide, an open web audio processing framework

Guillaume Pellerin, Parisson / Telemeta
[ Video | Slides ]

Initiated by the need of several French ethnomusicology research laboratories to manage their digital audio archives through a web platform, the company Parisson is developing Telemeta, an open-source web audio framework.

Of particular interest is the choice of a fully open-source platform in this context, where a small company provides an academic institution with a long-term solution to promote and preserve digital cultural heritage, together with a tool for researchers to work and collaborate on such archives. Another interesting point is that the Telemeta web platform also integrates audio analysis capabilities through an external component, TimeSide. This makes Telemeta a unique framework in which researchers in the human sciences and scientists from the audio and music research community can collaborate.

Mainly written in Python, TimeSide wraps up some state-of-the-art open-source audio feature extraction libraries. It therefore provides Telemeta with online and on-demand processing capabilities, together with user-friendly visualization of the results through the web interface.

SoundSoftware: Are We Nearly There Yet?

Mark Plumbley and Chris Cannam, Queen Mary University of London
[ Video ]

The EPSRC-funded project SoundSoftware.ac.uk was established to support the sustainable development and use of software and data to enable high-quality research in the UK audio and music research community. In this talk we will look at some of our activities, including surveys, code repositories, software projects, training of researchers in Software Carpentry boot camps, and activities to encourage reproducible research. We will also talk about some of the remaining issues and opportunities around data and software still facing the audio and music community, and what actions we can take to tackle these challenges.

Oral session 2

High Performance Computing for Audio and Music Research

Wadud Miah, Queen Mary University of London
[ Video ]

High performance computing (HPC) has changed the landscape of academic research, and of the computational sciences in particular. Powerful HPC supercomputers have enabled researchers to explore scientific areas ranging from climate change to computational chemistry and bioinformatics. This presentation will introduce the components of an HPC cluster and how parallel applications are executed on one. The notion of parallel scalability is introduced, along with why parallel computing is the way forward for higher performance. The benefits of using HPC for digital music are also discussed, with a demonstration comparing the performance of music creation on a desktop to that on an HPC cluster.
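As a small illustration of the parallel-scalability notion mentioned above (not material from the talk itself), Amdahl's law gives the maximum speedup of a program in which only a fraction p of the work can be parallelised:

```javascript
// Amdahl's law: maximum speedup on n processors when a fraction p
// of the work is parallelisable and the rest must run serially.
function amdahlSpeedup(p, n) {
  return 1 / ((1 - p) + p / n);
}

// Even a highly parallel workload is limited by its serial fraction:
amdahlSpeedup(0.95, 16);   // roughly 9.1x on 16 cores
amdahlSpeedup(0.95, 1024); // roughly 19.6x, approaching the 1/(1-p) = 20x ceiling
```

This is why adding cores alone does not guarantee higher performance: the serial fraction of an application caps its scalability, however large the cluster.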

Learning Web Audio API with G.Hack

Nela Brown and Katerina Kosta, Queen Mary University of London
[ Video ]

The talk focuses on how even beginners in programming can familiarize themselves with building an audio- or music-related hack. We present the culture of software and hardware hacking through short demos and examples, before moving on to the Web Audio API using the Google Chrome web browser. The Web Audio API is a high-level JavaScript API for processing and synthesizing audio in web applications. It includes capabilities found in modern game audio engines, as well as some of the mixing, processing and filtering tasks found in modern desktop audio production applications. We show how people can use this API to create simple synths and apply effects to audio samples, building up to the final hack that is presented.
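To give a flavour of the kind of simple synth the talk describes (a minimal sketch, not code from the presentation itself), a Web Audio oscillator can be driven from a plain helper function that maps MIDI note numbers to frequencies:

```javascript
// Convert a MIDI note number to a frequency in Hz
// (equal temperament, A4 = MIDI note 69 = 440 Hz).
function midiToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// In a browser such as Google Chrome, the note can then be played
// through the Web Audio API node graph:
//
//   const ctx = new AudioContext();
//   const osc = ctx.createOscillator();
//   const gain = ctx.createGain();
//   osc.type = "sine";
//   osc.frequency.value = midiToFrequency(69); // A4
//   gain.gain.value = 0.2;                     // keep the volume modest
//   osc.connect(gain).connect(ctx.destination);
//   osc.start();
//   osc.stop(ctx.currentTime + 1);             // a one-second tone
```

The oscillator-into-gain-into-destination chain is the simplest instance of the node-graph model on which more elaborate mixing and filtering hacks are built.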

Maintaining continuity: Csound6

John ffitch, University of Bath / National University of Ireland, Maynooth
[ Video ]

Maintaining backward compatibility is an important issue in sound software: without it, works of sound art would disappear. On the other hand, software systems need to develop with changes in hardware, software and social environments. Csound developers devote much time and effort to keeping Csound compatible with the original 1980s program while providing support for mobile devices, GPUs and other contemporary technologies.

Oral session 3

Balancing artistic and scientific concerns in the evaluation of musically metacreative systems

Oliver Bown, Sydney University, Australia and Toby Gifford, Queensland Conservatorium at Griffith University, Australia
[ Video ]

The Musical Metacreation (MuMe) Group has organised academic workshops and concerts on the theme of musical metacreation since 2012. Musical metacreation is concerned with designing algorithms and software systems that express some degree of autonomy in the creation of music. It is both a field of scientific research and a lively area of contemporary creative practice. It is important to reconcile the different requirements of these two distinct perspectives, maintaining integrity in both areas, and in this presentation we discuss how this can best be achieved. As part of NIME2014, a concert of musical metacreation at Cafe Oto in Dalston (June 29th) has been organised, presenting a number of systems that perform in live improvisatory performances with instrumental musicians. We draw on observations from this and previous events to guide the discussion with respect to new developments in the field.

The Human Harp: Making Music with Bridges

Di Mainstone, Artist and Alessia Milo, Queen Mary University of London
[ Video ]

The Human Harp is a large-scale sound and music public engagement project inspired by the idea of a suspension bridge as a giant harp. The project has already received major press and media interest, and the next stage will include a residency at the Roundhouse this summer, ahead of a performance in March 2015 for the 150th anniversary of the Clifton Suspension Bridge.

One aim of the project is to ensure that the designs, documentation and instructions for construction and performance are recorded and made available as "open source", allowing other performers, schools and others to create their own Human Harp beyond the end of the project, and creating a sustainable community of users around the world. We will talk about the inspiration behind the project, building the community of people already involved, and the future perspectives for the Roundhouse, Clifton Suspension Bridge and beyond.

Using mobile devices for music research

Dimitrios Bountouridis and Jan Van Balen, Utrecht University, Netherlands

Despite the growing popularity of mobile devices, we believe music (information) research has yet to make full use of their potential. After years of conventional software development, researchers can find themselves out of their comfort zone when issues such as user experience and multimodal interaction need to be addressed. We argue that mobile devices offer a novel framework for crowd-sourcing annotations, dealing with music licensing, efficient user testing, and reaching larger user or expert audiences. We discuss two examples of music annotation applications developed at Utrecht University and the University of Amsterdam, covering two types of end user: the general public and the researchers themselves.

Registration

To register, please use this EventBrite link.

Workshop Venue

The Workshop will take place in the ArtsOne Lecture Theatre on the Mile End campus of Queen Mary University of London.

Please download the following campus map (pdf): the venue is the building numbered 37, entered from Mile End Road.

For further information on the campus, and how to get to Queen Mary, please follow this link: Mile End Campus.

Support

This workshop is supported by the UK Engineering and Physical Sciences Research Council (EPSRC).