Res, a matter.


Some sketches and notes I am currently working on, together with Baptiste Caramiaux and Atau Tanaka, towards the creation of a corporeal musical space generated by biological gesture, that is, by the complex behaviour of different biosignals during performance.

General questions: why use different biosignals in a multimodal musical instrument? How can machine learning algorithms be meaningfully deployed for improvised music?

Machine learning (ML) methods in music are generally used to recognise pre-determined gestures. The risk in this case is that the performer ends up being concerned with performing gestures in a way that allows the computer to understand them. On the other hand, ML methods could possibly be used to represent an emergent space of interaction, one that allows the performer freedom of expression. This space shall not be defined beforehand, but rather created and altered dynamically according to any gesture.

An unsupervised ML method shall represent in real time the complementary information of the EMG/MMG signals. The output shall be rendered as an axis of the space of interaction (x, y, …). As the relations between the two biosignals change, the number of axes and their orientation change as well. The space of interaction is constantly redefined and reconstructed. The aim is to perform the body as an expressive process, and to leave to the machine the task of representing that process by extracting information that could not be understood without the aid of computing devices.
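As a purely hypothetical sketch of this idea, the snippet below uses incremental PCA over windows of joint EMG/MMG feature frames: each retained principal direction becomes an axis of the interaction space, and the number of retained axes changes as the relation between the two signals changes. The window size, variance threshold, feature layout, and function names are my own assumptions, not the project's actual implementation.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

WINDOW = 64           # feature frames per analysis window (assumed)
VAR_THRESHOLD = 0.90  # keep enough axes to explain 90% of the variance (assumed)
MAX_AXES = 4          # upper bound on the number of axes (assumed)

ipca = IncrementalPCA(n_components=MAX_AXES)

def update_interaction_space(emg_features, mmg_features):
    """Fold one window of EMG/MMG feature frames into the model and return
    the axes (principal directions) currently spanning the interaction space."""
    frames = np.hstack([emg_features, mmg_features])   # shape: (WINDOW, n_features)
    ipca.partial_fit(frames)
    explained = np.cumsum(ipca.explained_variance_ratio_)
    # The number of retained axes varies with how much complementary
    # information the two biosignals currently carry.
    n_axes = min(int(np.searchsorted(explained, VAR_THRESHOLD)) + 1, MAX_AXES)
    return ipca.components_[:n_axes]

def project(emg_frame, mmg_frame, axes):
    """Map the current gesture onto the interaction space (x, y, ...)."""
    frame = np.hstack([emg_frame, mmg_frame])
    return axes @ (frame - ipca.mean_)
```

Calling update_interaction_space on every new analysis window keeps redefining the space, while project gives the performer's current coordinates within it.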

Each kind of gesture shall be represented within a specific interaction space. The performer would then control sonic forms by moving within, and outside of, the space. The ways in which the performer travels through the interaction space shall be defined by a continuous function derived from the gesture-sound interaction.
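One possible reading of such a continuous function is sketched below, under assumed anchor positions, sound presets, and kernel width: presets are anchored at points in the interaction space and blended with Gaussian kernels, so any trajectory yields smoothly varying synthesis parameters, and moving far outside the space makes all weights decay. None of the concrete values or names are from the project itself.

```python
import numpy as np

# Anchor points in a 2-D interaction space and the synthesis parameters
# attached to each of them (e.g. [pitch in Hz, grain density]); all values
# here are illustrative.
anchors = np.array([[0.0, 0.0], [1.0, 0.5], [-0.5, 1.0]])
presets = np.array([[220.0, 0.1], [440.0, 0.6], [110.0, 0.9]])
SIGMA = 0.5  # kernel width (assumed)

def sound_params(position):
    """Continuous mapping from a position in the interaction space to
    synthesis parameters; far outside the space the weights decay and
    the output fades toward silence."""
    d2 = np.sum((anchors - position) ** 2, axis=1)
    w = np.exp(-d2 / (2 * SIGMA ** 2))
    if w.sum() < 1e-6:
        return np.zeros(presets.shape[1])
    return (w @ presets) / w.sum()

# A short trajectory through (and out of) the space:
for p in ([0.5, 0.25], [1.0, 0.5], [3.0, 3.0]):
    print(p, sound_params(np.array(p)))
```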

Ben Pimlott building, our new home at Goldsmiths, London. Picture by carolineld.blogspot.com.

Here we are. The last post is dated September 2012.
A lot has happened since then, and here is a brief update.

I’ve just moved to London and started working with a newly formed research team headed by Prof. Atau Tanaka (US/UK) and including, as of now, Baptiste Caramiaux, Alessandro Altavilla, and myself. We are investigating a broad notion of gesture (musical, physical, and biological) and music performance. The outcome is the creation of new musical instruments that bring together biosensing technologies, spatial sensors, and custom machine learning methods for a corporeal performance of sounds and music. The instruments should be for musicians and non-musicians alike, wearable, and redistributable.

The project is called Meta-Gesture Music (MGM) and it is funded by the European Research Council (ERC). Our team is based at the Computing department, Goldsmiths, University of London, and is part of EAVI, a larger team of investigators, musicians, artists, and coders dedicated to Embodied Audio-Visual Interaction.

The Xth Sense project continues, and I will keep posting about new developments. Meanwhile, the MGM research is a great chance to draw on the experience of the XS and to evolve the theoretical and technical framework developed so far.

Updates on our work will follow regularly, so feel free to come back and see what we are up to.