Res, a matter.



At STEIM!

It’s a surprisingly hot day in Amsterdam. STEIM, the unique Studio for Electro-Instrumental Music, is invaded by sounds coming from all the studios while the artists get ready for the big night. Chilled atmosphere, an Indian dinner, and doors open! If you missed this Summer Party at STEIM, you missed a lot. Great music and awesome performers packed into a place that has been among the most important for sound and music performance over the past 30 years.


Mazen Kerbaj solo


Laetitia Sonami & James Fei with custom instruments and a CrackleSynth

I was flattered to be invited to perform a 30-minute concert of biophysical music, including my latest piece Ominous, as I had the chance to share the stage with Laetitia Sonami & James Fei, Mazen Kerbaj, Sybren Danz, and Daniel Schorno. It was a great experience, a delicious auditory journey, and an emotional leap into the feeling of being part of such a community. So thanks, everyone: the staff at STEIM, Esther, Jon, Marije, Kees, Nico, Lucas, and the audience who spread around STEIM and warmly welcomed us all.


Yours truly performing Ominous. Pic by Marije Baalman

Following our previous study on biophysical and spatial sensing, we narrowed down the focus of our research and constrained a new study to multimodal interaction (MMI) with only two biosignals. Namely, we focused on the mechanomyogram (MMG) and electromyogram (EMG) from arm muscle gesture. Although there is research in New Interfaces for Musical Expression (NIME) focused on each of these signals, to the best of our knowledge the combination of the two has not been investigated in this field. The following questions initiated this study: In which ways can EMG and MMG be analysed for complementary information about gestural input? How can a musician control the two biosignals separately? What are the implications for the low-level sensorimotor system? And how can novices learn to skillfully control the two modalities?

Sensorimotor system information flow. EMG and MMG are produced at different stages of a movement.

Our interest in conducting a combined study of the two biosignals lies in the fact that they are both produced by muscle contraction, yet they report different aspects of muscle articulation. The EMG is a series of electrical neural impulses sent by the brain to cause muscle contraction. The MMG is a sound produced by the oscillation of the muscle tissue as it extends and contracts.

In order to learn about the differences and similarities of EMG and MMG, we looked at the related biomedical literature and found comparative EMG/MMG studies in the fields of sensorimotor and kinesis research. We selected aspects of gestural exercise for which MMG and EMG carry complementary information. For the interested reader, further details will soon be available in our related NIME paper.
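
For readers who want to experiment, here is a minimal sketch, not the study’s actual analysis code, of how simple amplitude features might be extracted from raw EMG and MMG with NumPy/SciPy. The sampling rate, window length, and band limits are illustrative assumptions only.

```python
# Minimal sketch (not the study's code): simple amplitude features from
# raw EMG and MMG. All rates, window sizes, and band limits are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def rms_envelope(x, fs, win_s=0.1):
    """Root-mean-square envelope over a sliding window (here 100 ms)."""
    win = int(win_s * fs)
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(x ** 2, kernel, mode="same"))

fs = 1000                      # assumed sampling rate in Hz
t = np.arange(0, 5, 1 / fs)
emg = np.random.randn(len(t)) * (1 + np.sin(2 * np.pi * 0.5 * t))  # toy signals
mmg = np.random.randn(len(t)) * (1 + np.cos(2 * np.pi * 0.5 * t))

# EMG energy typically sits higher in frequency than MMG, so the two signals
# are band-limited differently before computing their envelopes.
emg_env = rms_envelope(bandpass(emg, 20, 450, fs), fs)
mmg_env = rms_envelope(bandpass(mmg, 5, 45, fs), fs)
```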

Atau Tanaka testing our new musical interface using electrical and biophysical body signals.

We used those aspects of muscle contraction to design a small gesture vocabulary to be performed by non-expert players with a bi-modal, biosignal-based interface created for this study. The interface was built by combining two existing, separate sensors, namely the Biomuse for the EMG and the Xth Sense for the MMG signal. Arm bands with EMG and MMG sensors were placed on the forearm. One MMG channel and one EMG channel were acquired from the users’ dominant arm over the wrist flexors, a muscle group close to the elbow joint that controls finger movement.

In order to train the users with our new musical interface we designed three sound-producing gestures. The EMG and the MMG are independently translated into sound, so that every time a user performs one of the gestures, one sound or the other, or a combination of the two, is heard. Users were asked to perform the gestures twice: the first time without any instruction, and the second time with a detailed explanation of how the gesture was supposed to be executed. At the end of the experiment, we interviewed the users about the difficulty of controlling the two sounds. We studied the players’ ability to articulate the two modalities through video and sound recordings, and analysed their interviews.
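
To make the idea of independent translation concrete, here is a hypothetical offline sketch; the actual system runs in real time with the Biomuse and Xth Sense hardware and its own sound design, so the sine-tone layers and synthetic envelopes below are placeholders only. Each biosignal envelope drives the loudness of its own sound layer, so a gesture can sound one layer, the other, or a mix of both.

```python
# Hypothetical offline sketch of the bi-modal mapping idea; the two sine
# tones and the synthetic envelopes stand in for the real sound design and
# for the live EMG/MMG features.
import numpy as np
from scipy.io import wavfile

fs = 44100
dur = 4.0
t = np.arange(int(fs * dur)) / fs

# Toy control envelopes in [0, 1], standing in for normalized EMG and MMG
# features (in the real system these arrive in real time from the sensors).
emg_env = np.clip(np.sin(2 * np.pi * 0.25 * t), 0, 1)           # slow swell
mmg_env = np.clip(np.sin(2 * np.pi * 0.25 * t + np.pi), 0, 1)   # opposite swell

emg_layer = emg_env * np.sin(2 * np.pi * 220 * t)   # layer driven by the EMG
mmg_layer = mmg_env * np.sin(2 * np.pi * 110 * t)   # layer driven by the MMG

# Because the layers are controlled independently, the balance heard at any
# moment reflects which biosignal the gesture excites more.
mix = 0.5 * (emg_layer + mmg_layer)
wavfile.write("bimodal_demo.wav", fs, (mix * 32767).astype(np.int16))
```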

The results of the study showed that: 1) MMG and EMG together provide a richer bandwidth of information on gestural input; 2) the complementary information of MMG and EMG varies with contraction force, speed, and angle; 3) novices can rapidly learn how to control the two modalities independently.
These findings motivate further exploration of an MMI approach to biosignal-based musical instruments. Prospective work should look at the further development of our bi-modal interface by designing discrete and continuous multimodal mappings based on the complementary information of EMG and MMG. Moreover, we should look at custom machine learning methods that could be useful in representing and classifying gesture from the combined muscle biodata, and at data-mining techniques that could help identify meaningful relations among the biodata. Such information could then be used to imagine new applications for bodily musical performance, such as an instrument that is aware of its player’s expertise level.
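
For readers curious what such a classification step could look like, here is an illustrative scikit-learn sketch, an assumption on my part rather than the method proposed above: simple per-window statistics from both modalities are concatenated into one feature vector and fed to an SVM. The feature set, window length, and classifier are placeholders.

```python
# Illustrative sketch only (not the method used in the study): classifying
# gestures from concatenated EMG/MMG window statistics with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(emg, mmg):
    """Concatenate simple per-window statistics of both modalities."""
    return np.array([emg.mean(), emg.std(), np.abs(np.diff(emg)).mean(),
                     mmg.mean(), mmg.std(), np.abs(np.diff(mmg)).mean()])

# Toy dataset: random windows standing in for recorded gesture segments,
# with three hypothetical gesture classes (accuracy will hover near chance).
rng = np.random.default_rng(0)
X = np.array([window_features(rng.normal(size=500), rng.normal(size=500))
              for _ in range(300)])
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```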

Pictures by Baptiste Caramiaux, and Alessandro Altavilla.

I kicked off my PhD studies by extending the Xth Sense, a new biophysical musical instrument based on a muscle sound sensor I developed, with spatial and inertial sensors, namely whole-body motion capture (mocap) and an accelerometer. The aim: to understand the potential of combined sensor data towards the multimodal control of new musical interfaces.

An excerpt from our related paper:
“In the field of New Interfaces for Musical Expression (NIME), sensor-based systems capture gesture in live musical performance. In contrast with studio-based music composition, NIME (which began as a workshop at CHI 2001) focuses on real-time performance. Early examples of interactive musical instrument performance that pre-date the NIME conference include the work of Michel Waisvisz and his instrument, The Hands, a set of augmented gloves which captures data from accelerometers, buttons, mercury orientation sensors, and ultrasound distance sensors (developed at STEIM). The use of multiple sensors on one instrument points to complementary modes of interaction with an instrument. However these NIME instruments have for the most part not been developed or studied explicitly from a multimodal interaction perspective.”

Together with my team colleagues Baptiste Caramiaux and Atau Tanaka, I designed a study of bodily musical gesture. We recorded and observed the sound-gesture vocabulary of my performance entitled Music for Flesh II.
The data recorded from the gestures were two mechanomyogram (MMG, or muscle sound) signals, one 3D vector from the accelerometer, and mocap data consisting of the 3D positions and quaternions of my limbs.
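
To give an idea of what one record of such a dataset might contain, here is a hypothetical container in Python; the field names, shapes, and the single common frame rate are my own assumptions, not the layout of the published dataset (in practice the streams run at different rates).

```python
# Hypothetical per-frame container for the streams listed above; names and
# shapes are illustrative only and do not mirror the published dataset.
from dataclasses import dataclass
import numpy as np

@dataclass
class MultimodalFrame:
    time: float              # timestamp in seconds
    mmg: np.ndarray          # two mechanomyogram samples, one per channel
    accel: np.ndarray        # 3D accelerometer vector
    positions: np.ndarray    # (n_joints, 3) mocap joint positions
    quaternions: np.ndarray  # (n_joints, 4) mocap joint orientations

frame = MultimodalFrame(
    time=0.0,
    mmg=np.zeros(2),
    accel=np.zeros(3),
    positions=np.zeros((20, 3)),
    quaternions=np.zeros((20, 4)),
)
```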

My virtual body performing Music for Flesh II, as seen by the motion capture system.

We created a public multimodal dataset and performed a qualitative analysis of those data. By using a custom patch to visualise and compare the different types of data (pictured below), we were able to observe different forms of complementarity in the information collected. We noted three types of complementarity: synchronicity, coupling, and correlation. You can find the details of our findings in the related Work in Progress paper, published for the recent TEI conference on Tangible, Embedded, and Embodied Interaction in Barcelona, Spain.

The software we developed to visualise the multimodal dataset.
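
As a rough illustration of the correlation and synchronicity aspects just mentioned (and not a reproduction of our visualisation software), one could estimate the peak cross-correlation and relative lag between two streams offline, for instance an MMG envelope against the acceleration magnitude. The common sampling rate and the toy signals below are assumptions.

```python
# Illustrative sketch only: estimating peak correlation and relative lag
# between two modalities resampled to a common rate (assumed 200 Hz here).
import numpy as np

def normalized(x):
    x = x - x.mean()
    return x / (x.std() + 1e-12)

def xcorr_peak(a, b, fs):
    """Return peak normalized cross-correlation and its lag in seconds."""
    a, b = normalized(a), normalized(b)
    corr = np.correlate(a, b, mode="full") / len(a)
    lags = np.arange(-len(b) + 1, len(a))
    k = np.argmax(np.abs(corr))
    return corr[k], lags[k] / fs

fs = 200  # assumed common rate after resampling both streams
t = np.arange(0, 10, 1 / fs)
mmg_env = np.abs(np.sin(2 * np.pi * 0.5 * t))             # toy MMG envelope
accel_mag = np.abs(np.sin(2 * np.pi * 0.5 * (t - 0.1)))   # same movement, 100 ms later

peak, lag = xcorr_peak(mmg_env, accel_mag, fs)
# With NumPy's convention, a negative lag here means accel_mag trails mmg_env.
print(f"peak correlation {peak:.2f} at lag {lag * 1000:.0f} ms")
```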

To summarise, our findings show that different types of sensor data do have complementary aspects; these might depend on the type and sensitivity of the sensor, and on the complexity of the gesture. Besides, what might seem like a single gesture can be segmented into sections that present different kinds of complementarity among the modalities. This points to the possibility for a performer to engage with richer control of musical interfaces by training on the multimodal control of different types of sensing devices; that is, the gestural and biophysical control of musical interfaces based on a combined analysis of different sensor data.

Ben Pimlott building, our new home at Goldsmiths, London.

Picture by carolineld.blogspot.com

Here we are. The last post is dated September 2012.
A lot has happened since then, and here is a brief update.

I’ve just moved to London and started working with a newly-formed research team headed by Prof. Atau Tanaka (US/UK), including, as of now, Baptiste Caramiaux, Alessandro Altavilla, and myself. We are investigating a broad notion of gesture (musical, physical, and biological) and music performance. The outcome is the creation of new musical instruments that bring together biosensing technologies, spatial sensors, and custom machine learning methods for a corporeal performance of sounds and music. The instruments should be for musicians and non-musicians alike, wearable, and redistributable.

The project is called Meta-Gesture Music (MGM) and it is funded by the European Research Council (ERC). Our team is based at the Computing department, Goldsmiths, University of London, and is part of EAVI, a larger team of investigators, musicians, artists, and coders dedicated to Embodied Audio-Visual Interaction.

The Xth Sense project continues and I will keep posting about new developments. Meanwhile, the MGM research is a great chance to draw from the experience of the XS, and evolve the theoretical and technical framework developed so far.

Updates on our work will follow regularly, so feel free to come back and see what we are up to.

I just realised I never posted here the interviews and press articles released recently (there’s been a bit of hype!), so here’s a brief chronological summary with links. You’ll find me thinking/talking about the Xth Sense, technology and creativity, the sonic body, open practice, and live music. Hope you enjoy the read:


Here we are!
At the kind invitation of Yann Seznec, today I am performing and officially launching the Xth Sense at the Scotland Music Hack Day… rather thrilling!
Other live acts and presentations include Matthew Herbert, FOUND, Yann and Patrick Bergel.
Now the staff is setting up, and people are starting to arrive.
Check the schedule at the Hack Day homepage.

So far I have been the only author and performer of biophysical music using the Xth Sense.
I am, therefore, rather excited to share this video below. It is the premiere of a piece for the Xth Sense, trombone and double bass by Japanese composer Shiori Usui with the Red Note Ensemble, Scotland’s contemporary music ensemble.
The concert was organized on the occasion of Inventor Composer Coaction, “a novel project designed to facilitate collaboration between composers and developers of bespoke digital or electronic instruments, for the creation of new music. It will take place at the Department of Music, University of Edinburgh, during the first half of 2012.”

Hi there!
It has been a while since my last post.
Things have been rather hectic; I’ve been caught up with travels, the Xth Sense public release (!), concerts and writing.
At the moment I’m finishing up my MScR final dissertation.

A student interfacing the Xth Sense with SuperCollider.

I’ll update the blog with some new features that have been added to the XS software, more news about the project going live, etc.
For now, I thought I would whet your appetite a bit with a brief audiovisual report from the last, amazing concert in Berlin at LEAP, the Lab for Electronic Arts and Performance.
The event was organized by the LEAP staff, with guest curator Joao Pais, in collaboration with Create Digital Music and eContact! journal for electroacoustic music research (for which I curated a forthcoming issue on Biotechnological Music Performance).

I taught an XS workshop with a wonderfully enthusiastic class, and played a couple of pieces for the XS alongside Peter Kirn, who used a Galvanic Skin Response (GSR) device, and Claudia Robles Angel, who controlled videos and sounds with her brainwaves.
We also had the chance to attend a talk by colleague and friend Pedro Lopes. An informative and clear conversation on Human-Computer Interaction with a consistent focus on biotechnologies.
Quite an exciting week!

A set of pictures can be found on Flickr.
Below you can listen to the live recording of the three concerts.

Extended View Toolkit lecture

Thanks to the lovely team at the Bauhaus-Universität Weimar, we can view online all the talks presented at the recent Pd Convention 2011.
Below is my talk, “A Pd framework for the Xth Sense: enabling computers to sense human kinetic behaviour”.
Make sure to check the Convention’s Vimeo Channel; you will find a great deal of inspiring material, even if you are not into Pd.

Marco Donnarumma – Xth Sense, Biophysical Music from PdCon11 on Vimeo.

Preparing for the journey

From 20 to 24 October I visited Seoul, South Korea, to present the Xth Sense and the new model of biophysical music to the Asian academic community, within the framework of SICMF/KEAMS (the Seoul International Computer Music Festival and the Korean Electro-Acoustic Music Society). This engagement was endorsed by Creative Scotland, which awarded me a grant through their International Programme.
In addition, a related paper, “Xth sense: a biophysical framework for the application of muscle sounds to interactive music performance”, will soon be published in the computer music journal Emille, Vol. 9.

Welcome to Seoul

It was my first journey to an Asian country, although I had previously participated in events in Japan, China and India via streamed interventions. Needless to say, the cultural impact was rather strong; the deep differences in society, customs and language are as fascinating as they are difficult to grasp.

KEAMS poster at the Seoul National University

The event included night concerts hosted by the SICMF and paper presentations organized by KEAMS at the Seoul National University, College of Music. The conference was formally introduced by Richard Dudas, an American composer who has contributed greatly to informing today’s computer music practice, and an exquisite person too.

Richard Dudas’ opening speech

One of the first talks illustrated a site-specific installation project called Tertullia by Nicolas Varchausky. The project consisted of a multi-channel, open-air system of loudspeakers located at the Mirogoj Cemetery in Zagreb, Croatia. The work approached “the space as an integrated geography, designing a path for the audience that formally resembled a radio (metaphorically transforming the space into both a receiver and a transmitter), and creating a real time system for the sound composition.”

The Tertullia Project by Nicolas Varchausky et al.

The night concerts took place at the Jayu Theater, a majestic multi-purpose building provided with state-of-the-art halls and equipment. The organization was neat and succeeded in making the night really enjoyable by creating a cozy and friendly mood.
Although all the pieces were executed by excellent performers, only the piece Black Crane for Geomungo positively impressed me, both sonically and structurally. Personally, I believe the other works were surely very good, but perhaps they lacked that touch of excellence and innovation that I always seek in a piece.

The Jayu Theater, Seoul

SICMF Concerts | Black Crane for Geomungo by Donoung Lee

SICMF Concerts | Vpiano by Young Mee Lymn

The second day of the conference offered an exciting schedule. First off were Jieun Oh and Ge Wang from the CCRMA at Stanford University. They presented Converge, “a crowd-sourced audio-visual composition that brings together contextual autobiographical moments, captured and submitted by a group of participants, into a coherent narrative.”

Converge by Jieun Oh and Ge Wang, CCRMA Stanford University

Next, we were delighted by a rather entertaining talk by Clarence Barlow, University of California Santa Barbara. His presentation focused on a detailed excursus through about 25 years of his personal experience with algorithmic composition… mindblowing.

Algorithmic composition by Clarence Barlow, University of California Santa Barbara

Finally, I introduced the Xth Sense technology through a heterogeneous talk which outlined the conceptual and technical developments, along with some aesthetic considerations on the specific sonic forms that can be obtained with the XS biophysical system. I also had time to perform a short demo.

Xth Sense by Marco Donnarumma, SLE University of Edinburgh

Last but not least, Albert Chao from the University of Buffalo talked about his iMeasure custom headphones. From his website: “iM Headphones is embedded with a set of sonar sensors that take spatial readings of the physical environment and “play” particular sounds in relation to the data.”

iMeasure by Albert Chao, University of Buffalo

Unfortunately, I could stay in Seoul for only a few days, but the conference and the festival were well worth it. A special thanks goes to Richard Dudas and Ko Pyoung Ryang, who organized a lovely and rich event, not to mention the amazing hospitality and the great and inspiring conversations we had. I hope to come back soon, perhaps to perform at SICMF 2012.
Here is a little glimpse of Seoul (click to enlarge).

Entering the Royal Residence, Seoul, South Korea