Res, a matter.

In an attempt to map out the origins of the different perspectives on the technological body in the performing arts, I began compiling different timelines, looking at music technology, performance art, sociology of the body, systems theory, natural sciences, and history. Now I’m in the process of grasping the core theories in those disciplines and situating them in their historical context. The next step is to find links and connections that, by sweeping across disciplines, will reveal the cultural, technological and social modalities by which the body, in the technological performance of art, has come to be formed in the way we know it today. It is a complex piece of research and it is proving difficult to keep focused on the subject matter, yet it is incredibly fascinating to gain a detailed map of this topic, and I’m confident it has the potential to be an interesting contribution both to the community and to my own practice.
I’ll be updating the blog with further notes and thoughts as they emerge. Click the image below to enlarge.

[Image: timeline of body theory in biotechnological music performance]

[Image: bio-theory, data, sensors]

This picture pretty much sums it all up. Given that motion-gesture researchers mostly draw on cognitive science to understand sound-making motion, there is room to explore a politically engaged perspective that straightforwardly addresses the (player’s) body.

The body is a political entity and body-based music performance produces political messages. A body is political because it touches the aspirations and the “open wounds” of everyone’s cultural and emotional background. Think not only of gender, identity, self-acceptance, who I am, who I want to be, who I pretend I am, but also of everyday life as part of a larger society of bodies. We are bodies that interact with and affect other bodies countless times a day, and together we put forth our world. What can a body do? What can multiple bodies do? How are our bodies modified by the instruments we use? And how does this bear on the design of new musical instruments?

The politics of physical musical performance might not be as evident as in performance art, yet it is enacted with or without the player’s will. This is valid for the overall practice of music performance, and is particularly true when it comes to works that augment the body with technology. The question is: how can we make the design of physical musical instruments and of performance with (bio)technologies politically significant?

An interesting perspective is taking shape by drawing upon the notions below. The summer will be the time for me to write a related literature review and thus combine this knowledge into an original form.

effort, a body whose physicality defines and mediates with the musical instrument (Ryan, 1991);
emergence, a body always in process (Massumi, 2002);
enactment, a body that brings forth a world by knowing (Maturana and Varela, 1992);
biomediation, a body reconfigured by media practices that condition its biology (Thacker, 2003).

References:
Maturana, H. R., and Varela, F. J. 1992. The tree of knowledge. Shambhala.
Massumi, B. 2002. Parables of the virtual: Movement, affect, sensation. Durham N.C: Duke University Press.
Ryan, J. 1991. Some remarks on musical instrument design at STEIM. Contemporary Music Review, 6(1):3-17.
Thacker, E. 2003. What is Biomedia? Configurations, 11(1):47-49.


At STEIM!

It’s a surprisingly hot day in Amsterdam. STEIM, the unique Studio for Electro-Instrumental Music, is invaded by sounds coming from all the studios while the artists get ready for the big night. Chilled atmosphere, an Indian dinner, and doors open! If you missed this Summer Party at STEIM, you missed a lot. Great music and awesome performers packed into one of the most important places for sound and music performance of the past 30 years.


Mazen Kerbaj solo


Laetitia Sonami & James Fei with custom instruments and a CrackleSynth

I was flattered to be invited to perform a 30-minute concert of biophysical music, including my latest piece Ominous, as I had the chance to share the stage with Laetitia Sonami & James Fei, Mazen Kerbaj, Sybren Danz, and Daniel Schorno. It was a great experience, a delicious auditive journey, and an emotional leap into the feeling of being part of such a community. So thanks to everyone: the staff at STEIM, Esther, Jon, Marije, Kees, Nico, Lucas, and the audience that spread around STEIM and warmly welcomed us all.


Yours truly performing Ominous. Picture by Marije Baalman.

Some sketches and notes I am currently working on, together with Baptiste Caramiaux and Atau Tanaka, towards the creation of a corporeal musical space generated by biological gesture, that is, by the complex behaviour of different biosignals during performance.

General questions: why use different biosignals in a multimodal musical instrument? How can machine learning algorithms be meaningfully deployed for improvised music?

Machine learning (ML) methods in music are generally used to recognise pre-determined gestures. The risk in this case is that a performer ends up being concerned with performing gestures in a way that allows the computer to understand them. On the other hand, ML methods could possibly be used to represent an emergent space of interaction that would allow the performer freedom of expression. This space shall not be defined beforehand, but rather created and altered dynamically according to any gesture.

An unsupervised ML method shall represent, in real time, the complementary information of the EMG/MMG signals. The output shall be rendered as an axis of the space of interaction (x, y, …). As the relations between the two biosignals change, the number of axes and their orientation change as well. The space of interaction is constantly redefined and reconstructed. The aim is to perform the body as an expressive process, and to leave to the machine the task of representing this process by extracting information that could not be understood without the aid of computing devices.
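To make the idea concrete, here is a rough, hypothetical sketch (not our actual implementation) of how an online dimensionality-reduction method could turn streaming EMG/MMG feature windows into the axes of such a space; the window size, feature count, and the use of incremental PCA are all assumptions for illustration.

```python
# Hypothetical sketch: deriving interaction-space axes from streaming
# EMG/MMG features with incremental PCA. Assumes feature windows are
# provided by some acquisition code (not shown here).
import numpy as np
from sklearn.decomposition import IncrementalPCA

WINDOW = 32          # number of feature frames per update (assumption)
N_FEATURES = 8       # e.g. 4 EMG features + 4 MMG features per frame (assumption)

ipca = IncrementalPCA(n_components=2)  # two axes to start with (x, y)

def update_space(feature_window):
    """Refit the axes on the latest window and project it.

    feature_window: (WINDOW, N_FEATURES) array of combined EMG/MMG features.
    Returns the window's trajectory in the current interaction space.
    """
    ipca.partial_fit(feature_window)       # the axes drift as the gesture evolves
    return ipca.transform(feature_window)  # positions along the current axes

if __name__ == "__main__":
    # Random data stands in for real biosignal features.
    fake_window = np.random.rand(WINDOW, N_FEATURES)
    trajectory = update_space(fake_window)
    print(trajectory.shape)  # (32, 2)
```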

Each kind of gesture shall be represented within a specific interaction space. The performer would then control sonic forms by moving within, and outside of, the space. The ways in which the performer travels through the interaction space shall be defined by a continuous function derived from the gesture-sound interaction.
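Purely as an illustration, and not the function we will actually use, a continuous mapping of this kind could be sketched with radial basis function interpolation over a handful of anchor points in the space; the anchor positions and sound parameters below are invented.

```python
# Hypothetical sketch: a continuous mapping from a position in a 2-D
# interaction space to sound parameters, built from a few anchor points.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Anchor positions and the (invented) sound parameters associated with them,
# e.g. a filter cutoff in Hz and a grain density.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
parameters = np.array([[200.0, 5.0], [800.0, 20.0], [400.0, 60.0], [1200.0, 90.0]])

mapping = RBFInterpolator(positions, parameters, kernel="thin_plate_spline")

# Any point the performer reaches, inside or outside the anchors,
# yields a smoothly interpolated (or extrapolated) set of parameters.
print(mapping(np.array([[0.4, 0.7], [1.5, -0.2]])))
```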

Following our previous study on biophysical and spatial sensing, we narrowed down the focus of our research and constrained a new study to multimodal interaction (MMI) with two biosignals only. Namely, we focused on the mechanomyogram (MMG) and the electromyogram (EMG) from arm muscle gesture. Although there exists research in New Interfaces for Musical Expression (NIME) focused on each of these signals, to the best of our knowledge the combination of the two has not been investigated in this field. The following questions initiated this study: in which ways can EMG/MMG be analysed for complementary information about gestural input? How can a musician control the two biosignals separately? What are the implications at the level of the sensorimotor system? And how can novices learn to skilfully control the two modalities?

Sensorimotor system information flow. EMG and MMG are produced at different stages of a movement.

Our interest in conducting a combined study of the two biosignals lies in the fact that they are both produced by muscle contraction, yet they report different aspects of the muscle articulation. The EMG is a series of electrical neural impulses sent by the brain to cause muscle contraction. The MMG is a sound produced by the oscillation of the muscle tissue when it extends and contracts.
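By way of illustration only, two simple per-window features can hint at how the two signals report different aspects of the same contraction: an amplitude envelope for the EMG and band-limited energy for the MMG. The sample rate and band limits below are assumptions, not the values used in our study.

```python
# Hypothetical sketch of per-window features reflecting the two aspects:
# an RMS envelope for the EMG (electrical activity) and low-frequency band
# energy for the MMG (mechanical vibration of the muscle tissue).
import numpy as np
from scipy.signal import butter, sosfilt

def emg_envelope(emg_window):
    """Root-mean-square amplitude of an EMG window."""
    return float(np.sqrt(np.mean(np.square(emg_window))))

def mmg_band_energy(mmg_window, fs=2000.0, low=5.0, high=100.0):
    """Energy of the MMG window in an assumed muscle-vibration band."""
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, mmg_window)
    return float(np.sum(filtered ** 2) / len(filtered))

if __name__ == "__main__":
    t = np.arange(0, 0.1, 1 / 2000.0)
    fake_emg = np.random.randn(t.size) * 0.2      # stand-in for an EMG burst
    fake_mmg = np.sin(2 * np.pi * 25 * t) * 0.5   # stand-in for muscle vibration
    print(emg_envelope(fake_emg), mmg_band_energy(fake_mmg))
```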

In order to learn about the differences and similarities of EMG and MMG, we looked at the related biomedical literature and found comparative EMG/MMG studies in the fields of sensorimotor system and kinesis research. We selected aspects of gestural exercise where complementary MMG/EMG information exists. For the interested reader, further details will soon be available in our related NIME paper.

Atau Tanaka testing our new musical interface using electrical and biophysical body signals.

We used those aspects of muscle contraction to design a small gesture vocabulary to be performed by non-expert players with a bi-modal, biosignal-based interface created for this study. The interface was built by combining two existing separate sensors, namely the Biomuse for the EMG and the Xth Sense for the MMG signal. Arm bands with EMG and MMG sensors were placed on the forearm. One MMG channel and one EMG channel were acquired from each user’s dominant arm over the wrist flexors, a muscle group close to the elbow joint that controls finger movement.

In order to train the users with our new musical interface we designed three sound-producing gestures. The EMG and MMG are independently translated into sound, so that every time a user performs one of the gestures, one or the other sound, or a combination of the two, is heard. Users were asked to perform the gestures twice: the first time without any instruction, and the second time with a detailed explanation of how the gesture was supposed to be executed. At the end of the experiment, we interviewed the users about the difficulty of controlling the two sounds. We studied the players’ ability to articulate the two modalities through video and sound recordings and analysed their interviews.
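Reduced to its bare bones, the mapping logic might look like the hypothetical sketch below; the envelope normalisation and the two synthesis targets are placeholders rather than the actual sound design.

```python
# Hypothetical sketch: each biosignal drives its own sound, so a gesture
# can excite one sound, the other, or both at once. Envelope values are
# assumed to be normalised to [0, 1] upstream; the targets are placeholders.

def map_to_sounds(emg_env: float, mmg_env: float) -> dict:
    """Translate the two envelopes into two independent sound controls."""
    return {
        "sound_a_gain": emg_env,   # the EMG alone drives sound A
        "sound_b_gain": mmg_env,   # the MMG alone drives sound B
    }

# Example control frames; which gesture excites which signal more strongly
# is precisely what the study asked the players to discover.
print(map_to_sounds(emg_env=0.8, mmg_env=0.1))
print(map_to_sounds(emg_env=0.2, mmg_env=0.7))
```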

The results of the study showed that: 1) MMG/EMG provide a richer bandwidth of information on gestural input; 2) MMG/EMG complementary information varies with contraction force, speed, and angle; 3) novices can rapidly learn how to independently control the two modalities.
These findings motivate a further exploration of an MMI approach to biosignal-based musical instruments. Prospective work should look at the further development of our bi-modal interface by designing discrete and continuous multimodal mappings based on EMG/MMG complementary information. Moreover, we should look at custom machine learning methods that could be useful in representing and classifying gesture via combined muscle biodata, and at data-mining techniques that could help identify meaningful relations among the biodata. Such information could then be used to imagine new applications for bodily musical performance, such as an instrument that is aware of its player’s level of expertise.
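As a pointer to what such methods might look like, here is a minimal, hypothetical sketch that classifies gestures from combined EMG/MMG feature vectors with an off-the-shelf classifier and stand-in data; a method tailored to muscle biodata remains future work.

```python
# Hypothetical sketch: classifying gestures from combined EMG/MMG feature
# vectors with a standard pipeline. Random data replaces real recordings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((90, 6))        # 90 windows x (3 EMG + 3 MMG features), assumed
y = np.repeat([0, 1, 2], 30)   # three gesture classes from the vocabulary

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y)
print(model.predict(X[:5]))    # predicted gesture classes for the first windows
```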

Pictures by Baptiste Caramiaux, and Alessandro Altavilla.

I kicked off my PhD studies by extending the Xth Sense, a new biophysical musical instrument based on a muscle sound sensor I developed, with spatial and inertial sensors, namely whole-body motion capture (mocap) and an accelerometer. The aim: to understand the potential of combined sensor data towards the multimodal control of new musical interfaces.

An excerpt from our related paper:
“In the field of New Interfaces for Musical Expression (NIME), sensor-based systems capture gesture in live musical performance. In contrast with studio-based music composition, NIME (which began as a workshop at CHI 2001) focuses on real-time performance. Early examples of interactive musical instrument performance that pre-date the NIME conference include the work of Michel Waisvisz and his instrument, The Hands, a set of augmented gloves which captures data from accelerometers, buttons, mercury orientation sensors, and ultrasound distance sensors (developed at STEIM). The use of multiple sensors on one instrument points to complementary modes of interaction with an instrument. However these NIME instruments have for the most part not been developed or studied explicitly from a multimodal interaction perspective.”

Together with my team colleagues Baptiste Caramiaux and Atau Tanaka, I designed a study of bodily musical gesture. We recorded and observed the sound-gesture vocabulary of my performance entitled Music for Flesh II.
The data recorded from the gestures were two mechanomyogram (MMG, or muscle sound) signals, mocap data, one 3D vector from the accelerometer, and the 3D positions and quaternions of my limbs.
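Since the modalities arrive at very different rates, a first practical step is to resample everything onto a common time base before comparing streams; the sketch below illustrates the idea with assumed sample rates and random stand-in data.

```python
# Hypothetical sketch: resampling the different streams onto one time base
# so they can be compared frame by frame. Sample rates are assumptions.
import numpy as np

def to_common_timebase(signal, fs, duration, fs_out=100.0):
    """Linearly resample a 1-D stream onto a shared clock (fs_out Hz)."""
    t_in = np.arange(signal.size) / fs
    t_out = np.arange(0.0, duration, 1.0 / fs_out)
    return np.interp(t_out, t_in, signal)

duration = 5.0                                    # seconds of performance
mmg = np.random.randn(int(4000 * duration))       # e.g. audio-rate muscle sound
accel_mag = np.random.randn(int(200 * duration))  # e.g. 200 Hz accelerometer
mocap_z = np.random.randn(int(120 * duration))    # e.g. 120 Hz mocap coordinate

aligned = np.column_stack([
    to_common_timebase(np.abs(mmg), 4000, duration),   # use the MMG envelope
    to_common_timebase(accel_mag, 200, duration),
    to_common_timebase(mocap_z, 120, duration),
])
print(aligned.shape)   # (500, 3): 100 Hz frames x 3 modalities
```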

My virtual body performing Music for Flesh II, as seen by the motion capture system.

We created a public multimodal dataset and performed a qualitative analysis of the data. By using a custom patch to visualise and compare the different types of data (pictured below), we were able to observe complementarity of different forms in the information collected. We noted three types of complementarity: synchronicity, coupling, and correlation. You can find the details of our findings in the related Work in Progress paper, published for the recent TEI conference on Tangible, Embedded, and Embodied Interaction in Barcelona, Spain.

The software we developed to visualise the multimodal dataset.

To summarise, our findings show that different types of sensor data do have complementary aspects; these might depend on the type and sensitivity of the sensor, and on the complexity of the gesture. Besides, what might seem a single gesture can be segmented into sections that present different kinds of complementarity among the different modalities. This points to the possibility for a performer to engage with richer control of musical interfaces by training on a multimodal control of different types of sensing devices; that is, the gestural and biophysical control of musical interfaces based on a combined analysis of different sensor data.
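One simple, hypothetical way to quantify the correlation and synchronicity we observed qualitatively is a lagged cross-correlation between two aligned streams, for example an MMG envelope and the accelerometer magnitude; the data below are placeholders.

```python
# Hypothetical sketch: lagged cross-correlation between two aligned streams
# as a crude indicator of coupling between modalities. Placeholder data.
import numpy as np

def lagged_correlation(a, b, max_lag=50):
    """Normalised cross-correlation of two equal-length streams at each lag."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return {lag: float(np.mean(a[max(0, lag):len(a) + min(0, lag)] *
                               b[max(0, -lag):len(b) + min(0, -lag)]))
            for lag in range(-max_lag, max_lag + 1)}

mmg_env = np.random.rand(500)   # stand-in for an MMG envelope at 100 Hz
accel = np.random.rand(500)     # stand-in for accelerometer magnitude

corr = lagged_correlation(mmg_env, accel)
best_lag = max(corr, key=corr.get)
print(best_lag, corr[best_lag])  # lag (in frames) of strongest coupling
```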

Ben Pimlott building, our new home at Goldsmiths, London.

Picture by carolineld.blogspot.com

Here we are. Last post is dated September 2012.
A lot has happened since then, and here is a brief update.

I’ve just moved to London and started working with a newly formed research team headed by Prof. Atau Tanaka (US/UK), including, as of now, Baptiste Caramiaux, Alessandro Altavilla, and myself. We are investigating a broad notion of gesture (musical, physical, and biological) and music performance. The outcome is the creation of new musical instruments that bring together biosensing technologies, spatial sensors, and custom machine learning methods for a corporeal performance of sounds and music. The instruments should be for musicians and non-musicians alike, wearable, and redistributable.

The project is called Meta-Gesture Music (MGM) and it is funded by the European Research Council (ERC). Our team is based at the Computing department, Goldsmiths, University of London, and is part of EAVI, a larger team of investigators, musicians, artists, and coders dedicated to Embodied Audio-Visual Interaction.

The Xth Sense project continues and I will keep posting about new developments. Meanwhile, the MGM research is a great chance to draw from the experience of the XS, and evolve the theoretical and technical framework developed so far.

Updates on our work will follow regularly, so feel free to come back and see what we are up to.

I just realised I never posted here the interviews and press articles released recently (there’s been a bit of hype!), so here’s a brief chronological summary with links. You’ll find me thinking/talking about the Xth Sense, technology and creativity, the sonic body, open practice, and live music. Hope you enjoy the read:


Here we are!
At the kind invitation of Yann Seznec, today I am performing and officially launching the Xth Sense at the Scotland Music Hack Day… rather thrilling!
Other live acts and presentations include Matthew Herbert, FOUND, Yann and Patrick Bergel.
Now the staff are setting up, and people are starting to arrive.
Check the schedule at the Hack Day homepage.

So far I have been the only author and performer of biophysical music using the Xth Sense.
I am, therefore, rather excited to share this video below. It is the premiere of a piece for the Xth Sense, trombone and double bass by Japanese composer Shiori Usui with the Red Note Ensemble, Scotland’s contemporary music ensemble.
The concert was organised on the occasion of Inventor Composer Coaction, “a novel project designed to facilitate collaboration between composers and developers of bespoke digital or electronic instruments, for the creation of new music. It will take place at the Department of Music, University of Edinburgh, during the first half of 2012.”