Res, a matter.

I’m experimenting with a prototype bio-sensing device (described here) I built with the precious help of Dorkbot ALBA at the Edinburgh Hacklab. Although at the moment the sensor is quite rough, it is already capable of capturing muscle sounds.
As the use of free, open source tools is an integral part of the methodology of this research, I’m currently using the awesome Ardour2 to monitor, record and analyse muscle sounds.

MMG analysis in Ardour2 | 2010

The first problem I encountered was the conductive capability of human skin: when the metal case of the microphone directly touches the skin, the body becomes a huge antenna, attracting all the electromagnetic waves floating around. I’m now trying different ways of shielding the sensor, and this is helping me better understand how muscle vibrations are transmitted outside the body and through the air.
Below you can listen to a couple of short clips of my heartbeat and voluntary arm contractions recorded with the MMG sensor. The audio files are raw, i.e. no processing has been applied to the original sound; their frequency content is extremely low and might not be immediately audible, so you may need to turn up the volume of your speakers or wear a pair of headphones.

Heartbeat

Voluntary arm contractions
(they sound like a low rumble, or distant thunder; the clicks are caused by the crackling of my bones)

At this early stage of development the MMG sensor’s capabilities seem quite promising; I can’t wait to plug the sensor into Pure Data and start trying some real-time processing.

As mentioned in a previous post, I’ve been learning about the use of mechanical myography (MMG) in prosthetics and in generic Biomedical Engineering applications, in order to understand how to efficiently capture muscle sounds in real time and use them in a musical performance.
I’ve been studying the work of Jorge Silva at Prism Lab, and I had initially planned to implement the same bio-sensing device design he created and published.
Here’s a schematic diagram of the sensor:

cmasp schematic diagram | Jorge Silva at Prism Lab

Muscle sonic resonance is transmitted to the skin, which in turn vibrates, exciting an air chamber. These vibrations are then captured by a condenser microphone, adequately shielded from noise and interference by means of a silicone case. The microphone is coupled with an accelerometer, which is used to filter out vibrations caused by the global motion of the arm, so that muscle signals can be properly isolated.
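The coupling works because arm motion shows up in both channels, while muscle sound appears (mostly) only at the microphone, so the accelerometer can serve as a noise reference. As a minimal sketch of this kind of dynamic noise reduction, here is an LMS adaptive canceller in Python; this is my own illustration of the general technique, not the actual algorithm used in the cmasp:

```python
import numpy as np

def lms_cancel(mic, accel, taps=32, mu=0.01):
    """Adaptive noise cancellation: filter the accelerometer signal
    (motion-noise reference) so it matches the motion noise seen by
    the microphone, then subtract it, leaving the muscle sound."""
    w = np.zeros(taps)                 # adaptive FIR weights
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = accel[n - taps:n][::-1]    # most recent reference samples
        noise_est = w @ x              # motion noise estimate
        e = mic[n] - noise_est         # error = cleaned muscle signal
        w += 2 * mu * e * x            # LMS weight update
        out[n] = e
    return out
```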
Silva’s design has proved functional in several tests and academic reports; however, I soon realised its development would take a fair amount of time and, above all, would require some particular skills which are not part of my current background. Thus I ordered all the components I needed, but then decided to implement a simpler version of the sensor, to become familiar with the technology more easily. Once I know how to master muscle sounds and what I can obtain from them, I could imagine a more complex sensor or implement the one above (if I really need to).

Thanks to the awesome folks at Dorkbot ALBA (a special thanks to Tom H and Martin and the Edinburgh Hacklab) I could build a first rough circuit and embedded MMG sensor. Below are photos of the circuit and my studio setup, including a Focusrite Saffire Pro40 external sound card which I use to convert and amplify the signal.

MMG circuit early implementation | Marco Donnarumma @ Dorkbot ALBA | 2010

MMG circuit and studio setup | Marco Donnarumma | 2010

The flat round object appearing out of focus in the first pic is the electret condenser microphone, actually the same one used by Silva in his cmasp design, and it’s the core of the device. The microphone’s sensitivity ranges from 20Hz up to 16kHz, so it is capable of capturing the low resonance frequencies of muscles (between 5Hz and 40/45Hz). It is important to note, though, that the largest part of the muscle sound spectrum seems to sit below 20Hz, i.e. muscles produce infrasounds which the human ear cannot perceive but the human body can physically experience. This issue really interests me and could possibly affect my research, but for now I won’t elaborate on it.
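Meanwhile, isolating the nominal 5–45Hz muscle band from a raw recording is easy to prototype offline; a minimal sketch using SciPy (sample rate and filter order are my own assumptions, not part of the sensor design):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def isolate_mmg(raw, fs=48000, lo=5.0, hi=45.0, order=4):
    """Band-pass a raw recording to the nominal muscle-sound band.
    Zero-phase filtering avoids smearing the slow waveform."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, raw)
```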

Now that I have a first sensor prototype I’m ready to plug my body in and listen!

Some thoughts…

A late night mind map | 2010

While waiting for the components I need to build the bio-sensing device, I started coding a synthesis model for human muscle sounds.
Below I illustrate the basis of an exploratory study of the acoustics of biological body sounds. I describe the development of an audio synthesis model for muscle sounds which offers a deeper understanding of the sound matter of the body and provides the ground for further experimentation in composition.

I’m working with a machine running a Linux OS, and the audio synthesis model is being implemented using the free, open source framework known as Pure Data, a graphical programming language developed by Miller Puckette. I’ve been using Pd for 4/5 years on other personal projects, so I feel fairly comfortable working in this environment.

Firstly, it is necessary to understand the physical phenomenon which makes muscles vibrate and sound.
As illustrated in a previous blog post, muscles are formed by several layers of contractile filaments. Each of them can stretch and move past the others, vibrating at a very low frequency. However, audio recordings of muscle sounds show that their sonic response is not a constant tone, but rather resembles a low, deep rumble. This might happen because the filaments do not vibrate in unison: each one undergoes slightly different forces depending on its position and dimension, and therefore vibrates at a slightly different frequency.
Eventually each partial (defined here as the single frequency of a specific filament) is summed with the others living in the same muscle fibre, which in turn is summed with the other muscle fibres living in the surrounding fascicle.
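A naive additive sketch shows why such a sum reads as a rumble rather than a tone: slightly detuned partials beat against each other and the sum never settles. All the numbers below are arbitrary assumptions for illustration, not measurements of real muscle:

```python
import numpy as np

fs, dur = 8000, 4.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

# 100 "filaments": sines detuned around a 25 Hz centre, each with a
# random phase. The partials drift in and out of phase, so the sum
# rumbles instead of holding a steady pitch.
freqs = 25.0 + rng.normal(0.0, 3.0, size=100)    # Hz, arbitrary spread
phases = rng.uniform(0.0, 2 * np.pi, size=100)
rumble = np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)
rumble /= np.abs(rumble).max()                   # normalise to [-1, 1]
```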

This phenomenon creates a subtle, complex audio spectrum which can be synthesised using a Discrete Summation Formula (DSF). DSF synthesis allows the generation of harmonic and inharmonic, band-limited or unlimited spectra, and can be controlled by an index, which seems to fit the requirements of this acoustic experiment perfectly.
Currently I’m studying the awesome book “Designing Sound” by Andy Farnell (MIT Press), and the synthesis model I’m coding is based on the Discrete Summation Synthesis explained in the Technique section, chapter 17, pp. 254-256.
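For reference, the closed form usually quoted for the band-limited DSF (after Moorer) sums N partials spaced by β around a carrier phase θ, each scaled by the index a; this is my transcription of the standard formula, so check it against Farnell’s chapter:

```latex
\sum_{k=0}^{N-1} a^{k}\,\sin(\theta + k\beta)
  = \frac{\sin\theta - a\,\sin(\theta - \beta)
          - a^{N}\bigl[\sin(\theta + N\beta) - a\,\sin(\theta + (N-1)\beta)\bigr]}
         {1 + a^{2} - 2a\cos\beta}
```

The index a sets how quickly the partials decay, so a single parameter sweeps the spectrum from a near-pure sinewave to a bright cluster of sidebands.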
I started by implementing the basic synthesis formula to create the fundamental sidebands; then I applied DSF to a noise generator to add some light distortion to the sinewaves, by means of complex spectra formed by tiny, slow noise bursts. Filter banks have been applied to each spectrum in order to refine the sound and emphasise specific harmonics.
Eventually the two layers have been summed and passed through another filter bank and a tanh function, which adds a more natural character to the resulting impulse.
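The actual model is a Pd patch (screenshot below), but the same signal chain can be sketched in a few lines of Python; every parameter here (fundamental, sideband spacing, index, band edges, burst rate) is a placeholder of my own choosing, not the patch’s real values:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs, dur = 8000, 4.0
t = np.arange(int(fs * dur)) / fs

def dsf(theta, beta, a, N):
    """Band-limited discrete summation formula (after Moorer)."""
    num = (np.sin(theta) - a * np.sin(theta - beta)
           - a**N * (np.sin(theta + N * beta)
                     - a * np.sin(theta + (N - 1) * beta)))
    return num / (1 + a**2 - 2 * a * np.cos(beta))

# Layer 1: DSF sidebands around a low fundamental.
layer1 = dsf(2 * np.pi * 22.0 * t, 2 * np.pi * 7.0 * t, a=0.6, N=8)

# Layer 2: slow, quiet noise bursts to lightly roughen the sinewaves.
noise = np.random.default_rng(1).normal(0.0, 1.0, t.size)
sos_lp = butter(2, 30.0, btype="lowpass", fs=fs, output="sos")
layer2 = sosfilt(sos_lp, noise) * (0.5 + 0.5 * np.sin(2 * np.pi * 0.8 * t))

# Sum the layers, band-limit with a final filter stand-in,
# and saturate with tanh for a slightly more organic impulse.
mix = layer1 + 0.4 * layer2
sos_bp = butter(2, [5.0, 45.0], btype="bandpass", fs=fs, output="sos")
out = np.tanh(1.5 * sosfilt(sos_bp, mix))
```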
Below is a screenshot of the Pd abstraction.

Muscle sound synthesis model | 2010

Next, by applying several audio processing techniques to this model, I hope to become more familiar with its physical composition and to develop the basis for a methodology of composition and design of muscle sounds, which will then be employed with the actual sounds of my body.

According to Wikipedia “Muscle (from Latin musculus, diminutive of mus “mouse”) is the contractile tissue of animals… Muscle cells contain contractile filaments that move past each other and change the size of the cell. They are classified as skeletal, cardiac, or smooth muscles. Their function is to produce force and cause motion. Muscles can cause either locomotion of the organism itself or movement of internal organs.”


Top down view of skeletal muscle | Photo montage created by Raul654 | Wikimedia

What is not mentioned here is that the force produced by muscles causes sound too.
When filaments move and stretch they actually vibrate, and therefore create sound. Muscle sounds have a frequency between 5Hz and 45Hz, so they can be captured with a highly sensitive microphone.
A sample of sounding muscle is available here, thanks to the Open Prosthetics research group (whose banner reads “Prosthetics shouldn’t cost an arm and a leg”).

In fact, muscle sounds have mostly been studied in the field of Biomedical Engineering, as alternative control data for low-cost, open source prosthetics applications, and it’s thanks to these studies that I could gather precious technical information and learn about several designs of muscle sound sensor devices.
Most notably, the work of Jorge Silva at Prism Lab has been fundamental for my research. His MASc thesis represents a comprehensive resource of information and technical insights.
The device designed at Prism Lab is a coupled microphone-accelerometer sensor capable of capturing the audio signal of muscle sounds. It also eliminates noise and interference in order to precisely capture voluntary muscle contraction data.
This technology is called mechanical myography (MMG) and it represents the basis of my further musical and performative experimentations with the sounding (human) body.

I just ordered components to start implementing the sensor device, so hopefully in a week or two I’ll be able to hear my body resonating.

I just read a paper by my friend, the talented musician Jaime E Oliver, “The Silent Drum Controller: A New Percussive Gestural Interface”, freely available here.
Jaime is a graduate music researcher at CRCA – Center for Research in Computing and Arts, California.

The paper describes the development of the Silent Drum Controller and contains some insightful points, so I took a few notes while reading; they might sound obvious, but they helped me better understand in which ways I could use the biological sounds of the human body in a musical performance:

  • “…dissociate gesture with sound”
  • control variables obtained through a new gestural interface should be richer than what we can achieve with classical acoustic instruments
  • acoustic instruments have a direct correspondence between gesture and sound (richer vocabulary?)
  • capture gestures to be utilised immediately sonically, or store them for future sound transformations
  • drive the score and produce changes in mapping
  • discrete events can be obtained while controlling continuous events
  • determine latency tolerance boundaries in variable contexts and mapping systems

Take a few minutes to watch this excerpt from a live performance (performer and composer: Jaime E Oliver). Personally, I think it is one of the best-sounding gestural interfaces around at the moment.

I need to be aware of the current state of research in music performance and the analysis of the biological signals of the human body. Unlike gestural control of music, which is widely explored by sonic research centres and international conferences worldwide, biological control of music seems to live in an overlooked niche.
I’ve collected several papers concentrating on three main areas of study: gestural control of music, procedural audio in game sound design, and biomedical engineering.
I found this intensive reading very useful; it’s interesting to notice how many studies investigate a common topic in different contexts.
Below is a non-exhaustive list of references.

Gestural control of music:

  • Arfib, D., J. M. Couturier, L. Kessous, and V. Verfaille, ‘Strategies of mapping between gesture data and synthesis model parameters using perceptual spaces’, Organised Sound, 7 (2003).
  • Doornbusch, Paul, ‘Composers’ views on mapping in algorithmic composition’, Organised Sound, 7 (2003).
  • Fels, Sidney, Ashley Gadd, and Axel Mulder, ‘Mapping transparency through metaphor: towards more expressive musical instruments’, Organised Sound, 7 (2003).
  • Goudeseune, Camille, ‘Interpolated mappings for musical instruments’, Organised Sound, 7 (2003).
  • Hunt, Andy, and Marcelo M. Wanderley, ‘Mapping performer parameters to synthesis engines’, Organised Sound, 7 (2003).
  • Levitin, Daniel J., Stephen McAdams, and Robert L. Adams, ‘Control parameters for musical instruments: a foundation for new mappings of gesture to sound’, Organised Sound, 7 (2003).
  • Myatt, Tony, ‘Strategies for interaction in construction 3’, Organised Sound, 7 (2003).
  • Ng, Kia, ‘Sensing and mapping for interactive performance’, Organised Sound, 7 (2003).
  • Wessel, David, and Matthew Wright, ‘Problems and Prospects for Intimate Musical Control of Computers’, Computer Music Journal, 26 (2002), 11-22.

Procedural audio:

  • Farnell, Andy, Designing Sound (MIT Press, 2010).

Biomedical engineering:

  • Alves, Natasha, Ervin Sejdić, Bhupinder Sahota, and Tom Chau, ‘The effect of accelerometer location on the classification of single-site forearm mechanomyograms’, Biomedical Engineering Online, 9 (2010), 23 <doi:10.1186/1475-925X-9-23>.
  • Esposito, Fabio, Emiliano Cè, Susanna Rampichini, and Arsenio Veicsteinas, ‘Acute passive stretching in a previously fatigued muscle: Electrical and mechanical response during tetanic stimulation’, Journal of Sports Sciences, 27 (2009), 1347 <doi:10.1080/02640410903165093>.
  • Garcia, Marco Antonio Cavalcanti, Cláudia Domingues Vargas, Márcio Nogueira de Souza, Luis Aureliano Imbiriba, and Líliam Fernandes de Oliveira, ‘Evaluation of arm dominance by using the mechanomyographic signal’, Journal of Motor Behavior, 40 (2008), 83-89 <doi:10.3200/JMBR.40.2.83-89>.
  • Silva, J., and T. Chau, ‘Coupled microphone-accelerometer sensor pair for dynamic noise reduction in MMG signal recording’, Electronics Letters, 39 (2003), 1496-1498 <doi:10.1049/el:20031003>.
  • Watakabe, M., Y. Itoh, K. Mita, and K. Akataki, ‘Technical aspects of mechanomyography recording with piezoelectric contact sensor’, Medical & Biological Engineering & Computing, 36 (1998), 557-561 <doi:10.1007/BF02524423>.

My personal inquiry into the fields of live media performance, sound art and new media art has fed a growing fascination with responsive computing systems.
I have applied such systems to solo audiovisual performances, interactive dance/theatre pieces, participatory concerts and networked autonomous artefacts (all can be viewed online by visiting my portfolio).
The focus of those investigations has always been the augmentation of the human body and its environment – both actual and virtual – in order to explore models of relation between humans and machines (digital interaction?).
Given an honest and genuine passion for sound and music, what fascinates me most, though, is the performance of temporary auditory environments (i.e. concerts?); personally speaking, the creation of “music” – defined here as any sonic scape – is one of the most powerful and immediate expressive and cognitive experiences both performer and audience can perceive.

Earlier experiments with augmented musical instruments started in 2007-2008 with the coding of a free, open source software tool (based on Pure Data) which digitally expands classical fretted musical instruments and enables performers to control real-time audio and visual processing simply by playing their favourite instrument. Following a perhaps natural evolution, a general interest in the biological body in live media performance began about one year ago, but only in the last 4/5 months have I started to slowly tighten my approach and develop a methodology.

Currently I’m researching the sounding (human) body, attempting to understand how the sounds of the biological body can be used musically, and how to design the sounds of a performer’s body.
If you haven’t already, please read the research brief to get a glimpse of the present stage of the investigation.