Res, a matter.


While waiting for the components I need to build the bio-sensing device, I started coding a synthesis model for human muscle sounds.
Below I illustrate the basis of an exploratory study of the acoustics of biological body sounds. I describe the development of an audio synthesis model for muscle sounds, which offers a deeper understanding of the body as sound matter and provides the ground for further experiments in composition.

I’m working on a machine running a Linux OS, and the audio synthesis model is being implemented using the free, open source framework known as Pure Data, a graphical programming language developed by Miller Puckette. I’ve been using Pd for four or five years on other personal projects, so I feel fairly comfortable working in this environment.

Firstly, it is necessary to understand the physical phenomenon that makes muscles vibrate and sound.
As illustrated in a previous blog post, muscles are formed by several layers of contractile filaments, each of which can stretch and slide past the others, vibrating at a very low frequency. However, audio recordings of muscle sounds show that their sonic response is not constant, but rather resembles a low, deep rumble. This may happen because the filaments do not vibrate in unison: each undergoes slightly different forces depending on its position and dimensions, and therefore each filament vibrates at a slightly different frequency.
Eventually each partial (defined here as the single frequency of a specific filament) is summed with the others living in the same muscle fiber, which in turn is summed with the other muscle fibers living in the surrounding fascicle.
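This summation of many slightly detuned partials can be sketched numerically. The snippet below is an illustrative model, not the Pd patch itself: the number of filaments, the ~20Hz base frequency and the random detuning range are assumptions chosen for demonstration.

```python
import math
import random

def muscle_rumble(n_filaments=40, base_freq=20.0, detune=0.3,
                  sr=1000, dur=1.0, seed=1):
    """Sum n_filaments sinusoids, each at a randomly detuned frequency
    and phase, to mimic filaments vibrating slightly out of unison."""
    rng = random.Random(seed)
    partials = [(base_freq * (1.0 + rng.uniform(-detune, detune)),
                 rng.uniform(0, 2 * math.pi)) for _ in range(n_filaments)]
    samples = []
    for n in range(int(sr * dur)):
        t = n / sr
        s = sum(math.sin(2 * math.pi * f * t + ph) for f, ph in partials)
        samples.append(s / n_filaments)  # normalise to keep |sample| <= 1
    return samples

sig = muscle_rumble()
```

Because the partials drift in and out of phase, the summed signal beats irregularly instead of producing a steady tone, which is consistent with the rumble heard in the recordings.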

Such a phenomenon creates a subtle, complex audio spectrum which can be synthesised using the Discrete Summation Formula (DSF). DSF allows the synthesis of harmonic and inharmonic, band-limited or unlimited spectra, and can be controlled by an index, which seems to fit perfectly the requirements of this acoustic experiment.
Currently I’m studying the awesome book “Designing Sound” by Andy Farnell (MIT Press), and the synthesis model I’m coding is based on the Discrete Summation Synthesis explained in “Technique”, chapter 17, pp. 254–256.
I started by implementing the basic synthesis formula to create the fundamental sidebands, then applied DSF to a noise generator to add some light distortion to the sinewaves by means of complex spectra formed by tiny, slow noise bursts. Filter banks were applied to each spectrum in order to refine the sound and emphasise specific harmonics.
Eventually the two layers were summed and passed through another filter bank and a tanh function, which adds a more natural character to the resulting impulse.
Below is a screenshot of the Pd abstraction.

Muscle sound synthesis model | 2010
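For readers without Pd at hand, the band-limited discrete summation formula at the heart of the patch (Moorer's closed form) can be sketched in plain Python. The parameter names, carrier and sideband frequencies, and the final soft clip are my own illustrative choices, not a transcription of the abstraction above.

```python
import math

def dsf(theta, beta, a, n):
    """Moorer's band-limited discrete summation formula: the closed form
    of sum_{k=0}^{n-1} a**k * sin(theta + k*beta), for 0 <= a < 1."""
    num = (math.sin(theta) - a * math.sin(theta - beta)
           - a**n * (math.sin(theta + n * beta)
                     - a * math.sin(theta + (n - 1) * beta)))
    den = 1.0 + a * a - 2.0 * a * math.cos(beta)
    return num / den

# One second at sr=1000: a 20 Hz carrier with 8 sidebands spaced 7 Hz
# apart; the index a=0.6 controls how fast the sidebands decay.
sr, f0, fm = 1000, 20.0, 7.0
wave = [dsf(2 * math.pi * f0 * t / sr, 2 * math.pi * fm * t / sr, 0.6, 8)
        for t in range(sr)]
shaped = [math.tanh(2 * x) for x in wave]  # soft clip, standing in for the final tanh stage
```

Raising the index brings in more sideband energy, which is what makes DSF convenient as a single expressive control for the spectrum's richness.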

Next, by applying several audio processing techniques to this model, I hope to become more familiar with its physical composition and to develop the basis for a composition and design methodology for muscle sounds, which will then be employed with the actual sounds of my body.

According to Wikipedia “Muscle (from Latin musculus, diminutive of mus “mouse”) is the contractile tissue of animals… Muscle cells contain contractile filaments that move past each other and change the size of the cell. They are classified as skeletal, cardiac, or smooth muscles. Their function is to produce force and cause motion. Muscles can cause either locomotion of the organism itself or movement of internal organs.”


Top down view of skeletal muscle | Photo montage created by Raul654 | Wikimedia

What is not mentioned here is that the force produced by muscles causes sound too.
When filaments move and stretch they actually vibrate, and therefore create sound. Muscle sounds have frequencies between 5Hz and 45Hz, so they can be captured with a highly sensitive microphone.
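Isolating that 5Hz–45Hz band from a captured signal is a straightforward band-pass filtering job. The sketch below uses a standard biquad with the well-known RBJ cookbook coefficients, centred at 25Hz; the sample rate, centre frequency and Q are assumptions for illustration, not values from any particular sensor design.

```python
import math

def bandpass_coeffs(f0, q, sr):
    """RBJ biquad band-pass coefficients (constant 0 dB peak gain)."""
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = (alpha / a0, 0.0, -alpha / a0)          # feed-forward
    a = (-2 * math.cos(w0) / a0, (1 - alpha) / a0)  # feedback (a1, a2)
    return b, a

def biquad(x, b, a):
    """Filter the sequence x in direct form I."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        y.append(yn)
        x1, x2, y1, y2 = xn, x1, yn, y1
    return y

sr = 1000
b, a = bandpass_coeffs(f0=25.0, q=0.7, sr=sr)
# A 25 Hz "muscle" component buried in 300 Hz interference
x = [math.sin(2 * math.pi * 25 * n / sr) +
     math.sin(2 * math.pi * 300 * n / sr) for n in range(2 * sr)]
y = biquad(x, b, a)
```

After filtering, the in-band 25Hz component passes nearly unchanged while the 300Hz interference is strongly attenuated.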
A sample of sounding muscle is available here, thanks to the Open Prosthetics research group (whose banner reads “Prosthetics shouldn’t cost an arm and a leg”).

In fact, muscle sounds have mostly been studied in the field of biomedical engineering, as alternative control data for low-cost, open source prosthetics applications, and it is thanks to these studies that I could gather precious technical information and learn about several designs of muscle sound sensor devices.
Most notably, the work of Jorge Silva at Prism Lab has been fundamental to my research. His MASc thesis represents a comprehensive resource of information and technical insights.
The device designed at Prism Lab is a coupled microphone-accelerometer sensor capable of capturing the audio signal of muscle sounds. It also eliminates noise and interference in order to precisely capture voluntary muscle contraction data.
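Silva and Chau's paper (listed in the references below) details the actual sensor; the underlying principle of a noise-reference channel can be illustrated with a toy example. Here the second channel is assumed to pick up only the interference, scaled by an unknown gain, which we estimate by least squares and subtract. This is a crude stand-in for the dynamic noise-reduction scheme used in the real device, not a reproduction of it.

```python
import math
import random

def cancel_noise(mic, ref):
    """Estimate the gain relating the noise-only reference channel to the
    interference in the microphone channel, then subtract it."""
    g = sum(m * r for m, r in zip(mic, ref)) / sum(r * r for r in ref)
    return [m - g * r for m, r in zip(mic, ref)]

sr = 1000
rng = random.Random(0)
muscle = [math.sin(2 * math.pi * 20 * n / sr) for n in range(sr)]  # wanted signal
noise = [rng.uniform(-1, 1) for _ in range(sr)]                    # interference
mic = [s + 0.8 * v for s, v in zip(muscle, noise)]  # contaminated channel
clean = cancel_noise(mic, noise)
```

Subtracting the scaled reference removes most of the interference while leaving the muscle component intact, which is why a mechanically coupled but acoustically insulated second sensor is so effective.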
This technology is called mechanomyography (MMG), and it represents the basis of my further musical and performative experimentations with the sounding (human) body.

I just ordered components to start implementing the sensor device, so hopefully in a week or two I’ll be able to hear my body resonating.

I just read a paper by my friend and talented musician Jaime E. Oliver, “The Silent Drum Controller: A New Percussive Gestural Interface”, freely available here.
Jaime is a graduate music researcher at CRCA – the Center for Research in Computing and the Arts, California.

The paper describes the development of the Silent Drum Controller and contains some insightful points, so I took a few notes while reading; they might sound obvious, but they helped me better understand the ways in which I could use the biological sounds of the human body in a musical performance:

  • “…dissociate gesture with sound”
  • control variables obtained through a new gestural interface should be richer than what we can achieve with classical acoustic instruments
  • acoustic instruments have direct correspondence between gesture and sound (richer vocabulary?)
  • capture gestures to be utilized immediately sonically or store them for future sound transformations
  • drive the score and produce changes in mapping
  • discrete events can be obtained while controlling continuous events
  • determine latency toleration boundaries in variable contexts and mapping systems
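The note about obtaining discrete events while controlling continuous ones maps directly onto what a muscle contraction envelope affords: the continuous level can drive a synthesis parameter while a threshold with hysteresis emits discrete triggers. A minimal sketch, where the threshold values are arbitrary:

```python
def triggers(envelope, on=0.6, off=0.4):
    """Emit a discrete event index each time the continuous envelope
    rises above `on`; re-arm only after it falls back below `off`."""
    events, armed = [], True
    for i, v in enumerate(envelope):
        if armed and v > on:
            events.append(i)
            armed = False
        elif not armed and v < off:
            armed = True
    return events

env = [0.1, 0.3, 0.7, 0.8, 0.5, 0.3, 0.9, 0.2]
print(triggers(env))  # -> [2, 6]
```

The hysteresis gap between the two thresholds prevents a single sustained contraction from firing a burst of spurious events.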

Take a few minutes to watch these excerpts from a live performance (performer and composer: Jaime E. Oliver). Personally, I think it is one of the best-sounding gestural interfaces at the moment.

I need to be aware of the current state of research in music performance and the analysis of biological signals of the human body. Unlike gestural control of music, which is widely explored by most of the worldwide sonic research centre programs and international conferences, biological control of music seems to live in an overlooked niche.
I’ve collected several papers concentrating on three main areas of study: gestural control of music, procedural audio in game sound design, and biomedical engineering.
I found this intensive reading very useful; it is interesting to notice how many studies investigate a common topic in different contexts.
Below is a non-exhaustive list of references.

Gestural control of music:

  • Arfib, D., J. M. Couturier, L. Kessous, and V. Verfaille, ‘Strategies of mapping between gesture data and synthesis model parameters using perceptual spaces’, Organised Sound, 7 (2003).
  • Doornbusch, Paul, ‘Composers’ views on mapping in algorithmic composition’, Organised Sound, 7 (2003).
  • Fels, Sidney, Ashley Gadd, and Axel Mulder, ‘Mapping transparency through metaphor: towards more expressive musical instruments’, Organised Sound, 7 (2003).
  • Goudeseune, Camille, ‘Interpolated mappings for musical instruments’, Organised Sound, 7 (2003).
  • Hunt, Andy, and Marcelo M. Wanderley, ‘Mapping performer parameters to synthesis engines’, Organised Sound, 7 (2003).
  • Levitin, Daniel J., Stephen McAdams, and Robert L. Adams, ‘Control parameters for musical instruments: a foundation for new mappings of gesture to sound’, Organised Sound, 7 (2003).
  • Myatt, Tony, ‘Strategies for interaction in construction 3’, Organised Sound, 7 (2003).
  • Ng, Kia, ‘Sensing and mapping for interactive performance’, Organised Sound, 7 (2003).
  • Wessel, David, and Matthew Wright, ‘Problems and Prospects for Intimate Musical Control of Computers’, Computer Music Journal, 26 (2002), 11-22.

Procedural audio:

Biomedical engineering:

  • Alves, Natasha, Ervin Sejdić, Bhupinder Sahota, and Tom Chau, ‘The effect of accelerometer location on the classification of single-site forearm mechanomyograms’, Biomedical Engineering Online, 9 (2010), 23 <doi:10.1186/1475-925X-9-23>.
  • Esposito, Fabio, Emiliano Cè, Susanna Rampichini, and Arsenio Veicsteinas, ‘Acute passive stretching in a previously fatigued muscle: Electrical and mechanical response during tetanic stimulation’, Journal of Sports Sciences, 27 (2009), 1347 <doi:10.1080/02640410903165093>.
  • Garcia, Marco Antonio Cavalcanti, Cláudia Domingues Vargas, Márcio Nogueira de Souza, Luis Aureliano Imbiriba, and Líliam Fernandes de Oliveira, ‘Evaluation of arm dominance by using the mechanomyographic signal’, Journal of Motor Behavior, 40 (2008), 83-89 <doi:10.3200/JMBR.40.2.83-89>.
  • Silva, J., and T. Chau, ‘Coupled microphone-accelerometer sensor pair for dynamic noise reduction in MMG signal recording’, Electronics Letters, 39 (2003), 1496-1498 <doi:10.1049/el:20031003>.
  • Watakabe, M., Y. Itoh, K. Mita, and K. Akataki, ‘Technical aspects of mechanomyography recording with piezoelectric contact sensor’, Medical & Biological Engineering & Computing, 36 (1998), 557-561 <doi:10.1007/BF02524423>.

My personal inquiry in the fields of live media performance, sound art and new media art has been feeding a growing fascination with responsive computing systems.
I have applied such systems to solo audiovisual performances, interactive dance/theatre pieces, participatory concerts and networked autonomous artefacts (all of which can be viewed on-line by visiting my portfolio).
The focus of those investigations has always been the augmentation of the human body and its environment – both actual and virtual – in order to explore relational models between humans and machines (digital interaction?).
Given an honest and genuine passion for sound and music, what fascinates me most, though, is the performance of temporary auditive environments (i.e. concerts?); personally speaking, the creation of “music” – defined here as any sonic scape – is one of the most powerful and immediate expressive and cognitive experiences both performer and audience can perceive.

Earlier experiments with augmented musical instruments started in 2007-2008 with the coding of a free, open source software (based on Pure Data) which digitally expands classical fretted musical instruments and enables performers to control real-time audio and visual processing simply by playing their favourite instrument. Following a perhaps natural evolution, about one year ago a general interest in the biological body in live media performance began, but only in the last four or five months have I started to slowly tighten my approach and develop a methodology.

Currently I’m researching the sounding (human) body, attempting to understand how sounds of the biological body can be used musically and how to design sounds of a performer’s body.
If you haven’t already, please read the research brief to get a glimpse of the present stage of the investigation.