Why use only the body to control musical performance?
What about dancers in musical performance?
Who used it already? Why? How?
In which ways would it differ from a traditional musical performance?
To what extent will the resulting music be different and authentic?
Will it actually be different?
How does the body relate to space?
How does space relate to sound?
How does sound relate to the body?
How does perception relate to the body?
Who has already given answers to these questions?
Have these questions been answered yet?
If so, have all aspects been properly covered?
Is there a gap? In which area?
Picture CC licensed by Veronique Debord.
A little update regarding the hardware.
I’ve been thinking about the final design of the sensor. Although maximum portability of the device is essential, I’d rather not work on a wireless solution for now; I believe it’s worth spending more time improving both the hardware and software prototypes until they reach a reliable degree of efficiency. So what I need to implement now is a tiny circuit that can be embedded in a small box to be placed on the performer’s arm or carried in a pocket.
Fortunately the circuit is fairly small, so it shouldn’t be too difficult to embed the sensor in a suitable box.
Due to some deadlines I can’t develop that circuit at the moment, but I created a Velcro bracelet for the sensor and implemented a fairly satisfying polyurethane shield, along with a soft plastic air chamber for the microphone. The shield avoids the 60Hz electrical interference that occurs when the skin directly touches the microphone, and filters out external noise; the air chamber allows the sonic vibration of the muscles to be amplified before reaching the microphone, besides filtering out some frequencies.
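Should any hum survive the shield, a digital notch filter could also be applied in software. Here’s a minimal sketch of the idea in Python with SciPy; the sample rate, Q and test signal are my own assumptions for illustration, not measured values:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_hum(signal, fs, hum_freq=60.0, q=30.0):
    """Attenuate residual mains hum around hum_freq (Hz)."""
    # Design a narrow IIR notch centred on the hum frequency
    b, a = iirnotch(hum_freq, q, fs)
    # Zero-phase filtering avoids smearing the low-frequency MMG content
    return filtfilt(b, a, signal)

# Example: clean up a 1-second test signal sampled at 44.1 kHz
fs = 44100
t = np.arange(fs) / fs
mmg = np.sin(2 * np.pi * 25 * t)                 # stand-in for a 25 Hz muscle tone
noisy = mmg + 0.5 * np.sin(2 * np.pi * 60 * t)   # added 60 Hz hum
clean = remove_hum(noisy, fs)
```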
Finally, I also included a 1/4″ mono chassis jack socket, which replaces the awful output cables I have used so far.
Although this setup is much simpler than the one developed by Silva, the results obtained during a first testing session were quite promising. That’s why I opted for the simplest implementation; as long as the sensor satisfies the project requirements, there’s no need to go for something more complex. Besides, the simplicity of the sensor implementation is an integral part of the final project.
Michel Waisvisz was a Dutch composer, performer and inventor of electronic musical instruments. In this video he performs using The Hands, one of the earliest instruments capable of capturing analog sensor data and converting it into MIDI values.
This documentation has been extracted from the VHS archive at STEIM. It’s 1984.
What impresses me most is the unmediated interaction between gesture and sound that Waisvisz achieved with this instrument. Such a degree of immediacy between visual, theatrical performance and sound is, in my opinion, one of the fundamental elements to take into account when working on new electronic musical instruments or interfaces.
Michel Waisvisz – Hyper Instruments Part 3 from STEIM Amsterdam on Vimeo.
From the STEIM VHS archive.
The hardware prototype has almost reached a good degree of stability and efficiency, so I’m now dedicating much more time to the development of the MMG signal processing system in Pure Data. How do I want the body to sound?
First, I coded a real time granulator and some simple delay lines to be applied to the MMG audio signal captured from the body.
Then I added a rough channel strip to manage up to 5 different processing chains.
Excuse the messy patching style, but this is just an early exploration…
However, once I started playing around with it, I soon realized that the original MMG signal coming from the hardware needed to be “cleaned up” before being actually useful. That’s why I added a subpatch dedicated to filtering out unneeded frequencies and enhancing the meaningful ones. At the same time I thought about a threshold process, which enables Pure Data to understand whether the incoming signal is actually generated by a voluntary muscle contraction, or whether it comes from involuntary movement or background noise. This way the gesture is far more closely interrelated with the processed sound.
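For anyone curious how such a threshold might work outside Pd, here is a minimal sketch of the idea in Python; the parameter values are illustrative assumptions, not the ones in my patch. An envelope follower tracks the rectified signal, and only moments where the envelope exceeds a threshold are treated as voluntary contractions:

```python
import numpy as np

def voluntary_gate(signal, fs, threshold=0.05, attack=0.01, release=0.1):
    """Return a 0/1 mask marking samples judged to be voluntary contractions.

    An envelope follower with separate attack/release smoothing tracks the
    rectified signal; samples whose envelope exceeds `threshold` open the gate.
    """
    a_coef = np.exp(-1.0 / (attack * fs))   # fast rise when signal grows
    r_coef = np.exp(-1.0 / (release * fs))  # slow fall when signal decays
    env = 0.0
    mask = np.zeros(len(signal))
    for i, x in enumerate(np.abs(signal)):
        coef = a_coef if x > env else r_coef
        env = coef * env + (1.0 - coef) * x
        mask[i] = 1.0 if env > threshold else 0.0
    return mask

# Usage: silence everything that is not a voluntary contraction
# gated = signal * voluntary_gate(signal, 44100)
```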
Finally, I needed quick and reliable visual feedback to help me analyse the MMG signal in real time. This subpatch includes an FFT spectrum analysis module and a simple real-time spectrogram borrowed from the PDMTL lib.
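A comparable analysis can also be done offline on recorded clips; a quick sketch with SciPy and matplotlib, where the file name is hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Load a recorded MMG clip (hypothetical file name, assumed mono)
fs, mmg = wavfile.read("mmg_arm_contraction.wav")

# Long windows give good frequency resolution in the 5-45 Hz region
f, t, Sxx = spectrogram(mmg, fs, nperseg=8192, noverlap=4096)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12))
plt.ylim(0, 100)               # muscle energy sits well below 100 Hz
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```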
I’m going to experiment with this system, and when it reaches a good degree of expressiveness I’ll record some live audio and post it here.
Here’s another source of inspiration. Atau Tanaka is Chair of Digital Media at Newcastle University.
BioMuse is probably one of the most notable projects making use of biosignals for musical performance.
However, to my knowledge, Tanaka’s BioMuse uses EMG, which provides a very different set of control data than MMG, the technology I based my research on.
Atau Tanaka – new Biomuse demo from STEIM Amsterdam on Vimeo.
The new Biomuse.
http://www.ncl.ac.uk/culturelab/people/profile/atau.tanaka
STEIM Micro Jamboree 2008
Session 3: On Mapping – Techniques and Future Possibilities
Dec. 9, 2008
I’m experimenting with a prototypical bio-sensing device (described here) I built with the precious help of Dorkbot ALBA at the Edinburgh Hacklab. Although the sensor is quite rough at the moment, it is already capable of capturing muscle sounds.
As the use of free, open source tools is an integral part of the methodology of this research, I’m currently using the awesome Ardour2 to monitor, record and analyse muscle sounds.
The first problem I encountered was the conductive capability of human skin; when the metal case of the microphone directly touches the skin, the body becomes a huge antenna attracting all the electromagnetic waves floating around. I’m now trying different ways of shielding the sensor, and this is helping me better understand how muscle vibrations are transmitted outside the body and through the air.
Below you can listen to a couple of short clips of my heartbeat and voluntary arm contractions recorded with the MMG sensor. The audio files are raw, i.e. no processing has been applied to the original sound; their frequency content is extremely low and might not be immediately audible, so you will possibly need to turn up the volume of your speakers or wear a pair of headphones.
Heartbeat
Voluntary arm contractions
(they sound like a low rumble, or distant thunder; the clicks are caused by the crackling of my bones)
At this early development stage the MMG sensor’s capabilities seem quite promising; I can’t wait to plug the sensor into Pure Data and start trying some real-time processing.
As mentioned in a previous post, I’ve been learning about the use of mechanical myography (MMG) in prosthetics and general Biomedical Engineering applications in order to understand how to efficiently capture muscle sounds in real time and use them in a musical performance.
I’ve been studying the work of Jorge Silva at Prism Lab, and I had initially planned to implement the same design of bio-sensing device he created and publicly released.
Here’s a schematic diagram of the sensor:
Muscle sonic resonance is transmitted to the skin, which in turn vibrates, exciting an air chamber. These vibrations are then captured by a condenser microphone, adequately shielded from noise and interference by means of a silicone case. The microphone is coupled with an accelerometer, which is used to filter out vibrations caused by global motion of the arm, so that muscle signals can be properly isolated.
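I haven’t built this part, but using the accelerometer as a noise reference is a classic adaptive-noise-cancellation problem. A minimal LMS sketch in Python, where all parameter values are assumptions rather than figures from Silva’s design:

```python
import numpy as np

def lms_cancel(mmg, accel, taps=32, mu=0.01):
    """Subtract the accelerometer-correlated component (global arm motion)
    from the MMG signal using an LMS adaptive filter.

    `mu` (step size) must be small enough for the accelerometer's signal
    power, otherwise the filter diverges.
    """
    w = np.zeros(taps)                  # adaptive filter weights
    out = np.zeros(len(mmg))
    for n in range(taps, len(mmg)):
        x = accel[n - taps:n][::-1]     # most recent accelerometer samples
        y = np.dot(w, x)                # estimate of the motion artifact
        e = mmg[n] - y                  # residual: ideally pure muscle sound
        w += mu * e * x                 # LMS weight update
        out[n] = e
    return out
```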
This design has been proved effective through several tests and academic reports; however, I soon realized its development would take a fairly good amount of time and, above all, it would require some particular skills which are not part of my current background. Thus I ordered all the components I needed, but then decided to implement a simpler version of the sensor to get familiar with the technology more easily. Once I know how to master muscle sounds and what I can obtain from them, I believe I could imagine a more complex sensor or implement the one above (if I really needed to).
Thanks to the awesome folks at Dorkbot ALBA (a special thanks to Tom H and Martin and the Edinburgh Hacklab) I was able to build a first rough circuit and embedded MMG sensor. Below are photos of the circuit and my studio setup, including a Focusrite Saffire Pro40 external sound card which I use to convert and amplify the signal.
The flat round object appearing out of focus in the first pic is the electret condenser microphone, actually the same one used by Silva in his CMASP design, and it’s the core of the device. The microphone’s sensitivity ranges from 20Hz up to 16kHz, thus it is capable of capturing the low resonance frequencies of muscles (between 5Hz and 40/45Hz). It is important to note, though, that the biggest part of the muscle sound spectrum seems to sit below 20Hz, i.e. muscles produce infrasound which the human ear cannot perceive but the human body can physically experience. This issue really interests me and could possibly affect my research, but for now I won’t elaborate on it.
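Since the useful energy sits roughly between 5Hz and 45Hz, one straightforward software step is to isolate that band before any further processing. A minimal sketch, with the cutoffs taken from the figures above and the filter order an arbitrary choice:

```python
from scipy.signal import butter, sosfiltfilt

def muscle_band(signal, fs, lo=5.0, hi=45.0):
    """Keep only the 5-45 Hz band where muscle resonance lives."""
    # 4th-order Butterworth band-pass, applied forwards and backwards
    # (zero phase) so the transients are not smeared in time
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)
```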
Now that I have a first sensor prototype, I’m ready to plug my body in and listen!
Some thoughts…
While waiting for the components I need to build the bio-sensing device, I started coding a synthesis model for human muscle sounds.
I illustrate below the basis of an exploratory study of the acoustics of biological body sounds. I describe the development of an audio synthesis model for muscle sounds, which offers a deeper understanding of the body’s sound matter and provides the ground for further experimentation in composition.
I’m working with a machine running Linux, and the audio synthesis model is being implemented using the free, open source framework known as Pure Data, a graphical programming language developed by Miller Puckette. I’ve been using Pd for 4/5 years for other personal projects, thus I feel fairly comfortable working in this environment.
Firstly, it is necessary to understand the physical phenomenon which makes muscles vibrate and sound.
As illustrated in a previous blog post, muscles are formed by several layers of contractile filaments. Each of them can stretch and move past the others, vibrating at a very low frequency. However, audio recordings of muscle sounds show that their sonic response is not a constant tone, but something more similar to a low, deep rumble. This might happen because the filaments do not vibrate in unison with each other; rather, each one of them undergoes slightly different forces depending on its position and dimensions, and therefore vibrates at a slightly different frequency.
In the end, each partial (defined here as the single frequency of a specific filament) is summed with the others living in the same muscle fiber, which in turn is summed with the other muscle fibers living in the surrounding fascicle.
This phenomenon creates a subtle, complex audio spectrum which can be synthesised using the Discrete Summation Formula. DSF allows the synthesis of harmonic and inharmonic, band-limited or unlimited spectra, and can be controlled by an index, which seems to fit the requirements of this acoustic experiment perfectly.
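For reference, the classic closed form behind DSF (Moorer’s discrete summation formula) sums $N$ partials whose amplitudes decay geometrically with an index $a$:

$$\sum_{k=0}^{N-1} a^{k}\sin(\theta + k\beta) \;=\; \frac{\sin\theta \,-\, a\sin(\theta-\beta) \,-\, a^{N}\sin(\theta+N\beta) \,+\, a^{N+1}\sin\bigl(\theta+(N-1)\beta\bigr)}{1 \,-\, 2a\cos\beta \,+\, a^{2}}$$

Here $\theta$ is the running phase of the lowest partial, $\beta$ the phase increment separating adjacent partials, and the index $a$ (with $|a| < 1$) controls how quickly the spectrum rolls off.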
Currently I’m studying the awesome book “Designing Sound” by Andy Farnell (MIT Press) and the synthesis model I’m coding is based on the Discrete Summation Synthesis explained in Technique, chapter 17, pp. 254-256.
I started implementing the basic synthesis formula to create the fundamental sidebands, then I applied DSF to a noise generator to add some light distortion to the sinewaves by means of complex spectra formed by tiny, slow noise bursts. Filter banks have been applied to each spectrum in order to refine the sound and emphasise specific harmonics.
Finally, the two layers have been summed and passed through another filter bank and a tanh function, which adds a more natural characteristic to the resulting impulse.
Below is a screenshot of the Pd abstraction.
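For readers who don’t run Pd, here is a rough Python sketch of the same chain; it’s only an approximation under my own assumptions (the frequencies, index and noise level are invented, and the filter banks are omitted), not a transcription of the patch:

```python
import numpy as np

def dsf(n_samples, fs, f0=12.0, spacing=3.0, index=0.8, n_partials=8):
    """Discrete Summation Formula: n_partials sines spaced `spacing` Hz
    apart starting at f0, with amplitudes decaying geometrically by `index`."""
    t = np.arange(n_samples) / fs
    theta = 2 * np.pi * f0 * t        # running phase of the lowest partial
    beta = 2 * np.pi * spacing * t    # phase increment between partials
    a, N = index, n_partials
    num = (np.sin(theta) - a * np.sin(theta - beta)
           - a**N * (np.sin(theta + N * beta)
                     - a * np.sin(theta + (N - 1) * beta)))
    den = 1 - 2 * a * np.cos(beta) + a**2
    return num / den

fs = 44100
sines = dsf(fs, fs)                 # one second of low, closely spaced sidebands
noise = np.random.randn(fs) * 0.05  # light noise layer standing in for the bursts
mix = np.tanh(sines + noise)        # tanh rounds the sum into a softer impulse
mix /= np.abs(mix).max()            # normalise before listening
```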
Next, by applying several audio processing techniques to this model I hope to become more familiar with its physical composition and to develop the basis for a composition and design methodology for muscle sounds, which will then be employed with the actual sounds of my body.
According to Wikipedia “Muscle (from Latin musculus, diminutive of mus “mouse”) is the contractile tissue of animals… Muscle cells contain contractile filaments that move past each other and change the size of the cell. They are classified as skeletal, cardiac, or smooth muscles. Their function is to produce force and cause motion. Muscles can cause either locomotion of the organism itself or movement of internal organs.”
What is not mentioned here is that the force produced by muscles causes sound too.
When filaments move and stretch they actually vibrate, therefore they create sound. Muscle sounds have a frequency between 5Hz and 45Hz, thus they can be captured with a highly sensitive microphone.
A sample of sounding muscle is available here, thanks to the Open Prosthetics research group (whose banner reads “Prosthetics shouldn’t cost an arm and a leg”).
In fact, muscle sounds have mostly been studied in the field of Biomedical Engineering as alternative control data for low-cost, open source prosthetics applications, and it’s thanks to these studies that I was able to gather precious technical information and learn about several designs of muscle sound sensor devices.
Most notably, the work of Jorge Silva at Prism Lab has been fundamental to my research. His MASc thesis represents a comprehensive resource of information and technical insights.
The device designed at Prism Lab is a coupled microphone-accelerometer sensor capable of capturing the audio signal of muscle sounds. It also eliminates noise and interference in order to precisely capture voluntary muscle contraction data.
This technology is called mechanical myography (MMG), and it represents the basis of my further musical and performative experimentation with the sounding (human) body.
I just ordered components to start implementing the sensor device, so hopefully in a week or two I’ll be able to hear my body resonating.