Before the holiday break I wanted to mark a milestone in the inquiry, so together with my supervisor Martin Parker I agreed to host a short presentation reserved for the research staff of our departments (Sound Design and Digital Composition).
I had been setting up the Sound Lab at Alison House since the morning, and everything worked fine.
Even though some of the researchers could not make it, I was happy to see Martin, Owen Green and Sean Williams.
Although the piece I performed was more a first presentation than a proper concert for me, it definitely caught the listeners' attention and earned some good feedback; what I was most hoping for, however, was constructive criticism that would let me look at the present stage of the project from a different viewpoint.
In fact I did receive several pieces of advice, which can be roughly summarized as follows:
I fully agreed with these criticisms, and I realized I could have prepared the presentation much better. Still, it was important to listen to my colleagues' feedback, and that night I went back home and worked until late to improve the piece for the forthcoming concert.
One night was not enough to address all the issues raised after the presentation, but I was able to experiment further with silence and subtle processing effects, envelope generators (which in the end were not used) and a MIDI controller. The results seemed very good.
The day after, I worked a couple more hours, then went to Alison House to set up the equipment for the concert together with my colleagues Matthew and Marcin, who also performed that night. This time I prepared everything professionally, anxious as I was to present the revised piece.
The setup consisted of the MMG sensor prototype, a Focusrite Saffire Pro40, a Behringer BCF2000 MIDI controller, a Dell machine running a customized Ubuntu Lucid Lynx with a real-time kernel, and the Pure Data-based software framework I'm developing.
Audience feedback was very good, and what seemed to appeal most to the listeners was the authentic, clean and natural responsiveness of the system, along with an evocative coupling of sound and gesture. The concerts were supposed to be recorded, but sadly they were not.
Although some harmony issues remained, I was fairly satisfied with the outcome of the performance. During the winter break I plan to improve the prototype, possibly making it portable, and to refine the software, coding better control data mapping and fixing the ever-present bugs.
Why use only the body to control musical performance?
What about dancers in musical performance?
Who has used it already? Why? How?
In which ways would it be different from a traditional musical performance?
To what extent will the resulting music be different and authentic?
Will it actually be different?
How does the body relate to space?
How does space relate to sound?
How does sound relate to the body?
How does perception relate to the body?
Who has already given answers to these questions?
Have these questions been answered yet?
If so, have all aspects been properly covered?
Is there a gap? In which area?
Picture CC licensed by Veronique Debord.
Recently I've been dedicating a good amount of time to the software framework in Pure Data. After some early DSP experiments and the improvements to the MMG sensor, I had quite a clear idea of how to proceed.
The software implementation actually started some time before the MMG research, as a fork of C::NTR::L, a free, interactive environment for live media performance based on score following and pitch recognition, which I developed and publicly released under the GPL license last year.
When I started this investigation I decided to pick up from the point where I had left off last year, so as to take advantage of previous experience, methods and ideas.
Click to enlarge.
The graphic layout was designed using the free, open source software Inkscape and GIMP.
The present interface consists of: a workspace in which the user can dynamically load, connect and remove several audio processing modules (top); a sidebar for switching among 8 different workspaces (top right); some empty space reserved for utility modules, such as a timebase and monitoring modules (middle); a channel strip to control each workspace's volume and send amount (bottom); and a square area used to load diverse panels, such as the routing panel you can see in the image (mid to bottom right). Modules and panels are dynamic, meaning they can be moved and swapped with a click, for fast and efficient prototyping.
So far, several audio processing modules have been implemented:
Another interesting process I was able to implement is a calibration system; it enables the performer to calibrate software parameters according to the different intensities of the contractions of each finger, the whole hand or the forearm (so far the MMG sensor has been tested in performance only on the forearm).
This process is proving extremely useful, as it allows the performer to customize the responsiveness of the hardware/software framework and to generate up to 5 different control data streams by contracting each finger, the hand or the whole forearm.
The calibration code is a little rough, but it already works. I believe exploring this method further could unveil exciting prospects.
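To give an idea of the logic involved (outside of Pure Data), here is a minimal Python sketch of this kind of per-gesture calibration; the gesture names, intensity values and scaling are purely illustrative assumptions, not the actual Pd implementation.

```python
# Illustrative sketch of a per-gesture calibration routine (not the actual Pd patch).
# Assumption: an envelope follower already turns the MMG signal into a stream of
# intensity values, one stream per performed gesture.

class Calibrator:
    def __init__(self):
        # Per-gesture intensity ranges learned during calibration.
        self.ranges = {}

    def calibrate(self, gesture, samples):
        """Record the minimum and maximum intensity observed for a gesture,
        e.g. 'index', 'hand' or 'forearm'."""
        self.ranges[gesture] = (min(samples), max(samples))

    def control(self, gesture, value):
        """Map a live intensity value to a normalized control value in [0, 1],
        using the range learned for that gesture."""
        lo, hi = self.ranges[gesture]
        if hi <= lo:
            return 0.0
        scaled = (value - lo) / (hi - lo)
        return max(0.0, min(1.0, scaled))


# Hypothetical usage: calibrate three contraction types, then map live input.
cal = Calibrator()
cal.calibrate("index",   [0.02, 0.05, 0.08])   # weakest contractions
cal.calibrate("hand",    [0.10, 0.22, 0.30])
cal.calibrate("forearm", [0.25, 0.55, 0.80])   # strongest contractions

print(cal.control("hand", 0.25))  # ~0.75, regardless of absolute signal level
```

The point of the sketch is simply that each body part gets its own intensity range, so a weak finger contraction and a strong forearm contraction can both drive a parameter over its full span.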
On 7 December I'm going to present the current state of the inquiry to the research staff of my department at Edinburgh University; I will perform a short piece using the software above and the MMG sensing device. On the 8th we have also arranged an informal concert at our school in Alison House, so I'm looking forward to testing the work done so far live.
A little update regarding the hardware.
I've been thinking about the final design of the sensor. Although maximum portability of the device is essential, I'd rather not work on a wireless solution for now; I believe it's worth spending more time improving both the hardware and the software prototype until they reach a reliable degree of efficiency. So what I need to implement now is a tiny circuit that can be embedded in a small box placed on the performer's arm or in a pocket.
Fortunately the circuit is fairly small, so it shouldn't be too difficult to embed the sensor in a suitable box.
Due to some deadlines I can't develop such a circuit at the moment, but I created a Velcro bracelet for the sensor and implemented a fairly satisfying polyurethane shield along with a soft plastic air chamber for the microphone. The shield prevents the 60Hz electrical interference that occurs when the skin directly touches the microphone and filters out external noise; the air chamber allows the sonic vibration of the muscles to be amplified before it reaches the microphone, besides filtering out some frequencies as well.
Finally, I also included a 1/4" mono chassis jack socket, which replaces the awful output cables I had used so far.
Although this setup is much simpler than the one developed by Silva, the results obtained during a first testing session were quite promising. That is why I opted for the simplest implementation: as long as the sensor satisfies the project requirements, there is no need for something more complex. Besides, the simplicity of the sensor implementation is an integral part of the final project.
Michel Waisvisz was a Dutch composer, performer and inventor of electronic musical instruments. In this video he performs using The Hands, one of the earliest instruments capable of capturing analog sensor data and converting it into MIDI values.
This documentation was extracted from the VHS archive at STEIM. The year is 1984.
What impresses me most is the unmediated interaction between gesture and sound that Waisvisz achieved with this instrument. Such a degree of immediacy between visual, theatrical performance and sound is, in my opinion, one of the fundamental elements to take into account when working on new electronic musical instruments or interfaces.
Michel Waisvisz – Hyper Instruments Part 3 from STEIM Amsterdam on Vimeo.
From the STEIM VHS archive
The hardware prototype has almost reached a good degree of stability and efficiency, so I’m now dedicating much more time to the development of the MMG signal processing system in Pure Data. How do I want the body to sound?
First, I coded a real time granulator and some simple delay lines to be applied to the MMG audio signal captured from the body.
Then I added a rough channel strip to manage up to 5 different processing chains.
Excuse the messy patching style, but this is just an early exploration…
Click to enlarge.
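For anyone unfamiliar with granular synthesis, the idea behind the granulator can be sketched offline in a few lines of Python; this is only a conceptual illustration with hypothetical parameters, not the real-time Pd patch:

```python
# Conceptual offline granulator (illustration only; the actual tool is a
# real-time Pure Data patch). Grains are windowed slices of the input buffer,
# overlap-added at randomized read positions.
import numpy as np

def granulate(signal, sr=44100, grain_ms=50, density=200, spread=0.2, seed=0):
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)              # smooth grain envelope
    out = np.zeros(len(signal) + grain_len)
    hop = sr // density                         # average distance between grains

    for start in range(0, len(signal) - grain_len, hop):
        # Pick a read position near the write position, with some random spread.
        offset = int(rng.uniform(-spread, spread) * sr)
        read = min(max(start + offset, 0), len(signal) - grain_len)
        out[start:start + grain_len] += signal[read:read + grain_len] * window

    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# Hypothetical usage on one second of noise standing in for the MMG signal:
mmg = np.random.default_rng(1).standard_normal(44100) * 0.1
grains = granulate(mmg)
```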
However, once I started playing around with it, I soon realized that the original MMG signal coming from the hardware needed to be "cleaned up" before becoming actually useful. That's why I added a subpatch dedicated to filtering out unneeded frequencies and enhancing the meaningful ones; at the same time I devised a threshold process, which enables Pure Data to determine whether the incoming signal is actually generated by a voluntary muscle contraction, or whether it comes from an involuntary movement or background noise. This way the gesture is far more closely interrelated with the processed sound.
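The threshold idea can be summarised outside of Pd roughly as follows; the constants in this Python sketch are made up for illustration and do not reflect the actual subpatch:

```python
# Sketch of the thresholding idea (constants are illustrative, not those used
# in the Pd subpatch): compute the RMS of each incoming block and let it
# through only when it rises clearly above the estimated noise floor.
import numpy as np

NOISE_FLOOR = 0.01   # estimated background level (hypothetical)
FACTOR = 3.0         # how far above the floor a voluntary contraction must rise

def is_voluntary(block, noise_floor=NOISE_FLOOR, factor=FACTOR):
    rms = np.sqrt(np.mean(block ** 2))
    return rms > noise_floor * factor

def gate(block):
    """Pass the block on to the processing chains only if it looks voluntary."""
    return block if is_voluntary(block) else np.zeros_like(block)
```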
Finally, I needed quick and reliable visual feedback to help me analyse the MMG signal in real time. This subpatch includes an FFT spectrum analysis module and a simple real-time spectrogram borrowed from the PDMTL library.
Click to enlarge.
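The same kind of visual check can be performed offline on a recorded clip, for instance with SciPy and Matplotlib; here is a minimal sketch, where the file name and FFT parameters are just hypothetical examples:

```python
# Offline equivalent of the visual feedback subpatch: plot the spectrum of a
# recorded MMG clip. File name and FFT parameters are only examples.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

sr, mmg = wavfile.read("arm_contractions.wav")       # hypothetical raw clip
mmg = mmg.astype(np.float64)
if mmg.ndim > 1:
    mmg = mmg.mean(axis=1)                           # mix down to mono

f, t, Sxx = spectrogram(mmg, fs=sr, nperseg=4096, noverlap=3072)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.ylim(0, 100)                                     # the MMG energy sits very low
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.title("MMG spectrogram")
plt.show()
```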
I'm going to experiment with this system, and when it reaches a good degree of expressiveness I'll record some live audio and post it here.
Here's another source of inspiration. Atau Tanaka is Chair of Digital Media at Newcastle University.
BioMuse is probably one of the most notable projects making use of biosignals for musical performance.
However, to my knowledge, Tanaka's BioMuse uses EMG, which provides a very different set of control data than MMG, the technology my research is based on.
Atau Tanaka – new Biomuse demo from STEIM Amsterdam on Vimeo.
The new Biomuse.
http://www.ncl.ac.uk/culturelab/people/profile/atau.tanaka
STEIM Micro Jamboree 2008
Session 3: On Mapping – Techniques and Future Possibilities
Dec. 9, 2008
I'm experimenting with a prototype bio-sensing device (described here) that I built with the invaluable help of Dorkbot ALBA at the Edinburgh Hacklab. Although the sensor is quite rough at the moment, it is already capable of capturing muscle sounds.
As the use of free, open source tools is an integral part of the methodology of this research, I'm currently using the awesome Ardour2 to monitor, record and analyse muscle sounds.
The first problem I encountered was the conductivity of human skin: when the metal case of the microphone directly touches the skin, the body becomes a huge antenna attracting all the electromagnetic waves floating around. I'm now trying different ways of shielding the sensor, and this is helping me better understand how muscle vibrations are transmitted outside the body and through the air.
Below you can listen to a couple of short clips of my heartbeat and voluntary arm contractions recorded with the MMG sensor. The audio files are raw, i.e. no processing has been applied to the original sound; their frequency content is extremely low and might not be immediately audible, so you may need to turn up the volume of your speakers or wear a pair of headphones.
Heartbeat
Voluntary arm contractions
(they sound like a low rumble, or distant thunder; the clicks are caused by my bones cracking)
At this early stage of development the MMG sensor's capabilities seem quite promising; I can't wait to plug it into Pure Data and start trying some real-time processing.
As mentioned in a previous post, I've been learning about the use of mechanical myography (MMG) in prosthetics and general biomedical engineering applications, in order to understand how to efficiently capture muscle sounds in real time and use them in musical performance.
I've been studying the work of Jorge Silva at Prism Lab, and I had initially planned to implement the same bio-sensing device design that he created and published openly.
Here’s a schematic diagram of the sensor:
The muscles' sonic resonance is transmitted to the skin, which in turn vibrates, exciting an air chamber. These vibrations are then captured by a condenser microphone, adequately shielded from noise and interference by means of a silicone case. The microphone is coupled with an accelerometer, which is used to filter out vibrations caused by global motion of the arm, so that the muscle signal can be properly isolated.
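One common, textbook way of realising this kind of reference-based cancellation in software is an adaptive LMS filter, which subtracts from the microphone whatever the accelerometer can predict; the Python sketch below only illustrates the general idea and is not Silva's actual implementation:

```python
# Illustration of the general idea behind motion-artifact rejection: use the
# accelerometer as a noise reference and adaptively subtract whatever part of
# the microphone signal it can predict (a basic LMS filter). This is a generic
# textbook technique, not Silva's design.
import numpy as np

def lms_cancel(mic, accel, taps=32, mu=0.01):
    """Return the microphone signal with the accelerometer-correlated
    (motion-related) component removed."""
    w = np.zeros(taps)                 # adaptive filter coefficients
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        ref = accel[n - taps:n][::-1]  # most recent accelerometer samples
        est = np.dot(w, ref)           # predicted motion artefact
        err = mic[n] - est             # what remains: the muscle signal
        w += 2 * mu * err * ref        # LMS coefficient update
        out[n] = err
    return out
```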
This design has been proven functional through several tests and academic reports; however, I soon realized its development would take a fair amount of time and, above all, would require some specific skills which are not part of my current background. Thus I ordered all the components I needed, but then decided to implement a simpler version of the sensor in order to become familiar with the technology more easily. Once I know how to master muscle sounds and what I can obtain from them, I believe I will be able to conceive a more complex sensor, or implement the one above (if I really need to).
Thanks to the awesome folks at Dorkbot ALBA (special thanks to Tom H, Martin and the Edinburgh Hacklab) I was able to build a first rough circuit and embedded MMG sensor. Below are photos of the circuit and my studio setup, including a Focusrite Saffire Pro40 external sound card which I use to convert and amplify the signal.
The flat, round object out of focus in the first picture is the electret condenser microphone, actually the same one used by Silva in his cmasp design, and it is the core of the device. The microphone's sensitivity ranges from 20Hz up to 16kHz, so it can capture the low resonance frequencies of the muscles (between 5Hz and 40/45Hz). It is important to note, though, that the largest part of the muscle sound spectrum seems to sit below 20Hz, i.e. muscles produce infrasound which the human ear cannot perceive but the human body can physically experience. This issue really interests me and could well affect my research, but for now I won't elaborate on it.
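As a side note, isolating that 5-45Hz region in software is straightforward; a quick offline check with SciPy could look like the sketch below, where the cutoff values come from the figures above and everything else (file name, filter order) is an arbitrary example:

```python
# Quick offline check: band-pass a raw MMG recording to the 5-45 Hz region
# mentioned above. Cutoffs come from the figures in the text; the file name
# and filter order are arbitrary examples.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

sr, raw = wavfile.read("mmg_raw.wav")              # hypothetical recording
raw = raw.astype(np.float64)
if raw.ndim > 1:
    raw = raw.mean(axis=1)                         # mix down to mono

# 4th-order Butterworth band-pass between 5 and 45 Hz.
sos = butter(4, [5, 45], btype="bandpass", fs=sr, output="sos")
muscle_band = sosfiltfilt(sos, raw)

wavfile.write("mmg_muscle_band.wav", sr, muscle_band.astype(np.float32))
```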
Now that I have a first sensor prototype, I'm ready to plug my body in and listen!
Some thoughts…