Res, a matter.

Setting up for the performance | Suoni Sonori, Milan, Italy, January 2011 | Courtesy Ugo Dalla Porta

Here are some critical key points that arose from a joint analysis by my supervisor, Martin Parker, and myself of the first public performance of Music for Flesh I:

  • to design intimacy;
  • to create physical spaces with broader body movements;
  • to change gesture patterns according to sound patterns;
  • to reach a broader frequency spectrum, engaging the audience’s listening more fully;
  • not to let a single sound scene die, but to take it somewhere, building more complex structures around it;
  • to be able to switch in real time between the two functions of the MMG signal (sonic source or control signal), while retaining the possibility of deploying both functions at the same time.
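The last point can be illustrated in code. Below is a minimal Python sketch (standing in for the Pure Data patch; all names and values are mine, not those of the actual software) of an envelope follower that derives a slow control signal from the raw MMG audio, while the audio itself remains available as a sonic source:

```python
import math

def envelope_follower(samples, smoothing=0.99):
    """Rectify the MMG audio and low-pass it into a slow control signal.
    The raw samples stay untouched as a sonic source; the envelope is the
    control signal, so both functions of the signal coexist."""
    env = 0.0
    controls = []
    for s in samples:
        rectified = abs(s)
        # one-pole low-pass: the control value moves slowly
        env = smoothing * env + (1.0 - smoothing) * rectified
        controls.append(env)
    return controls

# a burst of 'muscle' signal followed by silence
signal = [math.sin(0.3 * n) for n in range(200)] + [0.0] * 200
ctl = envelope_follower(signal)
```

The control values rise during the burst and decay during the silence, which is the behaviour one would map onto processing parameters while the untouched audio feeds the effect chains.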

Overall, what I need to focus the performance design on is richness of color and sophistication of form.
I look forward to using two sensors, one on each arm, as this could significantly enhance the richness of the sonic material while opening up the possibility of polyphonic structures.
On the other hand, more exciting sonic forms may also be achieved by means of autonomous processes capable of reacting to the performer’s kinetic behaviour, adding machine state-dependent pattern changes, as also suggested by my friend Jaime Oliver during our last email exchange.

Here’s a first demo presentation of Music for Flesh I, a solo sonic piece for the Xth Sense biosensing wearable device.
This piece is the first milestone of the research and many aspects still need to be improved, but I’m quite satisfied with the result; above all, composing the piece provided many insights into the further development of the research.

Further information here.

Music for Flesh I – solo piece for Xth Sense biosensing wearable device from Marco Donnarumma aka TheSAD on Vimeo.

Recorded in January 2011 at Suonisonori, Milan, Italy.
Camera and editing Gianluca Messina; audio recording Giuseppe Vaciago.

Here’s the Xth Sense biosensing wearable prototype v1.

MMG wearable sensor hardware v1

MMG wearable sensor hardware v1

After working on the hardware design for a couple of weeks, I eventually felt it was the right time to make it wearable. During the Christmas holidays I went back to Italy, where Andrea Donnarumma and Marianna Cozzolino gave me invaluable support during both the design process and the technical development.
Before undertaking the development of the Xth Sense sensor hardware, a few crucial criteria were defined:

  • to develop a wearable, unobtrusive device, allowing a performer to freely move on stage;
  • to implement an extremely sensitive hardware device which could efficiently capture in real time and with very low latency diverse muscle sounds;
  • to make use of the most inexpensive hardware solutions, assuring a low implementation cost;
  • to implement the most accessible and straightforward production methodology in order to foster the future re-distribution and openness of the hardware.

So here’s a loose summary of the making of the Xth Sense biosensing hardware wearable prototype v1.
First, we handled the battery issue. We were not sure whether the microphone would react properly and with the same sensitivity when powered by a standalone battery, so we hacked an old portable minicassette recorder and extracted its battery case. Next we built the circuit on a copper board, including a trimmer to regulate the voltage feeding the microphone, and embedded everything in a plastic DV cassette box along with a 1/4″ mono chassis jack socket. Then we used some common wiring cable to connect the hacked battery case to the circuit, and the bracelet to the circuit box.

Hacking a radio battery holder

Building the first open circuit

First testing implementation of the open circuit

Preparing the testing circuit case

First wearable MMG sensor hardware prototype

The resulting device was obviously quite unsophisticated, but we simply wanted to make sure the microphone would maintain the same capabilities while running on battery power. The experiment was successful, and we started planning a more usable and better-looking design.

First of all we looked for appropriate components: a black box (3.15 x 1.57 x 0.67), smaller condensers and resistors, 3 V lithium coin batteries and their holders (which were not as easy to find as we expected) and a more flexible wiring cable.

Bits and parts

At this point the circuit needed to fit on a smaller copper board, so we slightly changed its design to decrease its size and make it fit the box nicely.

Embedding the second prototype sensor hardware in a wearable case

Second sensor hardware prototype in a new wearable case

The device worked like a charm, but in spite of the positive results the microphone shield required further trials. The importance of the shield was manifold; an optimal shield had to meet specific requirements:

  • to bypass the 60Hz electrical interference that can be heard when the skin comes into direct contact with the microphone’s metal case;
  • to narrow the sensitive area of the microphone, filtering out external noise;
  • to keep the microphone static, preventing external air pressure from affecting the signal;
  • to provide a suitable air chamber for the microphone, amplifying the sonic vibrations of the muscles and thus allowing deeper muscle contractions to be captured as well.
At first the microphone was insulated by means of a polyurethane shield but, due to the high malleability of this material, its initial shape tended to deform substantially. Eventually the sensor was insulated in a common silicone case, which satisfied the requirements and further enhanced the signal-to-noise ratio (SNR).

Moulds for the silicone shield

This detail notably increased the sensitivity of the biosensing device, giving a wider range of interaction.

This result satisfies the present hardware requirements: the device is very small, it can be put in a pocket or hooked to a belt, and it is sturdy enough to be used without particular care. I’m going to test the performance of the device during a demo presentation at Suonisonori, Milan, Italy in a few days.

Before the holiday break I wanted to set a milestone for the inquiry, so my supervisor Martin Parker and I agreed to host a short presentation reserved for the research staff of our departments (Sound Design and Digital Composition).

MMG biosensing device presentation setup | The University of Edinburgh | 2010

I had been setting up the Sound Lab at Alison House since the morning, and everything worked fine.
Even though some of the researchers could not make it, I was happy to see Martin, Owen Green and Sean Williams.

MMG biosensing device presentation | The University of Edinburgh | 2010

Although the piece I performed was more a first presentation than a proper concert for me, it definitely caught the listeners’ attention and earned some good feedback; however, what I most hoped for was constructive criticism that would allow me to take a different viewpoint on the current stage of the project.
In fact I did receive several pieces of advice, which can be roughly summarized as follows:

  • harmony: improve the overall harmony of the piece
  • silence: including silence in a musical piece demonstrates courage and coherence
  • sonic gesture: theatrical gesture is very important in a musical performance of this kind
  • add envelope generators to the software to enhance automation
  • try using a MIDI device
  • work out a better dynamics compression to be applied to the MMG source signal
  • delays and reverbs: if used improperly, such effects can destroy the sonic space instead of creating it
  • the distortion effects I first used can be ambiguous

I fully agreed with these criticisms, and I realized I could have prepared the presentation much better. Still, it was important to hear my colleagues’ feedback, and that night I went home and worked until late to improve the piece for the forthcoming concert.
One night was not enough to address all the issues raised at the presentation, but I was able to experiment further with silence and subtle processing effects, envelope generators (which in the end were not used) and a MIDI controller. The results seemed very good.
The day after I worked a couple more hours, then went to Alison House to set up the equipment for the concert together with my fellow performers Matthew and Marcin, who also played that night. This time I prepared everything professionally, anxious as I was to present the revised piece.

Concert for Biosensing Techs | Edinburgh University | 2010

Concert for Biosensing Techs | Edinburgh University | 2010

The setup consisted of the MMG sensor prototype, a Focusrite Saffire Pro40, a Behringer BCF2000 MIDI controller, a Dell machine running a customized Ubuntu Lucid Lynx with a real-time kernel, and the Pure Data-based software framework I’m developing.

Concert for Biosensing Techs | Edinburgh University | 2010

Audience feedback was very good; what seemed to appeal most to the listeners was the authentic, clean and natural responsiveness of the system, along with an evocative coupling of sound and gesture. The concerts were supposed to be recorded but, sadly, they were not.
Although some harmony issues remained, I was fairly satisfied with the outcome of the performance. During the winter break I plan to improve the prototype, possibly making it portable, and to refine the software, coding a better control data mapping and fixing the omnipresent bugs.

What? | Veronique Debord | 2010

Why use only the body to control musical performance?
What about dancers in musical performance?
Who used it already? Why? How?
In which ways would it be different from a traditional musical performance?
To what extent will the resulting music be different and authentic?
Will it actually be different?
How does the body relate to space?
How does space relate to sound?
How does sound relate to the body?
How does perception relate to the body?
Who has already given answers to these questions?
Have these questions been answered yet?
If so, have all aspects been properly covered?
Is there a gap? In which area?

Picture CC licensed by Veronique Debord.

Recently I’ve been dedicating a good deal of time to the software framework in Pure Data. After some early DSP experiments and the improvements to the MMG sensor, I had quite a clear idea of how to proceed.
The software implementation actually started some time before the MMG research, as a fork of C::NTR::L, a free, interactive environment for live media performance based on score following and pitch recognition, which I developed and publicly released under the GPL last year.
When I started this investigation I decided to pick up where I had left off last year, so as to take advantage of previous experience, methods and ideas.
Click to enlarge.

MMG signal processing framework | v 0.6.1 | 2010

The graphic layout has been designed using the free, open source software Inkscape and Gimp.

The present interface consists of:

  • a workspace in which the user can dynamically load, connect and remove several audio processing modules (top);
  • a sidebar for switching among 8 different workspaces (top right);
  • some empty space reserved for utility modules, such as a timebase and monitoring modules (middle);
  • a channel strip to control each workspace’s volume and send amount (bottom);
  • a square area used to load diverse modules, such as the routing panel visible in the image (mid to bottom right).

Modules and panels are dynamic, meaning they can be moved and swapped with a click for fast and efficient prototyping.
So far, several audio processing modules have been implemented:

  • a feedback delay
  • a chorus
  • a timestretch object (for compression and expansion) based on looped sampling
  • a single sideband modulation object (thanks to Andy Farnell for the tip about the efficiency of SSB modulation compared with tape modulation)
  • what I call a grunger, namely a module consisting of a chain of reverb, distortion, bandpass filter and SSB pitch shifting (thanks to my supervisor Martin Parker for the insight about pitch-shifting the wet signal of the reverb)
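As a rough illustration, the simplest of these modules, the feedback delay, can be reduced to a few lines. This is a Python sketch under my own naming, not the actual Pure Data implementation:

```python
def feedback_delay(samples, delay=10, feedback=0.5):
    """Minimal feedback delay line: each output sample mixes the input
    with an attenuated copy of the output `delay` samples earlier."""
    out = []
    for n, x in enumerate(samples):
        delayed = out[n - delay] if n >= delay else 0.0
        out.append(x + feedback * delayed)
    return out

# a single impulse produces a decaying train of echoes
echoes = feedback_delay([1.0] + [0.0] * 39, delay=10, feedback=0.5)
```

Each pass through the loop attenuates the echo by the feedback factor, so the impulse recurs at 0.5, 0.25, 0.125 and so on; in the Pd module the delay time and feedback amount are of course exposed as performance parameters.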

Another interesting process I was able to implement is a calibration system: it enables the performer to calibrate software parameters according to the different contraction intensities of each finger, the whole hand or the forearm (so far the MMG sensor has been performance-tested only on the forearm).
This process is proving extremely useful, as it allows the performer to customize the responsiveness of the hardware/software framework and to generate up to 5 different control data streams by contracting each finger, the hand or the whole forearm.
The calibration code is a little rough, but it already works. I believe exploring this method further can unveil exciting prospects.
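The general idea behind such a calibration can be sketched roughly as follows: record a reference level per gesture, then match incoming signal windows against those references. This is a Python approximation with hypothetical names and values, not the actual Pd code:

```python
def rms(window):
    """Root-mean-square level of a window of audio samples."""
    return (sum(s * s for s in window) / len(window)) ** 0.5

def calibrate(gesture_samples):
    """Store one reference RMS per gesture (e.g. a finger, the hand,
    the forearm). gesture_samples maps a name to recorded samples."""
    return {name: rms(w) for name, w in gesture_samples.items()}

def classify(window, references, tolerance=0.25):
    """Return the calibrated gesture whose reference RMS is closest to
    the incoming window, or None if nothing falls within the tolerance."""
    level = rms(window)
    name, ref = min(references.items(), key=lambda kv: abs(kv[1] - level))
    return name if abs(ref - level) <= tolerance * ref else None

refs = calibrate({
    "finger":  [0.1] * 64,
    "hand":    [0.4] * 64,
    "forearm": [0.9] * 64,
})
gesture = classify([0.38] * 64, refs)   # close to the 'hand' reference
```

Each recognized gesture can then be routed to its own control stream, which is essentially what makes the per-finger/hand/forearm distinction usable in performance.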

MMG sensor early calibration system | 2010

On the 7th of December I’m going to present the current state of the inquiry to the research staff of my department at Edinburgh University; I will perform a short piece using the software above and the MMG sensing device. On the 8th we have also arranged an informal concert at our school in Alison House, so I’m looking forward to testing the work done so far live.

A little update regarding the hardware.

Prototyping and composition notes

I’ve been thinking about the final design of the sensor. Although maximum portability of the device is essential, I’d rather not work on a wireless solution for now; I believe it’s worth spending more time improving both the hardware and software prototypes until they reach a reliable degree of efficiency. So what I need to implement now is a tiny circuit that can be embedded in a small box placed on the performer’s arm or carried in a pocket.
Fortunately the circuit is fairly small, so it shouldn’t be too difficult to embed the sensor into a suitable box.

Prototyping and composition notes

Because of some deadlines I can’t develop that circuit at the moment, but I created a Velcro bracelet for the sensor and implemented a fairly satisfying polyurethane shield, along with a soft plastic air chamber for the microphone. The shield is used to avoid the 60Hz electrical interference that occurs when the skin directly touches the microphone, and to filter out external noise; the air chamber allows the sonic vibrations of the muscles to be amplified before reaching the microphone, besides filtering out some frequencies too.
Eventually I also included a 1/4″ mono chassis jack socket, which replaces the awful output cables I had been using so far.

Although this setup is much simpler than the one developed by Silva, the results obtained during a first testing session were quite promising. That’s why I opted for the simplest implementation: as long as the sensor satisfies the project requirements, there’s no need for something more complex. Besides, the simplicity of the sensor implementation is an integral part of the final project.

MMG sensor hardware prototype

Michel Waisvisz was a Dutch composer, performer and inventor of electronic musical instruments. In this video he performs using The Hands, one of the earliest instruments capable of capturing analog sensor data and converting it into MIDI values.
This documentation was extracted from the VHS archive at STEIM. It’s 1984.

What impresses me most is the unmediated interaction between gesture and sound that Waisvisz achieved with this instrument. Such a degree of immediacy between visual, theatrical performance and sound is, in my opinion, one of the fundamental elements to take into account when working on new electronic musical instruments or interfaces.

Michel Waisvisz – Hyper Instruments Part 3 from STEIM Amsterdam on Vimeo.

from the STEIM vhs archive

The hardware prototype has almost reached a good degree of stability and efficiency, so I’m now dedicating much more time to the development of the MMG signal processing system in Pure Data. How do I want the body to sound?

First, I coded a real time granulator and some simple delay lines to be applied to the MMG audio signal captured from the body.
Then I added a rough channel strip to manage up to 5 different processing chains.
Excuse the messy patching style, but this is just an early exploration…
Click to enlarge.

MMG Signal Processing System Pure Data | 2010
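The granulator at the heart of the patch can be approximated outside Pure Data. The Python sketch below is a toy version of looped-sample granulation; the grain size, density and deterministic “jitter” are illustrative parameters of mine, not those of the actual patch:

```python
import math

def granulate(samples, grain_size=32, density=2, jitter=7):
    """Chop the input into overlapping grains read from slightly shifted
    positions, summing them under a raised-cosine window (very rough
    granulation: no randomness, fixed grain size)."""
    out = [0.0] * len(samples)
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / grain_size)
              for i in range(grain_size)]
    for start in range(0, len(samples) - grain_size, grain_size // density):
        offset = (start * jitter) % grain_size   # deterministic 'jitter'
        for i in range(grain_size):
            src = (start + offset + i) % len(samples)
            out[(start + i) % len(out)] += samples[src] * window[i]
    return out

grains = granulate([math.sin(0.2 * n) for n in range(256)])
```

Fed with the MMG audio instead of a test sine, the shifted, windowed grains smear the muscle sounds into sustained textures, which is the effect the module is after.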

However, once I started playing with it, I soon realized that the original MMG signal coming from the hardware needed to be “cleaned up” before being actually useful. That’s why I added a subpatch dedicated to filtering out unneeded frequencies and enhancing the meaningful ones; at the same time I devised a threshold process that enables Pure Data to determine whether the incoming signal is actually generated by a voluntary muscle contraction, or is instead an involuntary movement or background noise. This way, gesture is far more closely interrelated with the processed sound.
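The threshold idea can be sketched as a simple RMS gate. The Python below is a stand-in for the Pd subpatch, with an arbitrary window size and threshold of my own choosing:

```python
def gate_voluntary(samples, window=16, threshold=0.2):
    """Pass audio only while its short-term RMS exceeds a threshold,
    treating quieter activity as involuntary movement or background noise."""
    out = []
    for n in range(0, len(samples), window):
        block = samples[n:n + window]
        level = (sum(s * s for s in block) / len(block)) ** 0.5
        out.extend(block if level >= threshold else [0.0] * len(block))
    return out

# a loud 'contraction' passes through, quiet 'noise' is muted
mixed = [0.5, -0.5] * 8 + [0.05, -0.05] * 8
gated = gate_voluntary(mixed)
```

In practice the threshold would be set per performer, which is exactly what the calibration system described elsewhere in this research makes possible.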

MMG Input Filtering Pure Data | Marco Donnarumma | 2010

MMG Input Filtering Pure Data | 2010

Eventually, I needed quick and reliable visual feedback to help me analyse the MMG signal in real time. This subpatch includes an FFT spectrum analysis module and a simple real-time spectrogram borrowed from the PDMTL lib.
Click to enlarge.

MMG Signal Analysis Pure Data | 2010
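The kind of analysis the module performs can be illustrated with a naive DFT picking out the dominant bin of a test tone. This is plain Python standing in for the FFT module (O(N²), fine only for a demo):

```python
import math

def dft_magnitudes(samples):
    """Naive DFT: magnitude of each frequency bin up to Nyquist."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

# a pure tone at bin 5 should dominate the spectrum
tone = [math.sin(2 * math.pi * 5 * i / 64) for i in range(64)]
spectrum = dft_magnitudes(tone)
peak = spectrum.index(max(spectrum))
```

Plotting successive spectra over time gives the spectrogram view; for an MMG signal most of the energy sits in the low bins, which is why the visual feedback is useful when tuning the input filtering.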

I’m going to experiment with this system, and when it reaches a good degree of expressiveness I’ll record some live audio and post it here.

Here’s another source of inspiration. Atau Tanaka is Chair of Digital Media at Newcastle University.
BioMuse is probably one of the most notable projects making use of biosignals for musical performance.
However, to my knowledge, Tanaka’s BioMuse uses EMG, which provides a very different set of control data than MMG, the technology on which I based my research.

Atau Tanaka – new Biomuse demo from STEIM Amsterdam on Vimeo.

The new Biomuse.
http://www.ncl.ac.uk/culturelab/people/profile/atau.tanaka
STEIM Micro Jamboree 2008
Session 3: On Mapping – Techniques and Future Possibilities
Dec. 9, 2008