I have just completed a new work residency at Inspace, home to a joint research partnership between the School of Informatics and New Media Scotland.
During the residency, Brendan F Doyle (Production Sound Engineering) and I worked on a participatory Musicircus, which included a complex, reactive sonic system for 8 Xth Sense biosensors and 8 audio channels.
The piece was premiered two days ago, on 15th March, during a public event which gathered a good number of people from different backgrounds: music, sound design, informatics, hacking and the general public.
It was awesome to see how positively the audience reacted to this unconventional paradigm of musical performance, namely generating and controlling electroacoustic sounds and music by muscle contractions. After some initial – and expected – disorientation, everybody started realizing how to use the sensors and how to interact with the system; only then did the actual Musicircus start.
The event was pretty successful and everything worked out fairly smoothly – except for the few things which always “have to” break just before the opening – and we received plenty of excited and stimulating feedback. Overall a great and satisfying experience.
During the same night, I also performed my new solo sonic piece Music for Flesh II, for augmented muscle sounds and Xth Sense technology. The acoustics of Inspace were challenging: glass walls all around, an empty floor which acts as a resonant shield, and interesting sonic reflections happening throughout the space. I need to thank again my colleague Brendan for his great work on the sound engineering of the venue and his live mixing during my performance.
Picture courtesy of Chris Scott.
My paper “Xth Sense: researching muscle sounds for an experimental paradigm of musical performance” was accepted to the Pure Data mini conference, along with the related sound piece Music for Flesh II.
The Pd mini-con took place in Dublin, at Trinity College. It was my first academic presentation of the Xth Sense outside of my own department; definitely an exciting chance to get new feedback and establish a new group of peers.
The experience was great, and the comments of the audience and staff were positive. I also had a few small technical problems during the performance… some good food for thought!
A full report of the event is available on my online magazine.
I just completed a new composition titled Music for Flesh II.
The control and aural vocabulary are far more complex than in the earlier studio presentation; I also tried to enrich the sound spectrum of the Xth Sense by exploring dynamic textures across a wider frequency range. However, the most relevant advance was the simultaneous use of two sensors. The signal I receive from each sensor can easily be processed as a sound signal (sonic raw material) and/or as control data.
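As a rough illustration of this audio/control duality (the actual system lives in Pure Data; the sketch below is only a Python mock-up with hypothetical names), the same MMG stream can be kept untouched as sonic raw material while a low-rate envelope is extracted from it as control data:

```python
import numpy as np

def mmg_audio_and_control(mmg, sr=44100, window=0.05):
    """Split one MMG sensor stream into (a) the raw audio-rate signal,
    usable as sonic material, and (b) a low-rate RMS envelope, usable
    as control data. `mmg` is assumed to be a 1-D array in [-1, 1]."""
    audio = mmg                      # the signal itself is already sound material
    hop = int(sr * window)           # one control value every 50 ms
    control = np.array([
        np.sqrt(np.mean(mmg[i:i + hop] ** 2))
        for i in range(0, len(mmg) - hop + 1, hop)
    ])
    return audio, control
```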
I think the video below describes well the current state of the research. The concert was recorded live at ACE, The University of Edinburgh, on 1st March 2011. Thanks to Kevin Hay and Brendan F. Doyle for the sound engineering and lighting design.
While composing Music for Flesh II, the new piece for the Xth Sense, I found it difficult to draw the structure of the performance, as the overall live act is mostly an improvisation based on a given set of sound-gestures and a fixed timeline. I define a sound-gesture as a given movement of the performer which can produce one and only one sound form; this paradigm implies that the reactive computing system interprets each effective gesture according to a given, but still variable, array of sound behaviours, or sound scenes. The several sound-gestures which compose the piece are still experimental and most of them still have to be defined more precisely.
Since the design of the performance is based solely on this idea of the sound-gesture, the use of a graphical score seemed to me the best way to go.
Graphical notation is fairly widespread today; since its first appearance in the 1950s, several experimental and avant-garde composers have deployed graphical scores to better understand and describe a sound form, among them John Cage, Christian Wolff, Morton Feldman, Karlheinz Stockhausen, Iannis Xenakis and Anthony Braxton.
I’ve been reviewing different approaches to graphical scores, and although there exist digital systems which enable a composer to virtually draw experimental scores, I was still too fascinated by the act of drawing with a pencil on a piece of paper.
Thus, I started with simple exercises, such as drawing a graphical notation of a few pieces which had inspired my composition. After some interesting results with Wind Chimes by Denis Smalley, I could draw a first draft of my own piece.
The symbols can be interpreted as an idiosyncratic representation of both the sounds and the kinetic movements to be executed by the performer.
This was my first attempt to draw a graphical score for a piece based on the Xth Sense; I will keep experimenting with this aspect of composition as the paradigm represented by the “sound-gesture”, which is integral to the Xth Sense technology, offers interesting prospects for experimental notation.
Richness of colour and sophistication of form.
How to achieve a seamless interaction with a digital framework for DSP which can guarantee richness and sophistication in real time?
Moreover, the interaction I point at is a purely physical one.
Gesture to sound, this is the basic paradigm which I need to explore.
This is a fairly complex issue, as it requires a multi-modal investigation which encompasses interrelated areas of study. The main key points can be summarized as follows:
Drawing from these observations a roadmap for further implementation of the Xth Sense framework, I found myself struggling with a practical issue: the GUI I was developing did not fit my needs, which slowed down the whole composition process.
In order to concentrate comfortably on the composition and design of the performance, I needed an immediate and intuitive routing system which would allow me to map any of the available MMG control values to any of the DSP parameter controls in use.
Such a router would support a twofold mapping: one-to-one and one-to-many.
In Pure Data control parameters can be distributed in several ways; the main methods, however, are by-hand patch cord connections and cord-less communication using [send] and [receive], or OSC objects.
Obviously, I first excluded the by-hand connection method, as I need to freely perform several muscle contractions in order to test the features extracted by the system in real time. A cord-less method would have fit my needs better, but it still did not satisfy me completely, as I wanted to route the MMG control values around my DSP framework quickly and intuitively.
The sssad lib by Frank Barknecht, coupled with the sender/receiver objects [iem_s] and [iem_r] from iemlib, eventually stood out as a good solution.
sssad is a very handy and simply implemented abstraction which handles preset saving within Pd by means of a clear message system. [iem_s] and [iem_r] are dynamically settable sender/receiver objects included in the powerful iemlib by Thomas Musil at IEM, Graz, Austria. I coupled them into two new macro send/receive abstractions which eventually allowed me to route any MMG control value to any DSP control parameter with four clicks (in Run mode).
Four-click routing:
1st – click a bang to select which source control value is to be routed;
2nd – activate one of the 8 inputs within the router; it will automagically set that port to receive values from the previously selected source;
3rd – touch the slider controlling the DSP parameter control you want to address;
4th – activate one of the 8 outputs within the router; it will automagically set that port to send values to the selected parameter.
This system works efficiently both for one-to-one and one-to-many data mapping. The implementation of such a data mapping system dramatically improved the composition workflow; I can now easily execute gestures and immediately prototype a specific data mapping.
Besides, as you can see from the image above, I included a curve module, which enables the user to change on the fly the curve applied to the source value before it gets sent to the DSP framework.
The mapping abstractions are happily embedded in the wider sssad preset system, so all I/O parameters can be safely stored and recalled with a bang.
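For readers who do not patch in Pd, the sketch below mimics the router's mapping logic in Python (hypothetical names, purely illustrative; the actual implementation is the pair of [iem_s]/[iem_r]-based abstractions described above): a source control value can be dispatched to one or many DSP parameters, with an optional curve warping it on the way.

```python
from collections import defaultdict

class MMGRouter:
    """Toy model of the mapping layer: one-to-one and one-to-many routing
    of normalised (0-1) MMG control values to DSP parameter setters."""
    def __init__(self):
        self.routes = defaultdict(list)      # source name -> [(setter, curve)]

    def connect(self, source, setter, curve=1.0):
        # `curve` is an exponent applied to the value before dispatch,
        # standing in for the curve module mentioned above
        self.routes[source].append((setter, curve))

    def send(self, source, value):
        for setter, curve in self.routes[source]:
            setter(value ** curve)

router = MMGRouter()
set_feedback = lambda v: print(f"delay feedback -> {v:.2f}")
set_density = lambda v: print(f"grain density  -> {v:.2f}")

# one-to-many: the same MMG feature drives two parameters with different curves
router.connect("mmg1_max_mean", set_feedback, curve=2.0)
router.connect("mmg1_max_mean", set_density, curve=0.5)
router.send("mmg1_max_mean", 0.6)    # -> feedback 0.36, density 0.77
```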
Heading picture CC licensed by Kaens.
During my exploration of DSP processes for muscle sounds I eventually realized I could make good use of the interesting Soundhack objects, which have been recently ported to Pd by Tom Erbe, William Brent and other developers (see Soundhack Externals).
Among the several modules now available in Pd, I focused my attention on [+bubbler] and [+pitchdelay].
The former is an efficient multi-purpose granulator, while the latter is a powerful pitch-shifting based delay, which allows functional delay saturation along with octave control; both work nicely in real time. More interestingly, performance testing showed that the max mean value (here referred to as a continuous event) produced by muscle contractions can be synchronously mapped to the feedback and loop depth of the [+pitchdelay] algorithm in order to obtain an immediately perceivable control over the sound processing.
Sound-gestures obtained through this mapping system are quite effective because the cause-and-effect interaction between the performer’s gesture and the sonic outcome is clear and transparent.
Here’s an audio sample. The perceived loudness of the sound and the amount of feedback are directly proportional to the amount of kinetic energy generated by the performer’s contractions.
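To give an idea of the mapping (this is not the Soundhack algorithm itself; just a crude Python mock-up with hypothetical names, in which the feedback of a plain delay line follows the contraction energy), stronger contractions push the feedback up and the repetitions grow louder and more saturated:

```python
import numpy as np

def contraction_driven_delay(mmg, sr=44100, delay_s=0.25, window=0.05):
    """Crude stand-in for the mapping described above: the mean amplitude
    of each analysis window drives the feedback of a plain delay line,
    so stronger contractions produce louder, more saturated repetitions."""
    hop, d = int(sr * window), int(sr * delay_s)
    out = np.zeros(len(mmg) + d + hop)
    env = 0.0
    for start in range(0, len(mmg), hop):
        block = mmg[start:start + hop]
        # slowly decaying maximum of the block amplitude (a rough "max mean value")
        env = max(env * 0.99, float(np.mean(np.abs(block))))
        fb = min(env * 4.0, 0.95)                 # clamp to keep the loop stable
        out[start:start + len(block)] += block
        out[start + d:start + d + len(block)] += out[start:start + len(block)] * fb
    return out[:len(mmg) + d]
```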
Moreover, I also included sssad capabilities in my wrap-up object, so that presets can be saved within the DSP framework I’m developing.
I need to explore this algorithm further and work out further improvements for real-time performance.
While seeking inspiration and efficient methods to compose a more colorful and sophisticated piece for augmented muscle sounds, I started listening intensively to the work of Swedish composer and media artist Åke Parmerud.
In particular, Jeu D’ombres by Parmerud is an amazing piece of work; I can’t get enough of it. The piece Strings and Shadow, awarded at Prix Ars Electronica 2004, is a beautiful, precise and organic composition which seductively envelops the listener, submerging the hearing in exciting, glassy sonorities and obscure, introspective moments of void and silence. The structure of the composition flows from one movement to the other with absolute ease, drawing a wide and unconstrained aural space, yet capable of describing a truly human intimacy. A similarly captivating experience runs through the whole album.
Thanks to Martin Parker for pointing me to this music; listening to these sounds is truly inspiring for my own research and composition methods.
Here are some critical key points which arose from a joint analysis by my supervisor – Martin Parker – and myself regarding the first public performance of Music for Flesh I:
Overall, what I need to focus the performance design on is a richness of colour and a sophistication of form.
I look forward to using two sensors on both arms, as this could significantly enhance the richness of the sonic material, while providing the possibility of polyphonic structures.
On the other hand, more exciting sonic forms may also be achieved by means of autonomous processes capable of reacting to the performer’s kinetic behaviour, adding machine state-dependent pattern changes, as also suggested by my friend Jaime Oliver during our last email exchange.
Here’s a first demo presentation of Music for Flesh I, a solo sonic piece for the Xth Sense biosensing wearable device.
This piece is the first milestone of the research; many aspects need to be improved, but I’m quite satisfied with this result and, above all, composing this piece provided many insights into the further development of the research.
Further information here.
Music for Flesh I – solo piece for Xth Sense biosensing wearable device from Marco Donnarumma aka TheSAD on Vimeo.
Recorded in January 2011 at Suonisonori, Milan, Italy.
Camera and editing Gianluca Messina; audio recording Giuseppe Vaciago.
Here’s the Xth Sense biosensing wearable prototype v1.
Having kept up with the hardware design for a couple of weeks, I eventually felt it was the right time to make it wearable. During the Christmas holidays I came back to Italy, where Andrea Donnarumma and Marianna Cozzolino gave me incredibly precious support during both the design process and the technical development.
Before undertaking the development of the Xth Sense sensor hardware, a few crucial criteria were defined:
So here’s a loose summary of the making of the Xth Sense biosensing hardware wearable prototype v1.
First, we handled the battery issue. We were not sure whether the microphone would react properly and with the same sensitivity using a standalone battery, so we hacked an old portable minicassette recorder and extracted its battery case. Next we built the circuit on a copper board, included a trimmer to regulate the voltage feeding the microphone and embedded everything in a plastic DV cassette box along with a 1/2 mono chassis jack socket. Then we used some common wiring cable to connect the hacked battery case to the circuit and the bracelet to the circuit box.
The resulting device was obviously quite unsophisticated, but we simply wanted to make sure the microphone would maintain the same capabilities while running on battery power. The experiment was successful and we started planning a more usable and better-looking design.
First of all we looked for appropriate components: a black box (3.15 x 1.57 x 0.67), a smaller condenser and resistors, 3V lithium coin batteries and their holders (which were not as easy to find as we expected) and a more flexible wiring cable.
At this point the circuit needed to fit on a smaller copper board, so we slightly changed its design in order to decrease its size and make it fit nicely in the box.
The device worked like a charm, but in spite of the positive results the microphone shield required further trials. The importance of the shield was manifold; an optimal shield had to meet specific requirements: to bypass the 60Hz electrical interference which can be heard when alternating current distributes itself within the skin after direct contact with the microphone’s metal case; to narrow the sensitive area of the microphone, filtering out external noise; to keep the microphone static, preventing external air pressure from affecting the signal; and to provide a suitable air chamber for the microphone, in order to amplify the sonic vibrations of the muscles and also capture deeper muscle contractions.
First, the microphone was insulated by means of a polyurethane shield, but due to the strong malleability of this material its initial shape tended to undergo substantial alterations. Eventually, the sensor was insulated in a common silicone case that satisfied the requirements and further enhanced the signal-to-noise ratio (SNR).
This detail notably increased the sensitivity of the biosensing device, giving a wider range of interaction.
This result satisfies the present hardware requirements: the device is very small, it can be put in a pocket or hooked to a belt, and it is sturdy enough to be used without particular care. I’m going to test the performance of the device during a demo presentation at Suonisonori, Milan, Italy in a few days.