This is a long overdue mention of Very Nervous System, a beautiful interactive framework developed over twenty years ago by David Rokeby.
Although VNS is based on video tracking, the approach is very similar to the one behind the Xth Sense biophysical music.
I find his words inspiring (from his website): “Because the computer is purely logical, the language of interaction should strive to be intuitive. Because the computer removes you from your body, the body should be strongly engaged. Because the computer’s activity takes place on the tiny playing fields of integrated circuits, the encounter with the computer should take place in human-scaled physical space. Because the computer is objective and disinterested, the experience should be intimate.”
He continues, referring to the VNS interface: it “is unusual because it is invisible and very diffuse, occupying a large volume of space, whereas most interfaces are focussed and definite. Though diffuse, the interface is vital and strongly textured through time and space.”
The Xth Sense workshop just ended. It was an exciting experience. We were hosted by the almighty NK Berlin, an established independent venue for experimental music, DIY electronics and the like.
Over three days we built Xth Sense biosensors from scratch, installed and tried out the Xth Sense software, and did a short training in biophysical gestural control of music. Definitely a lot to do in only three days!
It was the first time the Xth Sense was unveiled in such detail. On one hand, the sensor hardware proved fairly easy to build: even the participants with little prior experience in electronics and soldering successfully built their own sensors. On the other, I realized the software needs to be much more portable; we spent too much time dealing with the paths of the required libraries, and I eventually realized that we were struggling with a known bug!
Too bad. Lesson 1: check bug trackers regularly.
I was pretty satisfied with our workflow, although I realized three days are not enough to achieve the ideal format for such a workshop. Eventually everyone built their own sensor and the Xth Sense software was up and running on almost all the machines. The only real problem was getting the software working on two Mac G4s.
It seemed they could not run the software because of excessive CPU usage, so after some unlucky attempts and on-the-fly optimization of the software, we had to move to other laptops.
Lesson 2: although it is sometimes cool to be able to use a very old machine, make sure beforehand that nobody is using one during your workshop.
It was extremely satisfying to see my students playing around with the system on the last day. It was good to confirm that the Xth Sense has real potential to extend the portability of biosensing technologies and gestural control to a wider audience. I wish we had had more time for the hands-on training. Observing how differently each individual relates to sound has always fascinated me, and a course based on such a bodily paradigm provides a transparent insight into the different expectations and skills of each participant.
Lesson 3: DIY is good, but remember to leave enough time for the fun.
I want to thank the folks at NK; they were extremely helpful and lovely companions. I’ll have to come back to Berlin soon for another workshop session.
I started a small tour with the Xth Sense system.
The first event I was invited to was the ImagineCreate Festival, in Derry, Northern Ireland. ImagineCreate is “a digital arts festival which brings together a world-class line-up of talents and minds from the worlds of art, design, new media and software development.”
I gave a talk about the Xth Sense and biophysical music within a presentation titled “Performance and Pure Data” in collaboration with Richard Graham.
Here’s a short video excerpt of the showcase captured by James Alliban.
It was quite good to take part in the event, as I received some very good feedback and suggestions. Among the other invited speakers were (the above mentioned) Richard Graham, James Alliban, Alex Beim (Tangible Interaction), John Crooks, Gregory Taylor (Cycling ’74) and Brendan McCloskey. A full list of speakers is available on-line.
Many interesting issues were raised after my talk, so I added some items to my to-do list.
First, I plan to implement tracking of the muscle sound contour. This way I could deploy the contour as a dynamic curve for data mapping, obtaining different results than with a preset curve, such as a logarithmic or exponential one.
Secondly, I could compare the incoming data from each arm by passing them to a discrete function; the resulting value would represent the ratio between the energy applied by each arm. It might also be interesting to track rest intervals over long periods of time. A rough sketch of both ideas follows below.
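To make these two ideas concrete, here is a minimal Python sketch — not the Xth Sense code itself: a one-pole envelope follower tracking the contour of a rectified MMG signal (usable as a dynamic mapping curve), and a discrete function returning the energy ratio between the two arms. All names and parameter values are illustrative assumptions.

```python
import numpy as np

def contour(mmg, attack=0.01, release=0.2, sr=44100):
    """One-pole envelope follower over the rectified MMG signal.

    The resulting contour can itself serve as a dynamic mapping curve,
    instead of a preset logarithmic or exponential one."""
    a = np.exp(-1.0 / (attack * sr))    # fast coefficient for rising input
    r = np.exp(-1.0 / (release * sr))   # slow coefficient for falling input
    env = np.zeros_like(mmg, dtype=float)
    prev = 0.0
    for i, x in enumerate(np.abs(mmg)):
        coef = a if x > prev else r
        prev = coef * prev + (1.0 - coef) * x
        env[i] = prev
    return env

def arm_energy_ratio(left, right, eps=1e-9):
    """Discrete comparison of both arms: ratio of mean signal energy."""
    return float(np.mean(left ** 2) / (np.mean(right ** 2) + eps))
```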
I just finished a work residency at Inspace, home to a joint research partnership between the School of Informatics and New Media Scotland.
During the residency, Brendan F. Doyle (Production Sound Engineering) and I worked on a participatory Musicircus which included a complex, reactive sonic system for 8 Xth Sense biosensors and 8 audio channels.
The piece was premiered two days ago, on 15th March, during a public event which gathered a good number of people from different backgrounds: music, sound design, informatics, hackers and the general public.
It was awesome to see how positively the audience reacted to this unconventional paradigm of musical performance, namely generating and controlling electroacoustic sounds and music through muscle contraction. After some initial – and expected – disorientation, everybody started realizing how to use the sensors and how to interact with the system; only then did the actual Musicircus start.
The event was pretty successful and everything worked out fairly smoothly – except the few things which always “have to” break just before the opening. We received plenty of excited and stimulating feedback. Overall a great and satisfying experience.
During the same night, I also performed my new solo sonic piece Music for Flesh II, for augmented muscle sounds and Xth Sense technology. The acoustics of Inspace were challenging: glass walls all around, an empty floor acting as a resonant shield, and interesting sonic reflections happening throughout the space. I need to thank my colleague Brendan again for his great work on the sound engineering of the venue and his live mixing during my performance.
Picture courtesy of Chris Scott.
My paper “Xth Sense: researching muscle sounds for an experimental paradigm of musical performance” was accepted to the Pure Data mini conference, along with the related sound piece Music for Flesh II.
The Pd mini-con took place in Dublin, at Trinity College. It was my first academic presentation of the Xth Sense outside of my own department; definitely an exciting chance to get new feedback and establish a new group of peers.
The experience was great, and the comments from the audience and staff were positive. I also had a few small technical problems during the performance… some good food for thought!
A full report of the event is available on my on-line magazine.
I just completed a new composition titled Music for Flesh II.
The control and aural vocabulary are far more complex than in the earlier studio presentation; I also tried to enrich the sound spectrum of the Xth Sense, exploring dynamic textures across a wider frequency range. However, the most relevant advance was the implementation of two sensors simultaneously. The signal I receive from each sensor can easily be processed as a sound signal (sonic raw material) and/or as control data, as in the sketch below.
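A minimal illustration of that dual use, assuming block-based processing and hypothetical callbacks for the audio chain and the control bus:

```python
import numpy as np

def process_block(block, audio_out, control_out):
    """Send the same MMG block down two paths: raw sound and control data."""
    audio_out(block)  # the muscle sound itself is the sonic raw material
    control_out(float(np.sqrt(np.mean(block ** 2))))  # one RMS value per block
```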
I think the video below describes well the current state of the research. The concert was recorded live at ACE, The University of Edinburgh, on 1st March 2011. Thanks to Kevin Hay and Brendan F. Doyle for the sound engineering and light design.
While composing Music for Flesh II, the new piece for the Xth Sense, I found it difficult to draw the structure of the performance, as the overall live act is mostly an improvisation based on a given set of sound-gestures and a fixed timeline. I define a sound-gesture as a given movement of the performer which can produce one and only one sound form; this paradigm implies that the reactive computing system interprets each effective gesture according to a given, but still variable, array of sound behaviours, or sound scenes. The several sound-gestures which compose the piece are still experimental, and most of them have yet to be defined more precisely.
Since the design of the performance is based entirely on this idea of the sound-gesture, a graphical score seemed to me the best way to go.
Graphical notation is fairly widespread today; since its first appearance in the 1950s, several experimental and avant-garde composers have deployed graphical scores to better understand and describe a sound form, among them John Cage, Christian Wolff, Morton Feldman, Karlheinz Stockhausen, Iannis Xenakis and Anthony Braxton.
I’ve been reviewing different approaches to graphical scores, and although there exist digital systems which enable a composer to virtually draw experimental scores, I was still too fascinated by the act of drawing with a pencil on a piece of paper.
Thus, I started with simple exercises, such as drawing a graphical notation of a few pieces which had inspired my composition. After some interesting results with Wind Chimes by Denis Smalley, I was able to draw a first draft of my own piece.
The symbols can be interpreted as an idiosyncratic representation of both the sounds and the kinetic movements to be executed by the performer.
This was my first attempt to draw a graphical score for a piece based on the Xth Sense; I will keep experimenting with this aspect of composition as the paradigm represented by the “sound-gesture”, which is integral to the Xth Sense technology, offers interesting prospects for experimental notation.
Richness of colour and sophistication of form.
How does one achieve a seamless interaction with a digital DSP framework which can guarantee richness and sophistication in real time?
Moreover, the interaction I am after is a purely physical one.
Gesture to sound: this is the basic paradigm I need to explore.
This is a fairly complex issue, as it requires a multi-modal investigation encompassing interrelated areas of study. The main key points can be summarized as follows:
Drawing from these observations a roadmap for further implementation of the Xth Sense framework, I found myself struggling with a practical issue: the GUI I was developing did not fit my needs, thus slowing down the whole composition process.
In order to concentrate comfortably on the composition and design of the performance, I needed an immediate and intuitive routing system which would allow me to map any of the available MMG control values to any of the DSP parameter controls in use.
Such a router would need to satisfy a twofold mapping: one-to-one and one-to-many.
In Pure Data, control parameters can be distributed in several ways, but the main methods are connecting patch cords by hand, and cord-less communication using [send] and [receive], or OSC objects.
Obviously, I first excluded a by-hand connection method, as I need to freely perform several muscle contractions in order to test the features extracted by the system in real time. A cord-less method would fit my needs better, but it still did not satisfy me completely, as I wanted to route the MMG control values around my DSP framework quickly and intuitively.
The sssad lib by Frank Barknecht, coupled with the senders/receivers [iem_s] and [iem_r] from iemlib, eventually stood out as a good solution.
sssad is a very handy and simply implemented abstraction which handles preset saving within Pd by means of a clear message system. [iem_s] and [iem_r] are dynamically settable senders/receivers included in the powerful iemlib by Thomas Musil at IEM, Graz, Austria. I coupled them into 2 new macro send/receive abstractions which eventually allowed me to route any MMG control value to any DSP control parameter with 4 clicks (in run mode).
Four-click routing:
1st – click a bang to select which source control value is to be routed;
2nd – activate one of the 8 inputs within the router; it will automagically set that port to receive values from the previously selected source;
3rd – touch the slider controlling the DSP parameter you want to address;
4th – activate one of the 8 outputs within the router; it will automagically set that port to send values to the selected parameter.
This system works efficiently for both one-to-one and one-to-many data mapping. The implementation of this data mapping system dramatically improved the composition workflow; I can now easily execute gestures and immediately prototype a specific data mapping.
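For readers who do not patch in Pd, here is a rough Python sketch of the routing idea (the actual implementation is a pair of Pd abstractions built around [iem_s] and [iem_r]; the class and source names here are made up for illustration):

```python
class Router:
    """A routing matrix: any source may feed one or many destinations."""
    def __init__(self):
        self.routes = {}  # source name -> set of destination callables

    def connect(self, source, destination):
        self.routes.setdefault(source, set()).add(destination)

    def send(self, source, value):
        # Forward an incoming control value to every connected destination.
        for destination in self.routes.get(source, ()):
            destination(value)

router = Router()
router.connect("mmg_max_mean", lambda v: print("delay feedback:", v))
router.connect("mmg_max_mean", lambda v: print("grain density:", v))
router.send("mmg_max_mean", 0.42)  # one-to-many: both parameters receive it
```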
Besides, as you can see from the image above, I included a curve module, which enables the user to change on the fly the curve applied to the source value before it is sent to the DSP framework.
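A minimal sketch of what such a curve module computes, assuming the source value is normalized to the 0–1 range (the curve names and the shape parameter k are illustrative assumptions, not the actual module’s interface):

```python
import math

def apply_curve(value, curve="linear", k=4.0):
    """Reshape a normalized control value before it reaches a DSP parameter."""
    if curve == "linear":
        return value
    if curve == "exponential":  # slow start, fast end
        return value ** k
    if curve == "logarithmic":  # fast start, slow end
        return math.log1p(k * value) / math.log1p(k)
    raise ValueError(f"unknown curve: {curve}")
```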
The mapping abstractions are happily embedded in the main sssad preset system, so all I/O parameters can be safely stored and recalled with a bang.
Heading picture CC licensed by Kaens.
During my exploration of DSP processes for muscle sounds I eventually realized I could make good use of the interesting Soundhack objects, which have been recently ported to Pd by Tom Erbe, William Brent and other developers (see Soundhack Externals).
Among the several modules now available in Pd, I focused my attention on [+bubbler] and [+pitchdelay].
The former is an efficient multi-purpose granulator, while the latter is a powerful pitch-shifting delay, which allows functional delay saturation along with octave control; both work nicely in real time. More interestingly, performance testing showed that the max mean value (here referred to as a continuous event) produced by muscle contractions can be synchronously mapped to the feedback and loop depth of the [+pitchdelay] algorithm, in order to obtain an immediately perceivable control over the sound processing.
Sound-gestures obtained through this mapping system are quite effective, because the cause-and-effect relation between the performer’s gesture and the sonic outcome is clear and transparent.
Here’s an audio sample. The perceived loudness of the sound and the amount of feedback are directly proportional to the amount of kinetic energy generated by the performer’s contractions.
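As a hedged sketch of that mapping, assuming the feature extractor emits max mean values normalized to 0–1 (the parameter names and ranges are my assumptions, not the actual [+pitchdelay] interface):

```python
def map_contraction(max_mean, max_feedback=0.95, max_depth=8):
    """Map a continuous contraction event to delay feedback and loop depth."""
    feedback = max_mean * max_feedback      # stronger contraction -> more feedback
    loop_depth = 1 + int(max_mean * (max_depth - 1))
    return feedback, loop_depth
```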
Moreover, I also included sssad capabilities in my wrapper object, so that presets can be saved within the DSP framework I’m developing.
I need to explore this algorithm further and work out further improvements for real-time performance.
While seeking inspiration and efficient methods to compose a more colorful and sophisticated piece for augmented muscle sounds, I started listening intensively to the work of Swedish composer and media artist Åke Parmerud.
Jeu D’ombres by Parmerud, in particular, is an amazing piece of work; I can’t get enough of it. The piece Strings and Shadow, awarded at Prix Ars Electronica 2004, is a beautiful, precise and organic composition which seductively envelops the listener, submerging the hearing in exciting, glassy sonorities and obscure, introspective moments of void and silence. The structure of the composition flows from one movement to the other with absolute ease, drawing a wide and unconstrained aural space, yet one capable of describing a truly human intimacy. A similarly captivating experience can be found across the whole album.
Thanks to Martin Parker for pointing me to this music; listening to these sounds is truly inspiring for my own research and composition methods.