Introducing the Xth Sense (XS): What it is and what it can be used for.
Musical applications: How to make music with it.
Other applications: What else you can do with it.
Framework: Description of the XS software environment.
Step by step tutorial: How to install the XS software, connect the wearable sensors and start making noise.
Troubleshooting: Some solutions to the most common issues.
Xth Sense and its documentation are copyright of Marco Donnarumma and are licensed under a Creative Commons License; see the license for details.
The XS is composed of biophysical sensors and custom software.
At the onset of a muscle contraction, energy is released in the form of an acoustic sound. That is to say, much like the string of a violin, each muscle tissue vibrates at specific frequencies and produces a sound (the mechanomyogram, or MMG). Because the frequency of muscle sounds sits between 5 Hz and 45 Hz, the MMG is not easily audible to the human ear, but it is indeed a sound wave that resonates from the body.
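The 5-45 Hz band can be isolated from a full-range audio input with an ordinary band-pass filter. The following Python sketch is not part of the XS software; the filter centre, Q and sample rate are arbitrary choices for illustration, using a standard RBJ-cookbook biquad:

```python
import math

def biquad_bandpass(fs, f0, q):
    """RBJ-cookbook band-pass coefficients (constant 0 dB peak gain)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def filt(b, a, x):
    """Direct-form I filtering of a list of samples."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        y.append(out)
        x1, x2, y1, y2 = s, x1, out, y1
    return y

fs = 1000.0                                  # assumed sample rate (Hz)
b, a = biquad_bandpass(fs, 25.0, 0.5)        # wide band centred inside 5-45 Hz

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

tone = lambda f: [math.sin(2 * math.pi * f * n / fs) for n in range(2000)]
in_band = rms(filt(b, a, tone(20.0))[1000:])    # 20 Hz: inside the MMG band
out_band = rms(filt(b, a, tone(200.0))[1000:])  # 200 Hz: outside the band
```

A 20 Hz component passes nearly unattenuated while content well above the band is noticeably damped, which is the kind of front-end conditioning an MMG signal chain needs.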
The MMG data is quite different from the positional data you can gather with accelerometers and the like: whereas the latter reports the consequence of a movement, the former directly represents the energy impulse that causes it. Add to this a high sampling rate (up to 192 kHz if your sound card supports it) and very low latency (measured at 2.3 ms), and you can see why the responsiveness of the XS can be highly expressive.
The XS sensors capture the low-frequency acoustic vibrations produced by a performer’s body and send them to the computer as an audio input. The XS software analyzes the MMG to extract the characteristics of the movements, such as the dynamics of a single gesture or the maximum amplitude of a series of gestures over time.
These features are fed to algorithms that produce control data (12 discrete and continuous variables for each sensor) to drive the sound processing of the original MMG.
Finally, the system plays back both the raw muscle sounds (slightly transposed, to roughly 50-60 Hz, so they become better audible) and the processed muscle sounds.
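In its simplest form, transposing a 5-45 Hz signal into the audible range amounts to reading the recorded buffer back at a higher rate. This Python sketch illustrates the principle only (it is not the method the XS software actually uses): doubling the playback rate shifts a 25 Hz component to 50 Hz.

```python
import math

def transpose(samples, rate):
    """Naive playback-rate transposition with linear interpolation:
    reading the buffer faster (rate > 1) multiplies every frequency by rate."""
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += rate
    return out

fs = 1000                                                        # assumed sample rate (Hz)
mmg = [math.sin(2 * math.pi * 25 * n / fs) for n in range(fs)]   # 25 Hz "muscle tone", 1 s
shifted = transpose(mmg, 2.0)                                    # 25 Hz content now sounds at 50 Hz

def zero_crossings(x):
    """Count rising zero crossings, roughly one per cycle."""
    return sum(1 for a, b in zip(x, x[1:]) if a < 0 <= b)

f_in = zero_crossings(mmg)        # crossings over 1 s of signal
f_out = zero_crossings(shifted) * 2  # shifted is 0.5 s long, so scale by 2
```

Note that this approach also halves the duration of the sound; a real-time pitch shifter avoids that, but the frequency relationship is the same.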
I called this model of performance biophysical music, in contrast with biomusic, which is based on the electrical impulses of muscles and brainwaves.
From the live sampling of muscle sounds, through the playback and manipulation of pre-recorded sounds, to the real-time processing of traditional musical instruments, the XS is the first musical instrument of its kind to offer such flexibility at a very low cost and with free and open technology.
The XS can be played as a traditional musical instrument: acoustic sounds can be produced and modified by contracting the muscles. The XS can also be used as a gestural controller to control audio synthesis or sample processing. The XS can be used in both modes simultaneously, and the data stream can be sent to other software or hardware via MIDI and OSC.
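As a rough illustration of the OSC side, the wire format of an OSC message is simple enough to build by hand. In the sketch below the address /xs/feature/1 and the port are hypothetical, and in practice Pd sends OSC for you; it only shows what travels over the network:

```python
import struct

def osc_message(address, value):
    """Encode a minimal OSC 1.0 message with one float32 argument."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    return (pad(address.encode("ascii"))
            + pad(b",f")                    # type tag string: one float
            + struct.pack(">f", value))     # big-endian float32

msg = osc_message("/xs/feature/1", 0.75)    # hypothetical feature address
# msg could be sent over UDP to any OSC-aware application, e.g.:
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 9000))
```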
The most interesting performance feature of the system is the ability to expressively control multi-layered processing of the MMG sounds simply by exerting different amounts of kinetic energy. For instance, stronger and wider gestures could be mapped so as to generate sharp, higher resonating frequencies coupled with a very short reverb time, whereas weaker and more confined gestures could produce gentle, lower resonances with a longer reverb time.
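An inverse mapping of that kind can be sketched in a few lines. The parameter ranges below are invented for illustration and are not the values the XS uses:

```python
def map_energy(rms, rms_max=1.0):
    """Hypothetical mapping: stronger gestures yield higher resonant
    frequencies and shorter reverb times, weaker gestures the opposite."""
    e = max(0.0, min(rms / rms_max, 1.0))   # normalised kinetic energy, 0..1
    resonance_hz = 200.0 + e * 3800.0       # 200 Hz (gentle) .. 4 kHz (strong)
    reverb_s = 4.0 - e * 3.5                # 4 s (gentle) .. 0.5 s (strong)
    return resonance_hz, reverb_s

weak = map_energy(0.1)     # confined gesture: low resonance, long reverb
strong = map_energy(0.9)   # wide gesture: high resonance, short reverb
```

The clamping also makes the mapping robust to occasional over-range sensor peaks.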
The form and color of the sonic outcome is continuously shaped in real time with very low latency (measured at 2.5 ms); the interaction between the perceived sonic force and the spatiality of the gesture is thus neat, transparent and fully expressive.
Muscle sounds provide very accurate control data; their signal-to-noise ratio is higher than that of many other sensor signals. This means that, with a bit of imagination, you can use muscle sounds to train people suffering from muscle disease, control prosthetic hands, control live video or lights, control your music player, turn on your washing machine, make your robot walk, or realize any other fancy hack you can imagine.
If you are intrigued and want to realize a cool hack or use the XS for your next project but you are not sure how, please get in touch with me.
The XS software is a complete digital framework for working with muscle sounds (mechanomyogram, or MMG). It is a large, multi-layered Pure Data patch: this means you can use the usual features of Pure Data (aka Pd) plus the specific functions of the XS. An XS patch can be saved as a .pd file. The software offers an intuitive Graphical User Interface (GUI).
However, before getting your hands on the XS, I recommend getting familiar with the basics of Pd (adding an object to a patch, making connections, etc.) by using this tutorial.
The software receives the audio input from the XS sensors and performs three functions: data analysis, feature extraction and Digital Signal Processing (DSP). The XS receives the muscle sounds produced by your body, extracts 12 features, and lets you connect these features to audio effect parameters, which you control to process the muscle sounds into music.
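As an example of what feature extraction can look like, a one-pole envelope follower turns the raw MMG into a continuous "gesture dynamics" control. This is a generic sketch, not the actual XS analysis code; the attack and release coefficients are arbitrary:

```python
def envelope_follower(samples, attack=0.9, release=0.999):
    """One-pole envelope follower over the rectified signal.
    A smaller coefficient reacts faster, so attack < release means the
    envelope rises quickly on a contraction and decays slowly after it."""
    env, out = 0.0, []
    for s in samples:
        x = abs(s)
        coeff = attack if x > env else release
        env = coeff * env + (1 - coeff) * x
        out.append(env)
    return out

# a burst of full-scale "contraction" followed by silence
burst = [1.0] * 100 + [0.0] * 100
env = envelope_follower(burst)
```

The resulting envelope is a slowly varying control signal, ready to be mapped onto an effect parameter.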
When you first open the XS software, a small rectangular window with a GUI comes up. The whole XS software is contained here. We call this the Control window. Here you can save and load your global presets, monitor the sensor inputs, and open the other modules of the software.
Clicking the related buttons pops up the two main modules: the Deck and the Sequence.
The Deck is the main playground. Here you can create Digital Signal Processing (DSP) chains (arrays of audio effects and other Pd objects) to live-sample the muscle sound, monitor the features that are being extracted, and set mapping definitions to control the software. All the parameters that are visible in the Deck can be saved into a global preset, which is called a Scene. You can save up to 20 Scenes.
The Sequence module lets the computer manage the temporal structure of your performance or musical piece. By clicking the bang labeled “New”, you can add an Event: this is basically a trigger that tells the computer to load a Scene at a specific point in time. To specify which Scene you want loaded, simply click on the black area of the Event object you just created; in the smaller window that pops up, set the Scene to be loaded by clicking and dragging the number box.
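The logic of the Sequence module can be pictured as a list of (time, scene) Events scanned against the elapsed time. The sketch below illustrates that idea only; it is not the module's actual implementation, and the event times and scene numbers are invented:

```python
def scene_at(events, t):
    """Return the Scene that should be loaded at elapsed time t (seconds),
    i.e. the Scene of the most recent Event whose trigger time has passed."""
    current = None
    for when, scene in sorted(events):
        if t >= when:
            current = scene
        else:
            break
    return current

# hypothetical structure of a piece: Scene 1 at the start,
# Scene 2 after 30 s, Scene 3 after 90 s
events = [(0, 1), (30, 2), (90, 3)]
```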
Before starting, it can save you a lot of headaches to make sure that:
a) the XS sensor is properly connected to your sound card or computer;
b) the battery in the XS sensor box is fully charged;
c) the buttons labeled “Audio.OFF” that you find in the Analysis module and in each Workspace are ON.
There is no sound coming out of the XS software and nothing happens.
Make sure that the Pd audio engine is ON (the keyboard shortcut is Ctrl+/; Ctrl+. switches it OFF). If you still hear no sound, open the Media menu and click Test Audio and MIDI: there you can activate a tone generator to test the audio. If you hear the high-pitched tone, the audio engine in Pd is active, which means something is wrong within your XS patch. Check that the buttons labeled “Audio.OFF” that you find in the Analysis module and in each Workspace are ON. Double-check that you properly connected the cords between the objects you created in the Workspace.
I see number boxes and sliders moving in the XS, but I can’t hear any sound.
In this case the Pd audio engine is active. Check that the buttons labeled “Audio.OFF” that you find in the Analysis module and in each Workspace are ON. If so, there is something wrong with the way you connected the objects in the Workspace: double-check and fix the connections. Still no sound? Make sure you have raised the volume of the channels you are using by sliding up the faders in the Mixer.
I try to set a mapping definition, but the Router keeps reporting the same parameter instead of the one I chose.
This happens because the control values are continuously active while the Pd audio engine is ON. Solution: simply switch the Pd audio OFF while you set your mapping definitions; when you are done, switch it ON again.