Res, a matter.

Archive for May, 2015

As my PhD draws to a close this summer, I’m in the studio rehearsing a new performance that will be previewed at NIME, the international conference on New Interfaces for Musical Expression, at the end of the month.

I’ve been working with Baptiste Caramiaux to create Corpus Nil, a new body performance that wraps up the research I’ve done over the past three years, combining physiological computing, performance art and cultural studies of the body to examine physical expression in sound performance.

Here are some rehearsal pictures, visual sketches of the final performance.

Corpus Nil is a performance for reorganised body, octophonic surround sound, interactive lights and a biowearable musical instrument.


Through a series of movements that explore the limits of muscular tension, limb torsion, skin friction and equilibrium, the body is reorganised.


As the body morphs, its muscular force and brain activity are transformed into digital sounds and light patterns.


The body and the machine are configured into one entity. Their relation is not one of control, but one of becoming.


Together, the human body and the machine body become a new body.


A body that is neither strictly human nor cyborg: an expressive body of flesh, circuitry, transducers, sound and lights.

Stay tuned for a video teaser to come out soon.


I’m delighted to announce the publication of two books on art and science to which I’ve contributed two different essays. Both draw upon my recent doctoral research on corporeality and computation in sound performance, a multidimensional practice-based project that weaves together resources from cultural studies of the body, the performing arts and human-computer interaction.

In “Experiencing the Unconventional: Science in Art”, I propose and discuss the notion of “configuration” of human bodies and machines. I do so by linking philosophy of human individuation, body performativity and the use of sound to mediate human physiology with machine circuitry.


“Do we control our body properties, or are they controlled by the media that condition the body in the first place? In our physical interaction with technology, are our body capacities the object or the subject of the action? The answer lies in understanding human subjectivity and technological individuality as two sides of the same iterative process. Human beings create technological artifacts which influence the formation of human subjectivity. In turn, the shifting characterisation of human subjectivity prompts new directions for technical development. It is a self-organisational feedback loop. From this viewpoint, we can move beyond the idea where identity is controlled, and embrace a notion where human subjectivity emerges through the configuration of human bodies with technological bodies. The term ‘configuration’ will be used in this essay to indicate not a mere pairing of machine and human bodies, but their arrangement in particular forms and for specific purposes.”

Read the full article or buy the book.


In the second book, “Meat, Metal and Code \ Contestable Chimeras – STELARC”, I examine the practice of pioneering body artist Stelarc. By comparing, through the lens of Deleuze’s philosophy of sensation, the bodies in Francis Bacon’s paintings with those in Stelarc’s performances, I elaborate on the notions of fluid flesh and rhythmic skin that link their works.


“According to Deleuze, when Bacon first draws a head and then scrubs the eyes and the mouth with a brush, he deforms the Figure by exerting directional forces on the canvas. The physical movement of the brush is the force that makes the drawing of the head mutate into something else, something that is neither a human nor an animal face. This is the moment when the sensation is brought to light. By looking at the amorphous Figure, one can feel the sensation of those forces, their rhythm. Similarly, in Stelarc’s work, the physical tension of the fish hooks, the ropes and the gravitational field are the forces that make the body mutate into something else, a stretched mesh of skin emptied of its flesh and bones. By looking at Stelarc’s suspended body one can feel the sensation of those forces. The body is motionless to the eyes, but the rhythm of the forces that pull the body downward and upward is clearly expressed by the deformation of the skin. The skin becomes rhythm.”

The full article can be read here, and the book is available at this page.

Enjoy the read!


We (the EAVI research group at Goldsmiths, University of London) have just got back from the SIGCHI Conference on Computer-Human Interaction in Seoul, Korea. CHI is one of the largest conferences in the field, with over 3,000 attendees this year.

The CHI experience is as overwhelming as it is exciting. With 15 parallel tracks, there’s always something interesting to see, and something equally interesting you’re going to miss. To add to the thrill, this year the conference was hosted in a massive multipurpose complex, the COEX, which houses a mall, restaurants and other conferences all in the same venue. I leave the rest to your imagination.

My contribution to the conference was twofold: over the weekend I joined the workshop “Collaborating with Intelligent Machines”, and during the week we presented a long paper on our latest research on using bimodal muscle sensing to understand expressive gesture.

Organised by consortium members of the GiantSteps research project, Native Instruments (DE), STEIM (NL) and the Dept. of Computational Perception, Johannes Kepler University (AT), the workshop ran for a full day and involved several researchers working on embodied musical interaction, music information retrieval and instrument design.


Led by Kristina Andersen, Florian Grote and Peter Knees, we first went through brief presentations of our own research, including a keynote by Byungjun Kwon, then engaged in a brainstorm on the possibilities of future music machines, and finally went on to build (im)possible musical instruments using props such as cardboard, scissors, tape and plastic cups.



Finally, we closed the workshop by discussing the ideas that had emerged throughout the day, and in the evening we joined local experimental musicians for a sweet concert and some drinks.


On Monday the conference started at full speed. Dodging rivers of attendees, we managed to make our way into the keynote venue and started catching up with colleagues from around the world.


On Thursday, we presented a long paper entitled “Understanding Gesture Expressivity through Muscle Sensing”. The paper, by Baptiste Caramiaux, myself and Atau Tanaka, is in fact a journal article published in a recent issue of the Transactions on Computer-Human Interaction (TOCHI).


Our contribution focuses on expressivity as a visceral capacity of the human body. In the article, we argue that to understand what makes a gesture expressive, one needs to consider not only its spatial placement and orientation, but also its dynamics and the mechanisms enacting them.

We start by defining gesture and gesture expressivity, then present fundamental aspects of muscle activity and ways to capture it through electromyography (EMG) and mechanomyography (MMG). We report pilot studies that examine users’ ability to control spatial and temporal variations of 2D shapes, and that use muscle sensing to assess expressive information in gesture execution beyond space and time.
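As a rough illustration only (not the paper’s actual pipeline), a sliding-window RMS envelope is one common way to turn a raw muscle signal such as EMG into an activation feature; the function names and the synthetic signal below are my own hypothetical sketch:

```python
import numpy as np

def rms_envelope(signal, window=64, hop=32):
    """Sliding-window RMS amplitude of a raw muscle signal.

    A standard first step for EMG (and applicable to MMG): the raw
    signal oscillates around zero, so its windowed RMS tracks the
    overall level of muscular activation over time.
    """
    frames = []
    for start in range(0, len(signal) - window + 1, hop):
        frame = signal[start:start + window]
        frames.append(np.sqrt(np.mean(frame ** 2)))
    return np.array(frames)

# Synthetic "contraction": zero-mean noise whose amplitude
# ramps up to a peak and back down, mimicking a muscle burst.
rng = np.random.default_rng(0)
amplitude = np.concatenate([np.linspace(0, 1, 500),
                            np.linspace(1, 0, 500)])
emg = amplitude * rng.standard_normal(1000)

env = rms_envelope(emg)
# The envelope rises and falls with the simulated contraction,
# peaking near the middle of the burst.
```

The window and hop sizes here are arbitrary; in practice they would be chosen relative to the sensor’s sampling rate and the responsiveness required by the interaction.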


This leads us to the design of a study that explores the notion of gesture power in terms of control and sensing. The results offer interaction designers insights for moving beyond simplistic gestural interaction, towards designs that draw on the nuances of expressive gesture.



Finally, we showed a small excerpt from a new performance I’ll be previewing at the upcoming NIME conference in Louisiana (see below, and yes, that’s a sneaky preview!). Here I implemented the feature extraction system described in the article, modifying and adapting it to the fuzzier requirements of a live performance.


The talk was very well received and prompted some interesting questions for future work. Some pointed to combining our system with posture recognition to enrich users’ input; others asked whether subtle tension and force levels can be examined with our methodology. Food for thought!

To conclude, here are some personal highlights of the conference:

Needless to say, Seoul was surprising and heartwarming as usual, so… ’til the next time!