We (the EAVI research group at Goldsmiths, University of London) just got back from the ACM CHI Conference on Human Factors in Computing Systems in Seoul, Korea. CHI is one of the largest conferences in the field, with over 3000 attendees this year.
The CHI experience is as overwhelming as it is exciting. With 15 parallel tracks, there is always something interesting to see, and something equally interesting you are going to miss. To add to the thrill, this year the conference was hosted in the COEX, a massive multipurpose complex housing a mall, restaurants and other conferences all in the same venue. I leave the rest to your imagination.
My contribution to the conference was twofold. Over the weekend I joined the workshop “Collaborating with Intelligent Machines” and during the week we presented a long paper on our latest research on using bimodal muscle sensing to understand expressive gesture.
Organised by GiantSteps research project consortium members Native Instruments (DE), STEIM (NL) and the Department of Computational Perception at Johannes Kepler University (AT), the workshop ran for a full day and involved several researchers working on embodied musical interaction, music information retrieval and instrument design.
Led by Kristina Andersen, Florian Grote and Peter Knees, we first went through brief presentations of our own research, including a keynote by Byungjun Kwon, then brainstormed on the possibilities of future music machines, and finally set about realising (im)possible musical instruments using props such as cardboard, scissors, tape and plastic cups.
We closed the workshop by discussing the ideas that had emerged throughout the day, and in the evening we joined local experimental musicians for a sweet concert and some drinks.
On Monday the conference started at full speed. Dodging rivers of attendees, we made our way into the keynote venue and started catching up with colleagues from around the world.
On Thursday, we presented a long paper entitled “Understanding Gesture Expressivity through Muscle Sensing”. The paper, by Baptiste Caramiaux, myself and Atau Tanaka, is in fact a journal article published in a recent issue of ACM Transactions on Computer-Human Interaction (TOCHI).
Our contribution focuses on expressivity as a visceral capacity of the human body. In the article, we argue that to understand what makes a gesture expressive, one needs to consider not only its spatial placement and orientation, but also its dynamics and the mechanisms enacting them.
We start by defining gesture and gesture expressivity, and then present fundamental aspects of muscle activity and ways to capture it through electromyography (EMG) and mechanomyography (MMG). We then present pilot studies that examine users’ ability to control spatial and temporal variations of 2D shapes, and that use muscle sensing to assess expressive information in gesture execution beyond space and time.
This leads us to the design of a study exploring the notion of gesture power in terms of control and sensing. The results offer insights that can help interaction designers move beyond simplistic gestural interaction, towards designs that draw on the nuances of expressive gesture.
Finally, we showed a small excerpt from a new performance I’ll be previewing at the upcoming NIME conference in Louisiana (see below, and yes, that’s a sneaky preview!). For it, I implemented the feature extraction system described in the article, modifying and adapting it to the fuzzier requirements of a live performance.
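For readers curious about what muscle-based feature extraction can look like in practice, a common first step with EMG is full-wave rectification followed by a moving-average envelope, which gives a rough estimate of muscle activation over time. The sketch below is a minimal illustration of that standard technique only, not the actual system described in the article; the signal and all names are made up for the example.

```python
import numpy as np

def emg_envelope(signal, window=64):
    """Estimate an amplitude envelope from a raw EMG-like signal.

    Full-wave rectify the signal, then smooth it with a
    moving-average window to approximate muscle activation level.
    """
    rectified = np.abs(signal)           # full-wave rectification
    kernel = np.ones(window) / window    # moving-average kernel
    return np.convolve(rectified, kernel, mode="same")

# Synthetic example: a burst of "muscle activity" buried in noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
activity = np.where((t > 0.4) & (t < 0.6), 1.0, 0.05)  # tense mid-gesture
raw = activity * rng.standard_normal(1000)             # noise scaled by activity
env = emg_envelope(raw)
```

The envelope rises during the burst and stays near zero elsewhere, which is the kind of continuous intensity signal that can then drive higher-level features such as gesture power or tension.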
The talk was very well received and prompted some interesting questions for future work. Some pointed to the use of our system together with posture recognition systems to enrich users’ input, and others asked whether subtle tension and force levels can be examined with our methodology. Food for thought!
To conclude, here are some personal highlights from the conference:
Needless to say, Seoul was as surprising and heartwarming as ever, so… ’til the next time!