Here it comes: that moment when you have to write your PhD thesis in fewer than 300 words. You know, it’s one of those awkward yet incredibly helpful challenges that, as a PhD candidate, you had better face sooner rather than later.
You want your research to be understood by anybody, with any background, in the shortest time possible (I’m talking about seconds here), and you want them to understand it in detail. You want the exact meaning of each (precious) term you use to be enforced by the context you provide. You want to express clearly what your concern or subject matter is and how you will go about it, and that’s all; you don’t want to mention anything you won’t do. You want to preempt any question that may arise in the mind of your readers. You want those readers not to feel the need to check their Facebook timeline for at least a couple of minutes after reading your abstract, because they are actually still thinking about what you wrote. Be clear, rigorous, engaging and a little visionary.
As we all know, any advice is more easily given than put into practice, so you will be my proofreaders. After two years of research, here it goes. On a second attempt I got it down to 247 words.
“The subject of this research is the human body in the live performance of electronic music. Using notions selected from the field of body theory, I discuss the practice of established performers in the field. I look at how they combine human bodies and technological ones, and at the aesthetic and technical results their practice has led to. In so doing, I develop an argument for the importance of elaborating a model of human-machine interaction that goes beyond a notion of control: a model where the performer’s physical engagement with the instrument is paramount and the relation between performer and instrument is one of mutual dependence.
In order to build such a model, I use an interdisciplinary methodology which investigates the physiological and physical qualities of a performer’s motion. Namely, I look at the ways in which those qualities can affect the functioning of a digital musical instrument and, vice versa, how the functioning of a digital musical instrument can affect the physiological and physical qualities of a performer’s motion. This theoretical and technical knowledge will be used to develop performances to be exhibited publicly and evaluated through peer review, so as to feed back into the research process.
This work, it is hoped, will contribute a methodology and a set of computational tools which, on the one hand, produce novel ways to understand and design musical expression with body technologies and, on the other, encourage performance practices where human and technological bodies complement each other’s capabilities.”
The core research questions are:
Q1: Can notions from body theory inform the design of digital musical instruments?
Q2: If so, does the performance of live electronic music change and how?
Q3: Can physiological sensing provide an entry point to a performance model that goes beyond control?
If you have comments, please don’t hesitate to write to me; I’m eager to discuss these ideas and thoughts.
“My painting is not violent; it’s life that is violent… Sexuality, human emotion, everyday life, personal humiliation… violence is part of human nature… Even within the most beautiful landscape, in trees, under the leaves the insects are eating each other; violence is part of life.”
My first thought on reading this interview was that we should take real care of the few artists who are able to reflect on this topic. There will always be only a few of them, and we need them badly, to remind us of what everybody wants to forget.
Here’s the complete article at Aphelis.net, “an iconographic and text archive related to art, communication and technology”.
The following is the introduction of an essay entitled “Taken Apart and Put Together: Human, Machine and Sound Technologies”. The title draws on a passage by Donna Haraway. I’ve just completed the essay for an upcoming book on art, science, computation and life. The text is also part of my ongoing research into the combination of body theory and machine learning for biophysical musical instruments and performance strategies. In this sense, it gives a glimpse of the path I’m following. Hope the text can give you some ideas. Enjoy the reading…
UPDATE: Added some useful references at the bottom of the article.
One of the unique characteristics of sound lies in the fact that, among all the energy waves that surround us, acoustic waves actively influence the human body at different levels, from intangible emotional arousal to the physical induction of resonances, from pleasure and melancholy to stillness and dance. From another point of view, for the human being sound is sound only when it is heard. It is in our ears that acoustic waves become a discernible event that we can feel, listen to and define as a sound. Because the experience of sound is intrinsically linked to the human body, the performance of music and the design of musical instruments have, for the most part, been grounded in body motion. An obvious example is provided by the centuries-old history of traditional musical instrument making. String, wind and percussion instruments are all based on an interaction model whereby the kinetic energy of the human body excites the body of the instrument, which then produces sound. With the arrival of analog synthesisers, the role of the human body in the performance of sound technologies shifted from sourcing acoustic energy to the instrument to controlling its modulation parameters. Switching knobs, plugging and removing electrical cables, or tapping on a switch became understood as another gesture vocabulary. The idea of the body as a means to control sound technologies was then strengthened by the coming of digital computation, in the form of laptop computers and digital music software. Orchestras of ‘virtual synthesisers’ are operated by simply typing on a keyboard and moving a mouse. The performance of laptop music minimises the impact of the player’s physical effort on sound production in exchange for easy access to a virtually infinite number of musical parameters and operations.
With the increasing accessibility of technologies that gather information on a user’s gestural motion, and with the rise of portable devices on the market, the performance of sound technologies has gone through a re-evaluation of the player’s physical engagement in the production of sound. This is the first core concern of this essay.
How to leverage a player’s physical engagement with sound and computational technologies to enable the creation of new sounds and exciting performance strategies?
In which ways can we move beyond a metaphor of control towards an open-ended and mutual mediation of the player and the instrument?
It would be misleading, however, to discuss the shifting characterisation of the body in the performance of electronic music without adding to the picture the equally mutable characterisation of the body in the cultural milieu. The human body is cultural and political. It is cultural in that it is the active means by which public cultural artifacts are conceived and made. It is political because it is the subject and object of social strategies involving power, ethics and belief. It follows that the cultural understanding of the human body has important, yet often overlooked, repercussions on the understanding of the player’s body in a public musical performance.
Before the establishment of a dedicated field of study, the body was discussed as merely an operational part of the overarching structure of society. This approach prompted a reductive understanding of the human body, rendered most notably through the views of essentialism and social constructivism. The former, indebted to Descartes’ philosophy of dualism, sees the human being as an entity split between the higher activity of the mind and the lower fleshly mechanisms of the body. Social constructivism, on the other hand, sees human beings as individuals whose subjectivity is constructed through the social making of culture. Thereby social constructivism accounts for neither the body nor the world we live in as means of production of the social, but rather as results of it. Public critical discourses addressing the body directly first arose from the second feminist wave during the ’60s and ’70s. The feminist critique addressed the objectification of the female body by a male-dominated power structure. Specifically, a broad public debate focused on issues like family, sexuality, workplace conditions, and reproductive rights. Since then, a sociology of the body, or body theory, has been established. Sociologists, philosophers, and scientists have investigated the body as the site where and through which identity, gender, religion, and knowledge are nurtured. These perspectives have in turn fostered the cultural analysis of the relation between the machine and the human body. Namely, the humanities have elaborated new definitions of what it means to be human in view of the increasingly intimate integration of the human body with machinic, computational, and biological technologies. This is the second core issue of the present work.
How can we discuss critically the integration of man and machine at a cultural and political level?
How can such understanding inform the way we design technological musical instruments and the related performance strategies?
The term ‘integration’ is used above purposely to indicate not a mere pairing of the machine and the human body, but the intermixing of two things that have long been segregated. The integration of the human and the machine is not intended at a metaphysical level, but at a practical one. Biomedical technologies, or biotech for short, have provided us with an entry point to a still largely unexplored territory where human and technological bodies blend together. DNA molecules are used to perform computations in test tubes, the entire human genome is stored and categorised in digital databases accessible via the Web, artificial organs are used to replace malfunctioning human organs, and electrical and mechanical signals produced by physiological processes are channelled through the circuits of robotic prostheses that enable human beings to restore their bodies. This is the result of a long history of research in the field of biomedical engineering and its branches, which include neural, genetic and tissue engineering, medical implants and prosthetics, and biomedical equipment design. Far from wanting to centre this work on biomedical engineering, it will be shown how the knowledge produced in that field has been re-appropriated by performance art practice towards the critical questioning of human physicality and biology.
To summarise, the previous paragraphs have mapped out the broad areas of study that this essay will touch upon, namely a) the performance of sound technologies, b) body theory, and c) biomedical engineering. Within those areas we have identified the core points of interest, which are i) an open-ended and mutual mediation of a player and the instrument that leverages physical engagement, and ii) the cultural and political nuances of the integration of human and machine and their relevance to musical instrument design. In order to elaborate a perspective that embraces such diverse topics we will make use of two key concepts: unfinishedness and biomediation. Unfinishedness relates to the incomplete nature of the human being and the ways such nature makes us bound to extend our bodies through technologies. Biomediation refers to the intermixing of humans and machines at the physical and biological level, and to the modalities whereby this extends the expressive capacity of both the human and the technological bodies. In the remainder of this essay we review the origins and most relevant developments of those two key concepts in philosophy and cultural studies, and then discuss their applications in artistic performance, with a special focus on the role of sound technologies.
Some core references the full essay draws upon:
Bedau, M. and P. Humphreys
2008. Emergence: Contemporary Readings in Philosophy and Science. MIT Press.
Braidotti, R.
2013. The Posthuman. Cambridge, UK, Malden, MA, USA: Polity Press.
Clough, P. T.
2008. The Affective Turn: Political Economy, Biomedia and Bodies. Theory, Culture & Society, 25(1):1–22.
Gray, C. H., H. Figueroa-Sarriera, and S. Mentor
1995. The Cyborg Handbook. New York and London: Routledge.
Haraway, D. J.
2003. The Companion Species Manifesto: Dogs, People, and Significant Otherness. Prickly Paradigm Press.
Hayles, N. K.
1999. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University Of Chicago Press.
Massumi, B.
2002. The Evolutionary Alchemy of Reason. In Parables for the Virtual: Movement, Affect, Sensation. Durham: Duke University Press.
Maturana, H. R. and F. J. Varela
1980. Autopoiesis and Cognition: The Realization of the Living. Springer.
Mulhall, S.
1996. Heidegger and Being and Time. London and New York: Routledge.
Shilling, C.
2012. The Body and Social Theory. Third Edition. Sage Publications Ltd.
Simondon, G.
1992. The Genesis of the Individual. In Incorporations (Zone 6), J. Crary and S. Kwinter, eds., Pp. 297–319. Zone Books.
2002. Towards a Compliant Coupling: Pneumatic Projects, 1998–2001. In The Cyborg Experiments: The Extension of the Body in the Media Age, Pp. 73–77. Bloomsbury Academic.
Tanaka, A.
2011. BioMuse to Bondage: Corporeal Interaction in Performance and Exhibition. In Intimacy Across Visceral and Digital Performance, M. Chatzichristodoulou and R. Zerihan, eds., Pp. 1–9. Basingstoke: Palgrave Macmillan.
Thacker, E.
2003. What is Biomedia? Configurations, 11(1):47–79.
Turner, B. S.
1992. Regulating Bodies: Essays in Medical Sociology. Taylor & Francis.
Varela, F. J., E. Rosch, and E. Thompson
1991. The Embodied Mind: Cognitive Science and Human Experience. MIT Press.
Waisvisz, M.
2006. Panel Discussion: Manager or Musician? About Virtuosity in Live Electronic Music. In International Conference on New Interfaces for Musical Expression, May, P. 415, Paris, France.
The text that follows is the program note for a panel discussion moderated by Michel Waisvisz at NIME 2006, published in the Proceedings of the 2006 International Conference on New Interfaces for Musical Expression (NIME06), Paris, France.
Thank you Michel for the ever inspiring words.
Do we operate our electronic systems or do we play them?
enchantment versus interaction
if our goal is musical expression we have to move beyond designing technical systems.
we have to move beyond symbolic interaction.
we have to transcend responsive logic;
engage with the system:
power it and touch it with our bodies,
with our brains.
invent it and discover its life;
embrace it as instrument.
an instrument that sounds between our minds.
we will have to operate beyond pushing buttons and activating sensors
beyond isolating gestures and mapping data and parameters
beyond calculating response
beyond assuming that the concept will create music
we should abolish the illusion of ‘control’
merge our intentions into those of the instrument and the audience
get inspired by change, miscalculation, invested instinct, insightful anticipation, surprise and failure
the sensors, the logic, the artistic debate, the technical debate, the circuits, the theories about perception, the new war-driven technologies, the ability or dis-ability to communicate, the conferences, the endless experimentation with system tweaks, the touch and sound, the reoccurring state of disbelief, the craving for the stage, the difficult and great collaborations, composing the now, the survival of the electronic music scenes, the nime, the industry, the independents, the musical fun, the appreciation of difference, the body as source of electrical and musical energy, the bonding of thinkers, the rapid improvisers,
… just to extrapolate some ingredients and vehicles of our quest.
it might work if we manage to express ourselves musically by moving beyond interaction,
beyond mere technical beliefs and disbelief.
by engaging, by trusting ourselves into the potential of our new instruments,
enchanting our sounds, our audience.
enchantment is not only a state of mind,
it is a technology
designing for new musical expression is casting a spell on instrumental practice.
Michel Waisvisz, Limerlé, May 2006
In an attempt to map out the origins of the different perspectives on the technological body in the performing arts, I began compiling different timelines, looking at music technology, performance art, sociology of the body, systems theory, natural sciences, and history. Now I’m in the process of grasping the core theories in those disciplines and situating them in their historical context. The next step is to find links and connections that, by sweeping across disciplines, will reveal the cultural, technological and social modalities by which the body, in the technological performance of art, has been formed in the way we know it today. It is complex research and it is proving difficult to stay focused on the subject matter, yet it is incredibly fascinating to gain a detailed map of this topic, and I’m confident it has the potential to be an interesting contribution both to the community and to my own practice.
I’ll be updating the blog with further notes and thoughts as they emerge. Click the image below to enlarge.
This picture pretty much sums it all up. Given the interest of motion-gesture researchers in using mostly cognitive science to understand sound-making motion, there is room to explore a politically engaged perspective that straightforwardly addresses the (player’s) body.
The body is a political entity, and body-based music performance produces political messages. A body is political because it touches the aspirations and the “open wounds” of everyone’s cultural and emotional background. Think not only of gender, identity, self-acceptance, who I am, who I want to be, who I pretend I am, but also of everyday life as a part of a larger society of bodies. We are bodies that interact with and affect other bodies countless times a day, and together we put forth our world. What can a body do? What can multiple bodies do? How are our bodies modified by the instruments we use? And how does this come down to the design of new musical instruments?
The politics of physical musical performance might not be as evident as in performance art, yet it is enacted with or without the player’s will. This is valid for the overall practice of music performance, and is particularly true when it comes to works that augment the body with technology. The question is: how can we make the design of physical musical instruments and performance with (bio)technologies politically significant?
An interesting perspective is forming by drawing upon the notions below. The summer will be the time for me to write a related literature review and thus combine this knowledge into an original form.
effort, a body whose physicality defines and mediates with the musical instrument (Ryan, 1991);
emergence, a body always in process (Massumi, 2002);
enactment, a body that brings forth a world by knowing (Maturana and Varela, 1992);
biomediation, a body reconfigured by media practices that condition its biology (Thacker, 2003).
Maturana, H. R., and Varela, F. J. 1992. The Tree of Knowledge. Shambhala.
Massumi, B. 2002. Parables of the virtual: Movement, affect, sensation. Durham N.C: Duke University Press.
Ryan, J. 1991. Some remarks on musical instrument design at STEIM. Contemporary Music Review, 6(1):3-17.
Thacker, E. 2003. What is Biomedia? Configurations, 11(1):47–79.
It’s an unexpectedly hot day in Amsterdam. STEIM, the unique Studio for Electro-Instrumental Music, is invaded by sounds coming from all the studios while the artists get ready for the big night. A chilled atmosphere, an Indian dinner, and doors open! If you missed this Summer Party at STEIM, you missed a lot. Great music and awesome performers packed into one of the most important places for sound and music performance of the past 30 years.
I was flattered to be invited to perform a 30-minute concert of biophysical music, including my latest piece Ominous, and to share the stage with Laetitia Sonami & James Fei, Mazen Kerbaj, Sybren Danz, and Daniel Schorno. It was a great experience, a delicious auditory journey, and an emotional leap into the feeling of being part of such a community. So thanks everyone: the staff at STEIM (Esther, Jon, Marije, Kees, Nico, Lucas) and the audience that spread around STEIM and warmly welcomed us all.
Some sketches and notes I am currently working on with Baptiste Caramiaux and Atau Tanaka towards the creation of a corporeal musical space generated by biological gesture; that is, by the complex behaviour of different biosignals during performance.
General questions: why use different biosignals in a multimodal musical instrument? How can machine learning algorithms be meaningfully deployed for improvised music?
Machine learning (ML) methods in music are generally used to recognise pre-determined gestures. The risk in this case is that a performer ends up being concerned with performing gestures in a way that allows the computer to understand them. On the other hand, ML methods could be used to represent an emergent space of interaction that would allow the performer freedom of expression. This space shall not be defined beforehand, but rather created and altered dynamically according to any gesture.
An unsupervised ML method shall represent in real time complementary information from the EMG/MMG signals. The output shall be rendered as an axis of the space of interaction (x, y, …). As the relations between the two biosignals change, the number of axes and their orientation change as well. The space of interaction is constantly redefined and reconstructed. The aim is to perform the body as an expressive process, and to leave to the machine the task of representing this process by extracting information that could not be understood without the aid of computing devices.
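To make this idea a little more concrete, here is a minimal sketch of such an emergent space: the axes are the principal directions of a running estimate of the joint EMG/MMG feature covariance, so they are never fixed beforehand and drift as the performer’s gestures change. Everything here (the `InteractionSpace` class, the forgetting factor, the variance threshold) is an illustrative assumption, not part of our actual system.

```python
import numpy as np

class InteractionSpace:
    """Sketch of an emergent interaction space: axes are the principal
    directions of the joint EMG/MMG feature stream, re-estimated as new
    frames arrive, so the space is never defined beforehand."""

    def __init__(self, n_features, decay=0.99):
        self.mean = np.zeros(n_features)
        self.cov = np.eye(n_features) * 1e-6
        self.decay = decay  # forgetting factor: older gestures fade out

    def update(self, frame):
        # Exponentially weighted running mean and covariance
        self.mean = self.decay * self.mean + (1 - self.decay) * frame
        d = frame - self.mean
        self.cov = self.decay * self.cov + (1 - self.decay) * np.outer(d, d)

    def axes(self, min_var=0.05):
        # Principal directions; the number of 'active' axes changes with
        # how the two biosignals currently relate to each other
        vals, vecs = np.linalg.eigh(self.cov)
        order = np.argsort(vals)[::-1]
        vals, vecs = vals[order], vecs[:, order]
        keep = vals / vals.sum() > min_var
        return vecs[:, keep], vals[keep]

    def project(self, frame):
        # Coordinates of the current gesture frame in the emergent space
        vecs, _ = self.axes()
        return (frame - self.mean) @ vecs
```

Feeding each new feature frame to `update()` and reading back `axes()` yields a space whose dimensionality expands or shrinks as the relations between the biosignals change, which is the behaviour sketched above.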
Each kind of gesture shall be represented within a specific interaction space. The performer would then control sonic forms by moving within, and outside of, the space. The ways in which the performer travels through the interaction space shall be defined by a continuous function derived from the gesture-sound interaction.
Following our previous study on biophysical and spatial sensing, we narrowed down the focus of our research and constrained a new study to MMI with 2 biosignals only. Namely, we focused on the mechanomyogram (MMG) and electromyogram (EMG) of arm muscle gesture. Although there is research in New Interfaces for Musical Expression (NIME) focused on each of these signals, to the best of our knowledge their combination has not been investigated in this field. The following questions initiated this study: In which ways can we analyse EMG/MMG for complementary information about gestural input? How can a musician control the two biosignals separately? What are the implications at the level of the sensorimotor system? And how can novices learn to skilfully control the two modalities?
Our interest in conducting a combined study of the two biosignals lies in the fact that they are both produced by muscle contraction, yet they report different aspects of muscle articulation. The EMG is a series of electrical impulses sent by the brain to cause muscle contraction. The MMG is the sound produced by the oscillation of the muscle tissue as it extends and contracts.
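As an illustration (a sketch, not the processing chain used in the study), a common first step for comparing two such signals is to reduce each to an amplitude envelope by rectification and smoothing; the function name, window length, and test signal below are assumptions made for the example.

```python
import numpy as np

def envelope(signal, sr, window_ms=100):
    """Amplitude envelope: remove the DC offset, full-wave rectify,
    then smooth with a moving-average window of roughly window_ms."""
    win = max(1, int(sr * window_ms / 1000))
    rectified = np.abs(signal - np.mean(signal))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Hypothetical example: a short 'contraction' burst sampled at 1 kHz
sr = 1000
t = np.linspace(0, 1, sr, endpoint=False)
burst = np.sin(2 * np.pi * 50 * t) * ((t > 0.4) & (t < 0.6))
env = envelope(burst, sr)  # peaks where the muscle is active
```

Applied to time-aligned EMG and MMG recordings of the same contraction, the two envelopes can then be compared for timing and amplitude differences.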
In order to learn about the differences and similarities of EMG and MMG, we looked at the related biomedical literature, and found comparative EMG/MMG studies in the fields of sensorimotor and kinesis research. We selected aspects of gestural exercise where complementary MMG/EMG information exists. For the interested reader, further details will soon be available in our related NIME paper.
We used those aspects of muscle contraction to design a small gesture vocabulary to be performed by non-expert players with a bi-modal, biosignal-based interface created for this study. The interface was built by combining two existing separate sensors, namely the BioMuse for the EMG and the Xth Sense for the MMG signal. Armbands with EMG and MMG sensors were placed on the forearm. One MMG channel and one EMG channel were acquired from each user’s dominant arm over the wrist flexors, a muscle group close to the elbow joint that controls finger movement.
In order to train the users with our new musical interface, we designed three sound-producing gestures. The EMG and MMG are independently translated into sound, so that every time a user performs one of the gestures, one or the other sound, or a combination of the two, is heard. Users were asked to perform the gestures twice: the first time without any instruction, and the second time with a detailed explanation of how the gesture was supposed to be executed. At the end of the experiment, we interviewed the users about the difficulty of controlling the two sounds. We studied the players’ ability to articulate the two modalities through video and sound recordings, and analysed their interviews.
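A minimal sketch of the independent signal-to-sound mapping described above might look as follows; the thresholds and gain scaling are hypothetical values for illustration, not those used in the study.

```python
import numpy as np

def control_values(emg_env, mmg_env, emg_thresh=0.1, mmg_thresh=0.1):
    """Map each biosignal envelope independently to a gain in [0, 1].
    A gesture can raise one gain, the other, or both, so the listener
    hears one sound, the other, or a combination of the two."""
    emg_gain = float(np.clip(emg_env, 0.0, 1.0)) if emg_env > emg_thresh else 0.0
    mmg_gain = float(np.clip(mmg_env, 0.0, 1.0)) if mmg_env > mmg_thresh else 0.0
    return {"emg_sound_gain": emg_gain, "mmg_sound_gain": mmg_gain}
```

Because each gain is driven by its own signal, a player who learns to articulate EMG and MMG separately gains independent control over the two sounds.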
The results of the study showed that: 1) MMG/EMG provide a richer bandwidth of information on gestural input; 2) MMG/EMG complementary information varies with contraction force, speed, and angle; 3) novices can rapidly learn how to control the two modalities independently.
These findings motivate a further exploration of an MMI approach to biosignal-based musical instruments. Future work should look at the further development of our bi-modal interface by designing discrete and continuous multimodal mappings based on EMG/MMG complementary information. Moreover, we should look at custom machine learning methods that could be useful in representing and classifying gesture via combined muscle biodata, and data-mining techniques that could help identify meaningful relations among the biodata. Such information could then be used to imagine new applications for bodily musical performance, such as an instrument that is aware of its player’s expertise level.
Pictures by Baptiste Caramiaux, and Alessandro Altavilla.
I kicked off my PhD studies by extending the Xth Sense, a new biophysical musical instrument based on a muscle sound sensor I developed, with spatial and inertial sensors, namely whole-body motion capture (mocap) and an accelerometer. The aim: to understand the potential of combined sensor data for the multimodal control of new musical interfaces.
An excerpt from our related paper:
“In the field of New Interfaces for Musical Expression (NIME), sensor-based systems capture gesture in live musical performance. In contrast with studio-based music composition, NIME (which began as a workshop at CHI 2001) focuses on real-time performance. Early examples of interactive musical instrument performance that pre-date the NIME conference include the work of Michel Waisvisz and his instrument, The Hands, a set of augmented gloves which captures data from accelerometers, buttons, mercury orientation sensors, and ultrasound distance sensors (developed at STEIM). The use of multiple sensors on one instrument points to complementary modes of interaction with an instrument. However these NIME instruments have for the most part not been developed or studied explicitly from a multimodal interaction perspective.”
Together with my team colleagues Baptiste Caramiaux and Atau Tanaka, I designed a study of bodily musical gesture. We recorded and observed the sound-gesture vocabulary of my performance entitled Music for Flesh II.
The data recorded from the gestures were: 2 mechanomyogram signals (MMG, or muscle sound), mocap data, one 3D vector from the accelerometer, and the 3D positions and quaternions of my limbs.
We created a public multimodal dataset and performed a qualitative analysis of those data. By using a custom patch to visualise and compare the different types of data (pictured below) we were able to observe complementarity of different forms in the information collected. We noted 3 types of complementarity: synchronicity, coupling, and correlation. You can find the details of our findings in the related Work in Progress paper, published at the recent TEI conference on Tangible, Embedded, and Embodied Interaction in Barcelona, Spain.
To summarise, our findings show that different types of sensor data do have complementary aspects; these might depend on the type and sensitivity of the sensor, and on the complexity of the gesture. Besides, what might seem a single gesture can be segmented into sections that present different kinds of complementarity among the different modalities. This points to the possibility for a performer to engage with richer control of musical interfaces by training on the multimodal control of different types of sensing devices; that is, the gestural and biophysical control of musical interfaces based on a combined analysis of different sensor data.
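As one minimal, assumed illustration of how such complementarity could be quantified (our analysis in the paper was qualitative), pairwise correlation between time-aligned sensor streams flags pairs that carry redundant versus complementary information:

```python
import numpy as np

def complementarity_report(streams):
    """Pairwise Pearson correlation between time-aligned sensor streams
    (e.g. MMG envelope, accelerometer magnitude, mocap joint speed).
    Correlations near 0 suggest two modalities carry complementary
    rather than redundant information about the gesture."""
    names = list(streams)
    report = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = np.corrcoef(streams[a], streams[b])[0, 1]
            report[(a, b)] = round(float(r), 3)
    return report
```

Streams with correlation near ±1 largely duplicate one another, while values near 0 suggest the modalities capture complementary aspects of the gesture, so such a report could guide which sensor combinations are worth a performer’s training time.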