Res, a matter.

Recently I’ve been dedicating a good deal of time to the software framework in Pure Data. After some early DSP experiments and the improvements to the MMG sensor, I had a fairly clear idea of how to proceed further.
The software implementation actually began some time before the MMG research, as a fork of C::NTR::L, a free, interactive environment for live media performance based on score following and pitch recognition, which I developed and publicly released under the GPL last year.
When I started this investigation I decided to pick up from where I had left off last year, so as to take advantage of previous experience, methods and ideas.

MMG signal processing framework | v 0.6.1 | 2010

The graphic layout has been designed using the free, open source applications Inkscape and GIMP.

The present interface consists of:

  • a workspace in which the user can dynamically load, connect and remove several audio processing modules (top)
  • a sidebar which allows switching among 8 different workspaces (top right)
  • some empty space reserved for utility modules, such as a timebase and monitoring modules (middle)
  • a channel strip to control each workspace’s volume and send amount (bottom)
  • a square area used to load diverse panels, such as the routing panel visible in the image (mid to bottom right)

Modules and panels are dynamic, which means they can be moved and swapped in a click for fast and efficient prototyping.
So far, several audio processing modules have been implemented:

  • a feedback delay
  • a chorus
  • a timestretch object (for compression and expansion) based on looped sampling
  • a single-sideband modulation object (thanks to Andy Farnell for the tip about the efficiency of SSB modulation compared with tape modulation)
  • what I call a grunger, namely a module consisting of a chain of reverb, distortion, bandpass filter and SSB pitch shifting (thanks to my supervisor Martin Parker for the insight about pitch-shifting the wet signal of the reverb)
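The Pd patches themselves are not shown here, but the idea behind the single-sideband module can be sketched outside Pure Data. The following Python fragment (an illustrative sketch, not the actual patch, which would use quadrature filters such as Pd's `hilbert~`) shifts every partial of a signal by a fixed amount in Hz via the analytic signal — which is precisely why SSB modulation is cheaper and cleaner than emulating tape-style pitch modulation:

```python
import numpy as np
from scipy.signal import hilbert

def ssb_shift(x, shift_hz, sr=44100):
    """Single-sideband frequency shifting via the analytic signal.

    Unlike plain ring modulation, which produces both sum and
    difference sidebands, multiplying the analytic signal by a
    complex exponential shifts every partial by the same amount."""
    analytic = hilbert(x)                      # x + j * Hilbert(x)
    t = np.arange(len(x)) / sr
    return (analytic * np.exp(2j * np.pi * shift_hz * t)).real

# Shift a 440 Hz sine up by 100 Hz: the output energy sits near 540 Hz,
# with no mirror sideband at 340 Hz.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
out = ssb_shift(tone, 100.0, sr)
```

Note that, like any frequency shifter, this moves partials by a constant offset rather than a constant ratio, which is exactly the inharmonic colouring the grunger chain exploits.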

Another interesting process I was able to implement is a calibration system: it enables the performer to calibrate software parameters according to the varying intensity of the contractions of each finger, the whole hand or the forearm (so far the MMG sensor has been tested in performance only on the forearm).
This process is proving extremely useful, as it allows the performer to customize the responsiveness of the hardware/software framework and to generate up to 5 different control data streams by contracting each finger, the hand or the whole forearm.
The calibration code is a little rough, but it already works. I believe that exploring this method further could unveil exciting prospects.
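The gist of such a calibration stage can be sketched as follows (a hypothetical Python illustration, not the actual Pd code — the gesture names and the min/max scheme are assumptions): record the range of envelope values each contraction produces, then rescale incoming sensor values so that every gesture spans the full 0–1 control range regardless of its absolute strength:

```python
class GestureCalibration:
    """Per-gesture calibration sketch: store the envelope range
    measured for each contraction type, then normalise live values
    into a 0..1 control range."""

    def __init__(self):
        self.ranges = {}  # gesture name -> (min, max) envelope values

    def calibrate(self, gesture, samples):
        """Record the value range observed during a calibration pass."""
        self.ranges[gesture] = (min(samples), max(samples))

    def control(self, gesture, value):
        """Map a live sensor value to 0..1 for this gesture."""
        lo, hi = self.ranges[gesture]
        if hi == lo:
            return 0.0
        # clip to the calibrated range, then normalise
        return min(max((value - lo) / (hi - lo), 0.0), 1.0)

cal = GestureCalibration()
cal.calibrate("index_finger", [0.02, 0.05, 0.11, 0.08])
print(cal.control("index_finger", 0.065))  # → 0.5
```

A weak finger contraction and a strong forearm contraction thereby drive their respective parameters over the same usable range, which is what makes the responsiveness customizable per performer.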

MMG sensor early calibration system | 2010

On the 7th of December I’m going to present the current state of the inquiry to the research staff of my department at Edinburgh University; I will present a short piece using the software described above and the MMG sensing device. On the 8th we have also arranged an informal concert at our school in Alison House, so I’m looking forward to testing the work done so far live.