Category Archives: performance

Realtime AV using Processing and Tooll3

I thought I better post the outcome of a recent experiment using Processing and Tooll3 together to transform incoming MIDI notes and controller information into realtime visual output.

The MIDI and sounds are generated by hardware synths – Roland SH-4D and Elektron Model Cycles.

Roland SH-4D
Elektron Model Cycles

Processing is used to create and manipulate animated geometric shapes that correspond to notes produced by individual synth tracks, building on previous work undertaken in p5.js. After significant stress testing, it turned out that Java-based Processing is slightly better suited to this purpose than JavaScript-based p5.js, particularly with respect to response time and overall stability.

Processing forwards the live image using the Spout library to Tooll3, a free VJ platform, which is used to add effects. Tooll3 has amazing visual capabilities although it suffers from a somewhat limited MIDI implementation and is currently PC-only.

Check the results out below.

If you’re interested in more detail, read on.

Processing is used to create block shapes for each MIDI note. Each instrument is assigned to a unique MIDI channel and this distinction is used to create a separate line of shapes for each instrument.
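A minimal sketch of that routing logic, written here in Python for brevity (the real sketch is in Processing); the channel numbers, row spacing and size mappings are illustrative, not taken from the actual code:

```python
# Map each MIDI channel (one per instrument) to its own horizontal
# row of shapes. Layout values are illustrative only.

ROW_HEIGHT = 120  # vertical space reserved per instrument row

def row_y(channel, top_margin=60):
    """Return the baseline y position for shapes on a given MIDI channel."""
    return top_margin + channel * ROW_HEIGHT

def note_to_shape(channel, pitch, duration_ms):
    """Build a simple shape description for an incoming note."""
    return {
        "y": row_y(channel),
        "width": duration_ms * 0.1,   # longer notes -> wider blocks
        "height": 20 + pitch * 0.5,   # higher pitch -> taller blocks
    }

shape = note_to_shape(channel=2, pitch=60, duration_ms=500)
```

Because the channel alone decides the row, every instrument keeps its own lane of shapes regardless of what notes it plays.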

Drum shapes (kick drum, snare etc.) have fixed dimensions, all other shapes are influenced by MIDI note duration and pitch.

The slope of a note shape is controlled by MIDI controller information; for Synth 1, this is tied to the filter cutoff frequency of the sound.
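One way to express that mapping, again as an illustrative Python sketch rather than the actual Processing code (the slope range is a made-up value):

```python
# Map a 7-bit MIDI CC value (0..127) to a shape slope.
# max_slope is an illustrative range, not the real sketch's value.

def cc_to_slope(cc_value, max_slope=0.8):
    """Linearly map controller value 0..127 to slope -max_slope..+max_slope."""
    normalized = cc_value / 127.0        # 0.0 .. 1.0
    return (normalized * 2.0 - 1.0) * max_slope

cc_to_slope(0)    # fully closed filter -> steepest negative slope
cc_to_slope(127)  # fully open filter  -> steepest positive slope
```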

Each note shape has many additional vertices along each side which can be displaced using random values to create a wobbling effect. In this case, Synth 1 shapes wobble in response to MIDI pitch modulation data, e.g. the classic '80s synth pitch modulation.
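The displacement idea can be sketched like this (illustrative Python, assuming each side of a shape is held as a list of points; the real implementation is in Processing):

```python
import random

def wobble_edge(points, amount):
    """Displace the interior vertices of one edge by random offsets.

    `points` is a list of (x, y) tuples along one side of a note shape.
    The endpoints are left fixed so the shape stays anchored; `amount`
    would be driven by the incoming pitch-modulation depth.
    """
    out = [points[0]]
    for x, y in points[1:-1]:
        out.append((x + random.uniform(-amount, amount),
                    y + random.uniform(-amount, amount)))
    out.append(points[-1])
    return out
```

With `amount` at zero the edge stays a straight line; as modulation depth rises, the extra vertices jitter and the whole shape appears to wobble.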

Processing is also used to colour the shapes; this is triggered either manually via a MIDI controller or by non-rendering MIDI information. Essentially, I used one of the Elektron sequencer channels to control visual changes matching the current phrase. The MIDI data from this channel did not itself produce shapes.

All the shapes are drawn in 2D with a scale factor derived from notional z-axis values using the following simple perspective calculation, where fl represents the focal length.

scale = fl / (fl + z)
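In code, the calculation is a one-liner (the focal length value here is illustrative):

```python
def perspective_scale(z, fl=300.0):
    """Scale factor for a shape at notional depth z, with focal length fl."""
    return fl / (fl + z)

perspective_scale(0)    # 1.0 - shapes at z = 0 are drawn full size
perspective_scale(300)  # 0.5 - shapes at z = fl are drawn half size
```

Shapes further along the notional z axis shrink towards zero, giving a cheap sense of depth without a full 3D pipeline.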

I had considered that I might change perspective values on the fly but in the end didn’t pursue this.

Processing is great for handling MIDI and drawing shapes but not so good for post-processing which is where an application like Tooll3 comes in handy. It was straightforward enough to pipe the fullscreen Processing image through to Tooll3 using Spout and then apply various image filters as shown below.

Tooll3 applying image effects to the Spout input provided by Processing.

The MIDI visual control track was used to trigger and switch between these various effects. I didn’t get on very well with the MIDI implementation offered by Tooll3 but it is possible this has since improved.

This was a fun project to work on and Tooll3 is quite inspiring, although it doesn’t have anywhere near the level of community support that TouchDesigner does. I’m planning to investigate the latter more thoroughly as a realtime AV platform some time soon.

Introducing Vision Mixers

I am pleased to be working with Cambridge-based art tech producers, Collusion, on a new interactive installation to be exhibited in Cambridge in late 2020. The venue will hopefully be Sook Cambridge.

Sook is an ‘adaptive retail space’ that can be easily customised to suit typical retail media requirements and booked by the hour. They have also experimented with hosting more artsy events. Sook at the Grafton Centre, Cambridge, features a number of grouped screens and addressable lighting. It’s the perfect environment for immersive audio-visual content, but naturally there are issues of interoperability to be addressed when installing a bespoke system that features realtime interaction.

The original project elevator pitch was “a social distancing compatible, two-person installation that makes use of whole-body interaction to explore inter-personal energies and connections within a 360° audio-visual environment”.

Early mock-up.

Since acceptance of the initial proposal, this project has already come a long way, with writer, artist and multi-talented creative person Anna Brownsted joining to help develop narrative elements. Anna, Rachel Drury and Rich Hall from Collusion, and I all took part in a socially distanced residency at Cambridge Junction in early August in order to get things moving.

There is a significant creative concept, currently in development, that will drive all narrative components and frame the interactions. I’ll write some more about this in due course, but for now will focus on documenting the residency and discussing key intentions for the project. I should add that this project is part of my ongoing research inquiry into gestural and embodied interaction within public-facing art. Non-technical areas of knowledge that I seek to develop are:

  • the nature of agency within public space and how this is mediated through performative interaction
  • the mechanics of co-interaction, especially the compete-collaborate axis
  • non-contact interaction design, especially in the context of the current Covid-19 crisis
  • the use of narrative to frame interactive experience

The project involves a relatively complex technical set-up, with a single Kinect and host computer driving multiple screens and, eventually, surround audio and DMX-controlled lighting. The Kinect (v2) runs through Unity and allows pretty robust skeleton tracking, providing the foundation for full-body interactive gestures. I will cover technical aspects more fully at a later date, but of course expect there to be much learning in this area.

Here are a few photos and a video that illustrate the prototypes that were developed and tested during the residency.

Simple visualisation of users and movement of hands.
When the hands move faster, fireballs are produced!
Lots of fireballs produced by lots of movement.
Testing a user duplication routine.
Picking up ‘leaves’, testing in Sook.

The video shows the fireball mechanic being tested at Sook.

The residency was massively informative in terms of validating the core interactions and kick-starting the project, especially by forming the team. It was a refreshing novelty to work in the same room as others after lockdown, even if we were sanitising left, right and centre! We also established some key technical parameters. It’s a really exciting, if slightly daunting, project, especially as it runs in parallel with my full-time teaching responsibilities! More updates soon.

Hi Siri

Last Friday evening I was working with producer/writer/choreographer Lydia Fraser-Ward and dancer/physical actor Philippa Hambly at Rich Mix, Bethnal Green, where Hi Siri was presented as part of Women of Mass Destruction 3. Hi Siri featured pre-recorded and live interactive visuals created with the latest version of the Kinect sensor, which is even better all round than the first version, itself quite excellent.

Hi Siri is about a relationship between a woman, Iris, and her phone, Siri. It begins with Siri being plugged into the ‘flat OS’ thus giving her control of all domestic services, signalling the start of a general takeover bid by Siri upon Iris’ life. The interaction design follows a trajectory from Siri being a jumble of curves and lines to a recognisably human (albeit digital) form.

It was great to work within the performance environment for a change. After a lot of pre-design I only had to press a few buttons on the night, but obviously they still had to be the right buttons in the right order! That 20m USB extension finally came in handy. Thankfully the performance went well and congratulations are due all round. A couple of stills follow…

Hi Siri

Hi Siri