Category Archives: performance

360° Audiovisual Performance at Art Tech Play March 2026

On March 18th 2026 I performed an audiovisual work which, although unnamed and effectively still a work in progress, is the product of research and development over the previous 6 months or so. The performance took place within the 10-metre diameter, 4-metre high, circular LED screen at Norwich University of the Arts’ Immersive Visualisation and Simulation Lab, as part of the Collusion-led Art Tech Play programme.

The work was inspired by the classic quote from the Bhagavad Gita, Chapter 11, Verse 32, here in one of its various translations:

“I am Time, the mighty cause of world destruction, Who has come forth to annihilate the worlds.”

Translated by Winthrop Sargeant (1984)

The work explores themes of destruction, natural resilience, and human responsibility (or lack of it) and uses TouchDesigner to create visual outcomes that respond in real time to performed electronic music, using both instrument MIDI data and audio data as inputs. Although I have been experimenting with audiovisual performance for some time, this work is somewhat new territory for me as the content is largely videographic.

It is easier than ever to collect libraries of videos, many offered free by individual videographers. Some are highly composed, others more naturalistic. Increased ownership of high-quality video cameras and the proliferation of drones are both leading to greater numbers of ‘citizen videographers’ producing compelling video content, shared via sites such as https://www.pexels.com/.

The principal mechanic of the work is a noise grid used to mix different elements of video that change according to a weighting of the following factors:

  • Random/Noise
  • Musical time (i.e. changing every 2 bars)
  • Response to specific instrument data (e.g. a kick drum)
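The weighting logic can be sketched in Python along these lines. This is a minimal illustration only: the function names, weights and the kick-drum trigger are assumptions, not the actual TouchDesigner patch.

```python
import random

def cell_value(beat, kick_hit, weights=(0.4, 0.4, 0.2)):
    """Blend the three factors into one 0-1 value for a grid cell.

    weights = (random/noise, musical time, instrument response);
    all names and numbers here are illustrative.
    """
    w_noise, w_time, w_inst = weights
    noise = random.random()          # random/noise factor
    bar_phase = (beat // 8) % 2      # flips every 2 bars of 4/4 (8 beats)
    inst = 1.0 if kick_hit else 0.0  # response to a specific instrument
    return w_noise * noise + w_time * bar_phase + w_inst * inst

def pick_video(beat, kick_hit, n_videos):
    """Map the blended 0-1 value to an index into the cell's video pool."""
    v = cell_value(beat, kick_hit)
    return min(int(v * n_videos), n_videos - 1)
```

Changing the weights shifts the character of the grid from fully random to tightly musical.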

The graphic below demonstrates the relationship between the noise grid and resulting video collage.

Content is organised into ‘scenes’ which map directly to scene buttons on the Roland MC-707 hardware used to drive the sound. Each scene has a number of parameters including:

  • A folder of video files to feed into the grid
  • A possible overlay layer to show on top of the grid
  • Parameters that relate to the randomisation of video grid divisions (i.e. a minimum and maximum number of columns and rows)
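As a rough sketch, each scene’s data might be held in a structure like the following. The folder names, overlay file and parameter ranges are invented for illustration; the real data lives inside TouchDesigner.

```python
import random

# Illustrative scene table keyed by hardware scene-button index.
SCENES = {
    0: {"folder": "videos/forest", "overlay": None,
        "cols": (2, 6), "rows": (2, 4)},
    1: {"folder": "videos/fire", "overlay": "overlays/smoke.mov",
        "cols": (4, 10), "rows": (3, 6)},
}

def grid_size(scene_id):
    """Randomise the grid divisions within the scene's min/max bounds."""
    s = SCENES[scene_id]
    return random.randint(*s["cols"]), random.randint(*s["rows"])
```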

The graphic below demonstrates the relationship between the hardware scene buttons and the scene data stored in TouchDesigner.

For some scenes, a monochrome overlay is shown on top of the video grid.

Incidental performative effects are mapped to audio parameters. For example, modulating the release of the snare envelope (how long or short the drum sounds) results in the video grid cells rotating to the left or right.
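As an illustration of this kind of mapping, a MIDI CC value can be converted to a signed rotation. The centre point and angle range here are assumptions, not the actual patch values.

```python
def cc_to_rotation(cc_value, max_degrees=45.0):
    """Map a 0-127 MIDI CC (e.g. snare envelope release) to a signed
    rotation angle: centre (64) = no rotation, below = rotate left,
    above = rotate right. The angle range is illustrative."""
    return (cc_value - 64) / 64.0 * max_degrees
```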

A classic performance effect within many electronic music genres is ‘beat repeat’, where a small sample of audio is repeated in time with the music. By using MIDI information to trigger audio beat repeats, a visual equivalent can also be triggered. In the case of a 16th note (semiquaver) beat repeat, a visual equivalent builds from left to right every semiquaver. Other repeat intervals are represented similarly, e.g., quaver, triplet etc.
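The left-to-right build could be sketched like this; the column count and beat arithmetic are illustrative rather than the actual implementation.

```python
def repeat_cells(elapsed_beats, division=0.25, n_cols=16):
    """Number of grid columns lit so far in a beat-repeat build.

    division=0.25 beats is a 16th note (semiquaver); the build advances
    one column per repeat and wraps around after n_cols columns.
    """
    step = int(elapsed_beats / division)
    return (step % n_cols) + 1
```

Passing `division=0.5` would give the quaver equivalent, and so on for other intervals.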

A master audio sweep filter, another common electronic music performance effect, is aligned with post visual filters, i.e. the final stages of the visual composition process. A black-and-white edge-detection image is used for the audio lowpass, and a high-brightness ‘blow out’ for the audio highpass.
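One way to sketch that alignment is to treat the sweep as a bipolar position and derive a mix amount for each visual filter. The -1..1 range and names are assumptions for illustration.

```python
def sweep_to_visuals(filter_pos):
    """Map a bipolar master filter sweep (-1 = full lowpass,
    +1 = full highpass) to visual filter mix amounts.
    Negative sweeps fade in the edge-detection image;
    positive sweeps fade in the high-brightness blow-out."""
    edge_mix = max(0.0, -filter_pos)
    blowout_mix = max(0.0, filter_pos)
    return edge_mix, blowout_mix
```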

The work was adapted for the 360° screen relatively quickly, so it was very much a test. However, I did have the chance to perform the piece twice, each time for around 15-20 minutes, which gave me a good opportunity to test out the performance mappings.

Although I don’t have a decent video capture of the event, fellow artist @oculardelusion kindly posted a couple of videos on Instagram – https://www.instagram.com/p/DWFMEk8AqM9/

Feedback from the night was generally positive, although I didn’t get the chance to talk to that many people in depth as I was busy performing and also co-organising the night! One interesting comment I received was that it was notable, and possibly unusual, for an AV performer to be looking at the large screen rather than the computer screen during a performance. In fact the computer screen had nothing on it! But more importantly, this felt like the right thing to do in a large-scale improvised performance situation.

The research leads me to consider a number of themes for potential future exploration.

Theme #1: Mapping and Translation Strategies

How do different MIDI-to-visual mapping approaches affect both performer agency and audience perception?  E.g., exploration of one-to-one mappings versus many-to-many relationships, examining when literal translation (note-to-shape) versus abstracted relationships work better, potentially developing a framework for “meaningful” versus “arbitrary” mappings based on practical investigation.

Theme #2: Collage Semantics & Meaning Making

What happens when found footage collage meets electronica’s functional dance context? E.g., investigating whether narrative fragments emerge from triggering choices, how the juxtaposition of source materials creates meaning (or deliberately avoids it), and the politics of source material selection – whose images, from what contexts, and how does recontextualisation operate?

Theme #3: Dramaturgical Structure

Electronica has inherited DJ culture’s arc of tension/release across a set. How do visuals participate in this dramaturgical structure? E.g., a taxonomy of visual intensities that parallel musical energy (breakdowns, drops, rollers) might be developed, and/or one might investigate whether visuals can work against the music’s energy to create productive friction.

Realtime AV using Processing and Tooll3

I thought I’d better post the outcome of a recent experiment using Processing and Tooll3 together to transform incoming MIDI notes and controller information into realtime visual output.

The MIDI and sounds are generated by hardware synths – Roland SH-4D and Elektron Model Cycles.

Roland SH-4D
Elektron Model Cycles

Processing is used to create and manipulate animated geometric shapes that correspond to notes produced by individual synth tracks, building on previous work undertaken in p5.js. After significant stress testing, it turned out that Java-based Processing is slightly better suited to this purpose than JavaScript-based p5.js, e.g., in relation to response time and overall stability.

Processing forwards the live image using the Spout library to Tooll3, a free VJ platform, which is used to add effects. Tooll3 has amazing visual capabilities although it suffers from a somewhat limited MIDI implementation and is currently PC-only.

Check the results out below.

If you’re interested in more detail, read on.

Processing is used to create block shapes for each MIDI note. Each instrument is assigned to a unique MIDI channel and this distinction is used to create a separate line of shapes for each instrument.

Drum shapes (kick drum, snare etc.) have fixed dimensions, all other shapes are influenced by MIDI note duration and pitch.

The slope of a note shape is controlled by MIDI controller information, for Synth 1 this relates to the frequency cutoff of the sound.
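Pulling those rules together, the per-note mapping might look something like the sketch below. The channel numbers, constants and scaling factors are all invented for illustration; the actual Processing sketch will differ.

```python
def note_shape(channel, pitch, duration_beats, cutoff_cc):
    """Derive block-shape parameters from one MIDI note.

    Each MIDI channel gets its own horizontal line of shapes; a drum
    channel (assumed 10 here) gets fixed dimensions, while other
    channels scale with note duration and pitch. A CC value (e.g.
    filter cutoff for Synth 1) skews the shape's slope.
    """
    row_y = channel * 60                  # one line of shapes per instrument
    if channel == 10:                     # drums: fixed-size blocks
        w, h = 20, 20
    else:
        w = 30 + duration_beats * 80      # longer notes -> wider blocks
        h = 10 + (pitch / 127.0) * 50     # higher pitch -> taller blocks
    slope = (cutoff_cc - 64) / 64.0       # -1..1 skew from the CC value
    return {"y": row_y, "w": w, "h": h, "slope": slope}
```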

Each note shape has many additional vertices along each side which can be displaced using random values to create a wobbling effect. In this case, Synth 1 shapes wobble in response to MIDI pitch modulation data – e.g., the classic ’80s synth pitch modulation.
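The wobble can be sketched as subdividing each edge and randomly displacing the intermediate points, with the displacement amount driven by the pitch-modulation depth. This is a Python sketch of the idea only; the real version runs per-frame in Processing.

```python
import random

def wobble_edge(x0, y0, x1, y1, n_verts=12, amount=0.0):
    """Subdivide one shape edge into n_verts segments and displace each
    intermediate vertex vertically by a random offset scaled by `amount`
    (e.g. the current pitch-mod depth). amount=0 gives a straight edge."""
    pts = []
    for i in range(n_verts + 1):
        t = i / n_verts
        x = x0 + (x1 - x0) * t
        y = y0 + (y1 - y0) * t + random.uniform(-amount, amount)
        pts.append((x, y))
    return pts
```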

Processing is also used to colour the shapes; this is triggered either manually using a MIDI controller or via non-rendering MIDI information. Essentially, I used one of the Elektron sequencer channels to control visual changes matching the current phrase. The MIDI data from this channel did not itself produce shapes.

All the shapes are drawn in 2D with a scale factor derived from notional z-axis values using the following simple perspective calculation, where fl represents the focal length.

scale = fl / (fl + z)
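In code, that calculation is simply the following; the focal-length value is illustrative.

```python
def perspective_scale(z, fl=300.0):
    """Simple perspective: scale = fl / (fl + z), where fl is a notional
    focal length. z=0 gives scale 1; larger z shrinks the shape."""
    return fl / (fl + z)
```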

I had considered that I might change perspective values on the fly but in the end didn’t pursue this.

Processing is great for handling MIDI and drawing shapes but not so good for post-processing which is where an application like Tooll3 comes in handy. It was straightforward enough to pipe the fullscreen Processing image through to Tooll3 using Spout and then apply various image filters as shown below.

Tooll3 applying image effects to the Spout input provided by Processing.

The MIDI visual control track was used to trigger and switch between these various effects. I didn’t get on very well with the MIDI implementation offered by Tooll3 but it is possible this has since improved.

This was a fun project to work on and Tooll3 is quite inspiring, although it doesn’t have anywhere near the level of community support that TouchDesigner does. I’m planning to investigate the latter more thoroughly as a realtime AV platform some time soon.

Introducing Vision Mixers

I am pleased to be working with Cambridge-based art tech producers, Collusion, on a new interactive installation to be exhibited in Cambridge in late 2020. The venue will hopefully be Sook Cambridge.

Sook is an ‘adaptive retail space’ that can be easily customised to suit typical retail media requirements and booked by the hour. They have also experimented with hosting more artsy events. Sook at the Grafton Centre, Cambridge, features a number of grouped screens and addressable lighting. It’s the perfect environment for immersive audio-visual content, but naturally there are issues of interoperability to be addressed when installing a bespoke system that features realtime interaction.

The original project elevator pitch was “a social distancing compatible, two-person installation that makes use of whole-body interaction to explore inter-personal energies and connections within a 360° audio-visual environment”.

Early mock-up.

Since acceptance of the initial proposal, this project has already come a long way, with writer, artist and multi-talented creative person Anna Brownsted joining to help develop narrative elements. Anna, Rachel Drury and Rich Hall from Collusion, and I all took part in a socially-distanced residency at Cambridge Junction in early August in order to get things moving.

There is a significant creative concept, currently in development, that will drive all narrative components and frame the interactions. I’ll write some more about this in due course, but for now will focus on documenting the residency and discussing key intentions for the project. I should add that this project is part of my ongoing research inquiry into gestural and embodied interaction within public-facing art. Non-technical areas of knowledge that I seek to develop are:

  • the nature of agency within public space and how this is mediated through performative interaction
  • the mechanics of co-interaction, especially the compete-collaborate axis
  • non-contact interaction design, especially in the context of the current Covid-19 crisis
  • the use of narrative to frame interactive experience

The project involves a relatively complex technical set-up with a single Kinect and host computer driving multiple screens and eventually surround audio and lighting, via DMX. The Kinect (v2) runs through Unity and allows pretty robust skeleton tracking, therefore providing the foundation for full-body interactive gestures. I will cover technical aspects more fully at a later date, but of course, expect there to be much learning in this area.

Here are a few photos and a video that illustrate the prototypes that were developed and tested during the residency.

Simple visualisation of users and movement of hands.
When the hands move faster, fireballs are produced!
Lots of fireballs produced by lots of movement.
Testing a user duplication routine.
Picking up ‘leaves’, testing in Sook.

The video shows the fireball mechanic being tested at Sook.

The residency was massively informative in terms of validating the core interactions and kick-starting the project, especially by forming the team. It was a refreshing novelty to work in the same room as others after lockdown, even if we were sanitising left, right and centre! We also established some key technical parameters. It’s a really exciting, though slightly daunting, project, especially as it runs in parallel with my full-time teaching responsibilities! More updates soon.

Hi Siri

Last Friday evening I was working with producer/writer/choreographer Lydia Fraser-Ward and dancer/physical actor Philippa Hambly at Rich Mix, Bethnal Green, where Hi Siri was presented as part of Women of Mass Destruction 3. Hi Siri featured pre-recorded and live interactive visuals created with the latest version of the Kinect sensor, which is basically even better all round than the first version, itself quite excellent.

Hi Siri is about a relationship between a woman, Iris, and her phone, Siri. It begins with Siri being plugged into the ‘flat OS’ thus giving her control of all domestic services, signalling the start of a general takeover bid by Siri upon Iris’ life. The interaction design follows a trajectory from Siri being a jumble of curves and lines to a recognisably human (albeit digital) form.

It was great to work within the performance environment for a change. After a lot of pre-design I only had to press a few buttons on the night, but obviously they still had to be the right buttons in the right order! That 20m USB extension finally came in handy. Thankfully the performance went well and congratulations are due all round. A couple of stills follow…

Hi Siri

Hi Siri