Category Archives: embodied interaction

TEI’25

I’ve recently returned from the Nineteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI’25), hosted by the Université de Bordeaux. This was my first experience of TEI and it was really insightful and encouraging to meet and hear from international researchers and creative practitioners at the forefront of tangible interaction in its many forms. This annual conference is essentially an international community of talented makers keen to constructively critique one another’s research and practice.

I was very happy to show an installation, ‘Suitcase Stories’, which was included in the conference Arts and Performance track and exhibited at the Capc Contemporary Art Museum, Bordeaux. I proposed Suitcase Stories in response to the Arts and Performance call, which asked artists to propose an artwork that fits into a suitcase – somewhat inspired by La Boîte-en-Valise [Box in a Suitcase] by Marcel Duchamp – while also responding to the conference theme of sustainability.

A number of themes I had previously been considering fed into the ideation process. Firstly, the 17 United Nations Sustainable Development Goals, many of which are concerned with improving social and personal factors, not just developing infrastructure, although these aspects are evidently interrelated. Secondly, lived experience and personal testimony of climate change – I personally find first-hand accounts of subjects such as flooding and drought very moving. Thirdly, bodily interaction with typography as a way to develop a near-physical relationship with narrative.

This led me to develop the following proposal:

The Suitcase Stories installation facilitates embodied interaction with typographic representations of firsthand accounts of climate change. The work aspires to create a powerful and visceral connection between the participant and individuals affected directly by environmental changes, thus developing greater awareness and propensity towards positive action. Participants interact with powerful and often emotive narratives through upper body movement and gesture, discovering and playing with words and phrases. The entire artwork is integrated within a vintage suitcase, adding aesthetic and thematic dimensions to the user experience. Through this installation, the author sets out to test whether interactive typography, controlled through body tracking, can offer a unique platform for disadvantaged voices to be heard in a powerful and immersive way.  

I developed the following lo-fi mock-ups to support the proposal.

Although I had already created bodily-interactive typography some years ago (e.g. Journey Words), that work relied on now-defunct Quartz Composer patches, and recently I have been keen to push the limits of TouchDesigner. So, with just a weekend to create a prototype from scratch in time for the application deadline, I managed to develop the following demo in TouchDesigner.

My proposal was thankfully accepted and I started work in earnest on the installation following the Christmas break. My first task was to find suitable first-hand accounts of climate change that I could legitimately use. After some correspondence with organisations such as the Climate and Migration Coalition, it became apparent that several first-hand accounts published by organisations such as Oxfam are already widely shared, with due reference, across media platforms. In the end I re-used and referenced the following accounts:

Extract from the account of Marcos Choque, 67-year-old inhabitant of Khapi, in the Bolivian Andes. Oxfam International. 2009. Bolivia: Climate change, poverty and adaptation. Oxfam International, La Paz, Bolivia. 
Extract from the account of Sefya Funge, 38-year-old farmer in Adamii Tulluu-Jido Kombolcha, Ethiopia. Oxfam International. 2010. The Rain Doesn’t Come on Time Anymore – Poverty, Vulnerability, and Climate Variability in Ethiopia. Oxfam International, Oxford, UK. 
Extract from the account of Tulsi Khara, 70-year-old inhabitant of the Sundarbans, in the Ganges delta, India. Quoted in: WWF. 2005. Climate Witness: Tulsi Khara, India. Retrieved from: https://wwf.panda.org/es/?22345/Climate-Witness-Tulsi-Khara-India
Extract from the account of Rosalie Ticala, 33-year-old typhoon survivor of Baganga, Mindanao Island, Philippines. Thomson Reuters Foundation. 2013. Typhoon survivor Rosalie tells her story in Philippines. Retrieved from: https://news.trust.org/item/20130214164700-xrwmo 
Extract from the account of Fatay and Zulaikar, husband and wife of a pastoralist family in Badin district, Sindh, Pakistan. Quoted in: Climate & Development Knowledge Network. 2012. Disaster-proofing your village before the floods – the case of Sindh, Pakistan. Retrieved from: https://cdkn.org/story/disaster-proofing-your-village-before-the-floods-%25e2%2580%2593-the-case-of-sindh-pakistan 
Extract from the account of Assoumane Mahamadou Issifou, NGO employee in Agadez, Niger. Quoted in: Gavi, the Vaccine Alliance. 2023. Ten eyewitness reports from the frontline of climate change and health. Retrieved from: https://www.gavi.org/vaccineswork/ten-eyewitness-reports-frontline-climate-change-and-health
Extract from the account of Parbati Ghat, 58-year-old community health volunteer in Melauli, Baitadi, Nepal.  Quoted in: BBC Media Action, Living Climate Change. 2024. Mosquitoes in the mountains, Nepal. Retrieved from: https://www.bbc.co.uk/mediaaction/our-work/climate-change-resilience/living-climate-change/nepal-malaria 
Extract from the account of Lomilio Ewoi, pastoralist in Kenya. Quoted in: BBC Media Action, Living Climate Change. 2024. The flood that took everything, Kenya. Retrieved from: https://www.bbc.co.uk/mediaaction/our-work/climate-change-resilience/living-climate-change/kenya-mental-health 

The TouchDesigner development process was quite time-consuming, although I managed to reach an outcome by resolving challenges in key stages. My original demo essentially mixed a Kinetic Typography tutorial by bileam tschepe (elekktronaut) with a 3D Grid with Depth Camera tutorial by Motus Art. This resulted in a grid of letters extruded using depth information provided by a RealSense D435 camera. However, I wanted to show only the letters extruded by the body, and not those on the background grid, while keeping the left-to-right flow of text, as shown in the diagram below.

I managed to do this by using a threshold filter to isolate the image pixels above a certain greyscale level (and therefore considered to be part of the body), then using this information to reorganise the text indices for each grid cell so that text only appeared on cells extruded by the body.
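To make that concrete, here is a minimal sketch of the masking-and-remapping step in Python/NumPy. The installation itself does this with TouchDesigner operators; the grid size, threshold value and function names below are purely illustrative assumptions.

```python
import numpy as np

GRID_W, GRID_H = 40, 22   # resolution of the letter grid (assumed)
THRESHOLD = 0.35          # greyscale level above which a cell counts as body (assumed)

def assign_text_to_body_cells(depth_image: np.ndarray, text: str) -> dict:
    """Map successive characters of `text` onto grid cells covered by the body.

    `depth_image` is a (GRID_H, GRID_W) array of normalised depth values in
    [0, 1], downsampled to one value per grid cell. Returns a
    {(row, col): character} mapping for body cells only, preserving the
    left-to-right, top-to-bottom reading order of the text.
    """
    body_mask = depth_image > THRESHOLD   # True where the body extrudes a cell
    mapping = {}
    i = 0
    for row in range(GRID_H):             # scan the grid in reading order
        for col in range(GRID_W):
            if body_mask[row, col]:
                mapping[(row, col)] = text[i % len(text)]
                i += 1                    # advance through the text only on body cells
    return mapping
```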

With artistic works featuring typography, there is often a tension between aesthetics and legibility. I wanted participants to enjoy playing with the text, forming ‘word sculptures’ with their bodies, but ultimately they needed to be able to read it. To this end, I developed a phrase-cycling function that sequentially picked out individual phrases when the participant stopped moving and, after a short delay, showed them as pure text overlaid along the bottom of the screen. Although it perhaps seemed slightly obvious to do so, I wanted to ensure 100% legibility of these important stories.
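The logic of that phrase-cycling function looks roughly like the sketch below, written here as plain Python rather than the TouchDesigner callbacks actually used; the class name, thresholds and timings are illustrative placeholders.

```python
STILLNESS_THRESHOLD = 0.02   # movement level below which the participant counts as still (assumed)
DWELL_SECONDS = 1.5          # how long they must stay still before a phrase appears (assumed)

class PhraseCycler:
    def __init__(self, phrases: list):
        self.phrases = phrases
        self.index = 0           # next phrase to show, cycling sequentially
        self.still_since = None  # time at which the participant last became still

    def update(self, movement_level: float, now: float):
        """Call once per frame; returns a phrase to overlay, or None."""
        if movement_level > STILLNESS_THRESHOLD:
            self.still_since = None                   # movement resets the dwell timer
            return None
        if self.still_since is None:
            self.still_since = now                    # participant has just become still
        if now - self.still_since >= DWELL_SECONDS:
            phrase = self.phrases[self.index]
            self.index = (self.index + 1) % len(self.phrases)
            self.still_since = now                    # re-arm for the next phrase
            return phrase
        return None
```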

The fabrication side of the project came together quite serendipitously: I found a medium-sized, yet robust, vintage suitcase in an antique bazaar which just happened to be a perfect fit for an old monitor. I used photography mounting components to hold the lid of the suitcase open and support the weight of the monitor, then created a mount for the RealSense camera in the suitcase lid. After several failed attempts at getting the RealSense camera working with my MacBook (Intel has discontinued official RealSense support for macOS), I managed to borrow a gaming PC laptop and house that in the body of the suitcase. The last additions were a fan to cool the laptop and some suitable material to stretch across the interior of the suitcase (which by now had several holes cut into it). You can see the results in the images below.

I had some really useful and insightful feedback from the conference attendees and also a number of productive conversations about using technology to explore human narratives – essentially Digital Humanities. I’m taking a break from this project for a short while to reflect upon where it might go next. In the meantime, if you’re interested in reading the accompanying ACM short paper, you can find it here: https://dl.acm.org/doi/pdf/10.1145/3689050.3707677

And here is a screen recording demonstrating the phrase selection functionality as it was presented at TEI’25.

Initial testing of The Multitude

The Earth Elemental Scene

Producers Collusion and I recently tested The Multitude at Cambridge Junction in preparation for its exhibition in late June.

Cambridge Junction

The installation requires a technically demanding set-up with multiple screens, speakers and programmable lights.

Routing leads
Checking positioning

We had a couple of technical glitches, as was to be expected, but managed to do a few run-throughs with testers in the late afternoon.

Our very first testers!
More testers
These two should know what they’re doing (artist and producer)

Generally, the installation worked well, but there are some tweaks to be made:

  • Some of the voices need to be re-scripted slightly and re-recorded
  • The sound design generally works, although it needs to be completed
  • The aesthetics need developing more for some scenes
  • Object placement needs to be improved to make the most of the three-screen set-up
  • The core interactions require minor tweaking
  • Secondary interactions need to be added to give players something to do during the character introductions

There is actually quite a long list of minor tasks that need to be completed in the coming weeks; the above are just the highlights based on user feedback.

A side panel from the Air Elemental scene, one of the more visually resolved scenes.

Next steps: more development and testing!

Introducing Vision Mixers

I am pleased to be working with Cambridge-based art tech producers, Collusion, on a new interactive installation to be exhibited in Cambridge in late 2020. The venue will hopefully be Sook Cambridge.

Sook is an ‘adaptive retail space’ that can be easily customised to suit typical retail media requirements and booked by the hour. They have also experimented with hosting more artsy events. Sook at the Grafton Centre, Cambridge, features a number of grouped screens and addressable lighting. It’s the perfect environment for immersive audio-visual content, but naturally there are issues of interoperability to be addressed when installing a bespoke system that features real-time interaction.

The original project elevator pitch was “a social distancing compatible, two-person installation that makes use of whole-body interaction to explore inter-personal energies and connections within a 360° audio-visual environment”.

Early mock-up.

Since acceptance of the initial proposal, this project has already come a long way, with writer, artist and multi-talented creative person Anna Brownsted joining to help develop narrative elements. Anna and I, plus Rachel Drury and Rich Hall from Collusion, all took part in a socially distanced residency at Cambridge Junction in early August in order to get things moving.

There is a significant creative concept, currently in development, that will drive all narrative components and frame the interactions. I’ll write some more about this in due course, but for now will focus on documenting the residency and discussing key intentions for the project. I should add that this project is part of my ongoing research inquiry into gestural and embodied interaction within public-facing art. Non-technical areas of knowledge that I seek to develop are:

  • the nature of agency within public space and how this is mediated through performative interaction
  • the mechanics of co-interaction, especially the compete-collaborate axis
  • non-contact interaction design, especially in the context of the current Covid-19 crisis
  • the use of narrative to frame interactive experience

The project involves a relatively complex technical set-up, with a single Kinect and host computer driving multiple screens and, eventually, surround audio and lighting via DMX. The Kinect (v2) runs through Unity and allows pretty robust skeleton tracking, providing the foundation for full-body interactive gestures. I will cover technical aspects more fully at a later date but, of course, expect there to be much learning in this area.

Here are a few photos and a video that illustrate the prototypes that were developed and tested during the residency.

Simple visualisation of users and movement of hands.
When the hands move faster, fireballs are produced!
Lots of fireballs produced by lots of movement.
Testing a user duplication routine.
Picking up ‘leaves’, testing in Sook.

The video shows the fireball mechanic being tested at Sook.
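For anyone curious how such a mechanic can be driven from skeleton data, the sketch below shows the basic idea: compare the position of a tracked hand joint between frames and trigger a fireball when its speed exceeds a threshold. The project implements this in Unity on Kinect v2 joint data; the Python below, and all names and thresholds in it, are purely illustrative.

```python
import math

FIREBALL_SPEED = 1.8  # hand speed (metres/second) above which a fireball spawns (assumed)

def hand_speed(prev_pos, curr_pos, dt):
    """Speed of a hand joint between two skeleton frames; positions are (x, y, z) in metres."""
    dx, dy, dz = (c - p for p, c in zip(prev_pos, curr_pos))
    return math.sqrt(dx * dx + dy * dy + dz * dz) / dt

def should_spawn_fireball(prev_pos, curr_pos, dt):
    """True when the hand moved fast enough this frame to produce a fireball."""
    return hand_speed(prev_pos, curr_pos, dt) > FIREBALL_SPEED

# e.g. at 30 fps: should_spawn_fireball((0.1, 1.2, 2.0), (0.4, 1.3, 2.0), 1 / 30) -> True
```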

The residency was massively informative in terms of validating the core interactions and kick-starting the project, especially by forming the team. It was a refreshing novelty to work in the same room as others after lockdown, even if we were sanitising left, right and centre! We also established some key technical parameters. It’s a really exciting, although slightly daunting, project, especially as it runs in parallel with my full-time teaching responsibilities! More updates soon.