Category Archives: R&D

MIDI > P5

I’ve been re-exploring p5.js (a JavaScript library for creative coding) recently and am pleased to see a growing number of additions that make it look more and more like a fully-fledged creative platform.

As a proof of concept I created a simple sketch that visualises basic geometric shapes in response to incoming MIDI notes. I used the excellent p5.scribble.js library and the super useful WEBMIDI.js utility.

Shapes drawn using p5.scribble.js

A sub-optimal directory structure, but good enough for a proof of concept.

WEBMIDI.js is well documented and easy to use – see the WEBMIDI Basics documentation.
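Enabling MIDI input only takes a few lines. Here is a minimal sketch using the WEBMIDI.js v3 promise-based API; the port selection is illustrative and will depend on your own set-up:

```js
// Minimal WEBMIDI.js (v3) set-up: enable Web MIDI, then listen on the first
// available input. A port can also be chosen by name, e.g. via
// WebMidi.getInputByName("loopMIDI Port") – the name here is just an example.
WebMidi.enable()
  .then(() => {
    const input = WebMidi.inputs[0]; // first detected MIDI input
    input.addListener("noteon", (e) => {
      console.log(`note ${e.note.number} on, attack ${e.note.attack}`);
    });
  })
  .catch((err) => console.error("Web MIDI could not be enabled:", err));
```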

Following OOP principles, I created custom drawing classes for the specific shapes that I wanted to visualise, each with its own draw method making use of appropriate p5.scribble functions.
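For illustration, a shape class along these lines is all that is needed – the class name and values here are mine for the sake of example, not lifted from the repo:

```js
// A hand-drawn circle that responds to note velocity. Assumes p5.scribble.js
// is loaded and a Scribble instance has been created in setup(), e.g.
// scribble = new Scribble();
class KickShape {
  constructor(x, y, velocity) {
    this.x = x;
    this.y = y;
    this.size = map(velocity, 0, 127, 20, 120); // louder hit, bigger shape
  }

  draw() {
    stroke(255, 80, 80);
    noFill();
    scribble.scribbleEllipse(this.x, this.y, this.size, this.size);
  }
}
```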

In Ableton, I created a simple drum pattern.

I then sent the outgoing MIDI to a virtual port via loopMIDI.

Back in p5, I added a listener for the incoming MIDI notes, instantiated the relevant drawing class for each note and passed the new object to a ‘BlobManager’, which is responsible for managing the time-limited visibility of each note shape.
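The manager itself can be very small. This is a sketch of the idea rather than the code from the repo – the class shape and the 500 ms lifetime are illustrative:

```js
// Keeps each shape on screen for a fixed time after its note arrives,
// then culls it. update() is called once per frame from p5's draw().
class BlobManager {
  constructor(lifetimeMs = 500) {
    this.lifetimeMs = lifetimeMs;
    this.blobs = []; // entries of the form { shape, bornAt }
  }

  add(shape) {
    this.blobs.push({ shape, bornAt: millis() });
  }

  update() {
    // drop shapes whose lifetime has expired, draw the rest
    this.blobs = this.blobs.filter((b) => millis() - b.bornAt < this.lifetimeMs);
    for (const b of this.blobs) b.shape.draw();
  }
}
```

The MIDI listener then just maps a note number to the right class and calls something like manager.add(new KickShape(x, y, velocity)), with everything else handled in the draw loop.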

It all went smoothly and resulted in a fairly responsive, if very simple, audio-visual proof of concept. Interestingly, when I ported the code to good old-fashioned Processing, which uses Java rather than JavaScript, I saw no difference in performance or latency, which makes me think that p5 is just as good a choice as Processing for creative experimentation.

If you are interested in more detail, here is the code, offered without liability, warranty or support! – https://github.com/jamiegledhill/P5-MIDI-notes

Initial testing of The Multitude

The Earth Elemental Scene

Producers Collusion and I recently tested The Multitude at Cambridge Junction in preparation for exhibition in late June.

Cambridge Junction

The installation requires a technically demanding set-up with multiple screens, speakers and programmable lights.

Routing leads
Checking positioning

We had a couple of technical glitches, as was to be expected, but managed to do a few run-throughs with testers in the late afternoon.

Our very first testers!
More testers
These two should know what they’re doing (artist and producer)

Generally, the installation worked well, but there are some tweaks to be made:

  • Some of the voices need to be re-scripted slightly and re-recorded
  • The sound design generally works although needs to be completed
  • The aesthetics need developing more for some scenes
  • Object placement needs to be improved to make the most of the three-screen set-up
  • The core interactions require minor tweaking
  • Secondary interactions need to be added to give players something to do during the character introductions

There is actually quite a long list of minor tasks to complete in the coming weeks; the above are just the highlights based on user feedback.

A side panel from the Air Elemental scene, one of the more visually resolved scenes.

Next steps: more development and testing!

Vision Mixers >> The Multitude

It’s been a while since I last posted about the Vision Mixers project, not least because of two intervening lockdowns in England and general Covid-19 uncertainty. Despite all this, and a very busy start to the academic year, I have been progressing steadily with project development.

The good news is that the project, now known as ‘The Multitude’, has just been awarded an Arts Council England grant to support further development, testing and exhibition. This is obviously a welcome development and means that the work will be shown in Cambridge and Norwich, and potentially in another city, over the coming months.

So, the general aim of the project is to create a ‘playable experience’ for two, lasting 15–20 minutes, in which the players step through a series of interactive scenarios. The experience is framed by a narrative construct that pits the players against ‘a demon’ that has recently cursed humanity, an act that serves as an allegory for the current pandemic. Part of my interest in the narrative component comes from the idea of myth-making, particularly in the face of danger and crisis. I was moved by the excellent Fairy-Tale Virus by Sabrina Orah Mark, published in the Paris Review, to think about how a modern fairy tale might be constructed to help mediate the current crisis.

The 1954 film Godzilla is an example of a modern fairy tale (they’re often violent and feature monsters!) that was developed in the aftermath of the atomic destruction of Hiroshima and Nagasaki less than a decade earlier.

Godzilla in a scene from the film. © Toho Co. Ltd. ALL RIGHTS RESERVED

In The Multitude, the players cannot defeat the evil demon directly and must instead enlist the help of the four ‘elementals’: Earth, Water, Fire and Air. By fulfilling a task for each of the elementals, which forms the basis of player interaction for each scene, the players hope to awaken a sleeping army of ancient warriors known as ‘the multitude’, hence the name.

The research aims of the project have now been tweaked slightly. They are to investigate:

  • the nature of agency and how this is mediated through performative interaction
  • the mechanics of co-interaction
  • non-contact interaction design, especially in the context of the current Covid-19 crisis
  • the use of narrative to frame interactive experience

The research methodology is essentially to make the initial version of the work, conduct a round of early-stage usability testing, iterate and then conduct more extensive audience testing at each exhibition opportunity. Usability testing will cover the nuts and bolts of interaction design and hopefully identify any significant problem areas. Audience testing will gauge sentiment and reaction, looking at, for example, the success or otherwise of the narrative-interaction mix.

The project is already quite sizeable and will ultimately amount to many thousands of lines of code. A significant research output will be a technical review, though this will be framed within the context of interaction design rather than the discipline of computer programming. Coding is a means to an end in this case! For information purposes: the platform is Unity and interaction is achieved through the Microsoft Kinect for Windows.

This video gives an overview of the development work completed up to November 2020.

Being Human 2019 – Evaluation

The session began with an insightful discussion about museum display item descriptions in general and whether they are read. One opinion was that they are good to have on hand but may often not actually be read in full. However, the group seemed to agree that it is generally better to have more detail than less.

This reminds me of the classic inverted pyramid (‘Inverted pyramid (journalism)’, 2019) writing technique where the reader delves into as much detail as they desire, assuming that the reach of the pyramid matches the reader’s expectation. Following this structure, the body of writing answers all the key questions in brief within the opening sentence(s) before moving on to incidental/secondary detail that may only be of interest to a particular audience.

The workshop proper began by asking participants to examine key texts and then devise keywords to act as labels for these. In the first pass, completed as a group, participants came up with quite different keywords for the same text, perhaps unsurprisingly. A keyword in this context almost certainly denotes personal insight or individual significance to a greater extent than being constructed as a signpost for someone else. To construct signposting keywords for general usage, there would need to be a more systematic approach.

Boudica I

The story of the Queen of the Iceni has fascinated every age from Roman times to the modern day.

She was very tall and severe. Her gaze was penetrating and her voice was harsh. She grew long red hair that fell to her hips and wore a large golden torc and a vast patterned cloak with a thick plaid fastened over it.

Cassius Dio

A Roman writer

An example of a key text.

Participants continued to work through texts individually and associate keywords with chosen samples. They then chose one particular keyword and spent 20 minutes developing a ‘creative outcome’ using paper, pre-printed typographic elements, pens, patterned paper etc. Some thoughtful and attractive versions of keywords were produced. This gave rise to a conversation about the typography of a keyword as an ‘affordance’ (Norman, 2013), i.e. as a signifier of the nature of the respective content, although the term affordance itself wasn’t used directly.

This is an interesting area of discussion, as I had originally conceived the project as a kind of test of textual interaction, to see how much interest could be developed by working with text alone as a user interface concept. The text-only constraint was conceived before I started working with the museum as a collaborative partner and was based on the premise of exploring texts that exist in their own right, rather than relating to a physical object as the museum texts generally do.

In any case, text must be rendered in some typeface, which effectively establishes a ‘typographic voice’ – a widely known graphic design principle. So the choice of font will always have some bearing on perceptions of keyword affordance, and of course some fonts are so embellished with ornamentation and/or iconography as to be at least as pictorial as they are typographic.

There was some discussion about the information hierarchy relating to museum displays, and one suggestion was that it would be great to move from one item to another ‘like using the internet’, which I interpret as the idea of an organic information structure sitting behind a given keyword. This structure might be shallow and wide rather than deep and constrained. The idea chimes with my original concept of being able to traverse an information hierarchy using a keyword alone, but that scenario requires information design of far greater scope than the current prototype.

Although the workshop participants were quite interested in the LEAP controller, it seemed that this was more of a secondary attraction to the discussion and practical investigation of keywords and associated texts. I did manage to get a number of passers-by to try out the prototype and altogether I gleaned some useful insights. One woman managed to really get the hang of the interaction scheme. Initially the controller didn’t pick up her hands and I suggested that she either roll her sleeves up or take off her coat, which she did, showing good commitment! (I have noticed in previous situations that even slightly overhanging sleeves can affect the LEAP controller’s ability to recognise hand position.) It took her a couple of minutes, but then she was navigating, selecting and closing content quite fluidly. She commented that it was a bit like ‘learning to drive’ and that once you got the hang of it, ‘the muscle memory’ took over. She was the success story of the session and her words resonated with me.

Other participants struggled to a greater or lesser extent to master the controls. A younger child (my son) had a go but also struggled somewhat; this may be because smaller hands do not seem to register well with the LEAP.

In general, the collaborating partners agreed that the Being Human 2019 project was a positive experience that helped to develop a new creative partnership and to conduct a preliminary investigation into the design of gestural interactive experiences around textual elements of the museum collection. Although only one of the two planned workshops actually took place, valuable insights were gained:

Consideration of place – the nature of a workshop is affected by the physical space it takes place in, particularly if this is an art gallery.

The relationship between text and object – where text is derived from an object, especially one that is close by, text accompanied by visual reference to the object makes more sense. Conversely, it makes less sense when the visual reference is missing.

The power of preliminary discussion – there was profound and informative discussion about experience design and the female gaze within the museum at the beginning of the workshop.

The challenge of establishing meaningful workshop outcomes – the hybrid format of the workshop, physical and virtual, worked to a certain extent but could be developed to guide participants towards more tangible outcomes, potentially over a longer period of time.

The inherent challenges of gestural interaction – some participants found the interaction tricky. It may be that gestural interaction is more suited to a gamified experience where there is a more distinct and fun ‘pay-off’ for ‘mastering the moves’. In general, it probably needs to be made simpler for a public-facing experience.

The likely benefit of a longer collaborative cycle – with new technology, the first point of contact is effectively an introduction. Subsequent sessions would allow participants to have more meaningful input into experience design, including potentially testing work in progress with members of the public.

References

‘Inverted pyramid (journalism)’ (2019) Wikipedia. Available at https://en.wikipedia.org/wiki/Inverted_pyramid_(journalism) (accessed 24/11/19).

Norman, D. (2013) The Design of Everyday Things. Revised and expanded edn. Basic Books.

Being Human 2019 – First Iteration

After conversations with a number of potential partners, I ended up submitting a proposal to the Being Human Festival 2019 in collaboration with Norwich Castle Museum and Art Gallery. The idea is to develop an experimental interactive experience, using the LEAP Motion Controller (LMC), that can be used to explore hitherto overlooked texts in the museum archive. ‘Touching the Past’ will initially be run as a closed workshop and then at a follow-up event at Norwich Castle during the week commencing the 18th of November.

“’Touching the Past’ is a series of workshops that encourages young people to explore overlooked stories in Norfolk Museums’ collection through digital experience and gestural interaction.”

Promo Image for ‘Touching the Past’

The picture is purely for promotional purposes, as is often the case with events where the promotion takes place before the work has actually been made.

From a production perspective, I chose Unity over Unreal Engine as a LEAP-friendly development platform. This is partly due to some prior familiarity with Unity, which I used to make Play Table, and a greater working knowledge of C# over C++. It was relatively easy to get the LEAP sensor up and running with Unity, although there are a few gotchas, like LEAP currently requiring an older version of Unity to work. Most of the LEAP examples in the latest version of the Unity Orion library are geared towards VR with a head-mounted sensor; some adaptation is required to work with a table-mounted sensor.

The visual metaphor I have chosen for this early work is that of post-it notes. The idea is that the user selects a post-it note containing a single word, which then reveals a more detailed item of text. The detailed view can then be reverted to its original brief form and another keyword chosen for investigation, by rotating through all available post-it notes.

The interaction metaphor is not fixed completely but I am currently experimenting with the following control scheme:

  • An extended hand rotates through the post-its
  • An extended finger selects a post-it
  • A fist closes an opened detail view

Here are a couple of screenshots of the work as it currently stands.

Selecting a post-it
A detail item revealed

Being Human 2019 – Research

After engaging in a period of early ideation, and thinking about the application of gestural interaction within an information-discovery experience, I had a look round a couple of museums to take stock of existing interaction design in context. Of particular note was the Museum of London, visited on the 11th of July 2019, which has a large number of interactive displays incorporating touch. I was particularly interested in the touch experience about disease in London, which had some game-like mechanics. It was well designed, with a bespoke screen and appealing interactive elements; however, for one reason or another it was not working properly. Users were becoming frustrated with the lack of responsiveness to touch, which may have been the result of a dirty or misaligned sensor. On my journey back to Norwich, I reflected upon the maturity and ubiquity of touch-screen design for museums. This led me to consider gestural interaction through motion tracking as an emergent form of interaction design, specifically via the LEAP Motion Controller, or LMC.

The Leap Motion Sensor

This is a small USB device designed to face upwards on a desktop or outwards on the front of a VR headset. It makes use of two infrared cameras and three infrared LEDs to track hand position and movement in 3D space.

Finger Detection with Infrared Lights and Stereoscopic Cameras

From a technical perspective, the LMC is relatively mature, with several iterations of drivers and accompanying API documentation. One point of note is that since the Orion release in February 2016, the LMC only has vendor-supported API libraries for the game development environments Unity and Unreal Engine, meaning that JavaScript, and therefore HTML5 as a platform, is not natively supported. Neither are ‘gestures’ – the LMC versions of the 2D gestures commonly used with hand-held touch-screen devices, e.g. swipe. The emphasis is now on the use of the LMC, which is by nature a 3D-aware controller, in 3D space with VR as the principal target platform. Despite this repositioning of the LMC, thanks to a well-documented API and an active community there are many experimental applications of the technology outside of games and VR.

A useful primer on the topic of the LMC within the context of 3D HCI is the review by Bachmann, Weichert & Rinkenauer (2018), which notes:

  • the use of touchless interaction in medical fields for rehabilitation purposes
  • the suitability of LMC for use in games and gamification
  • its use by children with motor disabilities
  • textual character recognition (‘air-writing’)
  • sign language recognition
  • as a controller for musical performance and composition.

A common concern is the lack of haptic feedback offered by the LMC: “the lack of hardware-based physical feedback when interacting with the Leap Motion … results in a different affordance that has to be considered in future physics-based game design using mid-air gestures” (Moser & Tscheligi, 2015). Given that LEAP has recently (May 2019) been acquired by Ultrahaptics, a specialist in creating the sensation of touch in mid-air, this situation is likely to change.

On the design of intuitive 3D gestures for differing contexts, Cabreira & Hwang (2016) note that certain gestures are reportedly easier to learn for older or younger users, that visual feedback is particularly important so that the user knows when their hand is being successfully tracked, and that clear instruction is imperative to assist learning.

Shao (2016) provides a comprehensive, technically oriented introduction to the LMC, including a catalogue of gesture types and associated programmatic techniques.

A Range of Static Hand Gestures

Shao also notes the problem of self-occlusion, where one part of the user’s hand obscures another part, resulting in misinterpretation of hand position by the LMC.

Self-occlusion

In Bachmann, Weichert, & Rinkenauer (2015), the authors use Fitts’ law to compare the efficiency of using the LMC as a pointing device vs. using a mouse. The LMC comes out worse in this context, exhibiting an error rate of 7.8% vs 2.8% for the mouse. Reflecting upon this issue led me to a report concerning the use of expanding interaction targets (McGuffin & Balakrishnan, 2005) and the general idea of making an interaction easier to achieve by temporarily manipulating the size of the target. In fact, an approach I have subsequently adopted is to make a pointing gesture select the nearest valid target, akin to gaze interaction in VR where the viewing direction of the headset is always known and can be used to manage interaction. In VR, this is often signified by an interactive item changing colour when intersected by a crosshair or target rendered in the middle of the user’s field of vision.
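Although my implementation is in Unity, the nearest-target idea is simple enough to sketch in a few lines of JavaScript. The function below is purely illustrative: given the screen position a pointing finger maps to, it snaps to the nearest selectable target within a radius rather than demanding an exact hit.

```js
// Return the target nearest to the pointer, provided it is within snapRadius
// (pixels); null means nothing is close enough to select. Targets are assumed
// to be objects with x/y screen coordinates.
function nearestTarget(pointer, targets, snapRadius = 120) {
  let best = null;
  let bestDist = snapRadius;
  for (const t of targets) {
    const d = Math.hypot(t.x - pointer.x, t.y - pointer.y);
    if (d < bestDist) {
      bestDist = d;
      best = t;
    }
  }
  return best;
}
```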

References

Bachmann, D., Weichert, F. & Rinkenauer, G. (2015) Evaluation of the Leap Motion Controller as a New Contact-Free Pointing Device. Sensors 2015, 15, 214-233.

Bachmann, D., Weichert, F. & Rinkenauer, G. (2018) Review of Three-Dimensional Human-Computer Interaction with Focus on the Leap Motion Controller. Sensors 2018, 18, 2194.

Cabreira, A. & Hwang, F. (2016) How Do Novice Older Users Evaluate and Perform Mid-Air Gesture Interaction for the First Time? In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI ’16).

McGuffin, M. & Balakrishnan, R. (2005) Fitts’ Law and Expanding Targets: Experimental Studies and Designs for User Interfaces. ACM Transactions on Computer-Human Interaction, Vol. 12, No. 4.

Moser, C. & Tscheligi, M. (2015) Physics-based gaming: exploring touch vs. mid-air gesture input. IDC 2015.

Shao, L. (2016) Hand movement and gesture recognition using Leap Motion Controller. Stanford EE 267, Virtual Reality, Course Report [online]. Available at https://stanford.edu/class/ee267/Spring2016/report_lin.pdf (accessed 29/8/19).

Being Human 2019 – Early Stage Ideation

The Being Human Festival 2019 first came to my attention in January. The festival is an umbrella initiative led by the School of Advanced Study, University of London, in partnership with the Arts and Humanities Research Council and the British Academy. It has a national remit of promoting public engagement with humanities research and over the last few years has seen a sharp rise in participating academic organisations.

The festival consists of single events and ‘hubs’ hosting multiple events. As an artist-turned-academic, only just recovering from the baptism of fire of getting a new course up and running, I saw the festival call-out as a good cue to re-establish my neglected practice-based research interests. Having considered the remit of public engagement with humanities research and the Being Human 2019 theme of Discoveries and Secrets, I began thinking about connecting a text-based interactive experience, building on the previous works Data Flow and Journey Words, with a hitherto overlooked archival resource.

I started to develop ideas about forms of interaction, initially thinking about touch, while considering a suitable underlying information architecture, knowing that it would be difficult to connect an interactive experience with an existing archive ‘as-is’. I conceived of a simple two-level structure consisting of individual items of content, each with one or more associated keywords; the keywords are used to traverse the information hierarchy and ‘discover’ one or more items of content.
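In code, the structure need be no more complicated than this (the data here is invented for illustration):

```js
// Two-level structure: content items, each tagged with one or more keywords.
// A keyword lookup returns every item it signposts.
const items = [
  { id: 1, keywords: ["warrior", "rebellion"], text: "Boudica, Queen of the Iceni…" },
  { id: 2, keywords: ["rebellion", "rome"], text: "The revolt against Roman rule…" },
];

function discover(keyword) {
  return items.filter((item) => item.keywords.includes(keyword));
}

discover("rebellion"); // returns both items
```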

As for the interaction design, I was originally thinking about touch screen and how multiple touches might be used to discover and combine keywords.

I started creating a touch demo using HTML5 and JavaScript, thinking about an iPad or similar as an eventual interaction device.
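The starting point is plain HTML5 touch events, reading all active touch points so that simultaneous touches can later be combined. A minimal sketch (the element id is hypothetical):

```js
// Listen for touches on the interaction surface and log every active point.
// In the demo proper, each point would be hit-tested against on-screen keywords.
const surface = document.getElementById("touch-surface");

surface.addEventListener("touchstart", (e) => {
  e.preventDefault(); // suppress scrolling and zooming on the iPad
  for (let i = 0; i < e.touches.length; i++) {
    const t = e.touches[i];
    console.log(`touch ${i} at ${t.clientX}, ${t.clientY}`);
  }
});
```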

I was quite pleased with the results but decided to pause development until I had conducted a round of research – more on that next.

Virtualisation of Place

I’m currently investigating the ‘virtualisation of place’ – using 360° photography as a basis to create immersive, atmospheric panoramic scenes which might ultimately contain narrative and interactive elements. My first, rather tentative, step is the creation of a fairy glade. As good a starting point as any when it comes to mixing reality with fantasy!

Hi-res 360° image:

Fun at firstsite, the birth of ‘Ricochet’?

As part of the recent Young Art Kommunity (YAK) takeover of contemporary art venue firstsite, Colchester, I installed the work previously known as ‘play table’ at the BYOB (bring your own beamer) event hosted last Friday evening (the 29th of October). Rather than project from above onto a table, in sync with a Kinect camera used to track movement, I chose to present an iPad as the means of interaction, with ‘blob tracking’ data sent over a network via UDP. Participants touched and swiped the iPad while looking at the results on screen. Many participants, not all young, got carried away with the interaction and nearly broke the iPad mount!
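For the curious, the receiving end of such a set-up can be very small. Here is a hedged sketch in Node.js; the ‘id,x,y’ datagram layout and port number are my inventions for the sake of the example, not the format the piece actually used:

```js
// Receive blob-tracking data over UDP. Each datagram is assumed to be text of
// the form "id,x,y", with x and y normalised to the 0..1 range.
const dgram = require("dgram");
const socket = dgram.createSocket("udp4");

socket.on("message", (msg) => {
  const [id, x, y] = msg.toString().split(",").map(Number);
  // drive the corresponding on-screen element from here
  console.log(`blob ${id} at ${x.toFixed(2)}, ${y.toFixed(2)}`);
});

socket.bind(8000); // port is arbitrary for the example
```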

This approach made the set-up much quicker, allowing me to rig in about an hour. It also fundamentally changed the mechanics of interaction, allowing one person, or possibly two people comfortable with sharing an iPad, to interact more finely with the piece, rather than several people more bluntly, as with the table approach. I’m still considering the implications, but essentially I’m keen to push the project further in this direction, with a view to mounting multiple iPads so that several participants can play together simultaneously, as was possible with the previous version. Of course, no longer being projected onto a table rather invalidates the project name ‘play table’, so I have coined the new name ‘Ricochet’, which in any case reflects the dynamic of the piece more accurately. Watch this space!
