Being Human 2019 – First Iteration

After conversations with a number of potential partners, I ended up submitting a proposal to the Being Human Festival 2019 in collaboration with Norwich Castle Museum and Art Gallery. The idea is to develop an experimental interactive experience, using the Leap Motion Controller (LMC), to explore hitherto overlooked texts in the museum archive. ‘Touching the Past’ will initially run as a closed workshop and then as a follow-up event at Norwich Castle during the week commencing the 18th of November.

“’Touching the Past’ is a series of workshops that encourages young people to explore overlooked stories in Norfolk Museums’ collection through digital experience and gestural interaction.”

Promo Image for ‘Touching the Past’

The picture is purely promotional, as is often the case with events where the promotion takes place before the work has actually been made.

From a production perspective, I chose Unity over Unreal Engine as a Leap-friendly development platform. This is partly due to prior familiarity with Unity, which I used to make Play Table, and a better working knowledge of C# than C++. It was relatively easy to get the Leap sensor up and running in Unity, although there are a few gotchas, such as the Leap assets currently requiring an older version of Unity. Most of the examples in the latest version of the Unity Orion library are geared towards VR with a head-mounted sensor, so some adaptation is required to work with a table-mounted sensor.
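As a very rough illustration of the table-mounted setup, the sketch below reads tracking data from the Leap Unity Core Assets' LeapServiceProvider (the component used with a desktop-facing sensor rather than the VR-oriented provider). The component reference and logging are purely illustrative, not taken from the finished piece.

```csharp
using Leap;
using Leap.Unity;
using UnityEngine;

// Minimal sketch: poll a desktop-mounted Leap sensor once per frame and
// report how many hands are currently tracked. Assumes a LeapServiceProvider
// component sits in the scene in its default, table-mounted orientation.
public class DesktopLeapReader : MonoBehaviour
{
    [SerializeField] private LeapServiceProvider provider; // assigned in the Inspector

    void Update()
    {
        Frame frame = provider.CurrentFrame;
        if (frame == null) return;

        // frame.Hands contains one entry per hand the sensor can currently see.
        Debug.Log("Tracking " + frame.Hands.Count + " hand(s)");
    }
}
```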

The visual metaphor I have chosen for this early work is that of post-it notes: the user selects a post-it note containing a single word, which then reveals a more detailed item of text. The detailed view can then be reverted to its original brief form and another keyword chosen for investigation by rotating through the available post-it notes.

The interaction metaphor is not yet fixed, but I am currently experimenting with the following control scheme (a rough detection sketch follows the list):

Extended hand used to rotate through post-its
Extended finger used to select a post-it
Fist used to close an opened detail view
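
As a tentative sketch of how that scheme might be detected (the thresholds and pose names are my own assumptions rather than the final implementation), the Leap C# API's per-finger IsExtended flag and the hand's GrabStrength value can be combined to distinguish the three poses:

```csharp
using Leap;
using Leap.Unity;
using UnityEngine;

public enum HandPose { None, OpenHand, PointingFinger, Fist }

// Sketch: classify the first tracked hand into the three poses used by the
// control scheme (open hand = rotate, single finger = select, fist = close).
public class PoseClassifier : MonoBehaviour
{
    [SerializeField] private LeapServiceProvider provider;

    public HandPose CurrentPose { get; private set; } = HandPose.None;

    void Update()
    {
        Frame frame = provider.CurrentFrame;
        if (frame == null || frame.Hands.Count == 0)
        {
            CurrentPose = HandPose.None;
            return;
        }

        Hand hand = frame.Hands[0];

        // Count how many fingers the sensor considers extended.
        int extended = 0;
        foreach (Finger finger in hand.Fingers)
            if (finger.IsExtended) extended++;

        // GrabStrength runs from 0 (fully open) to 1 (closed fist).
        if (hand.GrabStrength > 0.9f)
            CurrentPose = HandPose.Fist;
        else if (extended >= 4)
            CurrentPose = HandPose.OpenHand;
        else if (extended == 1)
            CurrentPose = HandPose.PointingFinger;
        else
            CurrentPose = HandPose.None;
    }
}
```

Another component could then watch CurrentPose and trigger the rotate, select or close behaviour whenever the pose changes.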

Here are a couple of screenshots of the work as it currently stands.

Selecting a post-it
A detail item revealed

Being Human 2019 – Research

After engaging in a period of early ideation, and thinking about the application of gestural interaction within an information discovery experience, I had a look round a couple of museums to take stock of existing interaction design in context. Of particular note was the Museum of London, visited on the 11th of July 2019, which has a large number of interactive displays incorporating touch. I was particularly interested in the touch experience about disease in London, which had some game-like mechanics. It was well designed, with a bespoke screen and appealing interactive elements; however, for one reason or another, it was not working properly. Users were becoming frustrated with the lack of responsiveness to touch, which may have been the result of a dirty or misaligned sensor. On my journey back to Norwich, I reflected upon the maturity and ubiquity of touch screen design for museums. This led me to consider gestural interaction through motion tracking as an emergent form of interaction design, specifically the Leap Motion Controller, or LMC.

The Leap Motion Sensor

This is a small USB device designed to face upwards on a desktop or outwards on the front of a VR headset. It makes use of two infrared cameras and three infrared LEDs to track hand position and movement in 3D space.

Finger Detection with Infrared Lights and Stereoscopic Cameras

From a technical perspective, the LMC is relatively mature, with several iterations of drivers and accompanying API documentation. One point of note is that since the Orion release in February 2016, the LMC only has vendor-supported API libraries for the game development environments Unity and Unreal Engine, meaning that JavaScript, and therefore HTML5 as a platform, is not natively supported. Nor are ‘gestures’ – the LMC equivalents of the 2D gestures commonly used with handheld touch screen devices (e.g. swipe). The emphasis is now on the use of the LMC, which is by nature a 3D-aware controller, in 3D space, with VR as the principal target platform. Despite this repositioning of the LMC, thanks to a well-documented API and an active community there are many experimental applications of the technology outside of games and VR.

A useful primer on the LMC within the context of 3D HCI is the review by Bachmann, Weichert & Rinkenauer (2018), which notes:

  • the use of touchless interaction in medical fields for rehabilitation purposes
  • the suitability of the LMC for use in games and gamification
  • its use by children with motor disabilities
  • textual character recognition (‘air-writing’)
  • sign language recognition
  • its use as a controller for musical performance and composition.

A common concern is the lack of haptic feedback offered by the LMC: “the lack of hardware-based physical feedback when interacting with the Leap Motion … results in a different affordance that has to be considered in future physics-based game design using mid-air gestures” (Moser & Tscheligi, 2015). Seeing as Leap Motion was recently (May 2019) acquired by Ultrahaptics, a specialist in creating the sensation of touch in mid-air, this situation is likely to change.

On the design of intuitive 3D gestures for differing contexts, Cabreira & Hwang (2016) note that which gestures are easiest to learn reportedly differs between older and younger users, that visual feedback is particularly important so that the user knows when their hand is being successfully tracked, and that clear instruction is imperative to assist learning.
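That visual-feedback point is straightforward to act on in Unity. As a hedged sketch (the indicator object and provider reference are my own assumptions), an on-screen icon could be shown only while the sensor is actually tracking a hand:

```csharp
using Leap;
using Leap.Unity;
using UnityEngine;

// Sketch: toggle a simple UI indicator so users know when their hand
// is being successfully tracked by the sensor.
public class TrackingIndicator : MonoBehaviour
{
    [SerializeField] private LeapServiceProvider provider;
    [SerializeField] private GameObject handTrackedIcon; // hypothetical UI object

    void Update()
    {
        Frame frame = provider.CurrentFrame;
        bool handVisible = frame != null && frame.Hands.Count > 0;
        handTrackedIcon.SetActive(handVisible);
    }
}
```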

Shao (2016) provides a comprehensive, technically oriented introduction to the LMC, including a catalogue of gesture types and associated programmatic techniques.

A Range of Static Hand Gestures

Shao (2016) also notes the problem of self-occlusion, where one part of the user’s hand obscures another, resulting in the LMC misinterpreting hand position.

Self-occlusion

In Bachmann, Weichert, & Rinkenauer (2015), the authors use Fitts’ law to compare the efficiency of using the LMC as a pointing device vs. using a mouse. The LMC comes out worse in this context, exhibiting an error rate of 7.8% vs 2.8% for the mouse. Reflecting upon this issue led me to a report concerning the use of expanding interaction targets (McGuffin & Balakrishnan, 2005) and the general idea of making an interaction easier to achieve by temporarily manipulating the size of the target. In fact, an approach I have subsequently adopted is to make a pointing gesture select the nearest valid target, akin to gaze interaction in VR where the viewing direction of the headset is always known and can be used to manage interaction. In VR, this is often signified by an interactive item changing colour when intersected by a crosshair or target rendered in the middle of the user’s field of vision.
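
A minimal sketch of that nearest-valid-target idea is shown below; the list of post-it transforms, the pointing ray and the selection logic are illustrative assumptions rather than the code used in the piece. The ray itself might be built from the tracked index finger’s tip position and direction.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: instead of requiring a precise hit, choose whichever target lies
// closest to the pointing ray, echoing gaze-style selection in VR.
public class NearestTargetSelector : MonoBehaviour
{
    [SerializeField] private List<Transform> targets; // hypothetical post-it transforms

    public Transform FindNearest(Ray pointingRay)
    {
        Transform nearest = null;
        float bestDistance = float.MaxValue;

        foreach (Transform target in targets)
        {
            Vector3 toTarget = target.position - pointingRay.origin;

            // Ignore targets behind the pointing direction.
            if (Vector3.Dot(toTarget, pointingRay.direction) < 0f) continue;

            // Perpendicular distance from the target to the ray, so whatever
            // lies most directly "down the line" of the finger wins.
            Vector3 alongRay = Vector3.Project(toTarget, pointingRay.direction);
            float distance = (toTarget - alongRay).magnitude;

            if (distance < bestDistance)
            {
                bestDistance = distance;
                nearest = target;
            }
        }
        return nearest;
    }
}
```

The returned transform could then be tinted, or scaled up in the spirit of McGuffin & Balakrishnan’s expanding targets, to show that it is the current candidate for selection.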

References

Bachmann, D., Weichert, F. & Rinkenauer, G. (2015) Evaluation of the Leap Motion Controller as a New Contact-Free Pointing Device. Sensors 2015, 15, 214-233.

Bachmann, D., Weichert, F. & Rinkenauer, G. (2018) Review of Three-Dimensional Human-Computer Interaction with Focus on the Leap Motion Controller. Sensors 2018, 18, 2194.

Cabreira, A. & Hwang, F. (2016) How Do Novice Older Users Evaluate and Perform Mid-Air Gesture Interaction for the First Time? In Proceedings of the 9th Nordic Conference on Human-Computer Interaction (NordiCHI ’16).

McGuffin, M. & Balakrishnan, R., (2005) Fitts’ Law and Expanding Targets: Experimental Studies and Designs for User Interfaces. ACM Transactions on Computer-Human Interaction, Vol. 12, No. 4.

Moser, C. & Tscheligi, M. (2015) Physics-based gaming: exploring touch vs. mid-air gesture input. IDC 2015.

Shao, L. (2016) Hand movement and gesture recognition using Leap Motion Controller. Stanford EE 267, Virtual Reality, Course Report [online] available at https://stanford.edu/class/ee267/Spring2016/report_lin.pdf (accessed 29/8/19)

Being Human 2019 – Early Stage Ideation

The Being Human Festival 2019 first came to my attention in January. The festival is an umbrella programme led by the School of Advanced Study, University of London, in partnership with the Arts and Humanities Research Council and the British Academy. It has a national remit of promoting public engagement with humanities research and, over the last few years, has seen a sharp rise in participating academic organisations.

The festival consists of single events and ‘hubs’ hosting multiple events. As an artist-turned-academic myself, only just recovering from the baptism of fire of getting a new course up and running, I saw the festival call-out as a good cue to re-establish my neglected practice-based research interests. Having considered the remit of public engagement with humanities research and the Being Human 2019 theme of Discoveries and Secrets, I began thinking about connecting a text-based interactive experience, building on the previous works Data Flow and Journey Words, with a hitherto overlooked archival resource.

I started to develop ideas about forms of interaction, initially thinking about touch, while considering a suitable underlying information architecture, knowing that it would be difficult to connect an interactive experience with an existing archive ‘as-is’. I conceived of a simple two-level structure consisting of individual items of content, each with one or more associated keywords. The keywords are used to traverse the information hierarchy and ‘discover’ one or more items of content.
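A rough sketch of that two-level structure follows, written in C# for consistency with the later Unity work; the class and member names are purely illustrative.

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of the two-level information structure: items of content tagged with
// keywords, plus queries for listing keywords and "discovering" items.
public class ContentItem
{
    public string Title;
    public string DetailText;
    public List<string> Keywords = new List<string>();
}

public class Archive
{
    private readonly List<ContentItem> items = new List<ContentItem>();

    public void Add(ContentItem item) => items.Add(item);

    // Every keyword available for display, e.g. as post-it notes.
    public IEnumerable<string> AllKeywords() =>
        items.SelectMany(i => i.Keywords).Distinct();

    // Items of content revealed by selecting a keyword.
    public IEnumerable<ContentItem> Discover(string keyword) =>
        items.Where(i => i.Keywords.Contains(keyword));
}
```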

As for the interaction design, I was originally thinking about touch screens and how multiple touches might be used to discover and combine keywords.

I started creating a touch demo using HTML5 and JavaScript, thinking about an iPad or similar as an eventual interaction device.

I was quite pleased with the results but decided to pause development until I had conducted a round of research, more on which next.

C is for Collaboration

Speaking at the recent Plugin Symposium, hosted by Signals Media of Colchester, has given me a reason to reflect upon my own creative practice and, in particular, my urge to investigate the collaborative potential of digital art.

Professional self-reflection is a great activity, particularly when enhanced by the perspective of time: not so little that events are still fresh and perhaps too recent to view holistically, but not so much that memory is impinged upon by the distance of age! Of course, regular self-reflection is often portrayed as the saviour of creative professionals, but occasional, yet timely, self-reflection is certainly better than none!

Recently, in reflecting upon my own practice, I began by reclaiming the personal manifesto of seeking to create artistic works that are public, participatory and playful – the 3 p’s I set out to explore and draw connections between but a few years ago. I realised that my developing interest in collaborative art can essentially be articulated by a series of questions and corresponding creative responses.

Question: “How do I construct interactive experiences that are a pleasure to interact with and encourage co-interaction?”

Creative Response: Luminescence

Question: “How can I provoke emergent behaviour whereby multiple participants, who might otherwise never meet, compete or collaborate through the medium of an interactive art work?”

Creative Response: Play Table

Question: “How can I design playful, collaborative interactive experiences that have a positive impact?”

Creative Response: Portals for Mortals

For the sake of a more meaningful presentation at Plugin, I tried to pull together some of the elements that have shown themselves to be of special importance in the process of creating collaborative digital art. So here they are, in slightly jumbled form; I hope they may be of some use to you, dear reader…
T is for Technology.

There is a seemingly insatiable appetite for new tech. It’s like a magnet that draws people in… so let’s use it to do just that. Like the pathological urge to open Pandora’s Box. As an artist, one must position one’s box of tricks strategically with the metaphorical lid slightly ajar…

P is for the Phenomenon of Play.

Play is a magic circle entirely of our own making. Rules can be made, rules can be broken. Transgressions can be made in perfect safety. The willingness to participate is all it takes… therefore the invitation to play is of particular importance.

F is for Facilitation.

An Artist often plays the role of Facilitator. Collaborative digital art is in itself a facilitation. Some people very much like to be shown what to do – helping them understand how to get involved is an important aspect of facilitation. In certain situations, participants exiting an art experience can themselves become ambassadors to the next group of participants. Facilitation can go viral!

D is for Design.

If Art is about asking questions and opening up possibilities… design in the service of art solves problems and brings the art to life. From tech to user experience, there are many dimensions of design. It is an iterative process and can always be improved – so improve it!

A is for Audience.

Who is the collaborative experience aimed at? If it’s ‘aimed at everyone’, that’s a tough call to get right; in fact, it’s impossible! By figuring out who collaborative experiences are for, we can make them better, just as with any other service or product design: know thy user!

C is for Collaboration.

A truly multi-faceted word. Generally a force for good within the arts. But let’s consider what we mean when we use the word. For me it implies creating something new or finding new ways of working together, where agency and creativity are bounced around the court of collective imagination. It can be question and response; it can be synchronised, multilateral creativity. Whatever form it takes, collaboration can produce amazing results and is to be recommended.

R is for Risk

Perhaps a certain amount of risk is inherent in collaboration, and maybe that’s why it might feel unsafe and uncertain at times. It won’t work every time either! One thing is for sure: the best creativity does not occur within a nice padded comfort zone!

Plug In Symposium

Pleased to be sharing experiences and insights at the forthcoming Plug In Symposium, hosted by Signals Media, Colchester. Also very pleased to be sharing a platform with some great names 😉

Here’s how my talk is billed:

Collaborative Interaction: How Playful Technology Can Be Used To Mediate The Space In Between Us

Jamie Gledhill
Digital Artist and Computer Sciences lecturer, Norwich University of the Arts

East Anglia-based artist and educator, Jamie Gledhill, will present and discuss elements of his digital arts practice including interactive installations and public art commissions. A recurring theme is the design of experiences that create connections between friends and strangers alike through the medium of technology-enabled play. Jamie will share the knowledge he has developed in this area and how this might be applied in wider contexts.

More here: http://www.signals.org.uk/event/plug-in-symposium/

New Job

I am very pleased to announce that I have been appointed as Lecturer: BSc Computer Sciences at Norwich University of the Arts (NUA), where I shall be delivering the new Games Development, Interaction Design and User Experience Design BSc degree programmes. This is an exciting opportunity for me to draw upon many years’ experience working within both arts and commercial environments, producing a broad range of digital outcomes. I expect to be rather busy in my first year getting up to speed as a teacher, but I am sure the journey will be both positive and interesting. So, there may be less of my own creative activity to post about in the short term, but in the mid to long term I hope to remedy that! Watch this space…

http://www.nua.ac.uk/creative-science/

Tree Face

A test projection onto a conveniently located tree, hot off the press, so to speak. I’ve been meaning to do this for a while and have at last found the time. I’m pleased with the result, which has already generated positive feedback and interest.

Virtualisation of Place

I’m currently investigating the ‘virtualisation of place’: using 360° photography as a basis to create immersive, atmospheric panoramic scenes which might ultimately contain narrative and interactive elements. My first, rather tentative, step is the creation of a fairy glade. As good a starting point as any when it comes to mixing reality with fantasy!

Hi-res 360° image:

Stolen Art

Originally published on Facebook last year:

Missing the Missing Art

Sadly, a small number of sculptures have been stolen from the Harlow public art collection over the years, thankfully none recently. For most people it’s not a case of missing something that has been taken away, but rather of never knowing works that should still be available for all to enjoy. The stolen sculptures are listed on the Sculpture Trail pages of www.visitharlow.com, which is a good way to promote awareness of them. They are:

Boy Eating Apple, anon, 1930s, bronze cast
Lion, Antoine-Louis Barye, circa 1833, bronze cast
Self Encounter and Sower, anonymous, 1960, bronze

One can only imagine where they have ended up, perhaps in some criminal hideout along with other stolen art works? In any case, they are truly missed.

360° image:

Gibberd Virtual Residency 360° Videos

The following three 360° videos represent the final creative outcome of my virtual residency at the Gibberd Gallery, Harlow, which ran from September to December 2016. I was asked to ‘reframe’ the town’s post-war art collections in the context of the new town legacy. I was particularly interested in attitudes towards the town’s sculptures, being highly visible symbols of Harlow’s unique heritage.

Study #1

Locations of the nine most popular Harlow sculptures, as captured by Amanda Westbury in 2012, are juxtaposed with monumental renderings of key economic statistics published by the Office for National Statistics. Original footage was shot over a two-hour period from the top of Terminus House, the joint tallest building in Harlow.

Study #2

Ryan Karolak talks about growing up in Harlow and the new town design legacy, and recalls the sculpture Solo Flight when it was located at the Harvey Centre.

Study #3

Imagining the sculpture ‘Screen’ by Gerda Rubinstein as a tower which can be climbed up, one floor at a time. Commentary by Jenny Lushington.