Mental Dance (2021)
Project Summary
Mental Dance began as an open-ended enquiry into how neuroscientific concepts can inform and be integrated into a creative process. A collaboration with choreographer Carol Brown and neuroscientist Marta Garrido, it took as inspiration Marta’s research into neurodiverse cognition and research technologies used in the lab such as EEG, MEG and fMRI (Harris et al. 2018). In contrast to the diagnostic language surrounding neurodiversity, we also looked at the human impact of living with mental illness – in particular, dancers Vaslav Nijinsky and Lucia Joyce, whose dance careers and personal freedoms were both prematurely cut short when they were institutionalised (Nijinsky 1999; Shloss 2005).
Our original intention was to develop a choreosonic work for live performance, in which two dancers would use wearable sensors (Inertial Measurement Units, or IMUs) to manipulate and control sound. Our interest in interactivity with semi-improvised movement was spurred by a desire to explore how the auditory and kinaesthetic senses work together, creating a feedback loop from movement to sound and back to movement.
Neurobiological research showing that the mind is not free of the flesh but the result of highly complex, relational processes between brain, body and environment inspired us to explore porosity and mutual influence between movement, sound and technology.
However, due to repeated lockdowns in Melbourne throughout 2020 and 2021, we could no longer collaborate in the same physical space or use wearable sensors, as the dancers did not have access to the required software and hardware. To keep developing the work, we had to find technology that every collaborator could easily access. As everyone was familiar with video-conferencing apps such as Zoom, using their video feeds for movement tracking was identified as the simplest way to build the interactive system.
Although intended only as a stop-gap measure for rehearsals, this telematic way of working raised so many interesting questions about the remote body, the use of screenic space in choreography and collaborative methodology that we decided to hold live online performances using this workflow. Two online performances were held: one for an invited audience on 4 October 2021, and one for the Changing Perspectives on Performance: Interrogating Digital Dimensions and New Modes of Engagement online international symposium on 9 October 2021.
The work resulted in published papers for ISEA2022 and the Body, Space and Technology Journal.
System Overview
Python is a high-level, general-purpose programming language commonly used in machine-learning applications.
Real-time video feeds of the dancers in their individual homes, delivered through video-conferencing apps (Zoom for the first performance and Microsoft Teams for the second), were screen-captured into the node-based programming software TouchDesigner. A Python script in TouchDesigner implemented MediaPipe, Google’s open-source, AI-trained pose-estimation pipeline ("MediaPipe: Live ML anywhere" 2020), to track 33 landmarks of the body.
33 body landmarks provided by MediaPipe ("MediaPipe: Live ML anywhere" 2020)
Body-movement data was then sent from TouchDesigner to MaxMSP via OSC messages for further processing by the interactive sound system.
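As an illustration of this step, below is a minimal Python sketch of the capture-and-send pipeline using OpenCV, MediaPipe and python-osc. It reads from a webcam rather than a screen-captured conferencing window, and the OSC port and address are placeholder choices; the actual project ran a comparable script inside TouchDesigner.

```python
# Minimal sketch of the pose-tracking step, assuming OpenCV, MediaPipe and
# python-osc are installed. The performance itself ran a similar script inside
# TouchDesigner and screen-captured the video-conferencing window instead of
# reading a webcam.
import cv2
import mediapipe as mp
from pythonosc.udp_client import SimpleUDPClient

osc = SimpleUDPClient("127.0.0.1", 7400)   # hypothetical MaxMSP OSC port
pose = mp.solutions.pose.Pose(model_complexity=1)

cap = cv2.VideoCapture(0)                  # webcam stand-in for the screen capture
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # Flatten the 33 landmarks into x/y/z triplets for MaxMSP.
        coords = []
        for lm in results.pose_landmarks.landmark:
            coords.extend([lm.x, lm.y, lm.z])
        osc.send_message("/pose", coords)
    cv2.imshow("pose", frame)
    if cv2.waitKey(1) & 0xFF == 27:        # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```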
The interactive sound was built primarily around live vocals, processed and manipulated by the dancers’ movements in real time. The vocals were streamed to me using the audio-streaming app Audiomovers, processed in MaxMSP, and the final sound output was streamed back to the video-conferencing app for the dancers, collaborators and audience using the audio-routing app Loopback. The diagram below shows the system workflow:
System workflow in Mental Dance
Unfortunately, due to the unavailability of our vocalist for the live performances, we used a pre-recorded version of their voice. However, the voice was input into the system in its raw, unprocessed form, so that we could swap it for a live voice in future performances without requiring any change to the interactive system design.
Sound and Interactive Design
As the work was built on neuroscientific concepts and mental illness, the question of what it is to be human was a central concern. We also had many textual sources as references, including diagnostic manuals, research articles, biographies and autobiographies. Accordingly, we decided to work with a vocalist so we could use spoken or sung word. Countertenor Austin Haynes was chosen as our collaborator for the unique nature of their voice, which reinforced the idea of the "outlier". Their classical Baroque training also provided a counterpoint to the scientific and technological focus, creating a sense of temporal and contextual displacement. The relative scarcity of choreosonic design using acoustic sound sources rather than digitally synthesised sound created challenges in finding novel ways to combine classical music theory with new technology.
Lyrics and text used in the vocal score were generated using Markov chain analysis to combine text derived from interviews with Marta Garrido, the DSM-5 diagnostic criteria for schizophrenia and Nijinsky’s diaries (Nijinsky 1999). This produced unexpected combinations of words that were relevant to the Mental theme and productive of new meaning, yet did not conform to grammatical structures.
Markov chains describe the probabilities of one state transitioning to another. In this case, a generator analysed the chosen texts and derived the probability of each word being followed by another.
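A minimal sketch of this word-level Markov generation is shown below; the corpus string is a placeholder standing in for the interview transcripts, DSM-5 criteria and diary excerpts actually used.

```python
# Minimal word-level Markov chain text generator; the corpus is a placeholder.
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, length=20):
    """Walk the chain, picking each next word according to observed frequency."""
    word = random.choice(list(chain.keys()))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        word = random.choice(followers) if followers else random.choice(list(chain.keys()))
        out.append(word)
    return " ".join(out)

# Placeholder corpus standing in for the interview, DSM-5 and diary texts.
corpus = "the mind predicts the body and the body answers the mind"
print(generate(build_chain(corpus), length=12))
```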
As our vocalist was classically trained, I provided a notated score with the flexibility to adapt in real time to the dancers and the emergent sound. The work was broken into four sections:
1. Neural Networks
Neural Networks was the only movement that did not use real-time vocals. Instead, a library of eight disparate sampled sounds was assembled, linked either conceptually or aesthetically to the Mental theme. This included recordings of Austin’s singing, the mechanical hum of an fMRI machine, ratchets, glass shards, bird wings, breath, radio static and recordings of Robert Schumann’s Ghost Variations for piano (1854), his last composition before being institutionalised for mental illness. The eight samples were divided between two granular synthesisers, one for each dancer.
The sonic quality of this movement was inspired by John Cage’s Roaratorio (1986), itself a work based on Finnegans Wake and its associations with Lucia Joyce and her schizophrenia. A feeling of rapid change, or sharp swings in mood and attention, was created by using machine learning (ML) to map movement to sound. The three coordinates of the left wrist, together with the velocity of each coordinate of the right wrist (six inputs per dancer), were fed into an unsupervised learning process that mapped the inputs to the parameters of a granular synthesiser controlling the chosen sample, its start time, duration and pitch. Two ML algorithms were used: a neural-net classifier to identify novel feature data in the movement, and a second network to interpolate and map the feature data to the synth parameters. This meant that every performance or run resulted in a different movement-sound mapping, depending on which wrist movements were made first and on the nuances of the movement.
The unsupervised learning algorithm used the ml.star machine-learning library for MaxMSP created by Benjamin D. Smith. Specifically, a Fuzzy Adaptive Resonance Theory neural network was used to classify gestures, and a Multi-Layer Perceptron neural network was used to learn associations between inputs and outputs. See Smith and Garnett (2012).
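For readers unfamiliar with this two-stage approach, the sketch below is a rough Python analogue using scikit-learn, with MiniBatchKMeans standing in for the Fuzzy ART classifier and MLPRegressor for the mapping network; the performance itself used the ml.star objects inside MaxMSP, and the target parameters here are illustrative.

```python
# Rough Python analogue of the two-stage mapping, assuming numpy and scikit-learn.
# MiniBatchKMeans stands in for the Fuzzy ART classifier and MLPRegressor for the
# mapping network; the actual work used the ml.star objects inside MaxMSP.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Six inputs per dancer: left-wrist x/y/z plus right-wrist velocity in x/y/z.
features = rng.random((200, 6))

# Stage 1: group incoming wrist features into gesture classes.
classifier = MiniBatchKMeans(n_clusters=8, n_init=10).fit(features)
gesture_ids = classifier.predict(features)

# Stage 2: learn a mapping from features to four granular-synth parameters
# (sample choice, start time, duration, pitch), here initialised with one
# random target parameter set per gesture class.
targets_per_class = rng.random((8, 4))
targets = targets_per_class[gesture_ids]
mapper = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(features, targets)

# At performance time, each new frame of wrist data yields synth parameters.
new_frame = rng.random((1, 6))
print(mapper.predict(new_frame))   # -> four values in roughly [0, 1]
```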
In this section, the dancers were seated facing the camera, using only their upper bodies. They were blindfolded and could only "hear" each other, learning the sound of their own movement at the same time as the ML algorithm was learning to map that movement to sound. Deprived of sight and with no fixed duration, the dancers had to navigate when to end the section by feeling for each other across the remote network. The ML process resonated with Marta’s research into "predictive coding" in the human brain, where our ability to anticipate the future is shaped by past experience in a probabilistic, Bayesian computation.
2. Noisy Voices
In Noisy Voices, Austin was provided with a text to read out in a detached, clinical tone. This was then put through a granular synthesiser whose grain size and grain interval were modified over five sections relating to different movement states, producing textures that ranged from sparse, staccato sounds to layered, long syllables. Various movement-sound mappings were implemented, such as velocity to grain size and grain interval, and body angles to lowpass filters and pitch-shifting delays. These mappings were distributed between both dancers, so that the final sonic output was affected by a combination of their simultaneous movement.
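The sketch below illustrates the kind of mapping involved: movement features scaled into granular-synth parameter ranges and sent to MaxMSP over OSC. The parameter names, ranges and OSC addresses are placeholders rather than the values used in the performance patch.

```python
# Illustrative scaling of movement features to granular-synth parameters,
# assuming python-osc; names and ranges are placeholders, not the values
# used in the performance patch.
from pythonosc.udp_client import SimpleUDPClient

osc = SimpleUDPClient("127.0.0.1", 7400)   # hypothetical MaxMSP port

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (value - in_lo) / (in_hi - in_lo)))
    return out_lo + t * (out_hi - out_lo)

def on_frame(wrist_velocity, torso_angle_deg):
    # Faster movement -> shorter, denser grains; body angle -> filter cutoff.
    grain_size_ms = scale(wrist_velocity, 0.0, 2.0, 250.0, 20.0)
    grain_interval_ms = scale(wrist_velocity, 0.0, 2.0, 120.0, 10.0)
    cutoff_hz = scale(torso_angle_deg, 0.0, 90.0, 400.0, 8000.0)
    osc.send_message("/grain/size", grain_size_ms)
    osc.send_message("/grain/interval", grain_interval_ms)
    osc.send_message("/filter/cutoff", cutoff_hz)

on_frame(wrist_velocity=0.8, torso_angle_deg=35.0)
```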
3. Lucia
In this section for a solo dancer, a deliberately melodic score based on the Dorian mode was created for Austin to sing. We imagined bringing to life the silenced voice of Lucia Joyce, whose letters and medical records were purposefully destroyed. The Dorian mode was chosen not only because it is one of the most common modes in Irish music, but also to ensure a consistent harmonic base when the sung melodic phrase was combined with granular synthesis. A rhythmic component added interactivity: wrist distance and velocities affected the speed of the rhythm and the coefficients of a biquad filter that changed its timbre. A short sample of Stravinsky’s iconic polychord from The Rite of Spring was used as an accent, triggered whenever either hand was raised in a sudden movement.
Lucia vocal score
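As an illustration, the sketch below shows one way such a sudden hand-raise could be detected from the wrist landmark and used to fire the accent; the velocity threshold, cooldown and OSC address are hypothetical rather than taken from the actual patch.

```python
# Sketch of a sudden hand-raise trigger, with illustrative thresholds.
# MediaPipe image coordinates have y increasing downwards, so a raised hand
# shows up as a rapid decrease in wrist y.
from pythonosc.udp_client import SimpleUDPClient

osc = SimpleUDPClient("127.0.0.1", 7400)   # hypothetical MaxMSP port

class RaiseDetector:
    def __init__(self, velocity_threshold=0.8, cooldown_frames=30):
        self.prev_y = None
        self.threshold = velocity_threshold   # normalised units per second
        self.cooldown = cooldown_frames
        self.wait = 0

    def update(self, wrist_y, fps=30):
        """Call once per video frame with the wrist's normalised y coordinate."""
        triggered = False
        if self.prev_y is not None and self.wait == 0:
            upward_velocity = (self.prev_y - wrist_y) * fps
            if upward_velocity > self.threshold:
                osc.send_message("/accent/trigger", 1)   # fire the chord sample
                self.wait = self.cooldown
                triggered = True
        self.prev_y = wrist_y
        self.wait = max(0, self.wait - 1)
        return triggered
```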
4. In My Head
In this section, a feeling of stasis was created by a vocal score using repeated, long-held notes. A harmoniser automatically re-pitched and layered Austin’s voice to create chords that cycled continuously between the tonic minor and dominant major with every intake of breath. The relative gain of the harmonised pitches and their reverb were mapped to the tilt of the first dancer’s head, creating the feeling of thoughts and memories trapped inside the head.
In My Head vocal score
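The head-tilt mapping could be estimated from the MediaPipe ear landmarks roughly as sketched below; the OSC addresses and the 45-degree normalisation are illustrative assumptions, not the project’s exact values.

```python
# Sketch of a head-tilt mapping from MediaPipe pose landmarks (indices 7 and 8
# are the left and right ears). OSC addresses and the 45-degree normalisation
# are placeholders for the MaxMSP harmoniser patch.
import math
from pythonosc.udp_client import SimpleUDPClient

osc = SimpleUDPClient("127.0.0.1", 7400)

def head_tilt_degrees(landmarks):
    """Deviation of the ear-to-ear line from horizontal, in degrees (-90..90)."""
    left, right = landmarks[7], landmarks[8]
    angle = math.degrees(math.atan2(left.y - right.y, left.x - right.x))
    # Fold so an upright head reads ~0 regardless of which ear is left in the image.
    if angle > 90:
        angle -= 180
    elif angle < -90:
        angle += 180
    return angle

def on_pose(landmarks):
    tilt = head_tilt_degrees(landmarks)
    amount = min(1.0, abs(tilt) / 45.0)   # fold +/-45 degrees of tilt into 0..1
    osc.send_message("/harmoniser/gain", amount)
    osc.send_message("/harmoniser/reverb", amount)

# Quick stand-alone check with fake landmarks (33 points, ears at indices 7 and 8).
from types import SimpleNamespace
fake = [SimpleNamespace(x=0.5, y=0.5)] * 33
fake[7] = SimpleNamespace(x=0.6, y=0.45)   # left ear slightly higher
fake[8] = SimpleNamespace(x=0.4, y=0.55)
on_pose(fake)
```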
As the section progressed, tremolo strings were added to create a sense of tension, followed by a concatenative synthesiser built from samples of bird wings. These were varied and made interactive with movement using lowpass filters, pitch-shifting delays and changes in amplitude.
Screenshot from In My Head movement in Mental Dance
Evaluation
Mental Dance was a journey into a new mode of collaboration, marked by exploration into new technologies and ways of attuning to each other at a time of unprecedented disruption and isolation. Artistic practice became a window of connection and conversation about mental wellbeing.
For the dancers, the remote collaboration required them to perceive not only their physical bodies, but also how those bodies translated into virtual bodies on a flat, 2D screen. With vision hampered by looking at a screen for feedback, they had to rely heavily on sound to find synchronicity between performing agents. Reflecting on the experience, Luigi writes:
"As movements and moments aligned, this both challenged our expectations and revealed the collective conscious we were building as collaborators, raising the notion of an embodied empathy not hindered by lack of proximity. Perhaps this proposes a re-thinking of locality – to be digitally local."
Audience feedback for the online performances was overwhelmingly positive, with many remarking on the ability of remote bodies to interact and work together. Despite the online delivery, some audience members commented that the live interaction made the work engaging in a way a pre-recorded performance would not have been. Many were also interested in "trying to figure out" the mapping design and in the potential for further exploration of the system, both in performance and in pedagogy.
In terms of mapping design, while it was more conceptually interesting to have multiparametric mappings from both dancers affecting the same sound, this often reduced legibility for the audience as well as the dancers’ sense of agency, particularly as they were often unable to see the other dancer on the small computer screen. In further development, higher-level movement analysis such as overall intensity, volume or smoothness could be implemented instead of lower-level features. More interesting sonic relationships could also be explored from the relative orientation of the dancers to each other on the 2D screen.
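As a sketch of what such higher-level descriptors might look like, the snippet below computes an overall intensity (mean landmark speed) and a crude smoothness measure (negative mean jerk) over a window of pose frames; the window length and normalisation are illustrative choices, not part of the existing system.

```python
# Hedged sketch of higher-level movement descriptors: overall intensity as mean
# landmark speed, and smoothness as (negative) mean jerk over a window of pose
# frames. Assumes numpy; the normalisation is an illustrative choice.
import numpy as np

def intensity_and_smoothness(frames, fps=30):
    """frames: array of shape (T, 33, 3) of landmark positions over time, T >= 4."""
    velocity = np.diff(frames, axis=0) * fps              # (T-1, 33, 3)
    speed = np.linalg.norm(velocity, axis=2)              # (T-1, 33)
    intensity = float(speed.mean())
    jerk = np.diff(velocity, n=2, axis=0) * fps * fps     # third derivative of position
    smoothness = float(-np.linalg.norm(jerk, axis=2).mean())
    return intensity, smoothness

# Example over two seconds of random pose data.
intensity, smoothness = intensity_and_smoothness(np.random.rand(60, 33, 3))
print(intensity, smoothness)
```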
Significance of Research
This project, with its twists and turns forced by the pandemic, was critical in shaping my research towards a more networked approach to the body. It commenced my investigation into the use of computer vision for participatory interaction and provided the base from which later works such as Musical Lunch and Electromagnetic Room would explore technologically mediated connection through sound-making. It also established many of the technical foundations for effective gesture-sound mapping. The use of multiple disparate sound sources, the recombination of text and the sudden shifts between them reflected not only the mental states we were referencing, but also the fragmentary nature of contemporary culture.
Credits
Creative concept: Carol Brown and Monica Lim
Choreography: Carol Brown
Interactive and sound design: Monica Lim
Dancers: Jordine Cornish and Luigi Vescio
Vocalist: Austin Haynes
Neuroscientist: Marta Garrido
Documentation: Monica Lim, Patrick Hartono and Patrick Telfer
Supported by: University of Melbourne Creativity and Wellbeing Research Initiative and Science Gallery Melbourne