Electromagnetic Room (2022)
Project Summary
Electromagnetic Room was a 45-minute performance held at the Kenneth Myer Auditorium, Ian Potter Southbank Centre on 5 May 2022 as part of the New Music Studio series at the Faculty of Fine Arts and Music, University of Melbourne. My collaborators were David Shea, who performed on the Electromagnetic Piano (EMP), and Patrick Telfer as sound engineer. I was responsible for the creative concept, systems design, sound design and coding.
In conceiving the work, I imagined a soundscape controlled by multiple people simultaneously, regardless of musical expertise. Drawing inspiration from emergence theory (Soler-Adillon 2015), no single person’s actions would determine the final sonic outcome; rather, they would affect each other, lending a degree of indeterminacy to the work even though the movement-sound mappings themselves were determinate. By using microphone capture of an acoustic grand piano as the sole sound source, the final sonic outcome was also affected by the room acoustics and the movement of people within the room – hence the name Electromagnetic Room. The use of the acoustic grand piano continued my interest in combining traditional instruments with contemporary contexts and technologies.
Electromagnetic Piano
The EMP is a hybrid acoustic grand piano with electromagnets attached to each string. This allows MIDI instruments to excite the strings in addition to the usual techniques of playing the piano, resulting in a sustained tone with almost no attack. The EMP was a joint invention by Mirza Ceyzar, myself and David Shea (Crombie 2021).
System Overview
Electromagnetic Room utilised Google MediaPipe’s AI hand tracking ("MediaPipe: Live ML anywhere" 2020).
21 hand landmarks provided by MediaPipe ("MediaPipe Hand landmarks detection guide" 2023)
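As an illustrative sketch (not the patch used in performance), the per-frame landmark capture can be expressed with MediaPipe’s Python Hands solution as follows. The camera index, confidence thresholds and the choice of printed landmark are assumptions for demonstration.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)  # assumed webcam index for a participant station
with mp_hands.Hands(max_num_hands=2,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand, handed in zip(results.multi_hand_landmarks,
                                    results.multi_handedness):
                label = handed.classification[0].label  # "Left" or "Right"
                # Landmark 8 is the index fingertip; x and y are normalised 0-1
                tip = hand.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP]
                print(label, round(tip.x, 3), round(tip.y, 3))
cap.release()
```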
Gestural control was chosen so that the level and length of participation could be easily managed by audience members themselves, without any need for invigilators or for equipment to be attached to the body. Hand gestures were chosen as the primary method of interaction because they are relatively intuitive for most people to understand and control, and can be captured through webcams without additional cameras. Whole-body tracking would have required participants to stand too far from the screen and could also have deterred participation through self-consciousness. Lastly, the use of hands for sound-making reflected the way a piano is traditionally played. Visual cues were provided to participants as they faced the screen.
Visual cue of hand movement tracking provided to participants.
Three participant stations were created, each consisting of an iMac on a plinth. TouchDesigner was used to run the MediaPipe Python script and to output the selected hand gestural information via OSC over a local network to a MacBook for processing. A DPA piano microphone captured the acoustic sound of the Electromagnetic Piano played by David Shea, together with any sounds produced by the electromagnets. MaxMSP received and processed all hand gestural information from the iMacs, which was then mapped to MIDI outputs for triggering the electromagnets and to various digital audio effects on the microphone input.
Systems diagram for Electromagnetic Room
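While the OSC output in the performance came from within TouchDesigner, the hop from a station to the MacBook can be sketched in plain Python using the python-osc library. The address patterns, IP address and port below are hypothetical.

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.10", 9000)  # assumed MacBook address and port

def send_hand(station: str, label: str, x: float, y: float, visible: bool) -> None:
    # One set of OSC messages per tracked hand, coordinates normalised 0-1
    addr = f"/station{station}/{label.lower()}"
    client.send_message(f"{addr}/xy", [x, y])
    client.send_message(f"{addr}/visible", int(visible))

send_hand("A", "Left", 0.42, 0.77, True)
```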
Sound & Interactive Design
The performance was divided into five sections inspired by Yukio Mishima’s The Decay of the Angel (1975), following the five stages through which a deva loses their purity – Decay, Diaphoresis, Dirt, Dissatisfaction and Darkness. Rather than taking a deterministic compositional approach, I created five different states, each using different audio effects, MIDI-triggered patterns and gesture-sound mappings that could respond to David Shea’s playing. David Shea’s piano performance was improvised, based on a combination of listening and response and a loose pre-agreed structure that set out textural and dynamic targets for each section. A printed diagram was provided at each station to indicate to participants the gesture-sound mapping for each section.
Gesture-sound mapping map provided to participants
The principles which guided the choice of mappings were:
- Balanced distribution of parameters across participants, to create interdependent relationships between all participants and performers;
- Perceptually relevant mappings (e.g. horizontal axis for pitch, replicating the way many acoustic instruments are designed); and
- Favouring low-level gesture controls over higher-level analysis, to minimise computational load and increase transparency for participants (illustrated in the sketch below).
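To make the third principle concrete, the sketch below derives two low-level features used in the mappings – the distance between hands and palm openness – directly from landmark coordinates. The palm-openness heuristic (mean fingertip-to-wrist distance) is an assumption rather than the exact measure used in the work.

```python
import math

def dist(a, b):
    """Planar distance between two normalised landmarks."""
    return math.hypot(a.x - b.x, a.y - b.y)

def hand_distance(left_hand, right_hand):
    # Distance between the two wrists (landmark 0)
    return dist(left_hand.landmark[0], right_hand.landmark[0])

FINGERTIPS = [4, 8, 12, 16, 20]  # thumb, index, middle, ring, pinky tips

def palm_openness(hand):
    # Mean fingertip-to-wrist distance: larger when the palm is open
    wrist = hand.landmark[0]
    return sum(dist(hand.landmark[i], wrist) for i in FINGERTIPS) / len(FINGERTIPS)
```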
1. Electromagnetic Piano MIDI
Station A triggered MIDI notes played by the electromagnets. Note pitch, duration, velocity and rhythm were dependent on each section, as follows:
- Decay – a set melodic pattern based on the Dorian mode, with randomised start and stop times, triggered whenever one or both hands became visible. As this section made extensive use of long delays, the Dorian mode kept dissonance to a minimum.
- Diaphoresis – MIDI notes triggered whenever the hands were visible, with note pitches mapped to the horizontal x-axis of the left index finger (see the sketch following this list). Note durations were fairly long (200 milliseconds) to give the impression of gliding pitches.
- Dirt – to create a sparser texture, short-duration (50 milliseconds) notes were triggered in rapid succession whenever either hand was visible. Note pitches were mapped to the horizontal x-axis of the left index finger, with greater hand distance widening the range of possible pitches.
- Dissatisfaction – the EMP was not used.
- Darkness – a long drone on either D1 or D#1 whenever either hand was visible. The low pitch was chosen to create a dense, dark sound.
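A minimal sketch of the Diaphoresis-style mapping, using the mido library: a normalised x-position is quantised to a MIDI note and sent towards the electromagnets. The pitch range, port and velocity are illustrative assumptions.

```python
import time
import mido

out = mido.open_output()  # default (or named) MIDI port feeding the EMP interface

LOW, HIGH = 36, 84  # assumed playable range, C2-C6

def trigger_from_x(x: float, duration: float = 0.2, velocity: int = 80) -> None:
    # Quantise the normalised 0-1 x-position to a MIDI note number
    note = int(LOW + max(0.0, min(1.0, x)) * (HIGH - LOW))
    out.send(mido.Message('note_on', note=note, velocity=velocity))
    time.sleep(duration)  # 200 ms notes, per the Diaphoresis description
    out.send(mido.Message('note_off', note=note))

trigger_from_x(0.5)  # a hand at mid-screen plays the middle of the range
```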
2. Digital Audio Effects
Stations B and C both controlled audio effects, as follows. In all sections, each parameter reverted to a default value when no hands were visible (i.e. no participants were at the station) – a behaviour sketched after the list below. The overall digitally processed sound was also spatialised according to the horizontal x-axis of the right and left index fingers at Stations B and C respectively.
- Decay – visibility of either hand at both stations triggered a ping-pong delay of the microphone input, with the amount of feedback determined by the distance between the hands.
- Diaphoresis – the horizontal x-axis of the left hand and right hand respectively at each station controlled a pitch shift of the microphone input. Combined with the EMP output, this created a sense of ascending and descending gliding tones throughout the section. In addition, the distance between the hands at Station C was mapped to the amount (wet mix) of reverb.
- Dirt – the microphone input was fed into a granular synthesis patch, with grain size and interval mapped to the distance between the hands at Station B. The grains were then passed to a pitch-shifting delay whose pitch was mapped to the horizontal x-axis of the right index finger at both stations. The amplitude of the pitch-shifting delay was mapped to the openness of each palm at Station B.
- Dissatisfaction – this section utilised a sampler-based wavetable that captured the microphone input whenever its amplitude crossed a certain threshold. Random playback speeds created variable, repetitive glitch sounds from the captured input. The amplitude of these glitches was controlled by the vertical y-axis of the left hand at Station B. The pitch-shifting delay continued to operate in this section. In addition, a distortion effect was added to the overall sound, with its wet mix mapped to the aggregated horizontal movement of both hands at Station C.
- Darkness – this section utilised the granular synthesis, pitch-shifting delay and distortion described in the previous sections.
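In the performance this logic lived in MaxMSP, but the fallback-to-default behaviour common to all sections can be sketched as a single mapping helper. The ranges and default shown are hypothetical.

```python
def map_param(value: float, lo: float, hi: float,
              default: float, visible: bool) -> float:
    """Scale a normalised 0-1 gesture value into [lo, hi], falling back
    to the section's preset default when no participant is present."""
    if not visible:
        return default
    return lo + max(0.0, min(1.0, value)) * (hi - lo)

# e.g. Decay section: hand distance at a station -> ping-pong delay feedback
hand_dist, hands_seen = 0.6, True  # example gesture state for illustration
feedback = map_param(hand_dist, lo=0.0, hi=0.9, default=0.3, visible=hands_seen)
```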
Evaluation
Despite the use of low-level gesture controls and the favouring of transparency over complexity, the distribution of effects and triggers across the different participant stations often left participants feeling a lack of agency or control over the interaction. Feedback from some participants also indicated a need for more verbal explanation before the performance. In addition, we observed that the printed diagram may have been more misleading than helpful, as the illustrations did not adequately describe the required hand gestures.
To a certain extent, this perception of lack of control by individual participants was inevitable because of the conceptual intent of the work to create an interdependent, emergent sound from multiple networked agents and to disrupt the conventional performance format. In this sense, the work achieved its aim. We found willing participants in the audience who freely came up to the stations to interact with the sound, and the atypical setup of the room encouraged the audience to move freely, with some lying down on the floor to feel the sound vibrations. The lines between composer, performer, improviser and audience blurred as listening and making became interdependent.
However, any future design exploring distributed networks could be improved by using a different mode of embodied interaction. Rather than hand gestures, which predispose audience expectations towards a controllable instrument and an individual experience, it might be preferable to use a more abstract form of embodiment that frees participants to interact with other people and instruments without needing to cognitively decode their actions. For example, whole-room tracking of all bodies in the space could be used, or a multi-touch interface. In this way, more emphasis could be placed on the social experience of the audience and performers in relation to each other, rather than on the individual impact of each participant.
Significance to Research
The discoveries forced by Covid isolation during Mental Dance raised questions about their application to live performance. Despite the shortcomings outlined above, this work was important to my research as it enabled me to explore the application of distributed computing concepts to bodies within a localised space. This allowed for the subversion of traditional social structures in music-making and performance, and opened up possibilities for participatory approaches within a traditional piano performance.
Credits
Creative concept, coding and sound design: Monica Lim
Piano performance: David Shea
Sound engineer: Patrick Telfer
Video documentation: David Collins
Supported by: New Music Studio, Faculty of Fine Arts and Music, University of Melbourne