Analysis of the Field and Terms Used

The Body in Interactive Sound-Making
As the body reclaimed its importance over the past century, technological advances have simultaneously provided a proliferation of new ways to entangle the body with technology. New tools and applications have emerged to control sound with the body, often created by individual artists for their own performances and practices, at times leading to commercialisation for sale to other artists, at other times to free, open-source distribution.
Notable early adopters of gestural control systems for sound performance include Sonami with her Lady’s Glove (Barrett 2023) and Waisvisz with The Hands (Torre, Andersen, and Baldé 2016). Both artists embedded wearable sensors such as flex sensors, accelerometers and gyroscopes in self-made gloves to track intricate finger movements and gestures, which were then custom-mapped to sound controls and explored as performative instruments over many years. Similarly, Heap developed the MiMU gloves for her own performances, later commercialising them as "the world’s most advanced wearable musical instrument, for expressive creation, composition and performance" ("MiMU Gloves"). At the more budget-conscious end, a variety of other controllers such as the Leap Motion ("Leap Motion Controller 2"), Genki Wave Ring ("Wave"), Air Sticks (Ilsar) and MUGIC sensors ("MUGIC" 2020) provide customisable MIDI or Open Sound Control (OSC) data derived from acceleration or gyroscopic bodily movement. Increasingly, free and accessible gestural control interfaces are also being developed that use the movement sensors in mobile devices such as tablets and phones ("MiMU Gliss") ("GyrOSC" 2010) and AI-powered computer vision technology (Lim 2022) ("Unhands: a browser-based, gesture-driven MIDI controller"). Examples of sound works using movement sensors in mobile phones include Imaginary Berlin and Hyperconnected Action Painting (Xambó and Roma 2020).
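Controllers of this kind typically stream sensor readings which a performer's patch then maps onto sound parameters, for instance scaling an accelerometer axis into the 0–127 range of a MIDI continuous controller. A minimal illustrative sketch of such a mapping (the input range and linear scaling are assumptions for illustration, not any specific device's protocol):

```python
def accel_to_midi_cc(value, in_min=-1.0, in_max=1.0):
    """Map an accelerometer reading (assumed to lie in in_min..in_max)
    to a 0-127 MIDI continuous-controller value, clamping out-of-range
    input so a sudden jolt cannot produce an invalid CC value."""
    clamped = max(in_min, min(in_max, value))
    normalised = (clamped - in_min) / (in_max - in_min)
    return round(normalised * 127)

# A sensor at rest (0.0) maps to the middle of the CC range.
print(accel_to_midi_cc(0.0))   # 64
```

In practice such mappings are rarely a single linear scaling; performers layer curves, smoothing and thresholds on top, which is where much of the instrument-building craft described above lies.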
Aside from movement data, biosignals have also been extensively used as a controller or source of sound, from Lucier’s seminal Music for Solo Performer (1965), where brain waves were used to excite percussive instruments, to Tanaka’s contemporary research into new biosensor instruments (Tanaka 2015) (Tanaka and Dubost 2002) and biosignal sound performances ("VNM Festival 2019 - Resonances - Atau Tanaka" 2020). Biosignals have also been used in contemporary dance performance to generate sound from dancers’ bodies, often using machine learning methods, as in Signals from Life (Van Nort 2015) and Donnarumma’s Corpus Nil (2016). Signals from Life, in particular, is an example of a distributed system in which the sounds of muscle contractions from multiple dancers are used to generate a musical composition, in a system described by its composer as a "collective instrument".
The field of dance is particularly relevant to any study of the body as a participatory interface and has been the subject of extensive research in Human Computer Interaction (HCI) (Zhou et al. 2021), as kinaesthetic awareness and proprioception are integral to dance (Ehrenberg 2015) (Hsueh, Alaoui and Mackay 2019). Many frameworks developed to analyse, categorise and describe movement qualities come from dance research, including Laban Movement Analysis, which has more recently been combined with machine learning approaches for real-time automatic categorisation of movement qualities (Camurri et al. 2004) (Françoise et al. 2014).
One of the earliest interactive works using digital technology is Variations V (1965) by Cunningham and Cage, where dancers’ bodies triggered sound using light beams and radio antennas.
Within the field of interactive dance-sound art, or what dance researcher Brown terms "choreosonics" (Brown and Lim 2023), the work of Alaoui is notable. In Self and Self-Portraits (2015), she builds an interactive installation in which a dancer plucks stretched piano strings arranged around a cube in the configuration of Laban’s kinesphere. The acoustic sound of the strings is manipulated by the dancer’s movement through granular synthesis to become a digital soundscape, a sort of digital memory of the dancer’s embodied exploration of personal space.
Self and Self-Portraits (Alaoui and Nesbitt 2015)

The atypical use of acoustic instruments in choreosonic performance was extended by Instituto Stocos in Piano & Dancer (Palacio and Bisig 2017), where an electromechanical piano was triggered by a dancer’s movements. The piano became a second body to the dancer, with physical presence and mutually interdependent movements. Notably, higher level movement analysis was undertaken, which sought to track expressive movement qualities such as energy, weight, smoothness and dynamic symmetry rather than basic kinematic data such as position, velocity and acceleration.
This approach aligns with multilayered frameworks in music information retrieval (MIR), where low-level signal descriptors (such as pitch, timbre, and spectral features) are distinguished from higher-level features (such as musical emotion, articulation, or style). Similarly, in movement analysis, higher-level descriptors capture gestural intent and expressive nuance rather than raw biomechanical data, drawing on techniques such as machine learning models, dynamical systems theory and embodied cognition frameworks.
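The distinction between the two layers can be made concrete with a small sketch: from a raw position trace (low-level kinematic data), derive velocity, acceleration and jerk by finite differences, then summarise them into expressive-quality proxies such as "energy" and "smoothness". These particular formulas (mean squared velocity for energy, mean absolute jerk for smoothness) are common illustrative choices, not the specific descriptors used in Piano & Dancer:

```python
def derivatives(samples, dt):
    """Finite-difference derivative of a 1-D sequence of samples."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

def movement_features(positions, dt=0.02):
    """Derive low-level kinematics from a position trace, then reduce
    them to two higher-level movement-quality proxies:
      energy     -- mean squared velocity (how vigorous the movement is)
      smoothness -- mean absolute jerk (lower = smoother movement)
    """
    velocity = derivatives(positions, dt)
    acceleration = derivatives(velocity, dt)
    jerk = derivatives(acceleration, dt)
    energy = sum(v * v for v in velocity) / len(velocity)
    smoothness = sum(abs(j) for j in jerk) / len(jerk)
    return {"energy": energy, "smoothness": smoothness}

# A steady linear movement: constant velocity, zero jerk.
print(movement_features([0, 1, 2, 3, 4, 5], dt=1.0))
# {'energy': 1.0, 'smoothness': 0.0}
```

The point of the layered approach is that the sound mapping can then respond to the expressive summary (a movement becoming more vigorous or more jagged) rather than to raw coordinates.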
Algorithmic intermediate layers then mediated between movement and symbolic musical structure (i.e. scored notation), allowing an aesthetic and metaphorical transformation of expression between movement and sound.
Aside from Piano & Dancer, Instituto Stocos have created many other choreosonic works with the latest in interactive technologies and AI, including Embodied Machine (Stocos 2022) and The Hidden Resonances of Moving Bodies (Stocos 2019), where interactive choreosonics was paired with live percussion to examine the common principles of bodily consciousness that underlie both instrumental performance and dance.
Beyond in-person performance environments, a new form of online interactive choreosonics has begun to emerge, using AI-trained pose estimation technologies such as PoseNet or Google MediaPipe to track body movement with simple webcams. In particular, a number of dance works originally intended for live performance were forced by the pandemic lockdowns of 2020-2022 to pivot to online formats, yet retained their use of body movement for interactive sound-making through such technology, including my own work Mental Dance (Brown and Lim 2022). Around the same time, a parallel work by Masu et al. (2022), originally conceived as a live choreosonic performance using wearable sensors, was adapted to a browser-based installation, allowing participants to use their bodies over the browser to trigger visualisations and sounds, giving rise to a new type of remote embodiment.
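Pose estimators of this kind typically report body landmarks as coordinates normalised to the camera frame (in MediaPipe's convention, x and y lie in 0.0-1.0 with y increasing downwards), and the choreosonic mapping then turns a landmark coordinate into a sound parameter. A minimal sketch of one such mapping, where raising a wrist raises the pitch of a tone; the frequency range and linear mapping are illustrative assumptions, not the scheme used in any of the works above:

```python
def wrist_height_to_pitch(landmark_y, low_hz=110.0, high_hz=880.0):
    """Map a normalised pose-landmark y coordinate (0.0 = top of the
    camera frame, 1.0 = bottom) to a frequency in Hz.

    A raised wrist (small y) yields a high pitch; a lowered wrist
    (large y) yields a low pitch. Input is clamped to the frame.
    """
    y = max(0.0, min(1.0, landmark_y))
    return low_hz + (1.0 - y) * (high_hz - low_hz)

# Wrist at the bottom of the frame -> lowest pitch.
print(wrist_height_to_pitch(1.0))   # 110.0
```

In a browser-based work the same mapping would run per video frame on the estimator's landmark stream, feeding a synthesis engine such as Web Audio.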