by Patrick Hartono
HCI, Audiovisual, Interaction design, Interactivity, NIME, doctoral, consortium, Wayang Kulit, audiovisual composition, gesture, interactive system
This research aims to explore and expand the artistic possibilities of human-computer interaction (HCI) and computer game technology through the adaptation of the "Sabetan" technique of Wayang Kulit, addressing the following research questions:
How does the Sabetan technique contribute to the interaction design and musical aspects?
Which interaction modes should be implemented to achieve dynamic interactivity between the performer and the system?
How can compositional and performative methods emerge in conjunction with the development of the interactive system, and what are the significant differences from conventional methods?
What are the most effective data-processing methods for configuring the audio and visual parameters in real time?
These questions serve as guidelines for the research activities, which incorporate field and studio experiments focusing on the investigation of HCI frameworks, data collection, and analysis. Accordingly, the claims of originality and contribution to knowledge are demonstrated through original creative works (Candy, 2006).
Before going deeper into the discussion, the definition of audiovisual composition should be addressed in order to prevent any misapprehension of the study's objectives and to clarify the relevant topics.
The definition of audiovisual composition used here is grounded in the notion Grierson proposed in his thesis: a specific form of artwork in which the audio and visual entities are derived from digital synthesis procedures, aiming to explore Chion's concept of added value. Furthermore, he adds that an audiovisual work is more than a combination of its component parts, nor simply audio and visual production, but rather the embodiment of a process of composing audiovisual works that exploits the added value (Grierson, 2005).
In other words, audiovisual composition is understood as a form of musical work consisting of audio and visual entities that explores the reciprocal relationship between them: through the entities' simultaneous presence, a greater effect is created than either could produce individually (Chion, 1994).
Sabet(an) is a specific technique of Indonesian Wayang Kulit for controlling the puppet's movements; it serves as the puppeteer's expressive means of interpreting the puppet's character, embodied through its locomotion during the show (Murtiyoso, 2004).
In this research, the performative movements and concepts of the Sabetan technique are adopted to construct the interactive system and a hand-gesture score that functions as the primary gestural procedure for interacting with the system. Thereby, the performer's interaction with the system during the show is no longer comprehended merely as an improvisational act, but as an embodiment of performative gestures that conveys the puppeteer's expression.
A Leap Motion device is employed, along with a machine learning model designed to fit the research objectives, to acquire the gestural information. Once acquired, this information is assigned to control the parameters of all the continuous audio and visual processes, such as sound synthesis and spatialisation, including the real-time animated movement of virtual hands.
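The mapping step described above can be sketched as follows. This is a minimal illustration, not the actual system: the feature names, parameter names, and ranges are hypothetical, and a simple linear scaling stands in for the trained machine-learning model that processes the Leap Motion data.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from [in_lo, in_hi] to [out_lo, out_hi]."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = min(max(t, 0.0), 1.0)  # clamp to the valid range
    return out_lo + t * (out_hi - out_lo)

def map_gesture(palm_height_mm, grab_strength):
    """Map two hand features to continuous audio/visual parameters.

    Both feature names and target parameters are illustrative only.
    """
    return {
        # vertical palm position drives a filter cutoff (Hz)
        "cutoff_hz": scale(palm_height_mm, 100.0, 400.0, 200.0, 8000.0),
        # how closed the hand is drives source spread in the 3D scene
        "spread": scale(grab_strength, 0.0, 1.0, 1.0, 0.0),
    }

params = map_gesture(palm_height_mm=250.0, grab_strength=0.25)
print(params)
```

In a real-time context, a function like `map_gesture` would run once per tracking frame, with its output sent on to the sound synthesis and rendering engines.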
This project offers new ways of composing and performing audiovisual composition through kinesthetic hand gestures that embody audiovisual interaction in a 3D virtual space.
Upon re-investigating the research's objectives and challenges, the development of the interactive system is focused on musical usability, implementing the principles of articulative freedom and expressivity (Tanaka and Knapp, 2022), which align with the twofold functionality of the Sabetan technique: performative method and expressive agent.
This research aims to provide more space for a balanced dialogue between artistic and technical discourses by comprehending the interactive system both as an interactive mechanism and as an expressive agent for embodying musical ideas.
The research then centers on three case studies of audiovisual works created with the same blueprint system but continuously evolving in conjunction with the artistic and technical concepts applied in each case study.
For the NIME 2022 Doctoral Consortium, the first case study, “Morphology,” is discussed. The artistic exploration of this work lies in the development of the interactive audiovisual system, inspired by the notions of morphology and added value.
In an attempt to realise these concepts, the interactive system is specifically designed to generate audio and visual entities simultaneously from the incoming real-time gestural information. This is achieved by assigning the real-time data to configure morphing parameters that alter each entity's character from its original form/timbre toward an abstract audiovisual manifestation.
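The morphing idea can be sketched as a single gestural control value interpolating every parameter between an "original" preset and an "abstract" preset, so that the audio and visual entities transform together in real time. All parameter names below are hypothetical placeholders, not the system's actual parameters.

```python
# Two illustrative presets: the entity's original character and its
# fully abstract counterpart. Real presets would hold synthesis and
# rendering parameters.
ORIGINAL = {"harmonicity": 1.0, "noise_mix": 0.0, "mesh_distortion": 0.0}
ABSTRACT = {"harmonicity": 0.2, "noise_mix": 0.9, "mesh_distortion": 1.0}

def morph(amount):
    """Interpolate every parameter; 0.0 = original form, 1.0 = abstract."""
    amount = min(max(amount, 0.0), 1.0)
    return {k: ORIGINAL[k] + amount * (ABSTRACT[k] - ORIGINAL[k])
            for k in ORIGINAL}

print(morph(0.5))
```

Driving `amount` from a gestural feature (for example, hand openness) is what lets the spectator hear and see the transformation as a single, gesture-bound event.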
Moreover, since most audio and visual processes depend on the performer's gestures, the system allows the spectator to witness all interactions in both domains live, which indirectly engenders a unique viewing experience not previously achievable in traditional forms of audiovisual performance.
A mixed-method approach under the umbrella of practice-based methodology is adopted to undertake the research. It incorporates motion analysis, user-centered design (UCD), and observation methods to establish the research framework; these methods serve as practical procedures for the specific research activities that align with their respective areas of use.
The practice-based approach is used to execute and elucidate the practical research activities, which involve various experiments with HCI technology (software/hardware), by implementing UCD methods in the interactive system development process. These empirical investigations aim to uncover the research findings through continuous artistic processes that lead to the creation of the case studies.
The UCD method is an iterative design framework in which the users' characteristics and needs are the core of the design process (IDF, 2005). Therefore, any musical activity within the contexts of audiovisual composing and performing can be used as a design subject to achieve specific functionalities that expand the conventional forms of audiovisual composition and performance methods.
Subsequently, motion analysis and observation methods are employed to examine the puppeteer's gestures when controlling the puppet's skeletal hands, using motiongrams and a custom motion-tracking system to acquire, analyze, and visualize the information (Jensenius, 2006). This applied quantitative process is performed within the mixed-method approach, focusing on musical usability; it functions to filter and determine which gestural movements can be implemented further, given the technical limitations of the HCI and computer game technologies used.
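The motiongram principle behind this analysis can be illustrated in a few lines: each video frame is differenced against the previous one, and the resulting motion image is collapsed to a single column by averaging each row; stacking these columns over time shows where, vertically, motion occurs. This is a toy sketch only, with tiny grayscale lists standing in for real video frames.

```python
def motion_column(prev_frame, frame):
    """Collapse the frame-difference image to one column (row means)."""
    return [
        sum(abs(a - b) for a, b in zip(row_prev, row)) / len(row)
        for row_prev, row in zip(prev_frame, frame)
    ]

def motiongram(frames):
    """One column per frame transition, stacked left to right over time."""
    return [motion_column(p, f) for p, f in zip(frames, frames[1:])]

# Two 3x3 frames: a bright pixel moves within the top row only,
# so only the first row of the motiongram column registers motion.
f0 = [[0, 9, 0], [0, 0, 0], [0, 0, 0]]
f1 = [[0, 0, 9], [0, 0, 0], [0, 0, 0]]
print(motiongram([f0, f1]))
```

Applied to footage of the puppeteer's hands, such columns make it easy to see which vertical regions are active during each Sabetan movement, which is exactly the property that makes motiongrams useful for filtering candidate gestures.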
Lastly, the gestures filtered through the previous data operations are used to develop the interactive system and a gestural score that functions as the primary procedure for interacting with the system.
The knowledge and creative artifacts that emerge throughout the studies are intended to answer the research questions and contribute to the interdisciplinary communities of audiovisual and HCI practice, centered on the following three areas of research findings:
Practical design solutions for interactive musical system development, focusing on performative and technological aspects.
Adaptation and analysis methods for non-Western performative techniques and concepts (the Sabetan technique), emphasizing musical and performative factors.
Novel compositional methods and performative strategies for audiovisual composition.
All the research outcomes are elaborated textually within the doctoral thesis and performatively (as performance lectures) at international computer music, NIME, or similar events.
Chion, Michel. Audio-Vision: Sound on Screen. New York: Columbia University Press, 1994.
Chion, Michel. "The Audiovisual Contract: Projections of Sound on Image." In Audio-Vision: Sound on Screen. New York: Columbia University Press, 1994.
Di Donato, Balandino. (2020). Designing Embodied Human-Computer Interactions in Music Performance. DOI: 10.13140/RG.2.2.26027.36640.
Van Ness, Edward C., and Shita Prawirohardjo. Javanese Wayang Kulit: An Introduction. Oxford in Asia Paperbacks. Kuala Lumpur; New York: Oxford University Press, 1980.
Jensenius, Alexander Refsum. (2006). Using Motiongrams in the Study of Musical Gestures. 499-502. DOI: 10.13140/2.1.1895.7124.
Hoo, Yong Leng, Noris Norowi, Siti Atan, Azrul Jantan, and Rahmita Wirza. (2018). Designing a Natural Musical Interface for a Virtual Musical Kompang Using a User-Centered Approach. In CHIuXiD '18: Proceedings of the 4th International Conference on Human-Computer Interaction and User Experience in Indonesia, 38-42. DOI: 10.1145/3205946.3205949.