
Sonification of Ecological Environment through Electronic Musical Instruments

Published on Jun 22, 2022


Keywords: Acoustic Ecology, Electronic Musical Instrument, Biophilia, Anthropocentric, Non-anthropocentric


Research Questions

  • How do we revitalize acoustic ecology in our contemporary music scene?

  • How do we adapt electronic musical instruments to acoustic ecology?

  • How do we advocate the theoretical, empirical, and practical principles relating human body perception to ecology, instrumental performance, and presentation?

  • How does machine learning facilitate electronic musical instruments in the practice of creating artworks for acoustic ecology?



The traditional aim of Acoustic Ecology is to “bring together the scientific, sociological and aesthetic aspects of the sonic environment”1, drawing attention not only to natural sounds but also to the industrialized environment and the human-affected sounding world. With technological advances in feature extraction, data retrieval, metadata elaboration, and ubiquitous computing2, the methodology of Acoustic Ecology can be extended to biodata detection, analysis, manipulation, and sonification. I therefore propose that Acoustic Ecology be revitalized and redefined in light of contemporary music technology, after re-thinking the history of technology-based music: it should encompass not only the acoustic properties of the soundscape but also biodata sonification. By measuring electrical conductivity, galvanic conductance, or electromagnetic signals, or by expressing the biological structure of organisms in tailor-made electronic musical instruments, these signals are transformed into sound through algorithmic computation, a process distinct from the analog-to-digital conversion used in field recordings of the acoustic landscape. Biodata sonification echoes the fundamental and traditional purpose of creating collective commemoration among the public, manifesting a sense of identity, evoking environmental consideration, and allowing audiences to experience the “unheard sounds” within the data. The paper demonstrates an example of ecological genomics3 through bioinformatics, expressing the interaction of genetic mechanisms with the ecological environment and tracing the responses of organisms; it also explains how human beings can adopt electronic musical instruments as a medium for expressing these biological and genetic aspects, contributing to ecological genomics.


1. Context/Theory



In the traditional convention of Acoustic Ecology, data analysis is usually performed on acoustic environmental sounds to assess the health of the biosphere. The sounds of today may become the “acoustic heritage” of tomorrow, since ecological health changes over time, and changes in the physical properties of a landscape alter both the acoustic and non-acoustic features of the environment. Biodata sonification adopts data extraction and analysis from the environment or from organisms, potentially yielding useful environmental, ecological, or biological metadata for further exploration in acoustic ecology.


1.2. Anthropocentric and Non-anthropocentric View

The methodology of biodata sonification is investigated here from a non-anthropocentric view, in contrast to the anthropocentric approach of traditional acoustic ecology. Acoustic ecology traditionally aims to reveal the effects of human activities (technophony or anthropophony) on the ecological environment, in both geophony and biophony, and “prioritizes community engagement, exploration, and experience of the sounding world, driven by a desire to build stewardship and agency for change management in the community”4. Human beings can thus find solutions to problems, respond to situations as a community, or at least recognize the changes we have imposed on our environment; this perspective is anthropocentric and human-centered because the thought processes originate from and serve human beings. The non-anthropocentric approach proposed in this paper not only enables more precise and thorough data analysis by expanding the methodology, but also allows us to re-imagine our acoustic environment from a data perspective. Beyond awareness of the acoustic properties of landscapes, a consciousness of the conventionally “unheard” sounds can be cultivated through instinct and observation, echoing the traditional purpose of acoustic ecology.


1.3. Human Interactive Body-based Technologies and Somaesthetics

To elaborate the concept of biophilia, the electronic musical instruments shall adopt human interactive body-based technologies, navigating and exploring the intrinsic features of our embodied ways of engaging with the environment and transforming it through bodily motions and movements. The concept of somaesthetics is suggested for the navigation of biodata sonification; it can be “provisionally defined as the critical, meliorative study of the experience and use of one’s body as a locus of sensory-aesthetic appreciation (aesthesis) and creative self-fashioning.”5 The instrument shall provide experiential skills that make the navigation itself pleasurable for users.


The electronic musical instruments are used as a medium to navigate the biodata, and this process of navigation is defined as biodata sonification, which extends human perception of acoustic ecology. The biodata creates a system that expresses non-anthropocentric views, and human beings navigate the system to understand those views with categorized gestures and motions. The electronic musical instruments can therefore serve as a platform that initiates dialogue between biophony and anthropophony, reflecting biophilia and offering opportunities for biophilic interaction. Both anthropocentric and non-anthropocentric views shall be combined to present a complete representation of acoustic ecology.



Figure 1: Data and Signal Flow Graph

The project presents a demo model, the COVID-19 Genomic Navigator, a motion sensing glove that sonifies the protein genomic data of COVID-19, extracted from The National Center for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). The PDB files describe the structure of the proteins at the atomic and amino acid scale, which informs the multi-dimensional parameter mapping of the synthesized sounds. Three data sets were selected for the project, 7B3C, 7D4F, and 7KRP, all of which are RNA-dependent RNA polymerase (RdRp) related. The data was formatted into prn or csv files and read into Max patches, which send OSC messages to Kyma (Symbolic Sound), where audio signal processing is performed in multiple synthesizers through parameter mapping. The three protein data sets are distributed evenly in space while the motion sensing glove navigates and sonifies the data synchronously in both physical space and time. The motion sensing glove captures hand gesture and movement data, which is sent from the Arduino to the Max patch through OSC messages and used to control the graphical user interface (GUI) in Max, which in turn manages the parameters of the synthesizers in Kyma through OSC messages.
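As an illustration of the first stage of this pipeline, the following stdlib-only Python sketch parses atom coordinates out of a PDB file’s fixed-width ATOM records and packs them into OSC-style tuples of the kind a Max patch could forward to Kyma. The OSC address and message layout are assumptions for illustration, not the project’s actual mapping.

```python
def parse_atoms(pdb_text):
    """Extract (x, y, z) coordinates from ATOM/HETATM records.

    PDB is a fixed-width format: x, y, z occupy columns 31-38,
    39-46, and 47-54 (1-indexed), i.e. slices [30:38], [38:46], [46:54].
    """
    coords = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            coords.append((float(line[30:38]),
                           float(line[38:46]),
                           float(line[46:54])))
    return coords


def to_osc_messages(coords, address="/protein/atom"):
    """Pack each atom into a hypothetical OSC message
    (address, index, x, y, z) for downstream parameter mapping."""
    return [(address, i, x, y, z) for i, (x, y, z) in enumerate(coords)]
```

In practice these tuples would be sent over UDP (e.g. with an OSC client library) rather than collected in a list; the list form keeps the sketch self-contained.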

Gesture recognition and dynamic time warping (DTW)6 from Wekinator are applied to the motion sensing glove’s data to perform machine learning and to catalyze the sonification of the interaction between the different protein data. The motion sensor data is sent from Max to Wekinator for analysis. The gesture recognition output from Wekinator is sent back to Max, where it triggers programming events that manifest the ‘musical characteristics’ of the proteins through the control parameters of the synthesizers. Both the protein genomic data and the gestural data are reformatted and routed into several synthesizers for audification and sonification. The data of five categorized gestures is labelled, recorded, analyzed, and trained in the machine learning model before the performance. Non-categorized gestures are also used for musical expression alongside the five categorized gestures to build the growth and development of the piece.
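Wekinator implements its own DTW matcher; as a conceptual sketch of what nearest-template gesture classification involves, a minimal DTW distance and classifier might look like the following (gesture names and template data are hypothetical, and real input would be multi-dimensional sensor frames rather than scalars):

```python
import math


def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences,
    filled in with the standard O(len(a) * len(b)) recurrence."""
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]


def classify(gesture, templates):
    """Return the label of the template closest to the incoming gesture,
    mirroring nearest-template matching conceptually."""
    return min(templates, key=lambda label: dtw_distance(gesture, templates[label]))
```

Because DTW warps the time axis, a gesture performed slower or faster than its training template still matches: a stretched rendition of a template aligns to it with low cost.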

Through informative retrieval of raw data from our environment, real-time sonification techniques can be used to re-imagine sounds that might be endangered and to engender new elements in the natural environment. Learning environmental sounds through machine learning and conducting data analysis helps us understand the fundamental and microscopic characteristics of ecology, which can be used to determine the functionality of modeling tools that adapt to the environment and create music as a response.


The project aims to build an organic and self-sustained interaction within the model by ultimately having the music output learnt and fed back into the machine learning mechanism. The electronic musical instrument would control the timing of the feedback, the machine learning threshold and network, and the time index of the data sonification. Through electronic musical instruments, “human beings’ innate affinity for the natural world could be explored.”7 The biodata sonification aims to engage the audience in deep listening8 and to maximize their consciousness in sensing the properties of biodata by instinct. Hence, it could educate the public and enhance environmental awareness.
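The self-sustaining loop described above could be sketched as follows: analysis frames of the instrument’s audio output are fed back as new training material only when a confidence threshold and a timing gate are both satisfied. All names and parameter values here are hypothetical placeholders, not the project’s implementation.

```python
class FeedbackLoop:
    """Hypothetical sketch of a self-sustained feedback mechanism:
    synthesis output is analyzed and, when the classifier's confidence
    clears a threshold at a gated time step, appended to the training set."""

    def __init__(self, threshold=0.8, feedback_every=4):
        self.threshold = threshold            # confidence gate (assumed value)
        self.feedback_every = feedback_every  # feed back every Nth frame (assumed)
        self.training_set = []
        self.frame = 0

    def step(self, features, confidence):
        """Process one analysis frame; return True if it was fed back."""
        self.frame += 1
        if confidence >= self.threshold and self.frame % self.feedback_every == 0:
            self.training_set.append(features)
            return True
        return False
```

Gating both on confidence and on timing keeps the model from drifting on every low-quality frame while still letting the instrument control when feedback occurs.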


The demonstration is a work in progress; it shall be further developed in the precision and complexity of its knowledge expression through biodata sonification to achieve the theoretical approach outlined in the theory section. In the future, the adaptation of electronic musical instruments to acoustic ecology shall be elaborated by enhancing the functionality of the modeling tool in data classification through ubiquitous computing.


Expected Outcome from the Demonstration

COVID-19 Genomic Navigator is an electronic musical instrument that adopts a motion sensing glove to perform the sonification of COVID-19 protein genomic data. It uses categorized and non-categorized gestures to demonstrate the story of COVID-19. The categorized gestures are used in gesture recognition while the non-categorized gestures are used for musical expression and structural development of the project.

7B3C shows the structure of the elongating SARS-CoV-2 RdRp with Remdesivir, while 7D4F shows the structure of the COVID-19 RdRp bound to suramin. Both Remdesivir and suramin are drugs that inhibit the enzymatic activity of the protein and the replication and transcription of the RNA, whereas 7KRP shows the backtracking and transcription of RNA in coronavirus. The contrasting roles of ‘protagonist’ and ‘antagonist’ are expected to emerge through the project because the differences in the proteins’ functions build up climactic contrast.

The sonification of the “protagonist” and “antagonist” proteins portrays a story of struggle that reflects the contemporary life of everyone. The contrast between the roles builds musical growth and development and triggers musical events through gesture recognition in the machine learning model.

In the demonstration, these three protein genomic data sets serve as the data source, and the research purpose is grounded in the functions of the proteins. The system built in the project allows different protein genomic data (in PDB format) to be loaded as input, generating the corresponding music based on each unique data structure. Users can therefore choose any genomic data of their own preference and compare the results in the sonification system.

The PDB files show the structural significance of the RdRp at the atomic and amino acid levels, which contributes to the multi-dimensional parameter mapping of the synthesized sounds. The three protein data sets are distributed evenly in space while the motion sensing glove navigates and sonifies the data in synchronized physical space and time.

The 3D structure of the atoms is mapped with a spherical binaural panner to recreate ‘3D images’ of sound, while the atoms are concatenated and sonified one by one along the time index. Audiences can therefore trace the position of the atoms in three-dimensional space, as if the proteins were painted sonically.
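The mapping from atomic coordinates to a spherical panner can be sketched as a Cartesian-to-spherical conversion. Angle conventions (degrees, azimuth measured from the x-axis, elevation from the horizontal plane) vary between panners and are assumptions here, not Kyma’s specific convention.

```python
import math


def cart_to_spherical(x, y, z):
    """Convert an atom's Cartesian coordinates to (azimuth, elevation,
    radius), the kind of parameters a spherical panner typically expects.

    Azimuth is the angle in the x-y plane; elevation is the angle above
    that plane; radius can drive distance cues such as attenuation.
    """
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.asin(z / r)) if r else 0.0
    return azimuth, elevation, r
```

Iterating this conversion over the parsed atom list, one atom per time step, yields the panner trajectory that “paints” the protein in space.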

Biodata sonification expresses genetic mechanisms through data storage, analysis, and manipulation in bioinformatics, and its computational capabilities could be optimized through ubiquitous computing. Through analysis of the interaction between genetic mechanisms and the ecological environment, we can contribute to ecological genomics, which shall become one of the criteria for evaluating the scientific, sociological, and aesthetic aspects of the environment (e.g. enzymatic responses between the proteins through sonification and their effect in the community). The methodology of designing electronic musical instruments as a medium of data representation and musical expression of the biological and genetic aspects of ecological genomics through sonification shall include and express the conceptual framework of somaesthetics, and advocate the theoretical, empirical, and practical principles relating human body perception to ecology, instrumental performance, and presentation.



The biological significance of the environment shall reflect knowledge of biophony phenomena through visceral connections to the data. Data visualization would therefore be vital in the representation of biodata sonification, and these aspects shall be developed further. In the consortium, the methodology of biodata sonification through electronic musical instruments will be explored and discussed.


The concept of ubiquitous computing was introduced into the machine learning model of data sonification, and the methodology of refining ubiquitous computing in the application of biodata sonification could be discussed. What approaches shall be attempted in machine learning for ubiquitous computing with musical purposes? How shall we optimize the algorithmic capabilities of the sonification system?


Ethics Statement

The project is supported by Louisiana State University and is one of the research projects undertaken by the author during her PhD. The project has observed and followed the principles and codes of practice in designing new musical interfaces. The demo in the proposal uses bioinformatics data from open-source platforms, and the data was not obtained through unethical extraction methods. It is guaranteed that the project violates no principle of accessibility, environmental responsibility, or inclusion. No potential conflicts of interest occur in the project.

Since the project adopts open-source genomic data and the music materials are based on a large database, it creates a sustainable artwork: other users can use the system with different sets of genomic data to generate their desired outcomes.

The author would promote ethical research environment and uphold the standard in the statement.



The author thanks Carla Scaletti, Kurt J. Hebel, Teddy Garcia Aroca, and Louisiana State University for their inspiration and support.


Drossos, K., Floros, A., & Kanellopoulos, N.-G. (2012). Affective acoustic ecology: Towards emotionally enhanced sound events. In Proceedings of the 7th Audio Mostly Conference: A Conference on Interaction with Sound (pp. 109–116).

Hogg, B. (2013). When violins were trees: Resistance, memory, and performance in the preparatory experiments for landscape quartet, a contemporary environmental sound art project. Contemporary Music Review, 32(2–3), 249–273.

Paine, G. (2017). Acoustic Ecology 2.0. Contemporary Music Review, 36(3), 171–181.

Radford, L. (2001). Handbook of Acoustic Ecology. Computer Music Journal, 25(1), 93–94.

Ouzounian, G. (2017). Rethinking Acoustic Ecology. Evental Aesthetics, 6(1).

Wrightson, K. (2000). An introduction to acoustic ecology. Soundscape: The Journal of Acoustic Ecology, 1(1), 10–13.


Demonstration of COVID-19 Genomic Navigator

Supervisor’s Recommendation Letter

