
Rethinking networked collaboration in the live coding environment Gibber

Examining the performance practice of three ensembles in Gibber, a networked live coding environment

Published on Jun 16, 2022

ABSTRACT

We describe a new set of affordances for networked live coding performances in the browser-based environment Gibber, and discuss their implications in the context of three different performances by three different ensembles at three universities. Each ensemble possessed differing levels of programming and musical expertise, leading to different challenges and subsequent extensions to Gibber to address them. We describe these and additional extensions that came about after shared reflection on our experiences. While our chosen design contains computational inefficiencies that pose challenges for larger ensembles, our experiences suggest that this is a reasonable tradeoff for the low barrier-to-entry that browser-based environments provide, and that the design in general supports a variety of educational goals and compositional strategies.

Author Keywords

live coding, networked music performance, collaboration, laptop ensemble

CCS Concepts

• Applied computing → Sound and music computing; Performing arts

• Human-centered computing → Collaborative and social computing

1. Introduction

Over the past decade, live coders have expressed increasing interest in ensemble performance, as evidenced both by greater numbers of such performances and the continued development of live coding environments that support them. Gibber, a browser-based live coding environment with support for both music and graphics programming [1], initially added elements for networked live coding performance in 2015; however, during a rewrite of Gibber we discarded the prior system in favor of a new design and implementation that takes inspiration from other recently authored live coding environments with collaborative capabilities. Our goal was to provide enough flexibility to support ensembles of different sizes and levels of experience, a claim we will begin to address in this paper.

We begin this paper with an overview of systems that support collaborative performance in live coding, and then describe how these systems influenced the addition of collaborative features to Gibber. We then examine how three different ensembles made use of these features in a variety of contexts, and critically evaluate what features proved essential to performance and what features were superfluous or required improvement. We used four lenses to frame our analysis: communication, roles and group dynamics, computational efficiency, and extensibility. Our hope is that this research can encourage other ensembles to experiment with collaborative performances using Gibber, which is open-source software and freely available in the browser.

2. Related Work

As noted by Lee and Essl [2], there are a variety of models for enabling networked live coding performances. Temporal synchronization is especially important in many physically co-located performances, so that performers feel they are playing in time with each other. Many music programming languages provide mechanisms for temporal synchronization, from MIDI Clock messages to more recent technologies like Ableton Link [3]. In the live coding community, such technologies are used to synchronize a variety of environments, including TidalCycles [4], Sema [5], and others. The original collaborative features of Gibber also included the ability to synchronize via a proportional controller, while Sema uses a Kuramoto oscillator, and TidalCycles can synchronize via Ableton Link, OSC messages, or MIDI Clock messages. Gibberwocky [6] is an example of a live coding environment that controls an external application (Ableton Live or Max/MSP/Jitter); this enables it to offload temporal sync between ensemble members to the external application under control.

But syncing time is not the only possibility. Live coding environments frequently also have the ability to synchronize source code, with a variety of different algorithms to choose from, as discussed in Section 3. If source code is both shared and executed across all computers in a networked ensemble, users across a remote network can play together even if they are not synced temporally, as every participant is running everyone else’s code in addition to their own. This can also be a successful strategy in a co-located performance if only one ensemble member connects their audio output to speakers; in such settings the other members are effectively running remote code editors connected to a single computer feeding sound reinforcement. The new iteration of Gibber adopts this model, as discussed in Section 3. We were inspired to focus our collaborative features on code sharing as opposed to time sharing after watching performances given using both Troop [7] and Estuary [8], which use a similar approach. QuaverSeries [9] and its recent successor Glicol [10] are two other browser-based live coding environments that also use this model. In addition, QuaverSeries and Glicol provide an audience view of the code, enabling audience members to receive and execute the code from online performances live on their personal devices instead of watching a video stream, requiring much less bandwidth from audience members and incurring no audio or video compression artifacts.
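The essence of this shared-execution model can be sketched in a few lines. The following is purely illustrative and is not taken from Gibber, Troop, or Estuary; it assumes an already-open WebSocket named socket and shows how an executed block of code might be broadcast to, and evaluated by, every connected client.

```javascript
// Purely illustrative sketch of shared code execution, not the actual
// implementation of Gibber, Troop, or Estuary. Assumes an already-open
// WebSocket named `socket` connecting all performers.

// When a performer executes a block, run it locally and broadcast it...
function executeAndShare( code ) {
  eval( code )                                              // run on this machine
  socket.send( JSON.stringify({ type: 'execute', code }) )  // share with peers
}

// ...and evaluate any block received from another performer, so every
// client ends up running everyone else's code in addition to its own.
socket.onmessage = msg => {
  const { type, code } = JSON.parse( msg.data )
  if( type === 'execute' ) eval( code )
}
```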

In addition to time and code, other data can be shared between connected clients. For example, both Impromptu [11] and the original version of Gibber support sharing tuple-based data between ensemble members. This type of networked data sharing is particularly useful when code is only run on individual computers that are temporally synced, and can be used to share information about rhythm, harmony, and musical pattern.

Finally, communication between ensemble participants is also an important concern. To the best of our knowledge, few live coding systems reported in the literature over the last decade provide additional mechanisms for communication such as live video, audio, or text chat; the exceptions are Gibber and Lich.js [12] (a prototype version of EarSketch that included chat capabilities was also created for the purposes of conducting an experiment, as described in [13]). While text chat can be approximated by code comments / non-executed lines of code, it is arguably difficult to keep track of all the code in a given multi-user session and spot such comments, particularly when multiple editors are involved, as is the case with systems like Estuary and Flok. LOLC [14] is an earlier live coding system that does provide text chat between ensemble members; this chat was often projected to audience members interspersed with code during performances, in the style of some networked music performances by the Hub.

In Table 1 we provide an overview of the collaborative differences discussed here. We drew inspiration from many of these environments when re-designing the collaborative affordances of Gibber, as described in the next section.

Table 1

| Name | # of Editors | Code Share | Temporal Sync | Other Notable Features |
| --- | --- | --- | --- | --- |
| Gibber | Single or Multi | Yes | No | chat, audio statistics for aiding visualization, shared data |
| Troop | Single | Yes | Coarse (1 Hz) | highlights code according to user who last edited it, shared data |
| Estuary | Multi | Yes | No | multi-language support |
| Flok | Multi | Yes | No | multi-language / system support |
| LOLC | Single | No | Yes | visualization for audience, shared patterns between performers |
| Sema | Multi * | No | Yes | multi-language support |
| Impromptu | Single | No | Yes | shared arbitrary data |
| TidalCycles | Single | No | Yes | wide variety of sync methods available [15] |
| Lich.js | Multi | Yes | No | text chat |
| QuaverSeries & Glicol | Single | Yes | No | live code streaming to audience members |
| Republic | Single | Yes | Yes | distributed code execution |

A high-level overview of collaborative features across a selection of live coding environments.
* Sema provides multiple text-editing areas that control different aspects of language design and musical performance; however, it does not provide collaborative views of the code of other ensemble members.

3. Collaboration in Gibber

We conducted our initial development work on networked performance back in 2015 with a system named Gabber [16]. This system contained features that are rarely found in live coding environments. For example, while many live coding environments support the display of multiple editors showing code from other performers, Gabber also optionally supported performers editing the code of other performers, and then selectively executing code on specific computers. Having the ability to target specific computers for code execution enabled ensembles to use their laptops as a distributed speaker array, a concept (perhaps) first explored by the group PowerBooks Unplugged using the SuperCollider programming language and Republic extension [17]. In addition to syncing code and providing unique affordances for code execution, Gabber also synced connected clients temporally via a hybrid proportional controller model.

However, after watching numerous performances by groups such as TYPE using Troop or SuperContinent [18] using Estuary, we wanted to implement a simpler model for networked performance. We decided that temporal sync was mostly unnecessary and focused on sharing code. Gabber previously shared code using operational transforms [19] to sync networked code editors; over time we observed that, while performance was generally acceptable, occasional glitches in sync resulted in garbled text. Conflict-free replicated data types (CRDTs) [20] have since emerged as an alternative to operational transforms. While both operational transforms and CRDTs enable divergent state spread across multiple nodes to converge, CRDTs can work over peer-to-peer connections (no centralized server is required, except perhaps to broker initial connections) and often perform better under poor network conditions.
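As a concrete illustration of how a CRDT library can keep a shared code buffer in sync, the sketch below uses Yjs and its WebRTC provider. This is one possible way to build such a system, not a description of Gibber's actual internals, and the room name is hypothetical.

```javascript
// Illustration of CRDT-based code sharing using the Yjs library;
// Gibber's actual implementation may differ.
import * as Y from 'yjs'
import { WebrtcProvider } from 'y-webrtc'

const doc  = new Y.Doc()                     // local replica of the shared state
const code = doc.getText( 'shared-buffer' )  // CRDT text type holding the code

// Peers exchange updates directly; a signalling server only brokers connections.
const provider = new WebrtcProvider( 'gabber-example-room', doc )

// Local edits merge deterministically with concurrent remote edits...
code.insert( 0, 'verb = Freeverb()\n' )

// ...and remote edits arrive as observable changes on the same object.
code.observe( () => {
  console.log( 'buffer is now:\n' + code.toString() )
})
```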

Gabber originally supported a multi-editor view, where each performer had their own code editor and every performer could see every editor; in our refactoring we initially decided to instead implement a shared buffer mode, where all performers edit the same single buffer of text. At the request of FaMLE, we added an additional mode where each user controls their own personal text buffer, as seen in Figure 3. The editor mode is chosen when a new virtual “room” for a performance is created via the GUI of Gibber, as shown in Figure 1.

Figure 1

Interface for joining a collaborative session in Gibber. When the “share editor” option is checked, a single shared buffer is used by all participants for editing.

While these decisions (abandoning temporal synchronization, moving from operational transforms to CRDTs, and supporting both single- and multi-buffer editing) were the biggest changes to Gabber, numerous smaller changes were also implemented in response to feedback from performers. These are covered in the next section, where we discuss how three different ensembles at three different universities used Gibber for networked performances in a variety of configurations and contexts.

4. Gibber / Gabber in Performance

In this section we begin by describing ensemble configurations and performance settings for three different performances by three ensembles. We then examine how Gibber was used in all three performances using four different lenses: Ensemble Communication, Roles and Group Dynamics, Computational Efficiency, and Extensibility.

Performances

Performance #1 - Online / Duo

This online performance by the duo Chith featured Charlie Roberts, who programmed music, and Gillian Smith, who created audio-reactive visuals using a mix of p5.js and Hydra running inside of Gibber. The performance itself was part of the TOPLAP 20th anniversary series; it lasted about twenty minutes and was streamed live to Twitch and YouTube. The performers used a single shared text editing buffer as depicted in the video below.

Chith - TOPLAP 20th Anniversary Performance - https://vimeo.com/699646133

Performance #2 - In person / Thirteen-member ensemble

This performance was by the Miami University Laptop Ensemble (MULE). MULE is a new ensemble at Miami University, open to students of any major, though for the inaugural semester nearly all members were undergraduate students from the Music and Emerging Technology in Business and Design departments. None had significant programming experience prior to this performance.

Gabber was used for a twenty-minute structured improvisation that closed the concert (see video below). The performance plan was developed through multiple group rehearsals and discussions during the ensemble’s weekly meetings. A shared code buffer was executed on an auxiliary laptop connected to a projector and large speakers. The concert took place in a large ensemble rehearsal space with approximately sixty audience members in attendance.

Miami University Laptop Ensemble (MULE) - Gibber/Gabber Improvisation - https://youtu.be/wjFmljD7ngY

Figure 2

Snapshot taken during MULE’s performance, featuring live visuals generated with Hydra, a shared code editor, and engagement with audience members.

Performance #3 - Online / Six-member ensemble

This performance was by FaMLE, the MIT laptop ensemble. The ensemble consisted of five students plus the ensemble director, but only three students and the director took part in the compositions using Gibber. Previously the ensemble had used other live coding environments, in particular the MiniTidal environment in Estuary, and two of the students who performed with Gibber had participated in those performances. In addition, several members of the ensemble (although none of those who performed with Gibber) had developed live coding languages themselves; one of the other pieces on the concert described here was performed in one of these languages.

Ensemble members are required to take responsibility for a composition in FaMLE performances, and the concert included four short compositions performed in Gibber. One performer in particular wanted to create their own minimalist API for their piece, necessitating changes described under Lens 4 below.

The concert took place over Zoom, with all performers participating remotely. The ensemble director performed as part of the ensemble, and provided the audio and video feed for the performance. The various compositions all used the multi-buffer (non-shared) mode.

FaMLE Global Scope Gibber excerpts - https://youtu.be/H7pm_y8cSmA

Lens 1: Communication

We use the lens of communication to examine how ensemble members communicated with each other, and how the software itself communicated with the ensemble.

Although supportive of showing their text chat to make the process more visible to audiences, Chith did not feel that Gabber’s text chat was fluid enough to help make important decisions and negotiate transitions throughout the performance. Instead, they ran a private voice chat via Discord and used that to communicate. Having a single person in charge of music and a single person in charge of graphics perhaps makes for more frantic coding than would be needed in a larger ensemble, and voice chat enables coding and communication to happen concurrently.

FaMLE also ran a voice chat via Discord, and primarily used comments in the code editor to communicate with the audience during the performance. At times the voice chat caused a disconnect for the audience, who could see performers speaking in the Zoom video stream but could not hear them as they might expect. This is primarily an issue for remote concerts, as there is no visual indication of whom the speaker is addressing; in a live context, communication between performers would likely be more obvious via the orientation of bodies and other physical gestures. In previous performances using other environments, text chat was used to communicate performance instructions, which would also be displayed to the audience. For this performance, the director felt that managing both text chat and the Gibber interface was a cognitive burden.

MULE regularly used text chat in Gibber to coordinate actions during rehearsals. However, in the exhilaration of live performance, this communication was limited mostly to expressions of approval and encouragement (e.g. comments like “woo!” and “let’s go!”). Notable exceptions were occasional suggestions from the ensemble director to the “MC” for top-level musical changes (e.g. tempo or key), as described in the lens on roles and group dynamics.

Communicating About Mappings

Early on during practice sessions by Chith, the visual performer found it difficult to decide which instruments to map their visual effects to. In many live coding systems this is not a concern, as visuals are instead mapped to the output of an FFT analysis of the entire musical work; this is because audio and visuals are often generated in separate environments (for example, TidalCycles + Hydra) and the visual system has no access to the audio output of individual instruments in the musical system. While using an FFT has its benefits, it often precludes accurately mapping particular visual effects to the output of specific instruments, as frequency spectra from many instruments overlap; mapping visuals to specific instruments can arguably be used for more dramatic effect, or, at the very least, for different effects. Since Gibber runs both audio and visuals in the same language (JavaScript) and memory space, it does afford mapping visual elements to the output of specific instruments, even when visuals are generated using p5.js [21], Hydra, Marching.js [22], or any combination of the three.

In order to aid communication of which instruments are currently active, and to give some approximation of the dynamic range of the envelope followers placed on them, we added the ability to dynamically display the current envelope output of each instrument next to the name of the variable that contains it, as shown in the video of Chith’s performance. The visual performer found this visualization critical to her performance. Being able to see the names of instruments as they were added not only made it possible to tie visual effects to specific instruments, but also made it much easier for someone untrained in digital audio to interpret the audio performance. This additional information channel also reduced the burden on voice communication, and removed the risk of transcription errors when copying instrument names.
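A minimal sketch of the kind of instrument-specific mapping this enables is shown below. The follow() helper is hypothetical, standing in for whatever mechanism exposes an instrument’s current envelope value to visual code, and the instrument and sequencing calls follow typical Gibber usage but may differ from the current API.

```javascript
// Hypothetical sketch: follow() stands in for the mechanism that exposes an
// instrument's envelope-follower output to visual code; names may differ
// from Gibber's actual API.
kick = Kick()
kick.trigger.seq( 1, 1/4 )

// Hydra accepts functions as parameters, re-evaluated every frame, so a
// single instrument (rather than a whole-mix FFT) can drive a visual effect.
osc( 20, .05 )
  .scale( () => 1 + follow( kick ) * 3 )   // pulse size with the kick's envelope
  .out()
```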

Potential Communication Improvements

Meta-discussion of Gabber’s communication affordances revealed that the chat could be more useful with some options for visual customization. For example, since chat text is overlaid on top of any generated visuals (without the benefit of the black background behind characters in the code window), providing a means to enlarge text or change background color or opacity may make it easier to read and follow more complex message chains. Additionally, separating the chat window from the main editor window may encourage novice users to put more detailed instructions and suggestions in the chat, as it could be excluded from projected or streamed visuals. Since Discord has already proven useful in performances by both Chith and FaMLE, adding basic voice chat functionality to Gabber via WebRTC may be more convenient than configuring a second application solely for communication, while helping to reduce the mental load for novices occupied primarily with typing code. A recent study compared text and voice chat for live coding collaboration and found that, while both have areas in which they excel, voice chat proved more useful for realtime coordination during performance [23]. However, as noted above, voice chat in remote performances may be confusing for the audience, and in co-located performances may not be viable.

Lens 2: Roles and Group Dynamics

Ideally, Gabber would be able to accommodate a variety of different group dynamics, which in turn would accommodate a greater number of compositional / improvisational choices. In the lens of roles and group dynamics we examine the structure of our three performances, and how the affordances of Gabber supported or hindered them.

Chith had perhaps the simplest performance in this regard, with one member in charge of music and the other in charge of graphics. Transitions, beginnings, and endings were all negotiated via discussion during the performance. However, even in this simple configuration, tensions arose. For example, the performers in Chith used a single shared buffer for code editing, and the stream came from Roberts’ computer. As the code for the performance could not fit into the onscreen editor, audience members were left watching the code editing actions of Roberts, while the editing actions of Smith, much further down in the buffer, went mostly unobserved. In future performances, Chith will most likely use dedicated buffers for each performer instead of the single shared buffer; however, one drawback of having multiple editors onscreen is that more of the audio-reactive visuals would be occluded.

MULE designed their performance with pre-assigned roles and groups. This was partially done to help focus the efforts of less-experienced ensemble members, but also became a necessity after the disorganization of early rehearsals frequently resulted in cacophony. For example, before role assignments, ensemble members would often initiate multiple instances of instruments and sequences, resulting in dozens of divergent musical ideas. MULE eventually decided to split the ensemble into smaller groups of 2-3 members assigned to familiar musical roles, such as percussion, bass, melodic synth, etc. Each of these groups shared responsibilities for parameters and sequences on a single instrument, which streamlined the collective efforts. Moments before the performance, the shared editor was divided into sections for each group using comments to visually separate blocks of code. In addition, one ensemble member who showed early interest and aptitude for ancillary features in Gibber was assigned the role of MC. This member learned how to add and route effects on their own time and was given sole responsibility for changing top-level musical parameters such as tempo, scale, and key. The MC was also charged with monitoring the overall musical texture, deactivating or executing instruments and sequences as needed.

FaMLE used several different approaches to organizing group dynamics across compositions. Two compositions used the standard approach of assigning individual roles to each performer. Within these roles, performers had the freedom to improvise based on code examples and compositional structures developed during rehearsals, but were responsible for defining and altering their own sequences and instruments.

The third FaMLE composition, titled ‘Terms of Service’, was based on the creation of a minimalistic API, and the basic structure of the performers’ code was predetermined. A global pitch array was shared by all performers; each performer could generate their own pitch sequence by copying the global array and applying transformations, and each performer was also responsible for defining and controlling their sound-generating processes. The available transformations were defined by the API; more details on the implementation are discussed under Lens 4.
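A hedged sketch of what such an API might look like is given below; the variable and function names (globalPitches, retrograde, transpose) are invented for illustration and are not the composition’s actual code.

```javascript
// Hypothetical sketch of a 'Terms of Service'-style mini API; all names
// here are invented for illustration.
globalPitches = [ 0, 2, 3, 5, 7, 10 ]              // shared by all performers

transpose  = ( pattern, amount ) => pattern.map( p => p + amount )
retrograde = pattern => [ ...pattern ].reverse()

// Each performer copies the global array, applies transformations, and is
// responsible for the resulting sequence and the instrument that plays it.
myPitches = transpose( retrograde( globalPitches ), 7 )

bass = Synth()
bass.note.seq( myPitches, 1/8 )
```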

The fourth composition, entitled ‘Breakout’, specifically exploited the fact that, although performers were using individual code boxes, these code boxes are in fact different views into shared memory. While performers began by defining their own sequences and instruments, over the course of the piece performers would ‘steal’ control of other performers’ instruments by redefining those instruments’ pitch sequences. This allowed for rapid changes of sequences, and even battles for control of different voices, but also made it difficult to perceive which sequence definition was currently active.
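Based on the variable name shown in Figure 3, the mechanic worked roughly as in the sketch below: because all code boxes share one memory space, whichever performer last executes a sequence on the shared synth variable ‘owns’ it. The pattern strings here are illustrative only.

```javascript
// Performer A's code box: defines the shared instrument and its sequence.
synth = Synth()
synth.note.tidal( '0 2 4 7' )

// Performer B's code box, later in the piece: re-executing a sequence on the
// same variable 'steals' the voice, since all boxes share one memory space.
synth.note.tidal( '7 [5 3] 0 2' )
// The last-executed definition wins, which is hard to see visually (Figure 3).
```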

This is one instance where the annotation system built into Gibber would provide vital visual cues; however, the current implementation does not support annotations for code boxes controlled by other performers due to technical challenges.

Figure 3

Screenshot showing multiple definitions for ‘synth.note.tidal’. The active sequence at this moment is in the bottom right codebox; however, this is not visually apparent.

Lens 3: Computational efficiency

The design decision to forgo temporal synchronization in favor of shared code and shared code execution affected each ensemble in different ways. Here we use the lens of computational efficiency to examine the consequences of this decision, and the effects of Gibber’s computational design more broadly.

For Chith, there was little to no concern about computational efficiency. Visuals in Gibber run in the main thread (for p5.js) or on the GPU (for Hydra and Marching.js), while all audio sequencing and signal processing occurs in an AudioWorklet, a specialized node that runs arbitrary JavaScript or WebAssembly in a separate thread [24]. This means that the two performers were effectively using different compute resources and not competing with each other. Roberts also prefers to do “blank slate” live coding—not using any starter code and typing “everything” from scratch—which limits the number of instruments and effects they are able to incorporate over the duration of a performance, which in turn limits the processing power required. Since the two performers used different physical computers, there was the potential for uneven experiences between them. Chith streamed their performance from the higher-end computer in the duo (belonging to Roberts); however, Smith would occasionally see minor performance issues (primarily visual lag when using Hydra) that Roberts could not see.

MULE featured a much larger ensemble of thirteen performers using personal laptops of varying make, model, and generation. Some members with older computers found that the browser slowed to a crawl once live-generated visuals were incorporated, making it difficult to navigate the shared editor. As a workaround, members were asked to join the group’s Gabber room after Hydra was started on the computer connected to the projector, which removed the processing load of the visual engine from their individual machines. Additionally, since many of the members were novice programmers, they found it challenging to consistently instantiate and reference instruments with unique variable names, which in turn led to “phantom instruments” left running with no means to “kill” them. If enough of these are created, they can easily bog down a performance, squandering computational resources and muddling musical intentions. Because of this, we added a function named find(instrumentType) that searches for a specific type of instrument and checks whether any of the found instruments are not assigned to a variable in Gibber. Such instruments are returned in an array, at which point they can be conveniently disconnected from the audio graph.
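A cleanup pass using this function might look like the sketch below; the string argument and the disconnect() call are assumptions about the API rather than documented behavior.

```javascript
// Locate instruments of a given type that are no longer bound to a variable
// and remove them from the audio graph. The argument form and disconnect()
// are assumptions for illustration.
orphans = find( 'Synth' )
orphans.forEach( s => s.disconnect() )
```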

FaMLE also featured performers using a variety of laptops, and encountered similar performance issues. As with MULE, disabling graphics processing on older laptops helped considerably. Of the four pieces performed with Gibber, three utilized the “blank slate” live coding model, although the basic code structure was based on pre-composed templates.

Lens 4: Extensibility

In this section we use the lens of extensibility to address a variety of questions. Did Gibber have all the necessary features to support compositional concepts? When features needed to be added, could they be implemented by performers? What are the limits to the extensibility that Gibber provides?

These questions become important due to a (relatively) recent change to the audio engine of Gibber. Its previous version was written before the existence of AudioWorklets, which run in their own threads, and instead used ScriptProcessor nodes. ScriptProcessor nodes are capable of running sample-level audio callbacks written in JavaScript just like the AudioWorklet node; however, the ScriptProcessor runs in the main thread, which made it easy to control every aspect of Gibber from within the generated audio callback itself. Moving to AudioWorklets and their separate threads brought better performance reliability and lower latency, but came with the constraints inherent to multithreaded programming [25]. Operations that travel between threads have to be carefully considered and involve a message-passing queue; as a result, there is no longer the “anything goes” mentality afforded by having everything run in a single thread.

This became an issue for FaMLE, whose members wanted as much control over sequencing and musical pattern manipulation as possible, functionality that now takes place in the audio thread. Experimentation with manipulating data in the audio thread caused frequent crashes of the audio engine; in response, a “restart engine” button was added to Gibber’s user interface (visible in Figure 1) so that crashes wouldn’t require a browser refresh.

Two specific compositional needs were the ability to generate and continuously update shared data buffers, and the ability to define shared pattern manipulation functions. For example, in “Terms of Service” the compositional idea was to create an API of pattern manipulation functions. Functionality to dynamically inject these functions into the audio thread was added to support this, and also enabled global data structures to be directly loaded into the audio thread.
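The underlying browser mechanism for this kind of injection is message passing over the worklet’s port. The sketch below illustrates that general pattern rather than Gibber’s actual implementation: a function travels to the audio thread as a string, is reconstituted there, and shared data is loaded alongside it. The node and processor names are hypothetical, and the processor code would normally live in a separate module registered via audioWorklet.addModule().

```javascript
// Main thread (sketch only, not Gibber's implementation): send a pattern
// transformation and shared data to the audio thread over the worklet port.
const node = new AudioWorkletNode( audioContext, 'gibber-processor' )
node.port.postMessage({
  kind: 'define',
  name: 'invert',
  fn:   'pattern => pattern.map( p => 12 - p )',   // functions travel as strings
  data: { globalPitches: [ 0, 2, 3, 5, 7, 10 ] }
})

// Audio thread (normally a separate module added with audioWorklet.addModule):
class GibberProcessor extends AudioWorkletProcessor {
  constructor() {
    super()
    this.api = {}
    this.port.onmessage = e => {
      const { kind, name, fn, data } = e.data
      if( kind === 'define' ) {
        this.api[ name ] = eval( fn )     // reconstitute the function
        Object.assign( this, data )       // load shared data structures
      }
    }
  }
  process( inputs, outputs ) { return true }  // audio callback elided
}
registerProcessor( 'gibber-processor', GibberProcessor )
```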

In effect, this allowed “Terms of Service” to consist of a mini live coding language built within Gibber. This raises the question of what affordances and functionality we expect a live coding environment to provide. Within the context of FaMLE, the previous programming experience of some members led them to expect to be able to freely extend Gibber’s functionality, and any limitations due to technical implementation or creative constraints proved frustrating. Limitations of this sort prompted numerous requests for additional extensibility during preparation for the concert; however, satisfying these requests required direct modification of the Gibber codebase, a task that can realistically only be carried out by the software’s developers.

While FaMLE sought to extend Gibber in unanticipated ways, the inexperienced members of MULE often found it challenging to mentally model the program state over time. This included keeping track of active instrument instances and parameters. Some GUI elements of Gibber can help alleviate this issue, such as code annotations depicting the current step in rhythmic sequences [26]. However, using the browser console to monitor or query this information proved daunting for novices. Additional code annotations and visualizations and improvements to the granularity of code execution may help in such cases, though balancing approachability with flexibility should remain a priority.

5. Reflections and Future Work

In closing, we offer reflections on the overall experience of using Gibber from each ensemble. Chith’s reflection is written by the duo, while MULE’s reflection is from the perspective of the ensemble director. FaMLE’s reflection is written from the perspective of a performer/director.

Both members of Chith found performing in Gibber to be a smooth collaborative process. For Smith, the visual performer, the major frustration was using the shared buffer, which meant that only one performer’s view of the code was visible at a time (typically that of the audio performer, who was hosting the livestream). This was also frustrating and potentially jarring for audiences—who likewise had only a partial view of the code—as a great deal of activity was happening off screen. It also raises communication challenges in performance; for example, while having statistics about instrument output is helpful for mapping purposes, mapping would be easier still if the musical code could be viewed concurrently alongside the graphical code. Smith also noted the primacy of the music in the duo; while they were able to map the output of specific instruments to their visuals, there is currently no way to quickly map visual properties to sonic results.

The audience engagement during MULE’s performance was a pleasant surprise. Since it was the ensemble’s debut concert, we hoped to provide an opportunity for the audience to interact with performers and ask questions without disrupting the overall setting — an environment somewhere between chamber music concert and Algorave [27]. This was largely made possible by the shared text buffer of Gabber, which affords decentralized control for all aspects of the performance. Since each group in the ensemble focused on one instrumental role, one or two members could easily continue adjusting instrument parameters and executing sequences while other members were otherwise engaged. Additionally, the interdisciplinary nature of the ensemble attracted audience members from multiple programs at Miami University who were able to examine code, reactive visuals, code annotations, and sound and discuss the relationships between these elements in depth. The ability to connect with an audience and explore alternative concert formats within conventional spaces proved valuable, especially for a nascent ensemble; the implementation of networked collaboration using Gabber within Gibber was key to achieving this outcome.

In general, FaMLE found Gibber an extremely useful and flexible environment to work within. Working within JavaScript was familiar to most ensemble members, and in particular the support within Gibber for quickly importing samples from FreeSound [28] was enjoyable. The creation of tools for passing custom functions and data into the audio thread enabled even more flexibility for exploring musical ideas. It is clear that extensibility of this kind will continue to prove useful for experienced performers. However, while it would be nice for more functionality of this kind to be provided, it is also clear that support for extensibility is a challenge, and that there is a tension between extensibility, performance, and language design.

Ethics Statement

Our research was conducted as either personal artistic practice or typical university-level instructional activities. In fact, the idea for this paper was conceived of only after all the performances had taken place and we, in retrospect, identified an opportunity for shared reflection and evaluation. Unfortunately this means that we lost the opportunity to present more directed survey instruments to students for their personal feedback; despite this we hope that our perspectives as performers and educators provide insight into the challenges and promise of our various approaches.

Our design choices for Gibber do incur an environmental cost. Because every participant is running the instruments and sequences of every other participant in addition to their own, the total computational cost (and thus the cost in energy and associated environmental consequences) grows with the square of the ensemble size. At the current scale of Gibber use this is—perhaps—a negligible concern, but if greater numbers of users begin to perform with Gibber it is a design decision worth reconsidering. Moving to a model that uses temporal synchronization instead of shared code execution would remove this problem, but would also remove the ability to easily share data and patterns between participants without significant further development work.

Gibber is browser-based software that can be used in any modern browser at http://gibber.cc. It is also free and open-source software; the source code is available on GitHub at: https://github.com/gibber-cc/gibber

Acknowledgments

Miami University Laptop Ensemble was established with the support of a Student Technology Fee from Miami University.

FaMLE is supported by the MIT d'Arbeloff Fund for Excellence in Education.
