The following paper presents L2Ork Tweeter, a new control-data-driven, free and open-source crowdsourced telematic musicking platform and a new interface for musical expression that deterministically addresses three of the greatest challenges associated with the telematic music medium: latency, sync, and bandwidth. Motivated by the COVID-19 pandemic, Tweeter was introduced in April 2020 and has since ensured uninterrupted operation of Virginia Tech’s Linux Laptop Orchestra (L2Ork), resulting in 6 international performances over the past 18 months. In addition to enabling tightly-timed sync between clients, it also uniquely supports all stages of NIME-centric telematic musicking, from collaborative instrument design and instruction, to improvisation, composition, rehearsal, and performance, including audience participation. Tweeter is also envisioned as a prototype for the crowdsourced approach to telematic musicking. Below, the paper delves deeper into the motivation, constraints, design and implementation, and observed impact as an applied instance of a proposed paradigm shift in telematic musicking and its newfound identity fueled by the live crowdsourced telematic music genre.
Telematic, Tightly-Timed, Latency-Agnostic, Sync-Agnostic, Bandwidth-Agnostic, Paradigm-Shift, NIME Musicking, Live Crowdsourced Music, L2Ork Tweeter, Laptop Ensemble
•Human-centered computing → Collaborative and social computing → Collaborative and social computing systems and tools;
•Human-centered computing → Collaborative and social computing → Collaborative and social computing theory, concepts and paradigms → Collaborative content creation;
•Human-centered computing → Collaborative and social computing → Collaborative and social computing systems and tools → Synchronous editors;
Musicking is defined as taking “part in any capacity in a musical performance” . Traditionally, this activity has included anything from composing and rehearsing, to performance and listening. This paper presents an expanded NIME-centric version of the musicking definition that also includes the activity of musical instrument design and/or building. Telematic music is defined as “music performed live and simultaneously across geographic location via the internet” . By extension, telematic musicking may be defined as live music co-creation (instrument design, composition, improvisation, rehearsal, and performance) and listening or perception across geographic location via the internet.
Telematic musicking can be traced all the way back to the 1987 Clocktower performance by the Hub ensemble . It can be audio-based, control-data-based (e.g. using MIDI  or Open Sound Control (OSC) ), or employ a combination of the two . In part due to its primary focus on emulating in-person musicking, a need amplified by the COVID-19 pandemic and the ensuing social distancing restrictions, the focus of telematic musicking research to this day remains biased towards audio-based approaches. However, currently available technologies are simply unable to support this aspirational equivalency, most notably due to latency, sync, and bandwidth limitations. Existing research suggests that a latency of over 20–30 ms, or even just 11.5 ms , can have a disruptive effect in tightly-timed situations, an argument that has been repeatedly reaffirmed through ongoing empirical studies and real-world implementations. Despite ongoing technological advances aimed at lowering latency, the speed of light remains an indisputable constant that, even under optimal conditions, renders the relatively short distance between London and New York a borderline case . As a result, composers leveraging the live telematic medium typically resort to embracing such limitations as compositional constraints . Yet control-based systems offer unique opportunities, such as deterministic anticipation of future events, that may go a long way towards abating the aforesaid challenges of the telematic medium that have limited its utility in tightly-timed scenarios.
This predicament is apparent in the conspicuously sparse telematic explorations of tightly-timed and arguably more mainstream pulse- and pattern-driven music genres, such as electronica. In fact, there are only three widely known efforts that explore a primarily control-based approach to musicking. The now-defunct eJamming project, launched in 2001, started out exploring MIDI-like communication over the internet, including coping strategies for the inevitable delay . netPd, introduced in 2004, continues to be actively developed to this day . Geared towards Pure-Data-literate  users, the system offers a scalable widget-based approach to telematic musicking with remote sync that has produced a growing body of electronica-style music literature. The 2017 TeleMIDI project sought to connect two musicians over the internet using Ableton Live and the MIDI protocol . Its explorations aptly highlight the most obvious ongoing bottlenecks associated with the just-in-time form of networked communication.
Unlike the aspirational emulation of in-person musicking using analog (e.g. acoustic) instruments, the aforesaid control-based approach, when coupled with telematically underrepresented mainstream music genres that call for tight timing between parts, presents an opportunity to establish an identity for the telematic medium that goes beyond emulating what currently remains effectively unattainable. The most immediate application of the proposed approach is a crowdsourced or collaborative telematic DJ-like environment where multiple agents can co-create tightly-timed pulse- and pattern-driven music. This proposition therefore complements a unique ability of telematic musicking to connect over distance, a focal trait of its identity, with another concept made possible by the internet—that of a crowdsourced approach to content creation.
Within the context of this newfound live crowdsourced telematic musicking identity, it is important to consider that its crowdsourced nature makes it effectively a superset of organized or professional musicking. Whereas crowdsourced musicking implies connecting strangers with potentially little or no prior musical training, it also enables such strangers to develop communities and skill sets that can help them transition into a more focused ensemble. By extension, the same platform can also support the needs of existing professional musicking communities and ensembles. Therefore, organized or professional musicking, including existing approaches to telematic musicking, may be seen as a subset of live crowdsourced telematic musicking.
It appears that a control-based crowdsourced telematic musicking platform that leverages a mainstream tightly-timed pulse- and pattern-driven aesthetic presents an opportunity to solve some of the most complex problems associated with telematic musicking using arguably the simplest of solutions. This is an opportunity to push the medium by addressing existing constraints, most notably latency, sync, and bandwidth. Coupled with a novel live crowdsourced telematic music genre, the ensuing approach has the potential to establish an even stronger identity for the telematic medium that goes beyond emulating its in-person counterpart over distance.
L2Ork Tweeter (a.k.a. Tweeter) was motivated by the COVID-19 pandemic and the ensuing social distancing mandates that made ensemble musicking all but impossible. Its original goal was to ensure a way for Virginia Tech’s Linux Laptop Orchestra (L2Ork)  to continue functioning remotely by offering all elements of curricular instruction in the most accessible and discipline-agnostic way possible, including collaborative or crowdsourced instrument building, improvisation, content co-creation and composition, rehearsal, and performance, as well as audience participation. Its development was driven by the view that if the envisioned platform could appeal to a broad audience (e.g. casual users), it would surely also support the needs of a more focused professional musical ensemble. As a result, this motivation has also served as the foundation for the newfound telematic crowdsourced musicking medium.
Tweeter’s control-data-centric design empowers users to engage in collaborative musicking even over slow internet connections, while enabling the tightly-timed sync necessary for the execution of pulse- and pattern-driven music. It facilitates exploration of audio synthesis and the rich variety of sounds one can generate using the frequency modulation (FM) algorithm and a battery of parallel audio effects that are fed through the output limiter into the main output. Tweeter supports up to twelve concurrent performers and up to 244 additional guests or audience members who can observe the performance live over the internet with pristine, locally generated audio. Each performing user is assigned one of the 12 instruments that comprise the main window. Each instrument is driven by a tracker that can be populated by up to 64 loop-enabled keystrokes or notes. This intentional note constraint requires users to build complexity through interaction with other users, and is in part inspired by the popular social media platform Twitter, which imposes a similar design constraint of allowing only up to 280 characters per message or Tweet. As a result, and as evidenced by its name, Tweeter can be seen as Twitter’s musical counterpart.
One of the challenges of the telematic music medium is audio quality. With diminishing internet bandwidth and network congestion, audio quality may drop and/or the audio stream may be interrupted, resulting in audible artifacts. With Tweeter, the audio is generated locally. Therefore, each participant (performer or audience member) has access to pristine audio reproduction with minimal bandwidth requirements.
Tweeter’s inspiration and focus lie in facilitating live crowdsourced telematic musicking, whose subset also includes the aforesaid organized or professional telematic musicking. As a result, every aspect of musical creation and expression is deeply embedded in collaborative co-creation, including everything from conceptual instrument design and pattern building and sharing, to the live and performative ability to copy and borrow elements created by others. We will revisit this aspect in greater detail below.
Since its inception, the project has grown into an international community-building hub, bringing together creatives seeking to collaborate with other aspiring and academic musicians with whom they share a passion for a more mainstream style of pulse- and pattern-driven music.
Tweeter is built using Pd-L2Ork , a Pure-Data  variant whose recent development has focused specifically on ensuring Tweeter’s feasibility. As a result, Tweeter is prepackaged with the Pd-L2Ork installer and easily accessible through the user’s home folder, as well as the built-in browser. This makes Tweeter’s implementation an OS-agnostic turnkey solution that enables users to connect to one of Tweeter’s servers with a click of a single button. Unlike netPd , which approaches telematic music making using coarse-grained widgets that build on the Pure-Data visual dataflow programming paradigm, Tweeter aims to shield the user from technical complexities by providing a monolithic interface, with the goal of empowering the broadest possible pool of creatives to engage in telematic musicking regardless of their level of technical literacy or commitment, whether that be a casual crowdsourced engagement or a professional ensemble. The same monolithic interface also aims to provide transparency among the twelve musicians, with particular focus on promoting ease of coordination and content sharing.
Tweeter consists of a client and a server, both of which are included with Pd-L2Ork. It uses an enhanced TCP protocol that enables graceful handling of incomplete packets and uninterrupted operation even when servicing a large number of concurrent clients. For this purpose Tweeter uses a modified version of the maxlib/netserver object native to Pd-L2Ork. Each client sends its data to the server using Pure-Data’s FUDI  protocol. All client-relevant messages are then re-broadcast to other clients, while admin messages are processed locally by the server. When connecting to the server, the user optionally indicates the hostname and port using the <hostname:port> format (default dedicated server info is prepopulated) and a desired part/instrument (1-12), or the first available one (default value 0), and initiates the connection using the connect button. The server validates the client version and disconnects mismatched clients, notifying them of the problem. Once validated, if the user has indicated a preferred part that is unavailable, they are assigned a guest status, making them an audience member who can only listen and, if server permissions allow, post messages using the built-in chat. Otherwise, they are assigned the requested or first available part. Tweeter’s user interface colors the assigned part number red, parts assigned to other users green, and unused parts black. Users occupying parts that are not assigned to them can be disconnected or reassigned using server admin commands.
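FUDI itself is a simple text protocol: messages are whitespace-separated atoms terminated by a semicolon, sent over TCP. The framing, and the kind of graceful handling of incomplete packets described above, can be sketched as follows (Python; the class structure is illustrative, not Tweeter's actual implementation):

```python
def encode_fudi(*atoms):
    """Encode atoms as a FUDI message: space-separated, semicolon-terminated."""
    return (" ".join(str(a) for a in atoms) + ";\n").encode("ascii")

class FudiDecoder:
    """Accumulate raw TCP bytes and yield complete FUDI messages,
    tolerating messages that arrive split across multiple packets."""
    def __init__(self):
        self.buffer = ""

    def feed(self, data):
        """Feed one received chunk; return a list of complete messages
        (each a list of atoms). Partial messages stay buffered."""
        self.buffer += data.decode("ascii")
        msgs = []
        while ";" in self.buffer:
            msg, self.buffer = self.buffer.split(";", 1)
            atoms = msg.split()
            if atoms:
                msgs.append(atoms)
        return msgs
```

A server loop built on such a decoder can safely re-broadcast only complete messages, which is one plausible reading of how incomplete packets are handled gracefully.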
In addition to communicating with other performers and the audience, the chat can also be used to configure the server using server admin commands. Every such command’s second argument must be the server password (itself reconfigurable remotely). The format of server-bound admin messages is:
admin p:<password> <command> <command-parameters>
The server currently supports 48 admin commands. For a complete list of server commands please consult Appendix 1. Server commands are also scriptable using a complementary user-created pd patch. In this case all the aforesaid commands are prepended with the word “chat” and broadcast via the [send chat-out] object. This approach allows for the automation of synchronized cues that may require instructor/conductor intervention, including ensemble-wide anticipatory future events.
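Assembling such a message can be sketched as follows (Python; l2ork-load is a command named elsewhere in the paper, but its parameters here, and the exact placement of the “chat” prefix, are our assumptions):

```python
def admin_message(password, command, *params):
    """Build a chat-borne server admin message in the documented
    'admin p:<password> <command> <command-parameters>' format."""
    parts = ["admin", "p:" + str(password), command] + [str(p) for p in params]
    return " ".join(parts)

def scripted_admin_message(password, command, *params):
    """The same command as sent from a user-created pd patch: the paper
    states scripted commands are prepended with 'chat' and broadcast
    via [send chat-out]."""
    return "chat " + admin_message(password, command, *params)
```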
At Tweeter’s core is an audio engine that consists of a custom monophonic FM synthesizer whose intentional simplicity is offset by its customizability and, more importantly, the complexities that may arise from user-to-user interaction. Each client has a choice of five types of carrier and modulator waveforms (sine, sawtooth, triangle, square, and pink noise). Both can be shaped by their respective envelopes. Additional options include note duration, harmonicity, modulation amplitude, transposition, output level, and the instrument name. The synthesizer output is fed in parallel to the main mix as a combination of dry, reverberated, echoed, and flanged signals, effects that require minimal CPU overhead while offering added timbral variety. Each effect is further configurable using a number of parameters, all of which are also accessible via the l2ork-conduct admin commands.
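As a rough illustration of the synthesis described above, the following Python sketch renders one monophonic FM note with sine carrier and modulator. The parameter names mirror Tweeter’s options (harmonicity, modulation amplitude, output level), but the exact parameter mapping and the simple linear decay envelope are our assumptions, not Tweeter’s implementation:

```python
import math

def fm_note(freq, dur, sr=44100, harmonicity=1.0, mod_index=2.0, level=0.5):
    """Render one FM note: a sine carrier at freq, phase-modulated by a
    sine modulator at freq * harmonicity, scaled by mod_index, shaped
    by a linear decay envelope, and attenuated by level."""
    n = int(sr * dur)
    out = []
    for i in range(n):
        t = i / sr
        mod = math.sin(2 * math.pi * freq * harmonicity * t)
        env = 1.0 - i / n  # assumed simple linear decay envelope
        out.append(level * env * math.sin(2 * math.pi * freq * t + mod_index * mod))
    return out
```

With harmonicity at integer ratios the spectrum stays harmonic; non-integer ratios yield the inharmonic, bell-like timbres FM is known for, which is one source of the timbral variety the paper describes.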
Tweeter’s client user interface is contained in a single window (Fig.1). It is split into a top gray bar with global part-agnostic settings, and twelve visible parts, each consisting of a tracker, options, the instrument, effects, and output parameter panels (Fig.2). Interaction is made possible using a minimal set of affordances commonly found on every computer, namely the keyboard and mouse/touchpad. This conscious choice ensures optimal portability, with mappable MIDI being a logical future expansion. All options also offer keyboard shortcuts that allow for the manipulation of most options, including notes, patterns, instrument presets, etc., while leaving room for additional scriptable and more complex actions to be mapped via a user-devised supporting pd patch that leverages the aforesaid server commands.
The tracker offers room for 64 characters or notes distributed across 64 beats. These can be populated by pressing keyboard keys that correspond to a chromatic scale on a piano. Depending on the editing mode, entered notes are either immediately sounded out and placed at a point in time reflected by the white time bar (“default” mode), or where the mouse pointer currently resides (“hover” mode). When the “play” and “loop” modes are activated, a white time bar passes through the tracker’s 64 beats highlighting the current position and playing notes populating the beat, or remaining silent if no note is present. The tracker retains entered notes only if the “loop” mode is enabled, thus allowing for the repetition of the resulting tracker pattern.
The QWERTY-mapped keyboard is split into two rows, with the A&Z rows denoting the lower piano roll row (A being black keys, and Z being white keys), and the 1&Q rows the upper row. The resulting two rows form a 3-octave chromatic scale where some of the A- and 1-row keys have no note assigned. The loop length, or the amount of time it takes for a single playthrough of the pattern loop, can be adjusted by indicating its length in milliseconds. The time bar can also be manipulated using the pointer (e.g. a mouse), thus allowing for the “scrubbing” of the resulting pattern. When pointer-based time bar adjustment is coupled with the “play” mode, it can enable interesting effects similar to scratching a record.
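The timing math implied above, 64 beats spread evenly over a loop length given in milliseconds, plus pointer-based scrubbing, can be sketched as follows (Python; the function names and the linear pointer-to-beat mapping are ours, not Tweeter’s):

```python
def beat_at(elapsed_ms, loop_len_ms, beats=64):
    """Tracker beat index (0..beats-1) under the moving time bar after
    elapsed_ms of looped playback at the given loop length."""
    return int((elapsed_ms % loop_len_ms) * beats // loop_len_ms)

def scrub_beat(pointer_x, tracker_width, beats=64):
    """Map a pointer x-position over the tracker to a beat index, as
    when 'scrubbing' the time bar with the mouse (clamped to bounds)."""
    x = min(max(pointer_x, 0), tracker_width - 1)
    return int(x * beats // tracker_width)
```

For example, a 4000 ms loop gives 62.5 ms per beat, so one second of playback lands the time bar on beat 16; dragging the pointer instead jumps the bar directly to the beat under it, which is what makes the record-scratching effect possible.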
The top-left corner of each part shows indicators for currently enabled options, such as the “keyboard” toggle (enabling note input and shortcuts), the “hover”, “solo”, and “loop” modes, and the pattern playback (“play”). The panel also recognizes “delete” and “insert” events that allow for live editing and adjustments to the pattern, the “reset” option that resets the time bar to the beginning of the pattern, and the “clear” option that clears the tracker of all notes. The middle of the top row of the instrument panel is populated with the synthesizer options, while the top-right corner offers access to effects and the main mix parameters, including stereo panning.
The top gray bar, placed above the twelve instruments, hosts instrument-agnostic options, including connection information and access to saving and recalling instrument and pattern presets. Tweeter comes with over 100 user-created instrument presets and 10 pattern presets, both of which start at preset number 11, leaving the first ten user-customizable and accessible during a performance using keyboard shortcuts. The top bar also hosts the chat message bar (with messages posted to the Pd-L2Ork console), the loading and saving of the entire session, and the subpatch with supplemental documentation.
Unlike the client’s user interface, the server patch (code) is designed to run headless. It is administered remotely using server commands.
The ensuing system offers a number of notable affordances, some of which are arguably unique to Tweeter. As indicated above, its control-driven FM-synthesizer-based engine enables pristine audio reproduction even in low-bandwidth environments. Tweeter maintains sync among clients by enabling deterministic future event cueing. Akin to netPd, the control-driven communication protocol ensures that the state is accurately reproduced on all clients. While each client’s local execution of the content may be off by the difference in latency between the client and the server, such latency even among the most distant locations on Earth should remain well under one second. More importantly, this latency is effectively irrelevant due to the way users can modify the tracker content. One notable way is the aforesaid “hover” mode, by which a user can populate individual notes anywhere on the tracker only to have them played on the next repetition of the pattern loop. Doing so allows for such events to propagate among other clients well before the looping time bar reaches the edited location. Naturally, this approach requires practice.
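Why sub-second latency becomes effectively irrelevant can be made concrete with a small sketch: a “hover”-mode edit is safe as long as it propagates to all peers before the time bar reaches the edited beat (Python; illustrative reasoning, not Tweeter’s code):

```python
def edit_is_safe(current_beat, edit_beat, loop_len_ms, latency_ms, beats=64):
    """Will a 'hover'-mode edit placed at edit_beat reach all peers
    before the looping time bar (currently at current_beat) arrives
    there? Time until arrival is measured forward around the loop."""
    beats_ahead = (edit_beat - current_beat) % beats
    if beats_ahead == 0:
        beats_ahead = beats  # edit under the bar plays on the next full repetition
    time_until_play = beats_ahead * loop_len_ms / beats
    return time_until_play > latency_ms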
The aforesaid ability to script and automate global changes, such as loading sessions, (re)syncing, and changing instrument levels, as well as more fine-grained adjustments to individual parts from a conductor’s perspective, allows for tightly-timed execution of critical moments in a piece, such as an electronica “drop”. The conductor role is optional, and any one or more users can serve as (co)conductors.
Tweeter in its current state does not account for any potential clock drift that will inevitably occur over time, since each computer reconstructs the requested tempo locally (as described by the loop-length parameter). Yet, given the nature of the control-driven communication protocol, its deterministic future event cueing, and the pattern-centric target aesthetics, such drift is unlikely to have any real-world impact, leaving only extreme and highly improbable scenarios, such as a session that extends for hours without the ensemble ever re-syncing using one of several possible server commands.
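A back-of-the-envelope calculation supports this claim (the 50 ppm quartz-oscillator error figure is a typical value we assume here, not a number from the paper):

```python
def drift_ms(session_s, ppm=50):
    """Worst-case accumulated clock drift in milliseconds after
    session_s seconds, for a clock off by ppm parts per million
    (50 ppm is a typical quartz figure; assumed, not measured)."""
    return session_s * ppm / 1000.0  # s * (ppm * 1e-6) * 1000 ms/s

# Two clocks drifting in opposite directions at 50 ppm each diverge by
# 2 * drift_ms(3600) = 360 ms over a one-hour session -- noticeable in
# pulse-driven music only if the ensemble never issues a re-sync command.
```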
Perhaps the most notable affordance of the Tweeter platform is its focus on co-creation, or the emerging live crowdsourced music genre, where every facet of musical creation and expression is meant to be collaborative. To ensure the independence and significance of individual contributions, each part can only affect its own parameters, while also being able to copy from others, the exception being the aforesaid conductor (teacher) commands that require the server password and enable modification of other parts. In addition to being able to copy other parts by manually changing their own settings to match the desired ones, performers can copy loop sync, loop contents, instrument preset, main out and pan values, or a combination thereof by clicking on the number of the part they wish to copy from, using one of the eight possible click variants provided in Appendix 2. These multi-click actions need to be executed within 500 milliseconds of the initial click onset, after which the action takes place.
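The 500 ms multi-click window can be sketched as a small accumulator (Python; the structure is illustrative, Tweeter’s actual click handling lives in the Pd patch, and the mapping from click count to copy variant is left abstract):

```python
class MultiClick:
    """Accumulate clicks on a part number; clicks within 500 ms of the
    first click form one gesture, resolved once the window closes."""
    WINDOW_MS = 500

    def __init__(self):
        self.start = None
        self.count = 0

    def click(self, now_ms):
        """Register one click at time now_ms (starts a new window if
        the previous one has already closed)."""
        if self.start is None or now_ms - self.start >= self.WINDOW_MS:
            self.start, self.count = now_ms, 0
        self.count += 1

    def resolve(self, now_ms):
        """Once 500 ms have passed since the first click, return the
        accumulated click count (selecting the copy variant) and reset;
        otherwise return None."""
        if self.start is not None and now_ms - self.start >= self.WINDOW_MS:
            n, self.start, self.count = self.count, None, 0
            return n
        return None
```

Deferring the action until the window closes is what lets a double-click mean something different from a single click: the single-click action must not fire the instant the first click lands.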
The instrument preset and tracker pattern saving and recalling allows for quick changes both at the downbeat and mid-pattern. Such patterns can also be shared offline as cyclone/coll-formatted text files. Tweeter also offers an offline mode where users can take control of all parts using the shift+(F1-F12) shortcuts. Doing so effectively converts it into a single-user DJ-like system. While recalling sessions online as a non-conducting user only affects one’s own part (loading a session to all parts online is managed via the l2ork-load server admin command), in offline mode such sessions load all parts.
For its instructional and conducting needs, Tweeter comes with a supporting Teacher patch or “widget” (Fig.3) that mimics the appearance of a single part and syncs, upon request, with the desired part. This allows instructors to modify most of the selected part’s settings using server-side teacher/conductor admin commands. The purpose of this widget is to help instructors assist students in navigating the initial learning curve. In addition, the teacher widget has a collection of buttons that serve as shortcuts to options commonly used during a rehearsal, such as loading and saving a session, and (re)syncing using either the reset option or the loaded session’s sync data, with or without a custom offset. To ensure accurate sync when saving a session, the session data is first saved locally and then broadcast to all users. This approach ensures that all users have identical locally-saved session data.
Following one month of initial development, Tweeter saw its first collaborative telematic use in April 2020, limited to five concurrent participants. Since then, it has been used without interruption in both curricular (organized) and broader community (crowdsourced) scenarios.
For both semesters, students engaging in the curriculum had prior musical experience ranging from minimal to that of a music major. However, given Tweeter’s particular approach to musicking, designed to challenge traditional boundaries of music creation and authorship, existing music knowledge provided only a minimal head start. Even though the focused nature of musicking was effectively mandated by the curriculum, a number of students, attracted in part by the promise of an electronica-like aesthetic, chose to take the class as an elective. As such, much of the early student effort was effectively equivalent to that of a crowdsourced activity. In due time, as students increased their proficiency, the ensemble transitioned into a more organized or professional state, resulting in a series of peer-reviewed and curated international telematic performances.
This journey has resulted in two works co-created by students and faculty across two semesters. Below, the paper outlines milestones associated with each work. Then, it discusses emerging trends and observations.
In anticipation of the curricular enrollment for the first (fall 2020) online-only semester, Tweeter’s interface was expanded to support up to 10 parts/performers. The first crowdsourced work, titled “Into the Abyss”, premiered in December 2020 and featured 7 performers. Every facet of the work was conceived as a crowdsourced effort, thus setting the stage for the proposed live crowdsourced telematic music genre: students and the instructor jointly created and shared the instruments, co-developed the patterns and overall form, discussed and agreed on the title, and ultimately rehearsed and performed the work. This journey started on the crowdsourced end of the spectrum. As participants’ proficiency grew, the group organically transitioned into an organized ensemble. The instructor served as the team leader, co-creator, curator, and facilitator, helping steer and fine-tune the creative process and the final product. Classes and rehearsals were conducted exclusively online via the Zoom conferencing tool, with Tweeter running in parallel. Zoom’s screen sharing was utilized to explain specific techniques, while Zoom’s chat was used to share newly created instrument presets, tracker patterns, and sessions.
For the performance, the instructor set up a YouTube livestreaming system that showcased Tweeter’s main window, thus allowing the audience to observe musical activity on-screen (Fig.4). Next to Tweeter’s window was the Zoom session window with performer camera feeds, with intentional mosaic-like ordering of stitched backgrounds. The Zoom session allowed performers to communicate verbally, given that the keyboard interface was encumbered with performance cues, leaving little room for efficient chat communication. Although the Zoom window was a part of the livestream, the verbal communication among performers was audible only to performers. To address any potential delayed audio feedback emanating from Tweeter and being broadcast via the Zoom microphone capture, all performers used headphones. No audience participated directly through the Tweeter interface; instead, the audience observed exclusively via the YouTube livestream.
The work is split into three sections. The first is pulse-free and ambient. It is followed by an electronica-style “drop” and a transition into the second, tightly-timed pulse- and pattern-driven section. The third, chordal section utilized the time bar “scrubbing” technique developed through collaborative experimentation: performers using a pointer dragged the time bar across a tracker populated with note clusters. The ensuing granular shimmer of reverberant notes was juxtaposed with melodic and soft pulse-driven rhythmic textures, with the work slowly fading into silence. A recording of the piece can be viewed using the following link <https://youtu.be/BBXPUuxOVu8>, with the aforesaid synchronized “drop” located near the 3:23 mark.
Following the premiere, the class collectively edited a written score using a shared Google Doc. The resulting documentation was embedded inside future releases of Tweeter, thus ensuring that the growing body of work composed specifically for this platform is appropriately preserved in a reproducible format. Since its premiere, “Into the Abyss” has been performed a half-dozen times internationally.
Notable observations emerging from this first utilization of the Tweeter platform include the affirmation of the aspirational complexity that emerged from user-to-user interaction. Even with only seven performers, the musical complexity far exceeded expectations of what was thought possible using a series of monophonic synthesizers. Throughout the semester, Tweeter was enhanced with increasingly powerful keyboard shortcuts, making it easier for musicians to control multiple parameters using both mouse and keyboard, and simplifying the editing of patterns by allowing the insertion and deletion of individual events. Although this was only the first piece ever written for the platform, the semester-long endeavor set the stage for a shift in the ensemble’s performance practice.
The second work, titled “4th Beat”, was developed in fall 2021, premiering as part of the International Poznan Composers Forum in Poland (Fig.5). The telematic performance once again took place via a YouTube livestream. Due to additional enrollment, the interface was expanded to accommodate 12 performers. Its second performance, in December 2021 in the Virginia Tech Cube, tested the audience system for the first time. One instance of Tweeter using the guest or audience mode was run on a separate computer, where the performance was recorded together with the Zoom session window using a local instance of the OBS Studio desktop capture software. This was necessary due to the CPU constraints of running everything on a single system. A second instance of Tweeter with a guest/audience client was installed on the performance hall computer. It was projected alongside the performer-and-venue-staff-only Zoom session onto the performance hall screen. Audience, tech staff, and select ensemble members who performed in the hall reported pristine sound without any observable technical issues. The same held true for the locally recorded session using desktop capture. During the December performance Tweeter ran flawlessly and concurrently on 3 different OSs: Linux, Mac, and Windows. A recording of the piece can be accessed using the following link: <https://youtu.be/k2D8ZzSL9Ag>.
In addition to the use of the audience clients, the second piece was more ambitious in every possible way, warranting a number of additions and improvements to the system. Its multi-section structure had three “drops” at the 3:07, 4:48, and 8:10 marks. The second “drop” was also preceded by a synchronous acceleration during which the tempo for all parts increased threefold. Each component of the piece was stitched from isolated sessions that were developed throughout the semester and were dynamically loaded using newfound server commands, after which all parts were re-synced based on the session data. Transitions were also more tightly coordinated, leaving less “dead space” between sections, as evidenced by the sudden change following the third “drop”. This was in large part made possible by the newly introduced future cueing commands.
The addition of the “hover” mode yielded unanimously positive student feedback, as it helped alleviate issues associated with local sound hardware audio latency, as well as the unavoidable latency of just-in-time events. In situations where such latency was significant, the “default” mode could result in the performer (guided by the delayed sound rather than the quickly moving time bar) triggering the next note late, which would in turn propagate the resulting misalignment to other clients. The “hover” mode allowed students to sidestep this limitation altogether. Further, it encouraged new approaches to pattern building and, more broadly, the creation of the overall musical structure. The system was also complemented with main out controls that allow musicians to adjust their overall level locally, so as to more easily balance the Tweeter and Zoom output levels.
The piece called for an expanded improvisation in conjunction with tightly-timed parts. Unlike the first work where improvisation was limited to pulse-free sections, here both arpeggiated and more melodic elements were juxtaposed on top of pulse- and pattern-driven sections. Even though such just-in-time musical events did not compensate for latency, the end result did not detract from the underlying pattern sync because their audible result was more textural, ebbing and flowing in and out of the mix. Due to the reverberant nature of the chosen synthesizer sound, changes occurring with the pattern’s downbeat took time to overpower reverberant tails of the previous texture, thus making such changes feel as if they perceptually maintained sync with the underlying pattern. This is an area that may further benefit from the soloist-centric delay compensation that will be implemented in the future. Such an implementation will allow for a tightly-timed sync even in improvised sections, provided there is only one soloist at any given point in time.
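The soloist-centric delay compensation described above is slated for a future release, so the following is only a hypothetical illustration of the underlying idea, not Tweeter’s actual design: all names and the phase-tagging scheme are assumptions. One way to realize it would be to tag each just-in-time soloist event with the loop phase (in milliseconds) at which it sounded on the soloist’s machine, and have each receiving client, whose loop position is already synchronized, re-align the event to that same phase locally:

```python
def compensation_delay(event_phase_ms, local_phase_ms, loop_length_ms,
                       tolerance_ms=0):
    """Hypothetical sketch: return how long (ms) a receiving client
    should wait before sounding a soloist's just-in-time event so it
    lands on the same loop phase it occupied on the soloist's end.

    event_phase_ms: loop phase at which the soloist played the event
    local_phase_ms: the receiver's current loop phase on arrival
    tolerance_ms:   lag small enough to sound immediately anyway
    """
    # How far past the event's intended phase the local loop already is.
    lag = (local_phase_ms - event_phase_ms) % loop_length_ms
    if lag <= tolerance_ms:
        return 0  # close enough to on-phase; play immediately
    # Otherwise wait for the next occurrence of that phase locally.
    return loop_length_ms - lag

# Example: an event played at phase 1000 ms of a 4000 ms loop arrives
# 120 ms late (local phase 1120), so it is held until the phase recurs.
delay = compensation_delay(1000, 1120, 4000)  # 3880 ms
```

This approach trades immediacy for phase alignment, which is why it only remains deterministic with a single soloist; concurrent soloists with differing latencies would require per-source scheduling.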
Tweeter’s shortcuts were further optimized to make the most commonly used options easiest to access. New keyboard shortcuts included the ability to change instrument levels to 0 or the last non-zero value, which enabled new sound-gating techniques. The new version also introduced the aforesaid ability to easily copy select content from another instrument. This technique made it particularly easy to experiment with the phasing style of minimalist music, as well as heterophony. Both were utilized early in class to encourage students to acknowledge their own preconceived aesthetic expectations and broaden their knowledge of possible compositional techniques. Rehearsals also inspired the development of the aforesaid Teacher widget and several improvements designed to ensure consistent sync. New presets were compiled on Google Drive to be included with future Tweeter releases. The collaborative score generation via the Google Doc was complemented by a collaborative visual structure design using simple discipline-agnostic geometric shapes drawn inside Google Slides, and the ensuing content from both the first and second work was refactored in preparation for the easier local retrieval and configuration to be introduced in a future Tweeter release.
It may be worth noting that this second iteration of Tweeter was utilized in a hybrid curricular setting, with some classes taking place in person, others exclusively online, and select ones exploring a mixed approach that included both in-person and remotely attending students. While rehearsals were perceived by both students and faculty as equivalent in terms of musical outcomes and overall progress across all three modalities, students in particular expressed a preference for the in-person modality due to a greater sense of connection or presence with their collaborators, thus hinting at issues of embodiment and presence and how they may manifest telematically. When combined with Zoom conferencing, screen sharing, and the Google Suite, from the instructor’s perspective, Tweeter-based telematic musicking offered instructional equivalency across all modalities.
The Tweeter journey has resulted in a number of trends and notable observations. One pertains to varied student engagement, in part due to students’ educational backgrounds and expectations, and likely also because of the ongoing COVID-19 uncertainty. Yet, the overall effect was one of complexity and intricacy, allowing all contributions, once tamed, to be relevant and complementary. This has led to all students showing a predominantly positive level of engagement by the end of the semester, with scheduled performances serving as critical milestones and motivation catalysts. The fact that Tweeter musicking can produce both experimental and more mainstream kinds of aesthetics without any alteration to the system, as well as integrate the two, and do so both in-person and telematically, has established it as a promising platform for experiential learning. This ability has also allowed it to shift the focus away from the technical and towards the creative and aesthetic.
If there is one thing Tweeter has taught its participants, it is that crowdsourced music is difficult to implement in a culture of authorship and its supporting academic benchmarks. Yet, once the ensemble members became able to look past their own egos and preexisting expectations, the effort felt liberating, serving as a logical creative counterpart to the rapidly growing shared economies. This trend leaves ample opportunities for further expansion and enhancement of both the platform and the newfound genre.
Although Tweeter offers a chat system designed to facilitate telematic non-musical interaction among performers, and (if server permissions allow) audience, its optimal telematic use is, by design, heavily dependent on Zoom or other similar video conferencing tools. The fact that Tweeter requires minimal bandwidth makes the concurrent use of both technologies possible even in limited-bandwidth scenarios. As the platform shifts further towards encouraging audience participation directly through Tweeter, the built-in chat may emerge as a useful way for participants (performers and audience members alike) to communicate.
As evidenced by the two works and their performances, Tweeter has effectively rendered the perceptual latency, sync, clock drift, and bandwidth challenges associated with telematic musicking using a mainstream pulse- and pattern-driven aesthetic a non-issue. However, its architectural choices do not cover all possible scenarios. As outlined above, tightly-timed live and improvisatory passages generating just-in-time notes and events remain susceptible to latency; these will be addressed in a future release using soloist delay compensation, which has proven effective in preliminary tests as long as there is only one soloist at any given point in time.
This paper introduces L2Ork Tweeter, a new platform for telematic musicking designed to address challenges associated with tightly-timed pulse- and pattern-driven music. Its design facilitates a collaborative environment that serves as the foundation for a novel live crowdsourced telematic music genre that is poised to further strengthen the unique identity of the telematic music medium. Tweeter’s crowdsourced approach uniquely covers most if not all facets of musicking, including instrument design, improvisation, composition, rehearsal, and performance, as well as audience participation. Its control-driven communication protocol and synthesis-based local audio engine are effectively latency-, sync-, and, for the most part, drift- and bandwidth-agnostic. Having been tested in telematic, in-person, and hybrid environments, the platform has proven equally effective across all three modalities. While its design implicitly promotes a particular aesthetic, as evidenced by the aforesaid two works, Tweeter has proven just as capable of supporting pulse-free abstract music. Having reached a level of maturity, and with its growing community adoption, this paper is designed in part as a call to the NIME community for engagement in and exploration of the emerging control-data-centric and tightly-timed live crowdsourced telematic musicking.
Future work includes expanding the anticipatory event queueing system. Such additions will be prioritized based on expressed and perceived needs. Other notable areas include support for an analog (e.g. acoustic) soloist, and enabling users to easily retrieve various parts belonging to a growing library of included works. Tweeter will also be complemented by a sampling engine that will allow for the use of samples instead of the existing basic waveforms, including their broadcasting to ensure sync across all parts. Doing so will vastly expand the sound palette, but may also put a bigger strain on the bandwidth requirements. Given the newfound Web-centric affordances of Pd-L2Ork, future iterations of Tweeter may also be ported to run natively inside a browser and on various mobile platforms, and may possibly explore immersive implementations. There is also a growing interest among users in integrating external hardware support, such as MIDI controllers and L2Ork-specific NIMEs. Lastly, some aspects of the code may benefit from refactoring and further optimization that may lower the overall CPU footprint.
Tweeter is a free and open source tool available to all. While it does require access to technological affordances (a computer), it is OS-agnostic and, even though its primary focus is on live crowdsourced telematic musicking, can be utilized in both offline and online modes. The software was developed without any external funding. Works created for the platform, including their scores, are bundled with the Pd-L2Ork installer, with the goal of ensuring their sustainability and full reproducibility. Participation in the documented activities was open to all students without any audition or restrictions, and students willingly participated and contributed to the presented work. No known conflicts of interest exist.
l2ork-conduct (user_num, command, command_value) (offers direct access to user parameters):
carrier (0-4) (sine, sawtooth, triangle, square, pink)
echo-amp (0-1) (echo effect amplitude)
echo-fb (0-0.999) (echo effect feedback)
echo-ms (0.001-9999) (echo effect delay in milliseconds)
flanger-amp (0-1) (flanger effect amplitude)
flanger-hz (0.001-100) (flanger oscillation frequency)
flanger-mod-amp (0-100) (flanger modulation amplitude)
flanger-ms (0.001-50) (flanger effect delay offset)
hover (0/1) (hover mode for entering notes)
inst-name <symbol> (instrument name)
keyboard (0/1) (toggle keyboard input)
loop (0/1) (toggle loop mode)
loop-length (500-60000) (loop length in milliseconds)
mod-amp (1-9999) (FM synth modulation amplitude)
modulator (0-4) (same as carrier above)
note-dur (1-???) (note duration in milliseconds)
octave (-192-192) (note transposition)
out-amp (0-100) (output amplitude expressed in percent)
out-pan (0-1.5708) (output panning)
overdrive (0.1-10) (FM synth output level, 1+ being overdriven)
overdrive-override (0.1-10) (FM synth output level that overrides user inst. level in case it is toggled off via a keyboard shortcut)
reverb-amp (0-1) (reverb effect amplitude)
reverb-type (0-3) (reverb types, 0 being the least reverberant, and 3 the most reverberant)
tracker (0/1) (toggle tracker playback)
tracker-time (beat(0-63), offset (ms), play(0/1)) (specify location on the tracker)
solo (0/1) (toggle solo mode)
username <symbol> (specify part username)
l2ork-disconnect (user_num) (disconnect user on a specific part)
l2ork-free-slot (user_num) (bumps the user to audience)
l2ork-guest-chat (1=default/0) (toggles audience’s ability to chat)
l2ork-inst-level-* (deterministic future event cueing system for part levels, i.e. the overdrive parameter):
l2ork-inst-level-at (user_num, target_time, target_level 0.1-10)
l2ork-inst-level-all-at (user_num, target_time, target_level)
l2ork-inst-level-list-at (user_num, target_time, list_of_target_levels)
l2ork-length (list of millisecond values, one for each user’s loop-length)
l2ork-length-all (one loop-length in milliseconds for all users)
l2ork-load (session-name) (load session, without the -session-coll.txt suffix)
l2ork-load-sync-delay (ms) (sets sync delay for all clients in milliseconds after l2ork-load)
l2ork-obfuscate (1/0=default) (obfuscates note letters to avoid users intentionally trying to type offensive language using notes on the tracker)
l2ork-report (user_num) (requests state report for the part indicated by the user_num)
l2ork-reset-client-list (resets all clients and reassigns them part numbers)
l2ork-save (session_name) (distributes locally saved version to all)
l2ork-set-password (new_password) (change the admin password)
l2ork-sync-all (syncs all clients over network using the reset command)
l2ork-teach (same as l2ork-conduct)
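Since the server commands above are plain-text messages, they can be composed programmatically. Assuming a Pd-style (FUDI) wire format of space-separated atoms terminated by a semicolon, which is an assumption for illustration rather than a documented detail of Tweeter’s transport, a minimal helper might look as follows:

```python
def fudi(*atoms):
    """Format a Pd-style (FUDI) network message: space-separated atoms
    terminated by a semicolon and newline. The wire format here is an
    assumption; Tweeter's actual transport may differ."""
    return " ".join(str(a) for a in atoms) + ";\n"

# Set part 3's output amplitude to 75% via the conduct command.
msg = fudi("l2ork-conduct", 3, "out-amp", 75)
# Cue part 3's level (overdrive parameter) to reach 0.5 at a future
# target time (time units per the server's cueing spec).
cue = fudi("l2ork-inst-level-at", 3, 16000, 0.5)
```

A helper of this kind would allow scripted conducting, e.g. batch-cueing level changes across parts ahead of a “drop”.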
single click - loop sync (time and loop length)
double click - loop contents
triple click - loop and loop sync
quadruple click - everything (loop, loop sync, preset, main out, and pan)
shift + single click - preset (instrument)
shift + double click - main out and pan values
shift + triple click - preset and main out and pan values
shift + quadruple click - everything (loop, loop sync, preset, main out, and pan)