What follows is a section of my Introduction to Composition syllabus that I’m adding this year. I get questions about software a lot from my students. (Poor things don’t know what they’re getting into by asking me such questions.)
tl;dr: I recommend you invest in Dorico Pro if you can afford it, with MuseScore as a temporary solution if you aren’t ready to spend the cash.
When starting a new composition, it is important that you begin working with pencil and paper, even though your final work will usually be completed in software. This is to avoid the many assumptions that software will make for you. In the early sketching phases of a composition, it is important not to be bound to these assumptions. I expect all students to be prepared to show handwritten sketches of their work.
When it comes to completed projects, composers present their works in computer-notated/engraved form. It is no longer common (and in most circumstances, no longer acceptable) to supply performers with handwritten manuscripts. Students in this class may choose to use any of the following applications to prepare their assignments:
MuseScore is free, but the quality of the finished product is not as high as the other three commercial applications. I will accept work completed in MuseScore for this class. However, students who are planning to major in composition or work professionally with notation should plan to purchase one of the professional applications and begin learning it. I know these are expensive, but these are the tools of our field, and it’s worth learning them sooner rather than later, while you have access to the very steep student discounts. Beginning with 400-level composition lessons, you will be required to work with one of the commercial applications, so it might be worth starting to learn it now.
When it comes time to select from those three, which you choose is ultimately up to you, but I generally recommend Dorico Pro, as I think it has the brightest future of the group (reasons for that determination are beyond the scope of this syllabus, but I’m happy to discuss it sometime). While there are still many pros who use it, I do not recommend new users invest in Finale. When purchasing your software, keep the following things in mind:
Get your student discount! This will save you hundreds of dollars.
Get the professional “tier” of whatever product you select. Both Dorico and Sibelius come in cheaper, feature-limited “lite” versions. For the kinds of things you will need to do in this class, you will end up being frustrated by those limits, and there is often not an easy way to upgrade to the pro tier without paying all over again. If you’re not ready to invest in the pro tier, stick with MuseScore while you save up. It will be worth it in the long run.
Be patient with learning it. These are professional tools, which means they’re complicated. They need to serve as wide a variety of musicians and musical traditions as they can. Think about it a bit like investing the time, care, and money into learning a musical instrument. It’s hard at first, but with dedicated practice you can make it work for you and create something amazing with it.
With this week’s release of Finale 27, MakeMusic has created new versions of all their music fonts that work with Dorico and MuseScore (sadly, not Sibelius), thanks to the beauty of the Standard Music Font Layout (SMuFL) standard created by Daniel Spreadbury. They’ve also released those fonts under an open license. One cool feature of SMuFL is that a font can recommend (by way of an extra metadata file) engraving defaults, such as staff line thickness, that work well with the symbols in the font. Finale included this metadata with their fonts, but they didn’t actually implement the engraving defaults in Finale itself. Here’s the cool thing about technology standards, though: with no extra work at all from Steinberg, these fonts and their engraving defaults work great in Dorico.
Just to play around with this, here’s a side-by-side comparison of Bravura and Finale Maestro.
Bravura (left) and Finale Maestro (right)
You should read more about Finale 27 over at Scoring Notes, and listen to our recent podcast episodes all about it. I’m optimistic that Finale will implement the rest of the font defaults in a future update, because Bravura looks really silly with the dainty staff lines that work so nicely with Maestro.
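For the technically curious, those engraving defaults live in a JSON metadata file that ships alongside a SMuFL font (Bravura’s is called bravura_metadata.json). Here’s a rough Python sketch of how a consuming application might read them. The key names follow the SMuFL “engravingDefaults” convention, but the numbers and the staff-space size below are invented for the example, not values any real font ships:

```python
import json

# Illustrative JSON shaped like a SMuFL font metadata file.
# The key names follow the SMuFL "engravingDefaults" convention;
# the values are made up for this example.
metadata_json = """
{
  "fontName": "ExampleFont",
  "engravingDefaults": {
    "staffLineThickness": 0.13,
    "stemThickness": 0.12,
    "beamThickness": 0.5
  }
}
"""

metadata = json.loads(metadata_json)
defaults = metadata["engravingDefaults"]

# SMuFL expresses these values in staff spaces, so an application
# multiplies by its current staff-space size to get absolute units.
staff_space_mm = 1.75  # an arbitrary rastral size, for illustration
for name, spaces in defaults.items():
    print(f"{name}: {spaces} spaces -> {spaces * staff_space_mm:.3f} mm")
```

This is why the defaults “just work” in Dorico: any SMuFL-aware application can read the same file and scale the same numbers, no per-font special casing required.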
Steinberg, the company that produces Dorico, Cubase, Nuendo, and other professional audio applications, announced today that they would be moving away from the hardware license key, which requires users to plug in what looks like a USB thumb drive any time they want to run a Steinberg application. This key, called the eLicenser, was the single greatest annoyance for me as an early adopter of Dorico: remembering to carry an extra thing around, finding adapters for modern notebooks that have dropped USB-A ports in favor of USB-C, the danger of bumping the port while working, and the general inelegance of the whole thing. Product Marketing Manager Daniel Spreadbury has been saying for years that they’ve been working on this, and it seems things are finally happening.
I have no idea what the result will be, but I’m convinced it will be better than the current situation, if only because it will have been created by people who have seen what computing looks like in the 2020s: mobile devices and super-thin notebook computers with limited USB ports. When the eLicenser was originally developed, the idea that you could run something as complex as Cubase on a laptop was absurd, and so the hardware licensing system didn’t seem like a burden, but obviously that is no longer the case. Even among media pros, laptops are increasingly common, and the eLicenser feels increasingly anachronistic.
The trick will be to balance the needs of users—simplicity, flexibility, and reliability—with Steinberg’s need to protect its massive investment in the development of these applications. I’ve seen some speculation on social media that this is a signal that Steinberg is moving to a subscription model, but I don’t see any evidence of that, and Spreadbury has stated in the past that he opposes subscriptions for Dorico.
My hope for this future licensing platform is that it will be easy to transfer a license over the Internet, but that an active network connection will not be required to use the software. I think it’s reasonable for a person buying a license to professional software like this ($600 USD before any discounts) to expect to be able to use it on at least two or three computers (say, a desktop and a laptop) without too much hassle. Long-time Sibelius users will likely recall the tedium of transferring Sibelius licenses by copying long numbers back and forth between computers.
The replacement for Steinberg’s eLicenser technology isn’t here yet, so if you’ve got an eLicenser, you can’t ditch it yet; and of course, other products like the Vienna Symphonic Library still use this same system, not to mention Avid’s iLok. But I’m happy to see this commitment to our brighter, dongle-free future.
At the beginning of the semester, I was constantly fiddling with my tech setup at home to make it better and easier to get in and focus on the teaching. Now that it’s settled, I’m pretty happy with it. This video is a really quick overview of the software and hardware I’m using at home to teach my theory and composition courses remotely over Zoom. It is not a how-to, but a brief tour and demo of all the parts.
If folks are interested, I might do a little more detailed write-up or video on individual components now that everything is pretty much settled. Thanks to my friend and former theory prof. Leigh VanHandel for asking me to make this video and for sharing it with the Music Theory Pedagogy Interest Group at SMT 2020 this past weekend.
For better or worse, some of what I teach in Theory I simply needs to be memorized. Sure, I can talk about how we derive a diatonic collection through the circle of fifths, but then you’d have to know what a perfect fifth is, and that can be tricky to explain without getting into intervals, and those can be tricky to explain without getting into major and minor scales, and there we are, right back at the diatonic collection. So we pick an arbitrary place to start and brute-force memorize a few things.
When I’m teaching on campus, we do timed quizzes quite a bit in the first semester. These cover things like recognizing notes on the staff, writing key signatures, and writing scales. I’ve been struggling with how best to replicate them in our remote Zoom-class reality. So far, I’ve come up with two solutions.
Note ID quizzes in the LMS
While I can’t ask students to write notes on the staff in Blackboard, I can have them type things. A few weeks ago, I did some timed tests in which I simply set up a big collection of image-based fill-in-the-blank questions on Blackboard. I made images of whole notes on a staff and asked students to identify the notes by typing the letter in the box. Thanks to Dorico’s Flows and Graphic Export features, this was considerably less tedious than it might have been.
I could have Blackboard select a random 25 questions from the pool, limit the time precisely, and allow each student multiple attempts at the test. (I mean, it’s literally practice. What kind of music teacher wouldn’t encourage practicing?) The other great benefit of using Blackboard tests for me was that the tests were graded automatically and added to the Blackboard gradebook. The benefit to students was that they didn’t have to go to any other site or use any other logins.
My fill-in-the-blank quiz above is great if all I need students to do is type a letter, but as soon as we get to even a small amount of complexity, like writing a scale or key signature on a staff, that breaks down quickly. Sure, they could type something like “A, B, C-sharp, D, E, …”, but then we start to add enough complexity that Blackboard can no longer auto-grade. With over 50 students, I do not want to deal with 250 things to grade after a week of daily quizzes. A familiar friend is here to help.
MusicTheory.net is way cooler than you may have thought
I’m not sure exactly when MusicTheory.net came across my radar, but I’ve been recommending it to students for years as a place to go to drill fundamentals, flashcard-style. However, I only recently discovered that you can actually create custom, timed quizzes for students to complete. Best of all, they don’t need a login. You can create a quiz with all the parameters you need, post a link, and students can then share their results back with you via a similarly unique report link.
To create your quiz, scroll down to the very bottom of the Exercises page and click Exercise Customizer. From there, you can create your own custom version of any of the exercises you’ve seen on the site. For my first, I created a quiz that would cover major and minor scales, up to four sharps and flats, treble and bass clef, and that would give students ten minutes to complete ten scales. (This may seem fast, but it’s actually very generous.)
Once you’ve selected the customizations you want, you can copy the link at the bottom of the customizer. That link will always be set to those customizations—you can’t change them without changing the link—so make sure you’ve got everything the way you want it. From there, students can click the link and immediately start the exercise you’ve created.
At the end, students get the opportunity to create and “sign” a report by typing their name. That will generate a unique code and link that you can use to check their score. That’s it! I made a quick screencast for my students, but I doubt they’ll need it.
To get this integrated into my gradebook, I’ve set these up as one-question short-answer quizzes on Blackboard. Each quiz has a link to the MusicTheory.net exercise and a space for the student to enter their report link. I’ll still need to open each link and copy the grades manually, but compared to grading 500 scales every day for a week, I’ll call it an improvement.
Another nice benefit of this system is that it allows students to take the quiz as many times as they need. As long as they continue using my link, they’ll continue to get the same parameters I’ve set up. As before, I don’t mind at all that they can practice as much as they want before doing the one they submit for their homework.
In some ways this is less good than my daily written quizzes on campus. Notably, students aren’t getting practice with the mechanics of writing notes and accidentals on a staff, which is far from trivial for students who are new to all this. On the other hand, I can give a more thorough quiz that students can practice more before taking, and that I can give more regularly without blowing up my grading schedule. This system also dramatically shortens the feedback loop, as students know as soon as they submit a question whether it was correct or incorrect. So there’s more practice at lower stakes, which means less pressure, and all with immediate feedback.
Best of all, this is totally free. I’m aware of premium platforms like Musition that allow for even more robust testing with greater flexibility, but MusicTheory.net gets me where I need to be for this particular task, and it’s completely free. This is yet another tool I’ve incorporated into remote teaching that I intend to continue using after we return to campus.
It’s really easy to make online lectures that suck. If my lesson is just me talking for 50 or 75 minutes, it’s a waste of the format. We’ve all committed to building our schedules around having these precious hours together at the same time, so I’d hate to waste it by doing little more than a poorly rehearsed YouTube video.
Polls in Zoom are an easy way to give some level of interaction with even really big classes. The downside of Zoom polls is that they suck to create: you have to log into the web and do them in advance of the meeting. They can’t easily be spontaneous, and I can’t-slash-won’t (lazy? maybe) plan far enough in advance to assemble meaningful one-time-use polls.
My solution is to create a couple of very simple, generic polls that I can place in a relevant context for each meeting. If that sounds like nonsense, I expect an example will help.
The one I use the most frequently I call “Temp Check”, and it simply asks “How comfortable are you with this concept? (5: ‘Great, got it!’ to 1: ‘I’m totally lost.’)”. I say aloud what I’m referring to (“Tell me how you’re feeling about constructing harmonic minor scales.”), then launch the poll. Best of all, it’s anonymous, at least from other students.1 I think this makes students more comfortable admitting they don’t know something, and it’s been really helpful in pacing new material.
A similar poll I use a lot is a very simple self-assessment. After we do an activity in class, such as “write out the counting for this measure’s rhythm”, we look at it together and I ask everyone to self-assess, choosing from “Nailed it!”, “Not quite, but close.”, and “Nope.” Again, this lets me see how students are doing as a whole without singling anyone out, and it also makes sure they’re all playing the home version of the game show, since I can see how many folks have answered and make sure (nearly) everyone does before moving on. And this is all way less tedious than creating a different poll for each concept or meeting. Because I use the same recurring Zoom session for each class meeting, I only have to set these up one time.
To create a poll, log into Zoom on the web (not the app), and go to Meetings > My Meetings and scroll to the bottom of the page to find Polls, with a button in the right corner to add a new one.
Having just one or two of these that you can recycle in a lot of different situations can help keep your prep time down, and students quickly get used to responding to these questions when they see them regularly. This kind of recycling won’t save the planet, but it might help preserve your prep time and sustain your personal health.
If I really want to know, I can dig into reports after the meeting is over to see who selected which options. But like most things related to polls, it’s more trouble than it’s worth.
Considering the rapidly evolving landscape of realtime audio collaboration tools that musicians and music teachers are swimming in at the start of the school year (in the northern hemisphere), I think the new Zoom audio features are a huge step forward in quality and simplicity.
While Zoom’s new audio quality isn’t quite as high as Cleanfeed’s, the added convenience and simplicity more than make up for what small quality differences exist. It’s not a perfect A-to-B comparison since they’re using different compression algorithms, so some things might sound better on one than the other. And depending on your setup and the setups of your collaborators or students, I’m not convinced everyone would notice a difference at all based on my brief testing.
I initially wrote and presented this paper about my composition Music for Social Distancing for the 2020 Aspen Composers Conference.1 If you’re interested in the work that is the subject of this paper, you can get the score and performance information on my site. The presentation included this performance of the work by Wichita State University’s Happening Now new music ensemble, with a little help from my friends.
In March 2020, I and countless other musicians across the United States were asked to stay in our homes and limit our personal interactions as much as possible to limit the spread of the novel coronavirus. This impacted nearly every music presenter, performer, venue, and school in many ways. As many interpersonal interactions—meetings, lessons, and even parties—migrated to videoconference platforms like Zoom and Skype, it quickly became obvious that performing traditional repertoire would not be feasible over these platforms for a variety of reasons I will discuss momentarily. Even beyond the common practice period, more recent, flexible compositions pose similar challenges to remote performance. In this presentation, I will discuss some of the issues associated with remote ensemble performance, the compositional techniques I used in my work Music for Social Distancing to account for those issues, and the experiences with various readings and performances of the work in the months since I first published it.
The term “social distancing” was quickly and widely adopted by health and policy experts in the earliest days of the pandemic. The World Health Organization (WHO) and US Centers for Disease Control and Prevention (CDC) recommended that individuals maintain a minimum of six feet of space between themselves and others outside their household. However, it quickly became apparent that “social distancing” might not be the most apt description of this recommendation, and some adopted the phrase “physical distancing” instead. In a WHO press conference, Dr. Maria Van Kerkhove elaborated:
… [K]eeping the physical distance from people so that we can prevent the virus from transferring to one another, that’s absolutely essential. But it doesn’t mean that socially we have to disconnect from our loved ones, from our family. Technology right now has advanced so greatly that we can keep connected in many ways without actually physically being in the same room or physically in the same space with people. … So find ways to do that, find ways through the internet and through different social media to remain connected because your mental health going through this is just as important as your physical health.2
This idea, maintaining social bonds in spite of physical isolation, became very important to me as I was working on this piece, teaching lessons and classes remotely, and imagining what music could look like under these restrictions. I will continue to use the expression “social distancing” here, as it is more familiar, but I intend it to mean physical and geographic separation rather than social isolation.
When institutions started canceling concerts and universities and conservatories started sending students home, I and many of my colleagues scrambled to find the best ways to move our performances, rehearsals, and classes to the Internet. A popular question in online music forums was “What application do I need to use to have my rehearsal online?”. The obvious assumption there is that there was such a thing. It turns out there wasn’t, isn’t, and likely won’t be any time soon, due to technical limitations.
One well-explored solution is that of so-called “virtual ensembles”, exemplified and popularized by Eric Whitacre’s Lux Aurumque virtual choir video in 2010. Virtual ensembles create a fixed recording and require a reasonably high level of planning, editing, and technical expertise. As recordings, they are fixed and do not unfold in realtime as live performances do. Additionally, unlike most classical music recordings, they are not created in a way that allows performers to listen and react to one another, because the performers are not in the same room at the same time. As admirable and impressive as virtual ensemble recordings are, I did not find them to be a very good substitute for the things that I missed most from the performances that I loved.
One particular thread in an online music teaching forum got stuck firmly in my mind. It was devoted to a question about how to do remote chamber music coaching, rehearsing, and performance. There were a number of suggestions that all centered around making multitrack virtual-ensemble-style recordings. Each time some version of this was suggested, the asker promptly replied “that is not chamber music!” I want to examine what chamber music is, and why I felt that remote chamber music performance required the creation of a new kind of repertoire.
In the simplest, most literal sense, chamber music is defined by the small size of the performing ensemble. It is the implications of that small size that make chamber music worth distinguishing from larger works. Without a conductor, musicians have greater responsibility for shaping the performance individually, and for listening and reacting to one another, encouraging what James McCalla in the preface to his Twentieth-Century Chamber Music describes as “individuality as an essential part of [their] collectiveness”.3 This individual-collective dichotomy is what I find most appealing about chamber music. It is what allows chamber music to be subtle, and intimate, and exciting. However, these same features are also the first to falter when attempting remote performance over the Internet.
Reacting to the sounds of other musicians in chamber music relies on the ability to hear every other musician in the ensemble nearly instantly. In a chamber ensemble setting where performers are positioned within a few feet of one another, the delay from the speed of sound traveling through the air is negligible, just a few milliseconds. Take those same players and move them to different locations connected over the popular Zoom videoconferencing platform, and the time it takes a player’s sound to reach their colleagues’ ears is likely in the hundreds of milliseconds, roughly equivalent to being spaced hundreds of feet apart. Even using specially engineered, low-latency solutions, it is difficult to achieve a level of precision most musicians would be comfortable with. The physical limitations of converting an audio signal to digital information, transmitting it over several network layers, and converting it back to audio will likely always be too slow for live performance between musicians playing together from their respective homes. It’s possible that performances could be coordinated by a synchronized click track, but that precludes flexibility and spontaneity in many of the same ways as virtual ensembles.
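The comparison between network delay and physical distance in the paragraph above is simple arithmetic. A quick sketch (343 m/s is the approximate speed of sound in room-temperature air):

```python
SPEED_OF_SOUND = 343.0  # metres per second, in air at ~20 °C

def equivalent_distance_m(latency_ms: float) -> float:
    """How far apart two players would have to stand for sound in air
    to arrive with the same delay as a given network latency."""
    return SPEED_OF_SOUND * (latency_ms / 1000.0)

# On stage, a metre of separation costs only a few milliseconds:
print(round(1.0 / SPEED_OF_SOUND * 1000, 1), "ms per metre")  # 2.9 ms per metre

# 200 ms of network latency is acoustically like standing about
# 69 m (roughly 225 feet) away from your colleague.
print(round(equivalent_distance_m(200)), "m")
```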
While latency is the largest and most prominent limitation for remote musical performance, it is not the only one. Services like Zoom, Skype, Google Meet, and others were designed for verbal conversations, which have a very different sound profile and performance characteristics from music performance. In addition to being more tolerant of latency, spoken communication does not have the same dynamic range or frequency range, and unlike music, conversations rarely have more than one or two simultaneous contributors. Because of these differences, videoconference platforms often compress the dynamic range and attenuate the high and low frequencies of a music performance. They may also identify very soft sounds as noise and attempt to remove them. When more than two players are sounding at the same time, the platform may decide to silence other audio feeds. Different platforms offer varying levels of control over the categories and degrees of audio processing, but none are built with music-making in mind.
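To illustrate what dynamic-range compression and noise gating do to a performance, here is a deliberately crude, single-sample toy model. It is emphatically not Zoom’s actual processing pipeline (which is proprietary); the threshold, ratio, and noise-floor values are invented for the example:

```python
import math

def level_db(sample: float) -> float:
    """Peak level of a single sample in dBFS (0.0 dB = full scale)."""
    return 20.0 * math.log10(max(abs(sample), 1e-9))

def toy_compressor(sample: float, threshold_db: float = -20.0,
                   ratio: float = 4.0) -> float:
    """Downward compression: levels above the threshold are scaled
    toward it, shrinking the performance's dynamic range."""
    db = level_db(sample)
    if db > threshold_db:
        db = threshold_db + (db - threshold_db) / ratio
    return math.copysign(10.0 ** (db / 20.0), sample)

def toy_noise_gate(sample: float, floor_db: float = -50.0) -> float:
    """Anything quieter than the floor is treated as background noise
    and muted, which is what happens to a pianissimo entrance."""
    return 0.0 if level_db(sample) < floor_db else sample

fortissimo = 1.0                  # 0 dBFS
pianissimo = 10.0 ** (-55 / 20)   # -55 dBFS
print(round(level_db(toy_compressor(fortissimo)), 1))  # -15.0: squashed
print(toy_noise_gate(pianissimo))                      # 0.0: gated away
```

Even this cartoon version shows the problem: the loudest and softest extremes of a musical performance are exactly the parts a speech-oriented pipeline is designed to flatten or discard.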
Working within extreme limitations often requires works to have a degree of flexibility. However, many popular examples of flexibly scored music—Terry Riley’s In C and Julius Eastman’s Stay On It—are open ended in certain parameters like texture, form, duration, and orchestration, but still require an extremely high degree of rhythmic precision (relative to a common pulse) and also ask each player to be acutely aware of every other player’s sound in real time.
I wrote Music for Social Distancing for unspecified remote ensemble, four or more players and optional conductor, in March, with the intention of making it available to school ensembles who were starting remote instruction and other ensembles struggling to find something to do with what remained of their season. The work calls for performers to play from home over a videoconference, and the composite of all the players is streamed live to an audience. In this medium, I chose to focus primarily on the issue of audio delay, which I see as the most obvious barrier to remote performance. I also wanted to make sure that players were truly making chamber music by listening and reacting, not just contributing their discrete layer at roughly the same time as the rest of the ensemble.
The opening and closing sections of the work move forward through latency and listening. All four parts play the same lines (with octave adjustments as needed). Each note or two- to three-note gesture is labeled as being initiated by one of the four parts, and the other players only move to that gesture after they hear someone else play it. For example, the piece opens with an F which is initiated by Player 1. The other three players wait to play the F until they hear it from Player 1. All players sustain this until Player 3 moves down a third to D, at which point the other three join the new unison. The result is a kind of floating cloud of canon-like imitations that drift based on who the leader is, how clearly they are heard by the other players, and what the player-to-player latency is at that moment.
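For readers who want to see that mechanic in miniature, here is a toy simulation of the follow-the-leader opening. The latency range is a hypothetical stand-in for real player-to-player Zoom delays, and the two gestures are just the opening F-to-D motion described above:

```python
import random

def simulate_opening(gestures, latency_s=(0.08, 0.40), seed=7):
    """Toy model of the opening section: for each gesture, the labeled
    leader plays at their own current time, and the other three players
    join once the leader's sound reaches them over a randomly drawn
    (hypothetical) network delay."""
    rng = random.Random(seed)
    start_times = {p: 0.0 for p in (1, 2, 3, 4)}
    timeline = []
    for note, leader in gestures:
        onset = start_times[leader]
        for p in start_times:
            start_times[p] = onset if p == leader else onset + rng.uniform(*latency_s)
        timeline.append((note, leader, dict(start_times)))
    return timeline

# The piece opens on F, initiated by Player 1; Player 3 then leads
# the move down a third to D.
for note, leader, starts in simulate_opening([("F", 1), ("D", 3)]):
    rounded = {p: round(t, 2) for p, t in starts.items()}
    print(f"{note} led by Player {leader}: {rounded}")
```

Running this a few times with different seeds gives a rough sense of the “floating cloud” in the score: the staggering is different every time, because it depends entirely on who leads and how long each link takes.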
The next section, mm. 6-9, allows each player to decide the pace at which they move through a succession of pitches over a given period of time. For example, in m. 6, players are given eight to ten seconds to move through six pitches. The result is somewhat similar to the opening section; the difference is that individuals have the opportunity to diverge from one another a bit further. The beginning and ending of each one-measure phrase will be relatively stable, but the middle is a point of maximum divergence, as some players may choose to start quickly and end slowly, while others choose the opposite, and still others do something in between. This gives the individual players yet more control over the unfolding of the work, and greater responsibility for listening and reacting.
After a short transition (mm. 10-11) which refers to the latency-driven idea of the opening section (this time without the canon-like imitation), I present the most conventional chamber music texture of the work: a chorale. In this remote performance setting, each chord is conducted visually. Since there is sometimes some drift in audio-video synchronization of the videoconference, this can show some other implications of the network and software environments of the performance. The not-quite synchrony of this section concludes with the loudest section of the work, which is also the moment where players are least likely to hear one another and the least reliant on aural cues.
The next passage is a brief variation on the time-bracketed phrases of mm. 6-9. In it, players choose a moment within a time window to quickly swell from soft to loud and back to soft. My goal in this section is to highlight any quirks of the audio processing happening within the videoconference application. Fast dynamic changes can trip up compressors and cause them to behave strangely. Receiving a suddenly loud sound from one feed might cause the conference to drop audio from another feed entirely. The uncertainty here is coming from a combination of player choice and unpredictable audio processing algorithms. I like to think of the ensemble as playing Zoom like a musical instrument during these moments. And the piece concludes with a textural recapitulation of the follow-the-changing-leader idea which opens the piece.
The techniques I use are central to my conception of the piece. They are musical solutions to technical problems, which make Music for Social Distancing not simply tolerant of network delay, but in some ways reliant on it. This piece would not work the same way for an ensemble of musicians all sitting in the same room.
Music for Social Distancing was written in response to physical isolation that musicians at all levels were dealing with in their own ways. Because of the flexible nature of so many of the key elements of the work, the technical demands are relatively low, which has allowed the work to be performed by high school, university, and professional musicians. I want to briefly discuss some of the reception that it received from performers.
The first four groups4 to rehearse and perform Music for Social Distancing all did so within about a month of one another. I was pleasantly surprised at the number of groups that were interested in taking on such an unusual experiment. The first thing the ensemble directors told me was “We needed this” or “It felt so good to make music together again.” This, to me, is the biggest indicator that the work achieves at least some of the goals I set out regarding an expression of chamber music’s collaborative spontaneity over a physical distance.
In addition to this positive feedback, I also heard about—and experienced in my own performance—a few notable challenges that could be addressed either technologically or through further exploration of compositional techniques.
All of the performances that I’m aware of so far have used Zoom as the videoconference platform, and all of them have struggled to varying degrees with Zoom’s audio processing. As much as I tried to account for this with the composition techniques I described earlier, there are still some serious limitations, particularly around the way Zoom selects individuals to be the “main” speaker. Certain timbres or frequencies seem to regularly push out others from the mix in a way that can’t be mitigated by any of the currently available audio settings in the application. Even with Zoom’s “Original Sound” feature enabled, there is still a certain amount of echo cancelation, dynamic compression, and data compression that is inescapable.5
Relatedly, some particularly loud instruments struggle to play within the dynamic range that is suitable for built-in microphones on most laptops. In terms of objective audio quality, having all players use specialized audio equipment could improve these concerns, but at the same time, I find the uneven sounds to have a pleasing verisimilitude that reflects how these technologies are used in their intended contexts. Having better ways to control audio clipping for louder instruments and better microphone options for mobile devices could dramatically improve the overall audio quality of the performance.
The last challenge with the work goes well beyond musical performance but is worth mentioning in this presentation because of the number of student musicians performing this work. It seems unavoidable to me that a performance of a work that requires certain computers, audio hardware, or Internet connectivity will shine a bright light on any discrepancies in technology access. Performing Music for Social Distancing is quite challenging in ensembles where some players are using devices with desktop operating systems, and others are using mobile devices or Chromebooks. It would be deeply unjust to tell a group of students that a particular rehearsal or performance opportunity is only available to students who have an instrument at home, a laptop that meets x requirements, and a broadband Internet connection of at least y megabits per second.
Performances of Music for Social Distancing all dealt with at least some of these issues, and yet all were successful on the whole. Some of these concerns may improve over time—such as software options or broadband availability—but others will require yet more creative solutions both in and out of my control as the composer.
I am presenting this around four months after social distancing became a shared social and cultural experience, and it feels at this moment as though it may be here for quite a while, as intimidating and depressing as that might be. Music for Social Distancing was an experiment in making music remotely as a group. The compositional techniques that I have described today could be explored much more deeply, and there are many more possibilities. I hope that I can contribute music that is thoughtfully constructed for these times. I don’t want to give up on chamber music simply because we can’t make it in the same ways we are used to. Rather than forcing traditional compositional techniques, textures, and styles to fit into the limitations of socially distant performance, I want to take what is available and make music with it.
As a shy midwesterner, I am very uncomfortable with writing a few thousand words about my own music, and I feel an appropriate amount of shame for posting it here on the blog. Ope. ↩
Plano West Senior High School String Quartet, Ryan Ross, director; Susquehanna University Symphony Orchestra, Jordan Smith, director; a Millikin University faculty mixed chamber ensemble, Corey Seapey, director; and Lone Star Youth Orchestra, Kevin Pearce, director. ↩
As of late August 2020, Zoom has announced that they will be releasing a new “Advanced Audio” feature which eliminates even the echo cancellation and dynamic compression. The performance presented at the conference used Zoom for video with Cleanfeed for realtime(ish) audio to avoid compression. ↩
UPDATE 13 Jan 2021: OBS Studio 26.1 for Mac has since added a built-in virtual webcam, which means you no longer need to install the extra plugin. If you have already set up OBS with the plugin, you must remove the plugin.
For better or worse, the piano keyboard informs a lot of my teaching, and having a way to show a keyboard in realtime on Zoom has been something I’ve been trying to work out since we were exiled from campus back in March. Most of the pieces needed to make this happen have existed for a while, but it’s only in the last few weeks and months that they’ve been updated to make connecting them together relatively1 easy and reliable.
Here’s our end goal.
Live video from my webcam, plus an interactive animated keyboard that responds to my connected MIDI keyboard.
Now that we have our measurable learning outcome, let’s take a look at the next part of the syllabus.
Mac (Windows will also work, but I’m not set up to demo it.)
USB MIDI keyboard (mine: Yamaha P-115, for the price and size, I like this one.)
I’m going to describe this setup for the Mac, but it should work more-or-less the same way on Windows. If you’re following along on Windows with an older build of OBS, you’ll need this version of the OBS virtual camera plugin instead, but the virtual camera is now built in on both Windows and Mac.
I’ll give you a minute to download and install all that stuff. If you did install the virtual camera plugin, you may have to restart Zoom or OBS Studio once or twice to make sure they’re seeing the new virtual camera.
We’re going to use OBS Studio to combine the camera feed and the VMPK window into a single video stream. Then, we’re going to send that stream to Zoom by creating a virtual3 camera. Finally, in Zoom, we’ll select that fake camera, instead of the one plugged into the computer. This will take a little bit of setup the first time, but everything will be saved automatically. You should only have to go through all these steps the first time.
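If it helps to picture what OBS is doing, a Scene is just an ordered stack of sources drawn back-to-front, and the virtual camera publishes the composited result. Here’s a toy sketch of that layering idea in Python (entirely my own illustration; OBS of course does this with real video frames, not grid cells):

```python
# Toy model of OBS compositing: sources are drawn back-to-front,
# so a source listed above another covers it wherever they overlap.

def composite(layers):
    """layers: list of (name, region) from bottom to top, where region is a
    set of (x, y) cells the source covers. Returns which source is visible
    at each cell of the frame."""
    visible = {}
    for name, region in layers:  # later (higher) layers overwrite earlier ones
        for cell in region:
            visible[cell] = name
    return visible

# The camera fills the whole frame; VMPK sits on top along the bottom edge.
frame = {(x, y) for x in range(4) for y in range(3)}
vmpk = {(x, 2) for x in range(4)}
result = composite([("webcam", frame), ("VMPK", vmpk)])
```

The order in the Sources panel matters the same way: a source listed above another is drawn on top of it.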
Questions? No? Alright. Here we go.
Set up VMPK
VMPK should work pretty much right out of the box. It’s not beautiful, but it’s very functional. In preferences (vmpk > Preferences) you can change the number of keys and the colors that it uses. I like to stick with plain ol’ blue, but you can do whatever you like, including a multicolor setup. I have mine set to be 61 keys so they don’t get too small to see over Zoom. You may need to adjust the “Base Octave” in VMPK to get the register you expect. Mine is set to 3.
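If you’re wondering why the Base Octave setting matters at all: MIDI identifies keys by number (middle C is note 60), and different software disagrees about which octave number that corresponds to. A quick Python sketch (my own illustration, not anything from VMPK) shows the mapping:

```python
# Map a MIDI note number to a name, given an octave-numbering convention.
# MIDI note 60 is middle C; whether that's written "C3", "C4", or "C5"
# depends on the convention the software uses.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(midi_note, middle_c_octave=4):
    """Name a MIDI note, letting you choose which octave middle C (60) is in."""
    name = NOTE_NAMES[midi_note % 12]
    octave = midi_note // 12 - (5 - middle_c_octave)
    return f"{name}{octave}"
```

So if your played notes seem to land an octave off on the VMPK keys, nudging Base Octave shifts this mapping; nothing is wrong with your physical keyboard.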
Test your keyboard to make sure it’s lighting up the keys you expect in VMPK. If you don’t see anything change when you play the keyboard, open Edit > MIDI Setup and make sure you have checked Enable MIDI Input and MIDI Omni Mode. That will ensure that VMPK will accept any incoming MIDI signal from any connected MIDI device. We’re done messing with VMPK for now, but leave it open in the background. To use it for our live presentation, we can put other windows on top of it, but we can’t minimize or close it.
Create a “Scene” in OBS Studio
Create a Scene that combines your camera and VMPK. There’s a lot going on in OBS Studio. Don’t pass out. First, we’ll need to create a Scene by clicking the + in the Scenes panel in the bottom left. Name it whatever you like. Mine is called “Piano keyboard”. Have I mentioned that I am a creative professional?
Add your camera
With your newly created Scene selected, click the + in the Sources panel. We want to add a Video Capture Device. Select Create new and call it something clever like “webcam”. I’m going to call my Logitech C920 “C920”. In the window that comes up next, select your camera (Device) and resolution (Preset). For my camera, I selected “high”, but you might see a different set of options depending on your camera. Click OK.
Your Source should now show up in the Sources list for the Scene you created in step 2. If for some reason it isn’t, click the eye icon next to it in the Sources panel. If your video output and your camera are different resolutions, you may need to click the camera feed in the Program panel (that’s where your camera’s picture should be) and resize it using the red transform controls (little red squares in the corner).
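If you do need to resize, it’s just aspect-ratio arithmetic: scale the feed by the largest factor that still fits inside the canvas. A quick sketch with made-up resolutions (my own illustration; OBS does this math for you when you drag the transform controls):

```python
# Scale a source to fit inside a canvas while preserving aspect ratio.

def fit_scale(src_w, src_h, canvas_w, canvas_h):
    """Return the scale factor and resulting size that fit src inside canvas."""
    scale = min(canvas_w / src_w, canvas_h / src_h)
    return scale, (round(src_w * scale), round(src_h * scale))

# A 1280x720 camera feed on a 1920x1080 canvas scales up cleanly by 1.5x;
# a 640x480 feed scales by 2.25x and leaves black bars on the sides.
scale, size = fit_scale(1280, 720, 1920, 1080)
```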
Add the VMPK window
Since VMPK is still running, we can grab a picture of it. Think of it like a continuously updating screenshot. In the same Sources panel where our camera feed is listed, click + again to add a new source. This time, we want a Window Capture. Select Create New and give it a name. We’re going to be capturing VMPK, so I’m going to call it “Steve”. Just kidding. I’m going to call it “VMPK”. In the Properties window that pops up, select “[Virtual MIDI Piano Keyboard] Virtual MIDI Piano Keyboard” and click OK.
You should see the VMPK window on top of your camera feed now, and it should be listed above your camera feed in the Sources panel. Use the transform controls to resize the VMPK window and drag it so that the lowest and highest keys fit within the video frame. If you want to position it in the same place as mine (at the bottom), drag it down so that it “snaps” in place. It’s ok to let the non-piano-key parts of the window hang off the left and right.
Now, we want to crop out all the settings nonsense from the VMPK window that’s above the piano keys. To do that, hold down the option key (alt on Windows) while you drag the transform handle at the top of the VMPK capture in the Program panel. While you have the option key down, the transform controls will crop instead of scale the VMPK capture. Now your OBS program should show exactly what you want your students to see in Zoom.
I know this took a lot of effort, but remember that everything we just did is already saved. If you plan to use OBS for other things in the future, you can save this setup by naming it something memorable in Profile > Rename. We’re done with the hard stuff, I promise.
Start the virtual camera in OBS
To start the built-in virtual camera in OBS, you should see a button on the control palette (bottom right) that says Start Virtual Camera. You can assign a keyboard shortcut to this if you would like by going to OBS > Preferences > Hotkeys and finding the blanks next to “Start Virtual Camera” and “Stop Virtual Camera”. I use the same keyboard shortcut ⇧⌘V for both.
If you’ve installed the virtual camera plugin, you can start it in OBS Studio by going to the Tools menu and selecting Start Virtual Camera. If you don’t see it there, that means the plugin isn’t installed. If you think you’ve already installed it, try restarting OBS and/or Zoom.
Show your work in Zoom
Once you’ve got the virtual camera started, you can switch over to Zoom and start (or join) a meeting. Start your video in Zoom and use the arrow on the video button to select OBS Virtual Camera. If you don’t see OBS Virtual Camera as an option, you may need to update Zoom by going to zoom.us in your menu bar and selecting Check for Updates. If you’re already up-to-date, try restarting Zoom.
At this point you should see your camera feed showing up in Zoom. If you see your image as mirrored, don’t worry. It only looks that way to you. Your meeting attendees will see everything normally, and it will record normally as well. If that freaks you out a little, you can go to Zoom > Preferences > Video and uncheck Mirror my video. This will make your piano look normal to you, but it will make it a little awkward to fix your hair by looking at Zoom until you get used to it.
To recap: Your OBS Profile, Scene, and Camera/Window Capture will all be saved in OBS Studio automatically. You don’t have to click “Save” or anything. The next time you have class (or just want to show your friends how cool you are) just open VMPK, OBS, and Zoom; start the virtual camera (Controls palette > Start Virtual Camera); and then launch or join your Zoom session.
In your Zoom session, this isn’t going to explode your feed to fullscreen the way a screenshare does, so you may want to suggest to your attendees that they “pin” your video by clicking the three-dots in the top right corner of your video and selecting “Pin”. This also won’t share your audio. Audio routing is outside the scope of this particular post, but there are a number of ways to take care of this.4 Another small challenge is that if you are recording your sessions and don’t “pin” your own video in Zoom, you and your fancypants piano keyboard will just be one tiny part of the Brady Bunch grid.5
My favorite thing about this setup, in addition to it being free and relatively painless (especially after the first time) to set up, is that it represents a small but meaningful example of the kind of thing that suits a Zoom class and is hard or impossible to replicate in meatspace.
If you try this out with your class, I’d love to hear how it goes!
EDIT, 7 September 2020: I’ve heard from a few folks who have gotten everything working except live input from a hardware keyboard into VMPK. There are a lot of variables that could possibly cause this issue, but I think the most common one is the selection of a different “MIDI IN Driver” in the VMPK settings. In VMPK, if you go to Edit > MIDI Connections, you’ll get a dialog with a few options to fiddle with. I think for most people, you’ll need to check Enable MIDI Input, MIDI Omni Mode, and set your MIDI IN Driver to CoreMIDI on a Mac, or the equivalent on Windows. Here’s what my settings look like.
In this screenshot “Digital Piano” is the name of my hardware keyboard. (Not very creative, but descriptive.) Thanks to Danielle and others who asked!
Everything is relative. Buckle up, Buttercup. We’re getting nerdy. ↩
At the time of this writing (August 2020), please don’t buy a webcam from Amazon. 3rd-party Amazon sellers are price-gouging because of all the work-from-home demand. If you really want to upgrade to a nice webcam, you’re better off watching places like B&H or Best Buy for when they get them in stock. ↩
Short version: open GarageBand or any other app that will make sound and share audio how you normally do. I use Loopback from Rogue Amoeba, but you can also use the built-in Zoom audio sharing features in Share Screen > Advanced > Music or Computer Sound Only. Loopback is $100 and worth every penny, but I totally get that not everyone is prepared to drop that kind of money on software that may only be used for Zoom music class. ↩
A possible workaround is to record your video directly from OBS, but then you’re recording two videos per class, which will require some editing to assemble with the Zoom video (way too much time and effort to do each day) and a boatload of space for those video files. Pinning your video will help, but that makes it a little harder to see your attendees. As ever, we live in a world of compromise. ↩
We’ve all been there. You have a carefully planned lesson. Things are going great. Your students were on time to the Zoom; they have their cameras on so you can see them smiling and nodding. They have good questions. You’re killing this lesson. And then you play your musical example. Crickets. “I don’t hear anything. Does anyone else?” And just like that, it all falls apart, and you lose ten minutes to troubleshooting. Nobody wants that.
One of the most challenging things about teaching music on a Zoom1 conference is playing recordings for everyone. Audio over Zoom is going to be compressed and likely out of sync, and that’s if you can get audio from where it lives into Zoom to begin with. Showing YouTube videos is worse, because you’re getting another layer of compression with the video, and it’s just a mess if anyone (including the presenter) starts talking over the video.
The solution to these and other problems that I’ve really had fun with is called Watch2gether, a freemium2 service that allows YouTube video playback to be synchronized across many different users.
It’s astonishingly simple and reliable. As a free user, I can create a “room”, where I can add YouTube videos to a playlist. I can give a link to that playlist to my students, and then when I press play on a video, it starts playing for everyone else, directly from YouTube. Need to start at 24:13? Need to scrub back a few seconds to catch that cadence again? All of my students’ video players will hop back too. Same with jumping to the next or previous video in the playlist. Rooms also include a text chat feature, but I’ve used it pretty minimally, since I usually have a Zoom conference running alongside it.
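I have no idea how Watch2gether implements this internally, but the behavior is consistent with a simple shared-state idea: the room tracks which video is loaded, a reference playhead position, and when playback last changed, and every client derives the current position from that. A hypothetical sketch (entirely my own guess, not their API):

```python
# Toy model of synchronized playback state, à la a watch-together room.
# The room stores when and where playback last changed; each client
# computes the current playhead locally from that shared state.

from dataclasses import dataclass

@dataclass
class RoomState:
    video_id: str
    position: float    # playhead (seconds) at the moment of the last change
    playing: bool
    updated_at: float  # wall-clock time (seconds) of the last change

    def playhead(self, now):
        """Where every client's player should be at wall-clock time `now`."""
        if self.playing:
            return self.position + (now - self.updated_at)
        return self.position

# Host presses play at 24:13 (1453 s); ten seconds later, every client
# that shares the same clock lands on the same spot.
state = RoomState("lecture-3", position=1453.0, playing=True, updated_at=100.0)
```

Whatever they actually do, the upshot is the same: one person scrubs, everyone’s player follows.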
I used Watch2gether with my graduate seminar last spring when they presented their research at the end of the term. Since we were remote, I asked them all to record their presentations and upload them to YouTube. Then we all watched the presentations using Watch2gether and moved on to a Q&A. This meant that our discussions weren’t at the mercy of any presenter’s connection speed, and nobody had to struggle with getting their musical examples to work. (Also, it was lower pressure for the presenters, since they could just re-record or edit as needed.) It’s possible to save a room or a playlist to re-use throughout a course as well.
One thing to look out for is that by default, anyone in the class can take control of the playlist. It could be a fun class project to send them out to YouTube and find examples of things to add to a playlist, but you may not want them putting the latest Cardi B tune in your playlist of Renaissance polyphony. So take note of the playlist settings (which can be changed at any time) if you decide to give it a go with your students.
I have only tried Watch2gether with YouTube, but the company also advertises that it works with other platforms, including Vimeo, Dailymotion, and Soundcloud. Watch2gether is free to use for any of the above features, and for $3.49/mo. you can remove ads from the site for any room that you create, which I think is pretty reasonable. It’s not a perfect service—I would love to include Spotify or uploaded audio/video in a room—but it’s very handy and reliable. Super useful for music and performing arts classes that are going remote.
In my head, and in many emails, I say/write it as ZooOOoom. It’s fun. Or at least it’s what amounts for fun in 2020. I’m a simple man. ↩