The downtime between semesters is a great time to do “digital hygiene”-type tasks. One of the big ones that everyone should be doing—especially those of us who make things with computers for a living—is to back up everything we do. I have had more than one student in this calendar year who lost work due to a computer problem. There are a lot of different backup strategies, but I think for those of us who work regularly on a portable computer, online backup is a must. Here are a few things to keep in mind.
Cloud sync services like Dropbox, Google Drive, and OneDrive are great, and they are backup-adjacent, but cloud sync is not backup. A sync service faithfully replicates whatever happens on one device to all the others, including deletions and corrupted files, so it serves a different function. You can and should have both.
Local backups are also great. If you have a Mac, the built-in Time Machine on your computer is excellent and easy. But local backup drives have some important drawbacks. Notably, you have to make sure they’re plugged in and working all the time (not trivial for a laptop!). They are subject to the same kinds of failures as the internal drives of your computer. And lastly, they are subject to theft and disasters (e.g., fires, spilled coffee) that online backups avoid. They can be fast to restore from, which is great in an emergency, but again, you should have both.
File history is a feature of any good online backup service, and it can save your butt in all kinds of user-error situations. Raise your hand if you’ve ever overwritten a file and lost the original because you thought you were editing a copy (🙋‍♂️). A 30-day history is a good start, but a few months or a year is even better.
About a month after I finished my D.M.A., my computer died. It was the desktop computer that had gotten me through my master’s and doctorate at Michigan State. The only reason I have any of my compositions, performance recordings, or even class work, from before 2012 is that I had an online backup service.
I’ve used a few different services over the years, but the one I like the most now is Backblaze. It is the easiest $70 I spend every year (their annual rate). It includes unlimited data storage, and you can add a year+ of file history for something like $2/month. Here is a nice referral link you can use if you want to get an extra month or two free (and I will too). No pressure though! Feel free to not use this, or use another backup service. The important thing is that you have some online backup running!
This is the text of a presentation I gave at the inaugural Teaching Composition Symposium at the University of Maryland, Baltimore County on 21 October 2022. I’m told presentations were video-recorded, so I’ll update this post later with that recording. [conference slides, text PDF]
I’m sure you’ve had the experience of getting feedback on a composition that was well-meaning, but ultimately unhelpful. Even someone telling you how great your music was or how much they loved it is often frustrating because it’s hard to know what they heard that made them love it. Feedback that your music was mind-blowing and feedback that it was stomach-turning are equally unhelpful, because without more information, it’s impossible to learn anything from either.
In this presentation, I’ll talk about some of the common limitations of informal, unstructured feedback like this; and I’ll describe how I have used Liz Lerman’s Critical Response Process (CRP) to better support and motivate composers in my studio, and how you might implement it in yours.
In my previous experiences with critique sessions in studio classes, I found that the feedback offered usually said a lot more about the person offering it than it did about the music they were nominally responding to. Rather than suggesting how the composer might have written a work differently, this feedback often seems to answer the question “How would this piece have gone if I had written it, rather than you?” While I do think there should be space for composers to respectfully challenge one another’s creative intent, it is worth starting by identifying what that intent was to begin with. A better feedback system should assume that each composer in the room has a different set of musical goals and experiences.
An important secondary benefit of group feedback in a composition studio—at least to me—is developing a sense of creative community. I want the composers in my studio to want to support each other and collaborate with one another. My studio includes first semester undergraduate students, graduate student composers, and everything in between. In unstructured critique sessions, the less experienced students often felt intimidated by the older students, assuming that they had nothing meaningful to say about their work. And graduate students would often unintentionally patronize or belittle the newer composers. Another goal for my feedback sessions is to create space for meaningful responses from anyone in the group, even those who are very new to music and music composition studies.
I think critique sessions are an important way of building a creative community within my studio, giving students more perspectives than only my own or another faculty composer’s. But I found that the feedback composers received in these sessions often backfired: it left them feeling less confident and less motivated to improve their work.
Now that I’ve described some of the limitations of unstructured feedback, I want to give an overview of CRP, slightly adapted for composers, and my experiences implementing it within my studio classes at Wichita State University and elsewhere.
Critical Response Process
CRP was developed by choreographer Liz Lerman to help improve her work, especially during the workshopping process, though it is not specific to dance. My description will follow the process as Lerman describes it in two books—both co-authored with John Borstel—2003’s Liz Lerman’s Critical Response Process and 2022’s Critique is Creative. The second of these came out just after I submitted the proposal to give this talk and includes not only a new description of the process itself, but also some chapters from outside contributors about their experiences using CRP, several from outside dance. If you’re interested in implementing this process after today’s presentation, I highly recommend either of these books.
There are three main roles in CRP: the artist, responders, and the facilitator. I will use “artist”—Lerman’s word—and “composer” interchangeably here. The responders, in my most common use of the process, are the other students, but the group could easily include faculty, other musicians, or even non-musicians. The facilitator leads the discussion and enforces the structure that I’ll discuss in a moment. I would personally recommend that the facilitator go over everything I’m about to describe privately with the composer before the session, and then give an introduction to the whole group of responders at the beginning of the session if they are new to CRP. I have also found that with new groups of responders, the facilitator may need to momentarily play the role of responder in the interest of modeling the kinds of engagement that fit each step of the process. As a possible fourth role, you may wish to have a dedicated notetaker, though for smaller groups like the ones I usually lead (around ten or fewer), that may not be practical or necessary.
One interesting consideration is when in the composition process to engage in feedback from a larger group. I personally like to use CRP with works that are still in progress, rather than those that are completed, since it can be hard to take a piece apart and put it back together once it is already complete, and for all sorts of external reasons—like the need to prepare a senior recital or meet an application deadline—students may not have the time to significantly revise an already-complete composition. At the same time, the work needs to be substantial enough that there is something to which responders can respond. So, it needs to be more substantial than a sketch, but not necessarily a complete draft.
When presenting a work in class, I usually have students share a PDF of the score and audio realization in our studio Discord (though sometimes a video or a live performance may be relevant). I know there are lots of issues around relying too heavily on computer realizations, but I think the benefits outweigh the risks in this particular instance, and I discuss relevant caveats with students beforehand. I ask the composer to not say very much about their piece before we listen to avoid “priming” the responders, but I do ask them to briefly describe what proportion of the total piece they are about to present. For example, they might say “this is the first half of the second of three movements of a work for woodwind quintet”. After that introduction, we listen to the piece and follow the score, and then we begin Lerman’s process.
Critical Response has four distinct steps.
Step 1: Statements of Meaning
Critical Response begins by simply asking about what things the audience perceived—not necessarily what they liked or didn’t like, or even what “worked” and didn’t “work”. I ask responders to think about answering the question “What did you hear?” For example, they might say something like “I heard two distinct sections”, or “I heard bluesy harmony”, or “I heard a Thelonious Monk quote”. Lerman suggests the facilitator frame even more specific questions like “What was meaningful?” or “What was interesting?”. These can be useful in steering towards feedback that is somewhat positive. And, it can be tempting for responders to start with the phrase “I liked that…” or “I loved when…”, and I—as nicely as possible—ask them to hold their opinions until a later step. It’s unavoidable for opinions to creep into earlier steps of the process, but I try to catch the more directly stated ones. I sometimes change the name of this step to “Observations” or even “Neutral Observations” to avoid sliding too strongly into opinion.
The reason for avoiding direct opinions here is that the goal of this step is to show the composer how their work is perceived by a group of curious, generous listeners. If the composer imagines there were three sections to what they played, but a responder hears five sections, it’s an opportunity for the composer to think about form: why they made the choices they did, how important those choices are to their conception of the piece, and what they could do to clarify those choices.
Once the responders seem to have run out of insightful Statements of Meaning, it’s time to give the composer the floor in the next step.
Step 2: Composer as Questioner
In Step 2, the composer asks about specific elements of the work. This is the composer’s opportunity to show what is important to them about the piece, or ask about how an element was perceived that the responders might not have thought to mention in Step 1.
Formulating these questions can be a bit tricky, especially for younger composers who are still discovering what is important to them in their music. I like to give a few guidelines to students. First, don’t solicit opinions—again, there will be time for that later. Try to stop them if they start a question with “what did you think of…” or “did you like it when…”. Second, I try to get them to ask questions that are relatively specific. Lerman warns against getting too specific, but I haven’t found that to be a big concern with my students. For example, a student might ask “What instrument did you hear as the primary voice in the introduction?” or “What relationship, if any, did you hear between the first section and the second section?”. I even had one student ask “Can you sing back the main idea?” which was pretty interesting!
For facilitators, this is the step that might require the most coaching in advance of the session. As I said, younger composers may struggle to formulate these questions, so I will usually discuss them in a one-on-one meeting or lesson with the composer before they present to avoid putting them on the spot. Even for more experienced composers who have participated in CRP before, I usually send them a note the day before to remind them to bring some questions.
Much like Step 1, Step 2 focuses largely on the way a composition was perceived. The main difference is that in Step 1, the responders approach the music without guidance, much like most audience members. In Step 2, the composer guides some reflection through their questions, which shows some of what they find to be important about the piece. The next step begins to address why the composer made the decisions they made.
Step 3: Neutral Questions from Responders
In this step, the responders ask questions about the creative decisions the composer has made. These can be very specific, like “Why did you choose to write mezzoforte in the alto clarinet part on beat 2 of measure 81?”; or they can be very general, like “Why did you choose to write for alto clarinet?”.
It’s easy for questions from responders to have embedded opinions, so this is another time for the facilitator to be active. Most of the time, a question that has an embedded opinion can be reworded into a more neutral form that asks the composer for the same explanation but doesn’t invite them to feel defensive. (CRP tries to avoid exactly the sort of defensiveness that comes from other forms of feedback.) So instead of asking why the middle section is so long, a responder might ask how the composer is thinking about structural proportions, or what sorts of formal planning they did (and why). There might be a very good reason for a section to be much longer than others, or there might not! If there is a good reason, the composer might want to consider ways to make the proportions seem more intentional. If there isn’t a good reason, it’s more powerful for them to discover that on their own and look for ways to revise the length.
If your students are like mine, the first few times through CRP, they may start to get frustrated that they can’t jump directly to offering opinions. However, I think the two questioning steps are some of the most meaningful parts of a session, especially for composers who are earlier in their studies, because it helps them discover what is important to them about the music that they are making. Only after that has been established do responders get to offer direct opinions.
Step 4: Permissioned Opinions
Finally, in the very last step of the process, responders are invited to share their opinions, but even then, there is a highly structured way to do so. This step is the only one with a specific script: “I have an opinion about ______. Would you like to hear it?” If the composer says yes (which is usually the case), the responder shares their opinion. The topic of the opinion could be anything from dynamic balance to counterpoint to idiomatic writing to formal structures.
Lerman herself admits this can be a bit awkward, but it has a lot of utility, especially in keeping the composer from becoming defensive. Even if they always say “yes”, knowing that they could say no puts them in control of the dialogue. Additionally—and this is the part that I think is sneakily brilliant—it gives the composer a few seconds after hearing the subject of the opinion to try to put themselves in the mindset they were in when they initially made those decisions. It makes them better prepared to receive the opinion—positive or negative—in the constructive spirit it was intended. I think of it like a baseball catcher getting down in their crouch before the pitcher winds up and pitches. They’re going to be far better at catching the ball.
Placing the opinions at the very end of the process ensures that the responders have a better sense of the composer’s goals, and that their opinions can be expressed more empathetically, with more useful implications for the composer.
Benefits, Caveats, and Conclusion
My students really respond well to our Critical Response sessions. They are almost always highly motivated to work on their compositions at the end of a session (or at least they tell me they are), which was usually not the case in other critiques I have done. It also lets them show their peers what is interesting and valuable to them, which makes for a supportive, creative community.
One downside I do find with CRP is that it takes a long time. In a 50-minute session, if the pieces are relatively short and students are already familiar with CRP, we can squeeze in two. With a 75-minute class, I find that three or four is reasonable. The first day I introduce CRP each year, I plan to do only one. If speed and efficiency are crucial, it’s possible to put a timer on each step. In a real pinch, I find that I can often skip Permissioned Opinions and still give students a lot to work with. Despite the time investment, I find that CRP is deeply valuable to my students, even when they aren’t the ones presenting work.
I think the greatest value of Critical Response is in the structure and the dialogue. The structure separates the composer’s intent from the intent of the responders. Critique changes from being about how someone else would have done it differently to being about how well the composer has defined and achieved their own goals. Through dialogue, CRP asks composers to take ownership over creative decisions they may not have realized they were making. By interrogating their assumptions, it gives them tools they can use on their own as they work on future compositions. Compared to before I implemented CRP, the composers who study with me are writing music that is more distinct from one another’s, more clearly defined, and in which they feel greater agency and pride.
What follows is a section of my Introduction to Composition syllabus that I’m adding this year. I get questions about software a lot from my students. (Poor things don’t know what they’re getting into by asking me such questions.)
tl;dr: I recommend you invest in Dorico Pro if you can afford it, with MuseScore as a temporary solution if you aren’t ready to spend the cash.
When starting a new composition, it is important that you begin working using pencil and paper, even though your final work will usually be completed in software. This is to avoid the many assumptions that software will make for you. In the early sketching phases of a composition, it is important to not be bound to these assumptions. I expect all students to be prepared to show handwritten sketches of their work.
When it comes to completed projects, composers present their works in computer-notated/engraved form. It is no longer common (and in most circumstances, no longer acceptable) to supply performers with hand-written manuscripts. Students in this class may choose to use any of the following applications to prepare their assignments: MuseScore, Dorico Pro, Sibelius, or Finale.
MuseScore is free, but the quality of the finished product is not as high as the other three commercial applications. I will accept work completed in MuseScore for this class. However, students who are planning to major in composition or work professionally with notation should plan to purchase one of the professional applications and begin learning it. I know these are expensive, but these are the tools of our field, and it’s worth learning them sooner rather than later, while you still have access to the very steep student discounts. Beginning with 400-level composition lessons, you will be required to work with one of the commercial applications, so it might be worth starting to learn it now.
When it comes time to select from those three, which you choose is ultimately up to you, but I generally recommend Dorico Pro, as I think it has the brightest future of the group (reasons for that determination are beyond the scope of this syllabus, but I’m happy to discuss it sometime). While there are still many pros who use it, I do not recommend new users invest in Finale. When purchasing your software, keep the following things in mind:
Get your student discount! This will save you hundreds of dollars.
Get the professional “tier” of whatever product you select. Both Dorico and Sibelius come in cheaper, feature-limited “lite” versions. For the kinds of things you will need to do in this class, you will end up being frustrated by those limits, and there is often not an easy way to upgrade to the pro tier without paying all over again. If you’re not ready to invest in the pro tier, stick with MuseScore while you save up. It will be worth it in the long-run.
Be patient with learning it. These are professional tools, which means they’re complicated. They need to serve as wide a variety of musicians and musical traditions as they can. Think about it a bit like investing the time, care, and money into learning a musical instrument. It’s hard at first, but with dedicated practice you can make it work for you and create something amazing with it.
Steinberg, the company that produces Dorico, Cubase, Nuendo, and other professional audio applications, announced today that they would be moving away from the hardware license key, which requires users to plug in what looks like a USB thumb drive to their computers any time they want to run a Steinberg application. This key, called the eLicenser, was the single greatest annoyance for me as an early adopter of Dorico: remembering to carry an extra thing around, finding adapters for modern notebooks that have dropped USB-A ports in favor of USB-C, the risk of bumping the port while working, and the general inelegance of the whole thing. Product Marketing Manager Daniel Spreadbury has been saying for years that they’ve been trying to work on this, and it seems things are finally happening.
I have no idea what the result will be, but I’m convinced it will be better than the current situation, if only because it will have been created by people who have seen what computing looks like in the 2020s: mobile devices and super-thin notebook computers with limited USB ports. When the eLicenser was originally developed, the idea that you could run something as complex as Cubase on a laptop was absurd, and so the hardware licensing system didn’t seem like a burden, but obviously that is no longer the case. Even among media pros, laptops are increasingly common, and the eLicenser feels increasingly anachronistic.
The trick will be to balance the needs of users—simplicity, flexibility, and reliability—with Steinberg’s need to protect its massive investment in the development of these applications. I’ve seen some speculation on social media that this is a signal that Steinberg is moving to a subscription model, but I don’t see any evidence of that, and it’s something Spreadbury has stated in the past is something he opposes for Dorico.
My hope for this future licensing platform is that it will be easy to transfer a license over the Internet, but that an active network connection would not be required to use the software. I think it’s reasonable for a person buying a license to professional software like this ($600 USD before any discounts) to expect to be able to use it on at least two or three computers (say, a desktop and a laptop) without too much hassle. Long-time Sibelius users will likely recall the tedium of transferring Sibelius licenses by copying long numbers back and forth between computers.
The replacement for Steinberg’s eLicenser technology isn’t here yet, so if you’ve got an eLicenser, you can’t ditch it yet; and of course, other applications like the Vienna Symphonic Library products still use this same system, not to mention Avid’s iLok. But, I’m happy to see this commitment to our brighter, dongle-free future.
At the beginning of the semester, I was constantly fiddling with my tech setup at home to make it better and easier to get in and focus on the teaching. Now that it’s settled, I’m pretty happy with it. This video is a really quick overview of the software and hardware I’m using at home to teach my theory and composition courses remotely over Zoom. It is not a how-to, but a brief tour and demo of all the parts.
If folks are interested, I might do a little more detailed write-up or video on individual components now that everything is pretty much settled. Thanks to my friend and former theory prof. Leigh VanHandel for asking me to make this video and for sharing it with the Music Theory Pedagogy Interest Group at SMT 2020 this past weekend.
For better or worse, some of what I teach in Theory I simply needs to be memorized. Sure, I can talk about how we derive a diatonic collection through the circle of fifths, but then you’d have to know what a perfect fifth is, and that can be tricky to explain without getting into intervals, and those can be tricky to explain without getting into major and minor scales, and there we are, right back at the diatonic collection. So we pick an arbitrary place to start and brute-force memorize a few things.
When I’m teaching on campus, we do timed quizzes quite a bit in the first semester. These are things like recognizing notes on the staff, writing key signatures, and writing scales. That’s something that I have been struggling with how best to replicate in our remote, Zoom-based reality. So far, I’ve come up with two solutions.
Note ID quizzes in the LMS
While I can’t ask students to write notes on the staff in Blackboard, I can have them type things. A few weeks ago, I did some timed tests in which I simply set up a big collection of image-based fill-in-the-blank questions on Blackboard. I made images of whole notes on a staff and asked students to identify the notes by typing the letter in the box. Thanks to Dorico’s Flows and Graphic Export features, this was considerably less tedious than it might have been.
Blackboard can select a random 25 questions from the pool and limit the time precisely, and it lets me allow each student multiple attempts at the test. (I mean, it’s literally practice. What kind of music teacher wouldn’t encourage practicing?) The other great benefit of using Blackboard tests for me was that they were graded automatically and added to the Blackboard gradebook. The benefit to students was that they didn’t have to go to any other site or use any other logins.
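If you have a lot of these questions to make, the pool file itself can be scripted. Blackboard’s “Upload Questions” feature accepts a tab-delimited text file where each row starts with a question type code (FIB for fill-in-the-blank), followed by the question text and one or more accepted answers. Here’s a rough Python sketch; the hosting URL and image filenames are hypothetical, and whether HTML image tags render in uploaded question text depends on your Blackboard configuration, so treat this as a starting point rather than a drop-in solution.

```python
# Sketch: generate a tab-delimited Blackboard question-upload file for a
# note-identification pool. Assumes note images (e.g. exported from Dorico)
# are hosted somewhere Blackboard can reach them -- the URL below is made up.
NOTES = ["C", "D", "E", "F", "G", "A", "B"]
BASE_URL = "https://example.edu/notes"  # hypothetical hosting location

lines = []
for octave in (4, 5):
    for note in NOTES:
        img = f'<img src="{BASE_URL}/treble_{note}{octave}.png" alt="note on staff">'
        question = f"{img} Name this note (letter only)."
        # FIB rows: type code, question text, then each accepted answer.
        # Accept both upper- and lower-case letters.
        lines.append("\t".join(["FIB", question, note, note.lower()]))

with open("note_id_pool.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))

print(f"Wrote {len(lines)} questions")
```

Upload the resulting file into a question pool, then build the random-selection test from that pool as usual.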
My fill-in-the-blank quiz above is great if all I need students to do is type a letter, but as soon as we get to even a small amount of complexity, like writing a scale or key signature on a staff, that breaks down quickly. Sure, they could type something like “A, B, C-sharp, D, E, …”, but then we start to add enough complexity that Blackboard can no longer auto-grade. With over 50 students, I do not want to deal with 250 things to grade after a week of daily quizzes. A familiar friend is here to help.
MusicTheory.net is way cooler than you may have thought
I’m not sure exactly when MusicTheory.net came across my radar, but I’ve been recommending it to students for years as a place to go to drill fundamentals, flashcard-style. However, I only recently discovered that you can actually create custom, timed quizzes for students to complete. Best of all, they don’t need a login. You can create a quiz with all the parameters you need, post a link, and then students can share a report back via a similarly unique link.
To create your quiz, scroll down to the very bottom of the Exercises page and click Exercise Customizer. From there, you can create your own custom version of any of the exercises you’ve seen on the site. For my first, I created a quiz that would cover major and minor scales, up to four sharps and flats, treble and bass clef, and that would give students ten minutes to complete ten scales. (This may seem fast, but it’s actually very generous.)
Once you’ve selected the customizations you want, you can copy the link at the bottom of the customizer. That link will always be set to those customizations—you can’t change them without changing the link—so make sure you’ve got everything the way you want it. From there, students can click the link and immediately start the exercise you’ve created.
At the end, students get the opportunity to create and “sign” a report by typing their name. That will generate a unique code and link that you can use to check their score. That’s it! I made a quick screencast for my students, but I doubt they’ll need it.
To get this integrated into my gradebook, I’ve set these up as one-question short-answer quizzes on Blackboard. Each quiz has a link to the MusicTheory.net exercise and a space for the student to enter their report link. I’ll still need to open each link and copy the grades manually, but compared to grading 500 scales every day for a week, I’ll call it an improvement.
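To shave a little time off that manual step, a tiny script can at least open every report link in a browser tab so the scores can be read off quickly. This is a hypothetical helper: it assumes you’ve collected the submissions into a CSV with `name` and `report_link` columns, which is not Blackboard’s native download format, so adjust the column names to whatever you actually export.

```python
import csv
import webbrowser

def open_reports(csv_path, limit=None):
    """Open each student's report link in a new browser tab.

    Returns the number of links opened. Rows without a usable
    link are skipped rather than treated as errors.
    """
    opened = 0
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            link = (row.get("report_link") or "").strip()
            if link.startswith("http"):
                webbrowser.open_new_tab(link)
                opened += 1
                # Optional cap, useful for grading in batches.
                if limit is not None and opened >= limit:
                    break
    return opened
```

The grades themselves still get typed into the gradebook by hand; this just saves the clicking.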
Another nice benefit of this system is that it allows students to take the quiz as many times as they need. As long as they continue using my link, they’ll continue to get the same parameters I’ve set up. As before, I don’t mind at all that they can practice as much as they want before doing the one they submit for their homework.
In some ways this is less good than my daily written quizzes on campus. Notably, students aren’t getting practice with the mechanics of writing notes and accidentals on a staff, which is far from trivial for students who are new to all this. On the other hand, I can give a more thorough quiz that students can practice more before taking, and that I can give more regularly without blowing up my grading schedule. This system also dramatically shortens the feedback loop, as students know as soon as they submit a question whether it was correct or incorrect. So students get more practice, at lower stakes, with less pressure and immediate feedback.
Best of all, this is totally free. I’m aware of premium platforms like Musition that allow for even more robust testing with greater flexibility, but MusicTheory.net gets me where I need to be for this particular task, and it’s completely free. This is yet another tool I’ve incorporated into remote teaching that I intend to continue using after we return to campus.
I initially wrote and presented this paper about my composition Music for Social Distancing for the 2020 Aspen Composers Conference.1 If you’re interested in the work that is the subject of this paper, you can get the score and performance information on my site. The presentation included this performance of the work by Wichita State University’s Happening Now new music ensemble, with a little help from my friends.
In March 2020, I and countless other musicians across the United States were asked to stay in our homes and limit our personal interactions as much as possible to limit the spread of the novel coronavirus. This impacted nearly every music presenter, performer, venue, and school in many ways. As many interpersonal interactions—meetings, lessons, and even parties—migrated to videoconference platforms like Zoom and Skype, it quickly became obvious that performing traditional repertoire would not be feasible over these platforms for a variety of reasons I will discuss momentarily. Even beyond the common practice period, more recent, flexible compositions pose similar challenges to remote performance. In this presentation, I will discuss some of the issues associated with remote ensemble performance, the compositional techniques I used in my work Music for Social Distancing to account for those issues, and the experiences with various readings and performances of the work in the months since I first published it.
The term “social distancing” was quickly and widely adopted by health and policy experts in the earliest days of the pandemic. The World Health Organization (WHO) and US Centers for Disease Control and Prevention (CDC) recommended that individuals maintain a minimum of six feet of space between themselves and others outside their household. However, it quickly became apparent that “social distancing” might not be the most apt description of this recommendation, and some adopted the phrase “physical distancing” instead. In a WHO press conference, Dr. Maria Van Kerkhove elaborated:
… [K]eeping the physical distance from people so that we can prevent the virus from transferring to one another, that’s absolutely essential. But it doesn’t mean that socially we have to disconnect from our loved ones, from our family. Technology right now has advanced so greatly that we can keep connected in many ways without actually physically being in the same room or physically in the same space with people. … So find ways to do that, find ways through the internet and through different social media to remain connected because your mental health going through this is just as important as your physical health.2
This idea, maintaining social bonds in spite of physical isolation, became very important to me as I was working on this piece, teaching lessons and classes remotely, and imagining what music could look like under these restrictions. I will continue to use the expression “social distancing” here, as it is more familiar, but I intend it to mean physical and geographic separation rather than social isolation.
When institutions started canceling concerts and universities and conservatories started sending students home, I and many of my colleagues scrambled to find the best ways to move our performances, rehearsals, and classes to the Internet. A popular question in online music forums was “What application do I need to use to have my rehearsal online?” The obvious assumption was that such a thing existed. It turns out there wasn’t, isn’t, and likely won’t be any time soon, due to technical limitations.
One well-explored solution is that of so-called “virtual ensembles”, exemplified and popularized by Eric Whitacre’s Lux Aurumque virtual choir video in 2010. Virtual ensembles create a fixed recording and require a reasonably high level of planning, editing, and technical expertise. As recordings, they are fixed and do not unfold in realtime as live performances do. Additionally, unlike most classical music recordings, they are not created in a way that allows performers to listen and react to one another, because the performers are not in the same room at the same time. As admirable and impressive as virtual ensemble recordings are, I did not find them to be a very good substitute for the things that I missed most from the performances that I loved.
One particular thread in an online music teaching forum got stuck firmly in my mind. It was devoted to a question about how to do remote chamber music coaching, rehearsing, and performance. There were a number of suggestions that all centered around making multitrack virtual-ensemble-style recordings. Each time some version of this was suggested, the asker promptly replied “that is not chamber music!” I want to examine what chamber music is, and why I felt that remote chamber music performance required the creation of a new kind of repertoire.
In the simplest, most literal sense, chamber music is defined by the small size of the performing ensemble. It is the implications of that small size that make chamber music worth distinguishing from larger works. Without a conductor, musicians bear greater individual responsibility for shaping the performance, listening, and reacting to one another, encouraging what James McCalla in the preface to his Twentieth-Century Chamber Music describes as “individuality as an essential part of [their] collectiveness”.3 This individual-collective dichotomy is what I find most appealing about chamber music. It is what allows chamber music to be subtle, and intimate, and exciting. However, these same features are also the first to falter when attempting remote performance over the Internet.
Reacting to sounds of other musicians in chamber music relies on the ability to hear nearly instantly every other musician in the ensemble. In a chamber ensemble setting where performers are positioned within a few feet of one another, the delay from the speed of sound traveling through the air is negligible, just a few milliseconds. Take those same players and move them to different locations connected over the popular Zoom videoconferencing platform, and the time it takes a player’s sound to reach their colleagues’ ears is likely in the hundreds of milliseconds, roughly equivalent to being spaced hundreds of feet apart. Even using specially engineered, low-latency solutions, it is difficult to achieve a level of precision most musicians would be comfortable with. The physical limitations of converting an audio signal to digital information, transmitting that over several network layers, and converting it back to audio will likely always be too slow for live performance between musicians performing together from their respective homes. It’s possible that performances could be coordinated by a synchronized click track, but that precludes flexibility and spontaneity in many of the same ways as virtual ensembles.
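To put rough numbers on this comparison, here is a small sketch (my own illustration, not from the paper) that converts a latency figure into the distance sound would travel through the air in the same amount of time, assuming a speed of sound of roughly 343 m/s:

```python
# Convert network latency into an equivalent "acoustic distance":
# how far apart two players would have to stand for sound through
# the air to take just as long to arrive.
SPEED_OF_SOUND_M_PER_S = 343  # approximate, at room temperature

def latency_to_feet(latency_ms):
    """Distance in feet that sound travels in `latency_ms` milliseconds."""
    meters = SPEED_OF_SOUND_M_PER_S * (latency_ms / 1000)
    return meters * 3.28084  # feet per meter

# A few feet of separation on stage is only a handful of milliseconds...
print(round(latency_to_feet(5)))    # → 6 (feet)
# ...while a typical videoconference delay is like playing from
# across a parking lot.
print(round(latency_to_feet(150)))  # → 169 (feet)
```

The 150 ms figure is a plausible round-trip estimate for consumer videoconferencing, not a measured value for any particular platform.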
While latency is the largest and most prominent limitation for remote musical performance, it is not the only one. Services like Zoom, Skype, Google Meet, and others were designed for verbal conversations, which have a very different sound profile and performance characteristics from music. In addition to being more tolerant of latency, spoken communication does not have the same dynamic range or frequency range, and unlike music, conversations rarely have more than one or two simultaneous contributors. Because of these differences, videoconference platforms often compress the dynamic range and attenuate high and low frequencies of a music performance. They may also identify very soft sounds as noise and attempt to remove them. When more than two players are sounding at the same time, the platform may decide to silence other audio feeds. Different platforms offer varying levels of control over the categories and degrees of audio processing, but none are built with the goals of music-making in mind.
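To illustrate one of these processes, here is a toy dynamic-range compressor. This is my own sketch of the general idea, not Zoom’s (or any platform’s) actual algorithm, but it shows how scaling down levels above a threshold flattens a carefully shaped crescendo:

```python
def compress(sample, threshold=0.5, ratio=4.0):
    """Toy dynamic-range compressor for a single sample in [-1, 1].

    Levels above the threshold are scaled down by the ratio, the way
    speech-oriented platforms tame loud peaks. Real systems also use
    attack/release smoothing and lookahead; this is the bare idea only.
    """
    sign = 1.0 if sample >= 0 else -1.0
    level = abs(sample)
    if level <= threshold:
        return sample  # quiet material passes through unchanged
    return sign * (threshold + (level - threshold) / ratio)

# A fortissimo peak of 0.9 comes out barely louder than the 0.5
# threshold, so much of the ensemble's dynamic contrast is lost.
```

With these settings, an input of 0.9 is reduced to about 0.6, while anything at or below 0.5 is untouched.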
Working within extreme limitations often requires works to have a degree of flexibility. However, many popular examples of flexibly scored music, such as Terry Riley’s In C and Julius Eastman’s Stay On It, are open-ended in certain parameters like texture, form, duration, and orchestration, but still require an extremely high degree of rhythmic precision (relative to a common pulse) and ask each player to be acutely aware of every other player’s sound in real time.
I wrote Music for Social Distancing for unspecified remote ensemble, four or more players and optional conductor, in March, with the intention of making it available to school ensembles who were starting remote instruction and other ensembles struggling to find something to do with what remained of their season. The work calls for performers to play from home over a videoconference, and the composite of all the players is streamed live to an audience. In this medium, I chose to focus primarily on the issue of audio delay, which I see as the most obvious barrier to remote performance. I also wanted to make sure that players were truly making chamber music by listening and reacting, not just contributing their discrete layer at roughly the same time as the rest of the ensemble.
The opening and closing sections of the work move forward through latency and listening. All four parts play the same lines (with octave adjustments as needed). Each note or two- to three-note gesture is labeled as being initiated by one of the four parts, and the other players only move to that gesture after they hear someone else play it. For example, the piece opens with an F which is initiated by Player 1. The other three players wait to play the F until they hear it from Player 1. All players sustain this until Player 3 moves down a third to D, at which point the other three join the new unison. The result is a kind of floating cloud of canon-like imitations that drift based on who the leader is, how clearly they are heard by the other players, and what the player-to-player latency is at that moment.
The next section, mm. 6-9, allows each player to decide the pace at which they move through a succession of pitches over a given period of time. For example, in m. 6, players are given eight to ten seconds to move through six pitches. The result is somewhat similar to the opening section; the difference is that individuals have the opportunity to diverge from one another a bit further. The beginning and ending of each one-measure phrase will be relatively stable, but the middle is a point of maximum divergence, as some players may choose to start quickly and end slowly, while others choose the opposite and still others do something in between. This gives the individual players yet more control over the unfolding of the work, and greater responsibility for listening and reacting.
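One way to picture this divergence is to simulate it. The pacing model below is purely hypothetical (the score leaves the pacing to the players; nothing here is specified in the piece): each simulated player picks a total duration inside the allowed window, then divides it into randomly weighted gaps between pitches.

```python
import random

def player_onsets(n_pitches=6, window=(8.0, 10.0), seed=None):
    """Onset times (in seconds) for one player's pass through a phrase.

    Hypothetical pacing model: choose a total duration inside the
    allowed window, then split it into randomly weighted gaps, so each
    player speeds up and slows down differently through the same pitches.
    """
    rng = random.Random(seed)
    total = rng.uniform(*window)
    gaps = [rng.random() for _ in range(n_pitches - 1)]
    scale = total / sum(gaps)
    onsets = [0.0]
    for g in gaps:
        onsets.append(onsets[-1] + g * scale)
    return onsets

# Two simulated players start and end roughly together,
# but their middles diverge.
for player in range(2):
    print([round(t, 1) for t in player_onsets(seed=player)])
```

Running this a few times shows exactly the shape described above: stable phrase boundaries with maximum divergence in the middle.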
After a short transition (mm. 10-11) which refers to the latency-driven idea of the opening section (this time without the canon-like imitation), I present the most conventional chamber music texture of the work: a chorale. In this remote performance setting, each chord is conducted visually. Since there is sometimes some drift in audio-video synchronization of the videoconference, this can show some other implications of the network and software environments of the performance. The not-quite synchrony of this section concludes with the loudest section of the work, which is also the moment where players are least likely to hear one another and the least reliant on aural cues.
The next passage is a brief variation on the time-bracketed phrases of mm. 6-9. In it, players choose a moment within a time window to quickly swell from soft to loud and back to soft. My goal in this section is to highlight any quirks of the audio processing happening within the videoconference application. Fast dynamic changes can trip up compressors and cause them to behave strangely. Receiving a suddenly loud sound from one feed might cause the conference to drop audio from another feed entirely. The uncertainty here is coming from a combination of player choice and unpredictable audio processing algorithms. I like to think of the ensemble as playing Zoom like a musical instrument during these moments. And the piece concludes with a textural recapitulation of the follow-the-changing-leader idea which opens the piece.
The techniques I use are central to my conception of the piece. They are musical solutions to technical problems, which make Music for Social Distancing not simply tolerant of network delay, but in some ways reliant on it. This piece would not work the same way for an ensemble of musicians all sitting in the same room.
Music for Social Distancing was written in response to the physical isolation that musicians at all levels were dealing with in their own ways. Because of the flexible nature of so many of the key elements of the work, the technical demands are relatively low, which has allowed the work to be performed by high school, university, and professional musicians. I want to briefly discuss some of the reception that it received from performers.
The first four groups4 to rehearse and perform Music for Social Distancing all did so within about a month of one another. I was pleasantly surprised at the number of groups that were interested in taking on such an unusual experiment. The first thing that each of the ensemble directors told me was some version of “We needed this” or “It felt so good to make music together again.” This to me is the biggest indicator that the work achieves at least some of the goals that I set out regarding an expression of chamber music’s collaborative spontaneity over a physical distance.
In addition to this positive feedback, I also heard about—and experienced in my own performance—a few notable challenges that could be addressed either technologically or through further exploration of compositional techniques.
All of the performances that I’m aware of so far have used Zoom as the videoconference platform, and all of them have struggled to varying degrees with Zoom’s audio processing. As much as I tried to account for this with the composition techniques I described earlier, there are still some serious limitations, particularly around the way Zoom selects individuals to be the “main” speaker. Certain timbres or frequencies seem to regularly push out others from the mix in a way that can’t be mitigated by any of the currently available audio settings in the application. Even with Zoom’s “Original Sound” feature enabled, there is still a certain amount of echo cancelation, dynamic compression, and data compression that is inescapable.5
Relatedly, some particularly loud instruments struggle to play within the dynamic range that is suitable for built-in microphones on most laptops. In terms of objective audio quality, having all players use specialized audio equipment could improve these concerns, but at the same time, I find the uneven sounds to have a pleasing verisimilitude that reflects how these technologies are used in their intended contexts. Having better ways to control audio clipping for louder instruments and better microphone options for mobile devices could dramatically improve the overall audio quality of the performance.
The last challenge with the work goes well beyond musical performance but is worth mentioning in this presentation because of the number of student musicians performing this work. It seems unavoidable to me that a performance of a work that requires certain computers, audio hardware, or Internet connectivity will shine a bright light on any discrepancies in technology access. Performing Music for Social Distancing is quite challenging in ensembles where some players are using devices with desktop operating systems, and others are using mobile devices or Chromebooks. It would be deeply unjust to tell a group of students that a particular rehearsal or performance opportunity is only available to students who have an instrument at home, a laptop that meets x requirements, and a broadband Internet connection of at least y megabits per second.
Performances of Music for Social Distancing all dealt with at least some of these issues, and yet all were successful on the whole. Some of these concerns may improve over time—such as software options or broadband availability—but others will require yet more creative solutions both in and out of my control as the composer.
I am presenting this around four months after social distancing became a shared social and cultural experience, and it feels at this moment as though it may be here for quite a while, as intimidating and depressing as that might be. Music for Social Distancing was an experiment in making music remotely as a group. The compositional techniques that I have described today could be explored much more deeply, and there are many more possibilities. I hope that I can contribute music that is thoughtfully constructed for these times. I don’t want to give up on chamber music simply because we can’t make it in the same ways we are used to. Rather than forcing traditional compositional techniques, textures, and styles to fit into the limitations of socially distant performance, I want to take what is available and make music with it.
As a shy midwesterner, I am very uncomfortable with writing a few thousand words about my own music, and I feel an appropriate amount of shame for posting it here on the blog. Ope. ↩
Plano West Senior High School String Quartet, Ryan Ross, director; Susquehanna University Symphony Orchestra, Jordan Smith, director; a Millikin University faculty mixed chamber ensemble, Corey Seapey, director; and Lone Star Youth Orchestra, Kevin Pearce, director. ↩
As of late August 2020, Zoom has announced that they will be releasing a new “Advanced Audio” feature which eliminates even the echo cancelation and dynamic compression. The performance presented at the conference used Zoom for video with Cleanfeed for realtime(ish) audio to avoid compression. ↩
UPDATE 13 Jan 2021: OBS Studio 26.1 for Mac has since added a built-in virtual webcam, which means you no longer need to install the extra plugin. If you have already set up OBS with the plugin, you must remove the plugin.
For better or worse, the piano keyboard informs a lot of my teaching, and having a way to show a keyboard in realtime on Zoom has been something I’ve been trying to work out since we were exiled from campus back in March. Most of the pieces needed to make this happen have existed for a while, but it’s only in the last few weeks and months that they’ve been updated to make connecting them together relatively1 easy and reliable.
Here’s our end goal.
Live video from my webcam, plus an interactive animated keyboard that responds to my connected MIDI keyboard.
Now that we have our measurable learning outcome, let’s take a look at the next part of the syllabus.
Mac (Windows will also work, but I’m not set up to demo it.)
USB MIDI keyboard (mine: Yamaha P-115, for the price and size, I like this one.)
I’m going to describe this setup for the Mac, but it should work more-or-less the same way on Windows. If you’re following along on Windows, you’ll need this version of the OBS virtual camera plugin instead. (Update: the virtual camera is now built into OBS on both Windows and Mac, so no plugin is needed.)
I’ll give you a minute to download and install all that stuff. After you’ve installed the virtual camera plugin, you may have to restart Zoom or OBS Studio once or twice to make sure they’re seeing the new virtual camera.
We’re going to use OBS Studio to combine the camera feed and the VMPK window into a single video stream. Then, we’re going to send that stream to Zoom by creating a virtual3 camera. Finally, in Zoom, we’ll select that fake camera instead of the one plugged into the computer. This will take a little bit of setup, but everything is saved automatically, so you should only have to go through all these steps the first time.
Questions? No? Alright. Here we go.
Set up VMPK
VMPK should work pretty much right out of the box. It’s not beautiful, but it’s very functional. In preferences (vmpk > Preferences) you can change the number of keys and the colors that it uses. I like to stick with plain ol’ blue, but you can do whatever you like, including a multicolor setup. I have mine set to be 61 keys so they don’t get too small to see over Zoom. You may need to adjust the “Base Octave” in VMPK to get the register you expect. Mine is set to 3.
Test your keyboard to make sure it’s lighting up the keys you expect in VMPK. If you don’t see anything change when you play the keyboard, open Edit > MIDI Setup and make sure you have checked Enable MIDI Input and MIDI Omni Mode. That will ensure that VMPK will accept any incoming MIDI signal from any connected MIDI device. We’re done messing with VMPK for now, but leave it open in the background. To use it for our live presentation, we can put other windows on top of it, but we can’t minimize or close it.
Create a “Scene” in OBS Studio
Create a Scene that combines your camera and VMPK. There’s a lot going on in OBS Studio. Don’t pass out. First, we’ll need to create a Scene by clicking the + in the Scenes panel in the bottom left. Name it whatever you like. Mine is called “Piano keyboard”. Have I mentioned that I am a creative professional?
Add your camera
With your newly created Scene selected, click the + in the Sources panel. We want to add a Video Capture Device. Select Create new and call it something clever like “webcam”. I’m going to call my Logitech C920 “C920”. In the window that comes up next select your camera (Device) and resolution (Preset). For my camera, I selected “high”, but you might see a different set of options depending on your camera. Click OK.
Your Source should now show up in the Sources list for the Scene you created in step 2. If for some reason it isn’t, click the eye icon next to it in the Sources panel. If your video output and your camera are different resolutions, you may need to click the camera feed in the Program panel (that’s where your camera’s picture should be) and resize it using the red transform controls (little red squares in the corner).
Add the VMPK window
Since VMPK is still running, we can grab a picture of it. Think of it like a continuously updating screenshot. In the same Sources panel where our camera feed is listed, click + again to add a new source. This time, we want a Window Capture. Select Create New and give it a name. We’re going to be capturing VMPK, so I’m going to call it “Steve”. Just kidding. I’m going to call it “VMPK”. In the Properties window that pops up, select “[Virtual MIDI Piano Keyboard] Virtual MIDI Piano Keyboard” and click OK.
You should see the VMPK window on top of your camera feed now, and it should be listed above your camera feed in the Sources panel. Use the transform controls to resize the VMPK window and drag it so that the lowest and highest keys fit within the video frame. If you want to position it in the same place as mine (at the bottom), drag it down so that it “snaps” in place. It’s ok to let the non-piano-key parts of the window hang off the left and right.
Now, we want to crop out all the settings nonsense from the VMPK window that’s above the piano keys. To do that hold down the option key (alt on Windows) while you drag the transform handle at the top of the VMPK capture in the Program panel. While you have the option key down, the transform controls will crop instead of scale the VMPK capture. Now your OBS program should show exactly what you want your students to see in Zoom.
I know this took a lot of effort, but remember that everything we just did is already saved. If you plan to use OBS for other things in the future, you can save this setup by naming it something memorable in Profile > Rename. We’re done with the hard stuff, I promise.
Start the virtual camera in OBS
To start the built-in virtual camera in OBS, look for the button on the control palette (bottom right) that says Start Virtual Camera. You can assign a keyboard shortcut to this if you like by going to OBS > Preferences > Hotkeys and filling in the blanks next to “Start Virtual Camera” and “Stop Virtual Camera”. I use the same keyboard shortcut ⇧⌘V for both.
If you’re on an older version of OBS and installed the virtual camera plugin instead, you can start it by going to the Tools menu and selecting Start Virtual Camera. If you don’t see it there, that means the plugin isn’t installed. If you think you’ve already installed it, try restarting OBS and/or Zoom.
Show your work in Zoom
Once you’ve got the virtual camera started, you can switch over to Zoom and start (or join) a meeting. Start your video in Zoom and use the arrow on the video button to select OBS Virtual Camera. If you don’t see OBS Virtual Camera as an option, you may need to update Zoom by going to zoom.us in your menu bar and selecting Check for Updates. If you’re already up-to-date, try restarting Zoom.
At this point you should see your camera feed showing up in Zoom. If you see your image as mirrored, don’t worry. It only looks that way to you. Your meeting attendees will see everything normally, and it will record normally as well. If that freaks you out a little, you can go to Zoom > Preferences > Video and uncheck Mirror my video. This will make your piano look normal to you, but it will make it a little awkward to fix your hair by looking at Zoom until you get used to it.
To recap: Your OBS Profile, Scene, and Camera/Window Capture will all be saved in OBS Studio automatically. You don’t have to click “Save” or anything. The next time you have class (or just want to show your friends how cool you are) just open VMPK, OBS, and Zoom; start the virtual camera (Controls palette > Start Virtual Camera); and then launch or join your Zoom session.
In your Zoom session, this isn’t going to explode your feed to fullscreen the way a screenshare does, so you may want to suggest to your attendees that they “pin” your video by clicking the three-dots in the top right corner of your video and selecting “Pin”. This also won’t share your audio. Audio routing is outside the scope of this particular post, but there are a number of ways to take care of this.4 Another small challenge is that if you are recording your sessions and don’t “pin” your own video in Zoom, you and your fancypants piano keyboard will just be one tiny part of the Brady Bunch grid.5
My favorite thing about this setup, in addition to it being free and relatively painless (especially after the first time) to set up, is that it represents a small but meaningful example of something that works well in a Zoom class but is hard or impossible to replicate in meatspace.
If you try this out with your class, I’d love to hear how it goes!
EDIT, 7 September 2020: I’ve heard from a few folks who have gotten everything working except live input from a hardware keyboard into VMPK. There are a lot of variables that could possibly cause this issue, but I think the most common one is the selection of a different “MIDI IN Driver” in the VMPK settings. In VMPK, if you go to Edit > MIDI Connections, you’ll get a dialog with a few options to fiddle with. I think for most people, you’ll need to check Enable MIDI Input, MIDI Omni Mode, and set your MIDI IN Driver to CoreMIDI on a Mac, or the equivalent on Windows. Here’s what my settings look like.
In this screenshot “Digital Piano” is the name of my hardware keyboard. (Not very creative, but descriptive.) Thanks to Danielle and others who asked!
Everything is relative. Buckle up, Buttercup. We’re getting nerdy. ↩
At the time of this writing (August 2020), please don’t buy a webcam from Amazon. 3rd-party Amazon sellers are price-gouging for all the work-from-home. If you really want to upgrade to a nice webcam, you’re better off watching places like B&H or Best Buy for when they get them in stock. ↩
Short version: open GarageBand or any other app that will make sound and share audio how you normally do. I use Loopback from Rogue Amoeba, but you can also use the built-in Zoom audio sharing features in Share Screen > Advanced > Music or Computer Sound Only. Loopback is $100 and worth every penny, but I totally get that not everyone is prepared to drop that kind of money on software that may only be used for Zoom music class. ↩
A possible workaround is to record your video directly from OBS, but then you’re recording two videos per class, which will require both some editing to assemble with the Zoom video (way too much time and effort to do each day) and a boatload of space for those video files. Pinning your video will help, but that makes it a little harder to see your attendees. As ever, we live in a world of compromise. ↩
We’ve all been there. You have a carefully planned lesson. Things are going great. Your students were on time to the Zoom; they have their cameras on so you can see them smiling and nodding. They have good questions. You’re killing this lesson. And then you play your musical example. Crickets. “I don’t hear anything. Does anyone else?” And just like that, it all falls apart, and you lose ten minutes to troubleshooting. Nobody wants that.
One of the most challenging things about teaching music on a Zoom1 conference is playing recordings for everyone. Audio over Zoom is going to be compressed and likely out of sync, that is, if you can get audio from where it lives into Zoom to begin with. Showing YouTube videos is worse, because you’re getting another layer of compression with the video, and it’s just a mess if anyone (including the presenter) starts talking over the video.
The solution to these and other problems that I’ve really had fun with is called Watch2gether, a freemium2 service that allows YouTube video playback to be synchronized across many different users.
It’s astonishingly simple and reliable. As a free user, I can create a “room”, where I can add YouTube videos to a playlist. I can give a link to that playlist to my students, and then when I press play on a video, it starts playing for everyone else, directly from YouTube. Need to start at 24:13? Need to scrub back a few seconds to catch that cadence again? All of my students’ video players will hop back too. Same with jumping to the next or previous video in the playlist. Rooms also include a text chat feature, but I’ve used it pretty minimally, since I usually have a Zoom conference running alongside it.
I used Watch2gether with my graduate seminar last spring when they presented their research at the end of the term. Since we were remote, I asked them all to record their presentations and upload them to YouTube. Then we all watched the presentations together using Watch2gether and moved to a Q&A afterward. This meant that our discussions weren’t at the mercy of any presenter’s connection speed, and nobody had to struggle with getting their musical examples to work. (Also, it was lower pressure for the presenters, since they could just re-record or edit as needed.) It’s possible to save a room or a playlist to re-use throughout a course as well.
One thing to look out for is that by default, anyone in the class can take control of the playlist. It could be a fun class project to send them out to YouTube and find examples of things to add to a playlist, but you may not want them putting the latest Cardi B tune in your playlist of Renaissance polyphony. So take note of the playlist settings (which can be changed at any time) if you decide to give it a go with your students.
I have only tried Watch2gether with YouTube, but the company also advertises that it works with other platforms, including Vimeo, Dailymotion, and Soundcloud. Watch2gether is free to use for any of the above features, and for $3.49/mo. you can remove ads from the site for any room that you create, which I think is pretty reasonable. It’s not a perfect service—I would love to include Spotify or uploaded audio/video in a room—but it’s very handy and reliable. Super useful for music and performing arts classes that will be going remote.
In my head, and in many emails, I say/write it as ZooOOoom. It’s fun. Or at least it’s what amounts for fun in 2020. I’m a simple man. ↩
I’m going to start writing an occasional post here about a tool I’m using for my online teaching on the off-chance that someone might find it useful. This weekend I’m putting the finishing touches on the semester’s syllabi1. Most of my teaching for the foreseeable future2 will be online, and I’m still planning to hold student hours (the artist formerly known as “office hours”) in some form.
Last spring, after being exiled from campus, I actually ended up having more one-on-one interactions with my students than the rest of the year combined. One of the tools that helped me do that was Calendly, a service that allows anyone to offer bookable timeslots for sign-up.
In Calendly, I say when I’m available, how long the individual timeslots are, and it generates a link that I can send to my students to sign up for one-on-one meetings. This allows me to offer a lot more times than I ordinarily would be able to offer with in-person office hours, since I’m probably at my computer whether I’m at home or on campus during the day. Students with busy schedules have more options, and because they can also meet from home, there’s less friction in them getting the help they need.
A few other useful features: when offering your availability, you can also say how far in advance students need to sign up. I set this as narrowly as I can without allowing it to become a burden to always be at my computer; for me, that means asking them to book at least two hours ahead of time. Second, you can put in an extra question that students have to answer when signing up, such as “What course are you enrolled in?” or “What would you like to discuss?”. I find that this is as much a benefit to them as it is to me; crafting a specific question might require reviewing materials, which might lead to independent discovery and learning.
The last feature I’ll discuss here is calendar integration. By connecting a Google or Exchange calendar (we use Exchange at WSU), Calendly can automatically put new events on my calendar, so they show up in a place I’m probably already looking at regularly.
Calendly is, like seemingly everything else useful, free to start with paid upgrades for more features. I have considered the $72/yr.3 upgrade to Premium, the middle tier, for Zoom integration, cancelation policies, text message reminders, etc., but the free tier has worked well so far. And if I’m going to spend $72 on a calendar service, I would seriously consider Doodle Pro, the upgraded version of the calendar polling service, which I already use. Doodle Pro includes a “Bookable Calendar” feature which is very similar to Calendly.
To be fair, it’s also the first touches in some instances. ↩
“Foreseeable future” meaning about seven or eight minutes from now. ↩