I wrote a piece from social distance, for social distance, for open instrumentation. In a lot of classical music social media, people have been wondering how to do live music remotely. The speed of light is (for the moment) an insurmountable limitation preventing us from having our rehearsals and concerts over Zoom, at least for most repertoire.
With that in mind, I wanted to write this piece that would work in this teleconference environment. Instead of finding workarounds for the network latency, I made it an important part of the piece. In some ways, it’s a piece about network latency. You’re right, that’s not a very good tagline. I’ll keep workshopping it.
Speaking of workshopping it, there is a lot I can't really know about how this piece will work. There are a lot of network, software, and hardware variables that I know I didn't account for, as much as I tried. Because of that, I'm posting the materials on my site as a "beta". Anybody who wants to can download the materials for free ("buy" it for $0); I just ask that if you decide to give it a performance or a reading, you let me know about it so I can learn how it goes and make the piece better.
Over the last three weeks, I've been asked a lot of questions about audio gear by my friends and colleagues. I don't think I'm an expert on this, but I know enough to be dangerous (i.e., spend other people's money). Teaching lessons over video chat is hard and weird and different and lower fidelity in almost every way compared to teaching in person. And the thing I'm asked most often is "what microphone should I buy?". My answer to this question has changed somewhat.
First, the thing that hasn’t changed: You don’t need to buy a mic at all. You’ve already spent a lot of money on a lot of things, and there is a lot of uncertainty surrounding the future of the economy and institutional finances. Also, like digital cameras, the ones built into your phone and laptop have been getting better and better, so cheap external microphones probably aren’t a ton better than what you already have (though a benefit is that you can control the placement a bit more if it’s not attached to a computer or phone). With that important caveat out of the way, here are my recommendations.
Initially, I was recommending the Audio-Technica ATR-2100X. It’s a dynamic mic very similar to a Shure SM57/58. It has the benefit of working both as a USB microphone (it can plug directly into a computer) and as a traditional analog microphone (it can plug into standard professional audio gear). That means it can grow with you. It also happens to be an exceptionally good vocal mic should you decide that this is the time to start your hit true crime podcast. I’ve had its predecessor (the ATR-2100) for years, and it works great. It costs around $100 on Amazon—if you can find it—so it’s not exactly cheap, but still on the low end when it comes to microphones. This is a really good mic, and if you picked it up on my recommendation, I stand by it. Since then, though, I’ve had another idea that I think could be even more useful.
Ideally, if you’re going to spend money on something to help with your online remote lessons—and again, no one should feel that obligation—an even more useful option would be something that will continue to serve you in other situations as well. One thing that we all deal with is making recordings of rehearsals and concerts when we don’t want to lug a ton of gear or can’t reasonably sneak a laptop into a dark concert hall. For that reason, I think an even better purchase might be a small portable recorder. I really like the ones from Zoom (not the video chat service!). I’ve had a Zoom H4n for over a decade and it’s still going strong. For almost the same price as the ATR–2100X, you can pick up a new Zoom H1n portable recorder. These are amazing little multitools because they’re tiny, have really good microphones for the money, and they can work as a USB microphone when plugged into a computer or as standalone recorders for a time when we can all enjoy one another’s company again.
I’m sure I’ll have more thoughts on this stuff later. As I’ve said a bunch of times this week, every sentence I say or write these days has an implied “for now” at the end of it. These are my thoughts (for now).
If you don’t mind the older-style mini-USB port, you might be able to find a good deal on this model if it’s still in stock anywhere. The sound and build quality are identical. ↩
In the last few days, I’ve been to a lot of meetings and participated in a lot of online discourse about moving face-to-face classes online. If I could convince my peers and colleagues of one thing, it’s this: an online version of your class should not try to imitate a face-to-face version of the same class. Use the medium for what it’s good at.
I used to teach a lot of online classes at a previous institution. Much of the time, I was teaching a campus and online version of the same course, at the same time, and roughly at the same pace. While the concepts and outcomes were the same, the methods and assessments were different.
Here are a few things to consider:
If you have a class with a lot of 50- to 75-minute lectures, maybe you don’t really need to replicate this same thing. You’d be surprised at how short you can make a tightly scripted video that covers the same material. You don’t need to slow down or repeat yourself as much if students can pause, cross-reference, and rewatch. Perhaps even better, you might use a pre-existing video and focus your time and energy on another area of the course. For my music theory colleagues, I highly recommend Seth Monahan’s excellent YouTube channel.
If you have discussions, these can be even harder to manage in online videoconferences than they are in person. Even with video, the visual cues that a person is winding down or ready to jump in aren’t as apparent. Consider a text-chat platform like Slack that allows realtime conversation that is threaded. If the face-to-face experience is important to you, consider ways to make the discussion group smaller. Perhaps divide the class in two and have the same discussion twice (maybe half as often or half as long) so that each person can contribute more. Or maybe have the discussions run concurrently (Zoom breakout rooms). It’s possible that your students could have a thoughtful, salient, and rigorous discussion without your calming, Socratic presence.
Consider assignments. Focus on your outcomes. What skills and content are you imparting? Maybe your students need more and smaller assignments when they’re working on their own. Maybe they need larger, scaffolded assignments. If you’re worried about academic honesty when all assignments are digital and instantly, infinitely copyable, consider making your assignments more open-ended and creative. Instead of dictating a melody, write a melody for another student to dictate. Instead of analyzing a phrase of music, find a repertoire example that expresses the theoretical model. These kinds of assignments require students to think independently in a way that corresponds to the independence of a remote learning environment. When you try these new kinds of assignments, be very clear about what you’re asking students to do. In your campus class, you probably spend a couple of minutes talking through a homework assignment before students go off to work on it. You might be surprised at how much direction your students take from those few sentences. For remote classes, clarity of expectations is something you might have to work harder at than you’re used to. As a very small example, I end almost every assignment I give on Blackboard with a “Deliverables” heading in which I list exactly what files I expect students to submit and in what formats, and I think it helps a lot.
Keep in touch. Your campus students are used to seeing you around. You might say hello or hear them perform. You might see them at lunch. The worst part of my remote teaching experience was the way it dehumanized us. We forget that the person sending these emails and posting these files is a person. Post regular updates with your face and voice just to say hello and be a person. We have a head start in this transition since we already know one another face-to-face, and that should make it easier to keep up. Anytime you’re on the phone or sending an email or posting an announcement or writing grade feedback, remember that the person writing it is a person, not an anonymous computer file, and encourage your students to do the same. I’m not saying you need to become the Cool Parent type of professor (unless that’s your thing). Just be a person and give your students space to do the same. Opportunities for doing this are built into the campus experience, but you might have to go out of your way a bit more to bring it to your remote class.
Teaching remotely can be just as fun, rewarding, student-centered, and rigorous as teaching face-to-face. We still need to keep in mind (myself as much as anyone) that these are different things. Some things that work great in one format won’t work at all in another and vice versa. Use each instrument for what it’s best at. Don’t try to play the viola like it’s a clarinet.
I know it might sound like more work, but a good plan and a good script will save you time in the long run, especially where captioning is concerned. Upload your video to YouTube, paste in the script, and you will have a much more accessible lesson. This avoids the pitfalls of YouTube’s autocaptioner and the tedium of correcting it. ↩
A common thought experiment in studying music composition is to develop a new system of notation. Musicians generally acknowledge that our system of staff notation is imperfect, and imagining alternatives is a way of focusing on the musical parameters that you care about most, rather than the ones that are the easiest to identify in a score.
I have a student in my Theory 1 class at WSU who is blind, so I’ve been learning a lot about braille and braille music notation. She is an excellent pianist, and I’m thankful that she is comfortable with braille already. There are estimates that fewer than 10% of legally blind Americans can read braille. But even though my student has no problems reading music braille, teaching theory has already been a bit of a challenge.
Music braille, it turns out, is an ongoing experiment in developing a new form of music notation. The latest edition of the standard was published just a few years ago. If you’re familiar with staff notation, you’ll likely be quite surprised by how sounds are represented. Here are a few highlights:
There is no staff.
It uses the same characters as written braille, just interpreted in a different way.
Clefs are optional (used mostly to be academically faithful to the source). Notes are identified by letter name. ASA octave numbers are used to disambiguate when needed.
There are different versions of letter name characters used for different rhythm values.
Key signatures are often shown only by number of sharps or flats (“four sharps”).
Barlines are optional.
Beams do not exist in braille.
Simultaneous pitches are shown by giving one note, and then a stack of intervals from that note.
Music braille, like other forms of braille, usually takes a lot more space than staff notation. Because of this, supplementary annotations like measure numbers are often left out.
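To make the interval-stack idea concrete, here is a toy sketch in Python. This is purely my own illustration, not real braille encoding: actual music braille uses diatonic interval signs (third, fifth, and so on) rather than semitone counts, and whether intervals read downward or upward from the written note depends on the clef.

```python
def interval_stack(midi_notes):
    # Toy illustration only: represent a chord as one written note plus a
    # stack of intervals measured from it, here in semitones for simplicity.
    # Real music braille uses diatonic interval signs, and the reading
    # direction (up or down from the written note) depends on the clef.
    notes = sorted(midi_notes, reverse=True)
    top = notes[0]
    return top, [top - n for n in notes[1:]]

# A C major triad (C4=60, E4=64, G4=67) becomes the top note
# plus the intervals down to the other chord tones:
print(interval_stack([60, 64, 67]))  # (67, [3, 7])
```

The point of the exercise is how different this is from staff notation: a chord is not a vertical picture but a single note with relationships hanging off of it.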
So much about how we (the royal we) teach music theory is tied to the staff notation we use to transmit it. In fact, I’m beginning to think that the way we think about how music is constructed has a bit of a heuristic bias informed by staff notation.
I still have to talk about staff notation in lectures, and use it in assignments, and as descriptive as I try to be, as demonstrative as I try to be singing or playing piano, there are inevitably things that get lost. A couple weeks ago, I was describing how and where on the staff to write accidentals, and this student raised her hand and politely asked if I could describe what sharps and flats looked like. I did my best, but I was a little stumped.
I did, however, recall hearing about the 3d-printing facilities in the library in one of the many, many, many orientation sessions from last month. I’m a nerd. I’d never 3d-printed before, but I’ve always thought it sounded like a cool thing. So after a couple of attempts, I figured out how to make a 3d model of a tactile major scale that I could hand my student so she would know how clefs, noteheads, and accidentals interact with the staff. She told me that the print helped her to understand things that she’d heard musicians discuss her whole life.
A few people have asked about the CAD files, and since they seemed to actually help the student, I’ll share the major scale file here. The braille is a written description, not music braille.
If anyone is curious about how I went about making the 3d model, I’m happy to share what I learned. Get in touch. Maybe I’ll do another post. I’m a total n00b, but I figured it out. In the meantime, let me know if you use the file above and how it goes.
To be clear, this population includes those who can see well enough to read print and screens, but the National Federation of the Blind still describes this as part of a larger literacy crisis ↩
“Staff notation” is the name I’ve settled on for the kind of notation I grew up reading. “Visual notation” doesn’t seem specific enough, and the staff is more descriptive of what it actually is, rather than how it’s read. ↩
This is also true of letters and numbers. Special characters can precede a string indicating that it should be read as letters, numbers, or music. ↩
Today Avid released Sibelius 2018.4, announcing it at their Avid Connect event in Las Vegas, Nevada. This update, the second of the year and using the new year-dot-month version number introduced with 2018.1, is another broad release with many new features and some longstanding requests addressed. Areas of improvement include multi-edits for text, a new note spacing rule affecting multiple voices and other cases, deleting and adding bars at the beginning of the score, smarter ties, and many other enhancements.
Avid also announced a new naming strategy in their product lines so that each line has the same three tiers: a very basic free entry-level version denoted by the “First” suffix; a consumer or student level with no suffix; and a pro-level version called “Ultimate”.
This new update sounds great, but the new name, “Sibelius Ultimate,” is really, really dumb. Having the introductory product assume the name of the former professional product is also going to be really, really confusing. I’m no marketing pro, but this seems like a change that will cause a lot of problems for no apparent benefit.
Even ignoring the fact that the name “Sibelius” has identified a different product for thirty years, “Ultimate” looks very dated to me.
The vaunted centennial season turns out to be a disappointing continuation of the status quo. Of the nearly 40 composers represented, every last one is a white man. Only four of those white men are still alive. Of those four, only one is American-born. Last season, all but one composer (composer-in-residence Anthony Cheung) were white, only four had a pulse, and a lone concerto by Augusta Read Thomas kept the Cleveland Orchestra off of the Women’s Philharmonic Advocacy’s list of top orchestras that didn’t include a single female composer in the 2016-17 season. The Cleveland Orchestra is far from alone in this regard. An analysis by the Baltimore Symphony Orchestra found female composers accounted for just 1.3 percent of all music performed by 85 American orchestras. How much longer will “America’s best orchestra,” as Cleveland was recently dubbed by The New York Times, set a worse example than its peers?
Scathing and earned. Programming like this needs to be dunked on loudly and often.
I’m a fan of beautiful scores, and part of any beautiful score is sharp, clean front matter: the cover, title page, and information pages. For years, I have used Microsoft Word, Apple Pages, and even Adobe InDesign for doing the text-heavy parts of scores and parts. Of course, Sibelius added text and layout tools for this several versions ago, but they were terrible and frustrating to use. Just last week, I wrote in a checklist on Scoring Notes “Do not try to do this in your scoring app! It will almost certainly end in tears.” I’m pleased to be reconsidering this advice so soon after publishing it!
After a discussion on document layouts on the always-interesting Music Engraving Tips, Steinberg’s John Barron offered to show how Dorico can be used to handle document layout. In this week’s Discover Dorico live stream, John used my piece, Linear Geometry, as an example of how to work with front matter. For fellow survivors of publishing tools like InDesign, you’ll find some delightfully familiar frame tools that you can use right inside your score file.
Thanks to John for showing me a new thing and using my work as the example, and thanks to the Dorico team for making such a delightful and powerful tool. I’m looking forward to trying this out in future projects.
If you’re a Dorico user, or just Dorico-curious, I can’t recommend John’s Discover Dorico series highly enough.
(Side note: The font John was trying to emulate in my front matter is Museo Slab, a slab serif in the Adobe Typekit library. Guess I’ll need to find a substitute for that.)
A few years ago I was in a conversation about audience access to classical music. We weren’t talking about accessibility in the stylistic way, but in a more literal way. Who can afford to be in the room when a performance is given, thereby experiencing a work as intended? In popular music the “primary document” is usually a recording, which is pretty widely accessible, especially today, as it can likely be streamed for free or purchased as a download. In comparison, my music is among the least accessible music to most people around the world. Even though a recording of a performance might be widely distributed, that’s far from the same thing as being in a room for its performance.
On a continuum of audience access to audio media, my acoustic compositions are near one extreme. Another audio genre that I care deeply about is podcasts. I’ve been listening to podcasts since I got my first iPod1 around 2004, and I started making and publishing podcasts in 2011. Castbot is the first in what may become a series of electroacoustic works that are created expressly for the podcast format.
I first created Castbot in 2016, and it has been running on-and-off since then. It has gone through a number of iterations, and I’m pretty happy with its simple yet compelling output in the current version. Each night2, my little bot generates a new episode of the piece based on a narrowly defined set of conditions in the software, uploads it to a server, and updates the corresponding podcast feed.
In each episode of Castbot, a small ecosystem of virtual “audio creatures” is created and runs its course. For some reason, I think of them as birds. They fly back and forth in the stereo field, playing a stuttering rhythm on a repeating pitch. Each time they cross the center, they change pitch within the defined scale. Eventually, the birds fly off the edge of the environment and do not return. The episode ends when all the birds have flown off. To set up each episode, Castbot picks the number of creatures, the scale, and the tempo. I think of this as the weather of the environment. And the piece plays out according to the whims of the drifting birds.
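If you like reading code more than prose, the bird logic can be sketched roughly like this. To be clear, this is a stripped-down illustration of the idea, not Castbot’s actual code; every name and number here is invented for the sketch.

```python
import random

def run_episode(num_birds, scale, seed=None, max_steps=10000):
    """Sketch of one episode: returns a list of (pan, pitch) events.

    Each "bird" drifts across the stereo field (-1.0 hard left, +1.0 hard
    right), picks a new pitch from the scale each time it crosses center,
    and is removed once it flies off an edge. The episode ends when all
    birds are gone.
    """
    rng = random.Random(seed)
    birds = [{
        "pos": rng.uniform(-0.5, 0.5),
        "vel": rng.choice([-1, 1]) * rng.uniform(0.02, 0.1),
        "pitch": rng.choice(scale),  # e.g. a MIDI note number
    } for _ in range(num_birds)]
    events = []
    for _ in range(max_steps):
        if not birds:
            break  # all the birds have flown off
        for bird in list(birds):
            old = bird["pos"]
            if rng.random() < 0.1:  # occasionally turn around
                bird["vel"] = -bird["vel"]
            bird["pos"] += bird["vel"]
            if old * bird["pos"] < 0:  # crossed the center: new pitch
                bird["pitch"] = rng.choice(scale)
            if abs(bird["pos"]) > 1.0:  # off the edge, never to return
                birds.remove(bird)
            else:
                events.append((bird["pos"], bird["pitch"]))
    return events

events = run_episode(num_birds=3, scale=[60, 62, 64, 67, 69], seed=1)
```

In the real piece, of course, the events are rendered as audio rather than returned as a list, and the “weather” (bird count, scale, tempo) is chosen fresh each night.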
I mentioned that this little bot has been running since 2016, but I’ve recently submitted it to the iTunes and Google podcast directories, which means it is now much easier to find and subscribe to. If you’re podcast-inclined, give it a listen and let me know what you think.
It’s been so many years since I’ve looked at the word “iPod” in print that this looks wrong to me. My brain-autocorrect wants to make it into “iPad”. ↩
Ok, “each night” is a bit of a stretch. The whole shebang runs on a Raspberry Pi 3 in my home studio, and sometimes I accidentally let it overheat. When that happens, it will stop running until I notice and reboot the little guy. Also, if you catch any episodes that aren’t posted a few seconds after midnight, that’s me running Castbot manually to test something, or just for fun. ↩
About a week ago, I put together a short but dense presentation for my composition students at UCF on score preparation. With rare exceptions, composers today are expected to not only write the music, but also prepare and produce professional-quality parts. This is something we are often not explicitly trained to do. Instead, we’re expected to pick things up as we go1.
For my presentation I made a big outline of all the things I go over when I’m preparing scores and parts, which turned out to be a bigger checklist than I’d expected. So like any child of the Internet, when I make a thing, I post it online. Philip Rothman over at Scoring Notes generously took time to add links to stories on his blog and elsewhere to go into greater detail, which turns my skeletal outline into a genuinely useful reference.
If you do any score preparation or production, I would encourage you to bookmark this post. Thanks to Philip for all the work he does to support composers.
This is part of a larger untenable norm in music higher education: we often train every single student as though they will be a superstar that doesn’t need to worry about these minutiae. Sure, there are copyists, and they’re great. For most of us in concert music, we’ll rarely be in a circumstance to hire one. ↩
My day-to-day responsibilities have shifted over the last couple of years, and I’m not getting the value out of the Adobe applications that I used to. It’s not that the Creative Cloud apps have gotten worse. To the contrary, they’re better than they’ve ever been: more powerful and easier to use for beginners and design dilettantes like me. And recently, I’ve started a journey to find alternatives to each of the Adobe applications I use on a regular basis. If, like me, you feel you might not be getting the full value out of your Adobe subscriptions, you might be pleasantly surprised to discover some of the tools that may serve your needs just as well, for a fee that is a little easier to justify.