Just a Dawdream

Composing music on the computer isn’t as easy as it looks, but maybe it’s easier than it ought to be

Dave Kirby

Bar 40 is assaulting my will to live.

I’ve spent two hours clawing away at this brief descending figure in fifths, trying to harmonize and shade it against a minor key chord reflected in another track. The passage is about four seconds long.

I tried a fourth — a nice sound by itself, kind of a poised buoyancy, but it’s out of context with Bar 39 and Bar 41. An octave sounds boomy and domineering. I tried splitting the bar between the root/fourth and root/natural third — a common major key guitar move, a sus4 gesture — but now it sounds (predictably) like a common major key guitar move and completely out of character for the passage.

Bar 40 sits there, waiting.

A fifth sounds better… but it’s flat and predictable, redundant padding, filling harmonized sound just for the hell of it. That’s not always a bad thing, but this track wasn’t meant for that.

I take it out altogether; now it sounds bereft and unresolved, like Bar 39 left the baby at Starbucks.

Screw it. In defiance of basic music theory and owing to my usually repressed rebel id, I throw a dissonance in there, a natural third paired with a flatted sixth, and it lands like a rusty bolt in a soufflé. There’s “interesting wrong,” and there’s “dumb wrong,” and this is “dumb wrong.” Thelonious Monk or Cecil Taylor or Art Tatum might have liked it, but in another key and in another tune, and guys like them made “interesting wrong” sound like music. I can’t.

Your rabbit is undercooked, pilgrim.

• • • •

Like a lot of other people, I found myself gifted with an unwelcome abundance of free time over the second half of 2020, and for mostly the same reasons. Hungry for inspiration, I warmly embraced Ivanka’s suggestion to “try something new” (thank you, Dauphine) and revived a long-dormant interest in digital music production.

It wouldn’t be wholly accurate to say this was a “new” pursuit; more than a decade ago, I fiddled haplessly with a cracked version of Reason 3, assembled some semi-structured noise casseroles masquerading as songs (pro tip: If it has a title, it’s a song), burned them to a CD, and lost the whole thing over a series of ill-managed PC upgrades. Come to think of it, I’m pretty sure I lost the CD, too. It belongs to the ages now.

More recently, a friend shared with me a slide show of his photography tour to Patagonia, and it prompted me to create my own from some of the thousands of photos my wife and I have amassed over several trips to the U.K. (We would have gone this year, but Mother Nature was busy culling the herd.) I bought a license for some consumer-grade video software and selected a couple of musical tracks from my MP3 collection, spending hours syncing transitions and finessing cadences.

Subject to the qualifier that most people have limited personal interest in looking at someone else’s vacation pictures, I was satisfied with it. I uploaded it to YouTube.

But the music I used (portions of Mike Oldfield’s Ommadawn and more contemporary music by Ian Boddy/Mark Shreeve) was rights restricted, and while the lawyers didn’t tell me to whack it from the public arena, it got blocked in the U.K. We have some family and friends over there who may have liked it but couldn’t watch it. Rookie mistake.

So earlier this summer, I tried my hand at another slide show, this time using an online music service provider called Epidemic Sound. Based in Stockholm, they curate hundreds of songlets from aspiring producers and offer them up for a $15 monthly subscription, license-cleared for YouTube. One supposes that the service is marketed, in part, toward commercial video producers who want to frost their sales pitch with a little dazzle or drop some dope house beats into their product rollout. Ga-thump, ga-thump, ga-thump.

The pieces I pulled down worked well enough, but the project didn’t quite feel like mine, soundtracked as it was by obscure (albeit talented) European producers. And mindful of my income-challenged station, I reasoned I couldn’t justify a monthly subscription for something I’d only use a few more times, and I dropped my account.

Having addicted myself to creating these slide shows, I confronted the obvious: make my own music.

A quick bit of research into “best DAWs” led me to PreSonus Studio One Prime, the free version of the digital music company’s flagship workstation platform (DAW, by the way, stands for Digital Audio Workstation). Though feature- and instrument-restricted, the free version was an absolute beast, orders of magnitude beyond the old Reason platform I had fumbled through years before. I had more or less mastered the basics when I started bumping up against the free version’s limitations, and decided to jump in and subscribe to the whole enchilada, the mothership, the ultimate DAW experience… the professional version.

For… $15/month.

So much for reason.

• • • •

I had managed to assemble a collection of pieces in Prime — the free version — and typically realized after the fact that I was doing a lot of things the hard way. While I have covered many contemporary electronic artists for this newspaper as a reporter/critic, and am generally sympathetic to that world, my aesthetic instincts toward electronic music are firmly rooted in the ancient. The pounding sequencers and improvised quasi-classical meanderings of early Tangerine Dream, the hypnotic subspace wanderings of Klaus Schulze, the goofy madness of Margouleff and Cecil, the rich and mysterious classical interpretations of Isao Tomita and Wendy Carlos, the cheerful twitchiness of Larry Fast. Most of this stuff was composed and performed on lumbering and ill-mannered analog equipment (Tangerine Dream’s modular Moogs would drift out of tune when their performance venue was too warm, or too cold), and some of it was the result of happy accidents in the recording or patching process. To some extent, that was part of the electronic music experience, not knowing precisely where a certain sound or phrase or gesture came from; once rendered, it might not ever come back.

But for me, in front of my computer, with the uncertainties and ghosts-in-the-machine digitized out, I start at the ground floor. I just want to lay a 16th note sequencer pattern down and drape a D major chord over it. I access the Pattern function, where I can punch in the notes I want (say, a Dmaj arpeggio), hit a button and it will just play this pattern forever. Open up another track, find a nice pad patch. Hit record, and there it is.

Except, yeah, it gets pretty old, pretty fast. At Bar 17, I want to modulate to another key. Up a fifth, to Amaj. Swell… except the Pattern function doesn’t allow you to transpose. The notes are fixed. So I track to the bar where I want to transpose up, open up the pattern editor and punch in all the triggers to Amaj… essentially recreate the pattern, in another key. Run it… nope, missed a note, go back in and fix it. Run it again. Nope, fix it again. Does that sound right?

A little right-click magic and I learn that the pattern track can be converted to an instrument track, where the notes become editable as instrument triggers, rather than raw tone triggers. Once it’s an instrument track, a simple right-click allows for transposing by inputting a semitone value. Recreating the pattern from scratch in another key, and getting it right, took 30 minutes. Two mouse clicks to get the same result, eight seconds.
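For the curious, the arithmetic behind that two-click transposition is almost embarrassingly simple once notes become editable triggers: every MIDI note is just a number (middle C is 60), and “up a fifth” means adding seven semitones. A little Python sketch, purely illustrative and nothing like Studio One’s actual internals:

```python
# A MIDI pattern is a list of note numbers; transposing it is
# integer addition. Up a fifth = +7 semitones.
D_MAJ_ARPEGGIO = [62, 66, 69, 74]  # D4, F#4, A4, D5

def transpose(notes, semitones):
    """Shift every MIDI note in the pattern by a fixed semitone offset."""
    return [n + semitones for n in notes]

a_maj = transpose(D_MAJ_ARPEGGIO, 7)  # up a fifth to A major
print(a_maj)  # [69, 73, 76, 81] — A4, C#5, E5, A5
```

Thirty minutes of punching in triggers, reduced to one addition per note. No wonder it takes eight seconds.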

Subsequently, of course, I learned that the professional version offers a note effect plug-in called the Arpeggiator, which frees you from the tedium of making patterns altogether… but that has its own learning curve and may well lead you to a place where you didn’t know you wanted to be. That happens a lot.

Similarly, we want to fade in a passage, dovetailing against another passage as it fades out. Ham and cheese sandwich here, shouldn’t take a comp sci degree. Well, I couldn’t figure it out. I would have gone out to the community forum, access to which my subscription has granted me, but my ID is oddly borked and I can’t post out there, so I fumble. Aha. Take both instrument tracks, convert them to audio tracks, and then you can drag down the corners of the track image to lower the gain. Once it’s an audio track, though, it’s just a sound. You can’t edit the notes anymore. So if you need to fix the harmony, you’re screwed… unless you convert it back to an instrument track. But then you can’t fade.

I spent hours on that.

Until, duh, you activate automation, bring up the mixing console and mouse-slide your respective faders. I’m glad I can’t access the community forum — a dysfunctional ID stopped me from publicly beclowning myself.

By this time, after a lot of right-clicking, squinting at the glossary and manual, doing dumb things, watching awkward YouTube demos, pushing buttons and ctl-z’ing, I’ve gotten to the point of knowing more or less enough to be dangerous. My $15/month is now a bargain.

Now, pilgrim… make some actual music.


• • • •

Well, OK. Let’s be real here — despite my music theory references tossed out earlier in this piece, I’m basically a hack guitar player with only the most rudimentary grounding in music theory. I’m soundtracking images of 12th century castles and Gothic cathedrals, so big orchestration, soaring progressions, grand gestures… Game of Thrones-type deal. Ambitious.

It’s not that I don’t know what I’m doing, it’s that… well, I don’t really know what I’m doing. I’m dancing across the out-of-phase sine waves of two learning curves: composing keyboard-driven music and trying to get this bewildering software to render what’s in my head. I know what a bolt in a soufflé sounds like, and I know it doesn’t belong there. The software brims with patches of sounds and clever loops, some distinctly wedded to harmony or BPM, some less obviously so. It is clearly designed to appeal to the contemporary EDM producer, which is where the commercial center of gravity of electronic music has settled these days. No one really wants to sound like Jean-Michel Jarre circa 1978 anymore, and why would they? Who the hell is Jean-Michel Jarre?

Or Alex Paterson or Aphex Twin or Toby Marks or Kraftwerk or the KLF or FSOL or Autechre? Or, for that matter, Morton Subotnick, often cited as the godfather of modern electronic music… unless you favor Karlheinz Stockhausen, who’s probably the real godfather of electronic music, and we’ll wager not many amateur DAW pikers use Studio One to sound or compose like Stockhausen.

But here is where we face the crossroad.

Crudely, you can make the distinction between music created electronically and electronic music. The earliest guys — Stockhausen, Subotnick — composed music that really could only be rendered by patch cords and oscillator knobs, exploiting the technology’s freedoms and embracing its limitations. But Wendy Carlos, in her early years (late 1960s), was known for interpreting the classics, rendered on the Moog but musically available to any competent pianist, organist or string section. Her landmark album Switched-On Bach (1969) topped the Billboard Classical Albums charts for almost three years. Similarly, Isao Tomita released a series of sonically dazzling classical interpretations (Stravinsky, Holst, Debussy) in the mid-’70s on the Moog that completely grossed out classical music critics but were great fun to listen to and incorporated soundscapes that even modern electronic artists, 40 years or more later, can’t decipher technically.

So what do I want to do? Borrow the aesthetic of deep-thinking, convention-defying composers… lamely ape the Berlin School stuff I listened to obsessively years ago… teach myself Bach… create pleasant soundtrack music that an inexplicably enthralled YouTube audience can tolerate while looking at my vacation pictures?

Am I creating music to support the images, or do the images give substance to the music? Is the music worth listening to by itself? Software can do a lot, but it can’t answer fundamental questions about creating art. It just awaits its next mouse click.

SoundCloud is lousy with digitally produced stuff. Hundreds, thousands of hours of trap or dubstep or hip-hop or ambient or chill or space. As lovers of music, all of us, we have it in the front of our minds that more music is better for us, better for the world, better for humanity, better for music itself. Everybody jump in, the water’s just fine.

But the fact is, an awful lot of it (the instrumental stuff anyway) is facile and passive. Nice textures draped over digitized beats. Not everything needs to be a Brandenburg Concerto, but a musical equivalent of paint-by-numbers often struggles to find its depth, and usually fails.

So there’s a little guilt involved in composing on this software. A simple function in the software allows me to drop a thoughtless doodle on my midi keyboard, bring up the Chord Follow function, define a key, and the software then contours my meaningless five-note dingle — something my cat might have produced walking across the keys — into a harmonically consonant melody, rendered in the key I just selected. This is what you meant, right?
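Under the hood, that kind of harmonic hand-holding amounts to pulling each stray note toward the nearest tone of the chosen key. A rough Python sketch of the idea — an assumption about how such a feature might work, not a description of Studio One’s actual Chord Follow code:

```python
# Snap arbitrary notes to the nearest pitch in a chosen scale —
# the basic move behind any "play it in key" helper.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the C major scale

def snap_to_scale(note, scale=C_MAJOR):
    """Return the MIDI note in the scale closest to the input note."""
    octave, pc = divmod(note, 12)
    # Consider scale tones in this octave and the neighboring ones.
    candidates = scale + [s - 12 for s in scale] + [s + 12 for s in scale]
    best = min(candidates, key=lambda s: abs(s - pc))
    return octave * 12 + best

cat_walk = [61, 63, 66, 68, 70]  # five chromatic "cat on the keys" notes
print([snap_to_scale(n) for n in cat_walk])  # [60, 62, 65, 67, 69]
```

Five meaningless black-key steps go in; a perfectly consonant C major fragment comes out. The cat gets a co-writing credit.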

Is that melody mine, or did I merely give permission to the software to include it in my piece? The software doesn’t care that I haven’t done the heavy lifting to learn and apply harmony the old-fashioned way.

So the guilt comes in knowing that some of the music came from my own head, and some of it — harmony I wouldn’t know how to produce if I were writing for a string quartet — came merely from cleverly written computer code. Sure, a schooled and musically fluent composer can produce lofty, engaging and stirring music with this software without using the harmony-helper functions at all, and plenty do. But hacks like me can also produce perfectly palatable music that rides nicely with pictures of cathedrals. That’s the triumph, and the malign seduction, of DAW software. It can help you sound like you know what you’re doing, even (and especially) if you don’t.

The software even has a “humanize” function that gently tweaks a note’s attack, compensating for the machine’s pitiless precision and, presumably, fooling the listener into thinking he’s hearing an actual person playing rather than a machine.
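The mechanism, stripped of its mystique, is just small random nudges to timing and touch. A minimal sketch, with jitter ranges that are my own guesses rather than Studio One’s values:

```python
import random

def humanize(events, timing_jitter=0.01, velocity_jitter=6):
    """Nudge each (start_seconds, note, velocity) event by a small
    random amount, approximating a DAW's 'humanize' function.
    Jitter ranges here are illustrative assumptions."""
    out = []
    for start, note, velocity in events:
        start += random.uniform(-timing_jitter, timing_jitter)
        velocity += random.randint(-velocity_jitter, velocity_jitter)
        # Clamp back into legal ranges: time >= 0, MIDI velocity 1-127.
        out.append((max(0.0, start), note, max(1, min(127, velocity))))
    return out
```

Run it over a quantized passage and every note lands a few milliseconds early or late, a touch softer or harder — machine-made sloppiness, carefully bounded.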

Artificially applied imperfection in the service of authenticity. Think about that.

• • • •

Anyway, I grow weary of fighting with Bar 40. I didn’t like what the Chord Follow automation gave me, but I liked my edits to it even less, so I leave the fifth in place, mix down the piece and slap it onto its assigned passage in my next slide show, specifically St Lawrence Church in Cumbria, England. The whole piece is a minute and 20 seconds long; I’ve probably spent 11 hours on it.

I run it along with our photographs, tweak the transitions to align with the musical cadences, and it makes me smile. It sounds OK. It works. The pictures look different now.

You cook good rabbit, pilgrim.