06 August 2017

The fourth wave of electronic music

This article is part of my ongoing series on Desktop Electronic Music (DEM). The landing page provides easy access.

Electronic music was originally the exclusive activity of those who could gain access to elite computer systems. Now it's an egalitarian process, a collaboration between boutique hardware firms, cottage industries, and musicians of all stripes. A performer might use a MIDI controller connected to a compact synth module. Or homebuilt sensors feeding an Arduino. We're the operators with our pocket calculators... which are actually tiny drum machines. We are the dreamers of dreams... implemented in esoteric Max patches.

The state of the art is fluid and multivalent. It's hard to see a context when you are embedded in it. So perhaps it's useful to share my musings, which outline four paradigms that have shaped our relationship to electronic music.

First wave: LEM

First, we visit the laboratory. After the Second World War, digital computers came into their own for military and business applications. They stood like sentinels in their special climate-controlled rooms, maintained by teams of experts. To write code you'd use a keypunch machine to stamp holes in thick paper. Enough holes in the correct places and you'd have a finished card. A few hundred (or thousand) of these and you'd have a short programme. Because these computers worked in batch mode rather than "real-time", you might have to return the next day to get the results.

This made it extraordinarily difficult to be creative. It wasn't until 1957 that Max Mathews, working at Bell Laboratories, popularised the use of computers for music, with his MUSIC I software. When John Chowning invented FM synthesis at Stanford University in 1967, the main driver was computational efficiency. Through the 1970s and 1980s, processing speed was still a bottleneck in creating music. But by then computers were being developed specifically for the task. The Synclavier from New England Digital and the Fairlight CMI enabled real-time control, but were still expensive behemoths.

Modular synthesisers took a different approach, as electrical circuits designed specifically for music. Though versatile, their bulk and power requirements inevitably grounded them (pun intended) in studio environments. The Buchla 100, pictured above, is one example. It was developed for the San Francisco Tape Music Center and used by composers including Pauline Oliveros and Morton Subotnick.

Both the computer and circuit approaches can be described as Lab-based Electronic Music. LEM was based around institutions that could afford the equipment and the running costs.

Second paradigm: REM

As home organs became popular in the fifties, electrical and electronic instruments became more familiar to those outside the lab. Organs evolved to include electronic rhythm accompaniments, arpeggiators, and other components borrowed from experimental instruments such as the Ondes Martenot (1928) and the Trautonium (1929). These keyboard-based instruments evolved into synthesisers.

This heritage is clear if we look at Ace Electronic Industries. Founded in 1960 by Ikutaro Kakehashi, Ace made organs, rhythm boxes, and guitar effects. As domestic synths became viable, Kakehashi formed Roland Corporation (1972) and the rest is, as they say, history. Yamaha is another example of a Japanese company that leveraged its expertise in pianos to design first organs and later synthesisers.

By the late 1970s, music stores were stuffed with keyboards from ARP Instruments, Moog, and other fledgling manufacturers. The eighties brought the Casio CZ, Yamaha DX7, and other classics. These instruments were designed not for the lab, but for the stage. They were relatively portable and dependable, compared to their ancestors.

MIDI came along in 1983 to help synchronise and pass messages between units. Players realised they didn't need a keyboard on every single synthesiser. Sound modules were the logical result, rectangular metal things, racked in standardised housings.

Rack-based Electronic Music (REM) is immediately recognisable from the stacks of keyboards and racks of modules that fill the performance environment. Each instrument was (relatively) affordable and accessible. And each had a distinctive personality. No-one would mistake an SH-101 for a Minimoog, a Putney for a Prophet. This is also what has made these tools so collectable in the current millennium. People obsess over different filter circuits and the minutiae of hardware versions. Because such things matter.

Keith Emerson and Rick Wakeman typified the REM approach in progressive rock music, while any number of synthpop bands soon flooded Top of the Pops with their keyboards and drum machines.

Third wave: PCEM

Software was the gateway to the next big shift. The Digital Audio Workstation (DAW) was at first only a means to control MIDI devices more effectively. As processors got faster and memory larger, it became possible to store and manipulate digital audio itself. Plugins soon provided a software equivalent to each and every component of the recording studio. Soon the number of compressors or delay units you could use in a mix was only dependent on computing power, not how many you could afford to purchase.

Possibilities continued to expand. Visual programming environments like Cycling '74's Max and Native Instruments' Reaktor provide access to a virtual electronics playpen, where anything is possible and anyone can contribute. The Reaktor library contains over 5000 instruments, effects, mixers, sequencers, and other devices. Each has been contributed by performers and composers like you. This reflects the ethos of the open software movement, which changed forever how people see their relationship to software (even proprietary software like Reaktor).

Personal Computer Electronic Music (PCEM) was driven by individual ownership of the means of production, and the fact that the PC was a powerful general-purpose tool like no other before it. Numerous digital and post-digital artists each found their own working method. Autechre, Aphex Twin, Orbital... such musicians combined instruments from the second wave with the open possibilities of the third wave.

Fourth wave: DEM

The PC affords us a bounty of sonic possibilities. But by endlessly nudging our mouse from one pixel to another, might we lose touch with the physicality of music? Overwhelming possibilities might lead to inaction. Can we have too much freedom of choice? In the fourth wave, performers and composers reacted against the computer, against the perfect simulacra of digital replication.

They did this by turning again to individual devices, though not necessarily the keyboards and drum machines of the past. Instead, the emphasis is often on small, cheap, and nasty components, connected in idiosyncratic ways.

Start with a Korg Monotron or a Pocket Operator from Teenage Engineering, plug baby patch cords into a Bastl Kastle or snap together a Modal Electronics Craft Synth. Each of these boutique instruments costs less than €100. For a little more you can tweak a Meeblip Triode or play a motion sequence into a Korg Volca. At the €500 price point, still less than what an instrument would have cost in the REM era, you can access esoteric synthesis with a Dreadbox Erebus, Waldorf Pulse, or Make Noise 0-COAST. Add your own favourites to this list.

Link these up with a controller from Novation, an Apple iPad, a few guitar effects pedals. Desktop Electronic Music (DEM) doesn't need a lab or a giant rack. And it doesn't rely on a centralised computer. Though you might need a bigger table!

Consider also that the middle initial might stand for "electric" instead of "electronic". This acknowledges those artists who integrate acoustic instruments with contact mics, old radios, record players, and other ephemera. Philip Jeck (UK) and Danny McCarthy (Ireland) typify this magpie approach.

And we must also mention the Arduino, Raspberry Pi, and other processor-based boards that enable rapid prototyping. Esoteric and individualistic controllers can now be designed by musicians without extensive electronics experience.

Characteristics of DEM

DEM has the characteristics of multiplicity, interoperability, portability, and egalitarian access. Multiplicity because of the enormous variety of sound-making and sound-mangling tools available. Interoperability because every device speaks to every other, using combinations of MIDI, control voltage, and so on. Have a look at the rear panel of an Arturia BeatStep Pro. That's a heck of a lot of ports, all focused on communication.

Portability is ensured by the small form factors. Many DEM devices work off battery power. Others leverage the current found in a USB port, or work well with a portable battery pack.

The egalitarian nature of these instruments is driven by both economics and accessibility. The low cost of entry allows all to participate. Esoteric hardware or software knowledge are not required. Nonetheless, DEM rewards learning with the discovery of new sonic possibilities.

The acronym DEM has several benefits. It is reminiscent of EDM, but the focus doesn't have to be on dance. Indeed, DEM says nothing about what music will be produced, or what its purpose will be. It can be meditative or groovy, aggressive or ambient. This acronym, like those coined to describe the previous waves, only says something about the means of production.


When I started looking at the history of electronic music, my goal was not to categorise and limit. I have described four useful paradigms, but these are not entirely exclusive. Cheap DEM controllers talk to modular synths, and components from large monolithic systems make their way to the desktop. Computers are still integral to many setups. After all, isn't an iPad a computer, anyway?

Definitions are fluid things. By describing these patterns, I hope to present useful descriptive terms, rather than be prescriptive.

The conclusion is obvious. You've got the entire history of music at your fingertips, packaged in a myriad forms. All around you are accessible and convivial tools. I can't wait to hear what you'll do!

This research is part of a larger project funded by an Arts Council bursary.

All photos taken from Wikimedia Commons, the free media repository, except Danny McCarthy by the author. Mouse over a photo for credits.


  1. sorry but this is silly, David Tudor, Gordon Mumma etc. sat at desks doing live electronics in the 60s, what's the difference? and you've skipped laptronica completely - laptop musicians sat at desks too - and these days live coders also sit at desks. Not to mention that "desktop" suggests 'desktop computer' so that only adds to the inherent muddle here.

  2. The difference is that Tudor had to wire his own gear together and had a setup that covered a large expanse, sometimes an entire hall. Everything he and Mumma, etc. did was a unique piece of work, so it doesn't fit the criteria I carefully outlined above.

    Laptop musicians sit at desks too, sure. But I think maybe you are fixating on the furniture a bit much!

    If you have a better term I would seriously love to hear it. But arguing only about the one word and ignoring the substantive content is not something I am particularly interested in.

    All labels are arbitrary.