Finally, a Computer that Writes Contemporary Music Without Human Help

Meet Iamus, a computer at the University of Malaga in Spain capable of composing contemporary classical music without human aid.


The BBC ran a story yesterday about a computer at the University of Malaga in Spain, dubbed “Iamus” after a mythical Greek prophet who could translate birdsong, that’s capable of composing contemporary classical music without human aid (“contemporary” meaning you probably won’t walk away humming a melody unless you’re an aficionado of 20th-century classical music).

I’ve since had a chance to listen to snippets of the tracks off Iamus’ eponymous first album, recorded by the London Symphony Orchestra and released last September, and here’s the surprise: It’s not as formless as you’d think, nor does it sound as if Iamus’ designers are employing the “contemporary” label as a gimmicky way to excuse the compositional mercurialness of their computer algorithm.

Take the opener, “Tránsitos,” which piles up dense, gorgeously dissonant layers of instrumentation, punctuated by low bell-like sounds, creating a beautifully unsettling and yet entirely musical idea you’d be forgiven for confusing with something written for a despairing or terrifying moment in a film.

Or take the following more contemplative piece written for violin, clarinet and piano, also from the album, titled “Hello World!” after the decades-old computer-programming test message. (Here’s a link to the actual score.) Its proponents describe it as “the first composition entirely generated by a computer.”

[youtube=http://www.youtube.com/watch?v=bD7l4Kg1Rt8]

Like I said, not exactly the sort of thing you’re going to whistle, though I wouldn’t go as far as Guardian music writer Tom Service, who describes the piece as “slavishly manipulating pitch cells to generate melodies that have a kind of superficial coherence and relationship to one another, with all the dryness and grayness that suggests, despite the expressive commitment of the three performers.” That sounds more like the sort of description you might apply, with intellectual justification, to any number of contemporary classical works, your mood and receptiveness depending.

Aesthetics aside, Iamus isn’t the 21st century’s answer to Mozart, though that’s the sort of eye-catching headline this technology gives rise to. It’s rather a 21st-century flag-waver for something known as “melomics music technology” (melomics is a portmanteau of “melody” and “genomics”). Genomics is the discipline of sequencing, assembling and analyzing all of an organism’s genetic material — its genome. Melomics, then, is an algorithm that fiddles with genome-like data structures to produce plausible musical compositions.

According to the University of Malaga’s melomics page:

The algorithm operates on data structures (functioning as genomes) which indirectly encode the melodies: each genome undergoes an artificial developmental process to generate the corresponding melody. As melodies evolve, they can be rendered in several formats: playable (MP3), editable (MIDI and MusicXML) and readable (score in PDF). This diversity of formats ensures that the melodies can be enjoyed in computers and portable media players, edited with professional music software, or played in live performances.
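
The university doesn’t spell out the algorithm beyond that, but the genome-to-melody metaphor is easy enough to caricature in code. Here’s a minimal sketch in Python; to be clear, the genome encoding, the development rules and the fitness function below are all my inventions, not Melomics’:

```python
import random

# A toy illustration of the melomics idea: a genome "develops" into a
# melody, and a population of genomes evolves toward better melodies.
# The encoding, development rules and fitness criterion here are all
# stand-ins, not the actual Melomics algorithm.

SCALE = [0, 2, 4, 5, 7, 9, 11]  # C-major scale degrees as semitone offsets

def develop(genome, depth=2):
    """Grow a melody from a genome: each gene recursively expands into a
    short motif, mimicking the biological-development metaphor."""
    motif = genome
    for _ in range(depth):
        motif = [g + step for g in motif for step in (0, 2, -1)]
    return [60 + SCALE[g % 7] for g in motif]  # map genes onto MIDI pitches

def fitness(melody):
    """Reward stepwise motion by penalizing large leaps, a crude
    stand-in for whatever aesthetic criteria the real system applies."""
    leaps = [abs(a - b) for a, b in zip(melody, melody[1:])]
    return -sum(leap for leap in leaps if leap > 4)

def mutate(genome):
    """Copy a genome and randomize one gene."""
    child = genome[:]
    child[random.randrange(len(child))] = random.randrange(7)
    return child

def evolve(pop_size=30, genes=5, generations=100):
    """Keep the fitter half of each generation, refill with mutants."""
    pop = [[random.randrange(7) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(develop(g)), reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return develop(pop[0])

print(evolve())  # a 45-note melody as MIDI pitch numbers
```

The developmental step is the interesting part: a handful of genes grows into a much longer, self-similar melody, and evolution then nudges the genomes toward whatever the fitness function happens to reward.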

Iamus itself is a computer cluster housing 352 processors in a half-cabinet surrounded by a stylish glowing shell that looks a bit like a giant cousin of the Horta from the original Star Trek series episode “The Devil in the Dark.” (Yes, it’s a bit gimmicky, but then, so was the Tupac hologram.)

How long does it take Iamus to come up with a new composition? Eight minutes, apparently, in which time it’s able to output the data in multiple formats, including MIDI, MP3, PDF and MusicXML.
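
Rendering is the unglamorous end of that pipeline. For illustration, here’s a hand-rolled MIDI writer, assuming a single channel, fixed velocity and quarter notes only, that could serialize a pitch list like the one my sketch above produces:

```python
import struct

def melody_to_midi(pitches, path, ticks_per_beat=96):
    """Write a bare-bones single-track MIDI file: one note per beat,
    one channel, fixed velocity. ticks_per_beat must stay below 128 so
    each delta-time fits in a single variable-length-quantity byte."""
    track = b""
    for p in pitches:
        track += bytes([0x00, 0x90, p, 0x60])            # delta 0, note on
        track += bytes([ticks_per_beat, 0x80, p, 0x40])  # delta 1 beat, note off
    track += b"\x00\xff\x2f\x00"                         # end-of-track meta event
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat)
    with open(path, "wb") as f:
        f.write(header + b"MTrk" + struct.pack(">I", len(track)) + track)

melody_to_midi([60, 62, 64, 65, 67], "toy_melody.mid")  # C D E F G
```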

Music is elementally mathematical, so it makes sense that, with exponential increases in computer processing power, a computer could be designed that creates recognizable and even interesting music. We’ve been approaching something like Iamus for decades, from stuff like Sid Meier’s C.P.U. Bach (1994) for the 3DO to programs like PG Music’s Band-in-a-Box, which can generate band-style musical accompaniment in various styles, on the fly, for solo players.

I remember bumping into dynamic music-generation tech for the first time back in the early 1990s, fooling with Yamaha’s Clavinova series of digital keyboards, which had been around for a decade but were just then taking off. At the time, the model I played included several accompaniment algorithms: an automated bass player, for instance, that changed its play pattern in concert with whatever notes you struck with your left hand. The algorithm’s harmonic vocabulary was primitive by today’s standards — if you modulated too quickly or your changes were harmonically complex, the bass player got lost — but you could see where things were headed.
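
That sort of chord-following accompaniment is simple to approximate, which is partly why it landed in consumer keyboards so early. Here’s a toy version in Python, with the caveat that the root-guessing is my deliberately naive reconstruction, not Yamaha’s actual logic:

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def guess_root(held_notes):
    """Guess the chord root from the MIDI notes held in the left hand
    by taking the lowest one. Real systems match interval patterns
    against chord templates instead."""
    return min(held_notes) % 12

def bass_pattern(root, beats=4):
    """Alternate root and fifth, one bass note per beat."""
    fifth = (root + 7) % 12
    return [36 + (root if beat % 2 == 0 else fifth) for beat in range(beats)]

# The left hand holds a C-major triad, then modulates to F major.
for held in ([48, 52, 55], [53, 57, 60]):
    root = guess_root(held)
    print(NOTE_NAMES[root], bass_pattern(root))
```

Guessing the root from the lowest held note is exactly the kind of shortcut that falls apart when the harmony moves quickly, which squares with my memory of that bass player getting lost.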

I know what you’re thinking: How long before human composers become musically obsolete? It’s a complex and problematic question. Futurist Ray Kurzweil estimates that computers will be creating their own art and music in just six years (though the prediction is vague enough about what “their own” actually means, in terms of aesthetic self-awareness, that you could argue it’s already happened, which in my opinion renders the prediction meaningless). Assuming we’re not all floating around as foglets in Kurzweil’s fantasy future, it’s probably going to depend on how much we continue to value strictly human-created art for its own sake.

But the thing that’s often missed in discussions like this is that for all Iamus’ musical “intelligence,” it’s still at this point a human-crafted, human-programmed and fundamentally human-influenced device. The reason compositions like “Hello World!” are listenable is that Iamus was programmed to generate organized sound within certain mechanical parameters — what a human can play on a given musical instrument, for instance. (That, and each composition still has to be interpreted by human players using actual instruments.)

This is where I actually dovetail with the Guardian‘s Service when he says:

The real paradox of Iamus is why it’s being used to attempt to fool humanity in this way. If you’ve got a computer program of this sophistication, why bother trying to compose pieces that a human, and not a very good human at that — well, not a compositional genius anyway — could write? Why not use it to find new realms of sound, new kinds of musical ideas?

Why not indeed. What kinds of sounds haven’t we heard yet? Might we combine instruments in ways no one has? Create new kinds of acoustic instruments altogether? (Imagine pairing Iamus’ compositional capabilities with something like AutoCAD for musical instruments and 3D printing tech.) I’d like to think that some of those things, in addition to the obvious albeit less interesting ones — say, teaching Iamus to compose in more popular musical genres (look out, Justin Bieber!) — are on the agenda, too.