
Adaptive Music Engines

Dr Kenny McAlpine discusses the emergence of sound and music engines, from Microsoft's DirectMusic through to Wwise and FMOD.
In the 1990s, alongside the new gaming consoles from Nintendo, Sony, and Sega, the PC started to find its feet as a gaming platform. Intel’s new Pentium processor, released in 1993, provided a powerful core, and the open design of PC products made it possible for users to take what, at the time, was inevitably a beige desktop box and soup it up with sound and graphics cards to make it more play-friendly. It was the computing equivalent of taking a drab, grey business suit and zhooshing it up with lapel trims and a funky satin lining for a night on the town. But that open architecture came at a price.
Different components required different drivers, and game developers were unable to test every possible combination of processor, video card, and sound card with their games. The responsibility for getting everything working lay with the end user, not with Microsoft, the hardware manufacturers, or the developers. It wasn’t uncommon for a player to spend £30 on a game and get it home only to have to spend an hour or two fiddling with the AUTOEXEC.BAT and CONFIG.SYS files before the game would run, if it would run properly at all.
There had to be a better way, and sure enough, with the arrival of Windows 95, Microsoft provided both a platform for gaming and the software tools that would turn the PC into a mainstream gaming platform. Microsoft launched DirectMusic as the Interactive Music Architecture, or IMA, in 1996 and then incorporated it into DirectX in 1999. DirectMusic is a system for implementing dynamic and adaptive soundtracks, those that can change in real time and in response to player input and gameplay.
Now there had been games on the PC and on other platforms that featured adaptive music long before DirectMusic came along. In the arcades, Space Invaders and Dig Dug both had scores whose music was tied to gameplay, and we looked at the music of Super Mario Bros. earlier. On the PC, LucasArts’ Interactive Music Streaming Engine, or iMUSE, provided the music in several of their point-and-click adventure games, including my own favourite, Monkey Island 2.
Those soundtracks were all based around proprietary technologies, code that was designed specifically for a particular game or for a particular company to use in its games. Microsoft’s approach was to create a set of universal development tools that would sit at a higher level than the hardware but handle all of the low-level processing, freeing developers to focus on conceiving and implementing innovative musical ideas without first having to design the technology to support them. This was really the beginning of the game engine model of development, one that separated the development of the core components of a game, like game physics, or rendering, or collision detection, from the higher-level gameplay mechanics, graphics, and sound.
It was an approach that was particularly noticeable with the first person 3D shooter.
Games like id Software’s Doom and Quake proved so popular that other developers moved in to create similar titles, but rather than continually reinvent the wheel, the other developers licensed the core code and then designed their own graphics, characters, weapons, and levels – the game content or assets. Heretic and Hexen, for example, were both built on the Doom engine, while Half-Life used a modified version of the Quake engine. Separating out those different aspects of development, the technical and the conceptual, meant the development teams could grow and specialise. Later games like Unreal were designed with that approach in mind, with the engine and the content developed separately.
At the very least, the engine made developing sequels much cheaper and easier, but by licensing the technology to other firms, developers could offset some of the initial development costs – crucial in a very competitive industry like the games industry. As game development became more specialised, new tools appeared to service and support these specialisms.
Of these, FMOD and Wwise have found a niche supporting video game composers who write real-time adaptive music. BioShock, LEGO Universe, and some of the Guitar Hero titles feature music that’s driven by one of these middleware engines. What we have with middleware engines, then, is a technical framework that supports the development of interactive and adaptive video game soundtracks. The underlying technology depends on complex rule systems to determine which sections of music to play and when, and it’s the job of the development team and the composer to find out how a game might unfold and create both the snippets of music and the rules that are required to piece them together.
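To make that rule-driven idea concrete, here is a minimal sketch of how gameplay state might be mapped to music segments. This is not FMOD's or Wwise's actual API; the segment names, state keys, and rule ordering are all illustrative assumptions, but the shape – conditions on game state evaluated in priority order, selecting a pre-composed snippet – is the core of the approach.

```python
# A minimal, hypothetical sketch of a rule-driven adaptive music selector.
# Segment names and state keys are invented for illustration; real
# middleware engines expose far richer tools (events, parameters, layers).

# Each rule pairs a condition on the game state with the segment to play.
# Rules are checked in order, so more specific rules come first.
RULES = [
    (lambda s: s["in_combat"] and s["enemies"] >= 3, "combat_high"),
    (lambda s: s["in_combat"], "combat_low"),
    (lambda s: s["player_health"] < 25, "explore_tense"),
    (lambda s: True, "explore_calm"),  # default fallback
]

def next_segment(state):
    """Return the first music segment whose rule matches the game state."""
    for condition, segment in RULES:
        if condition(state):
            return segment

print(next_segment({"in_combat": True, "enemies": 5, "player_health": 80}))
# combat_high
print(next_segment({"in_combat": False, "enemies": 0, "player_health": 90}))
# explore_calm
```

In a real engine the selector would also handle musical transitions – waiting for a bar line or crossfading – rather than switching segments instantly, but the decision logic is the same kind of rule evaluation.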
In fact, the approach isn’t dissimilar to a musical parlour game called Musikalisches Würfelspiel, or musical dice, a game that’s been played for centuries and was reportedly enjoyed by Mozart. The idea was for the composer to write musical fragments that could be pieced back together in any order, and for a new piece to be assembled from those fragments by rolling dice. Below I’ve included a document that presents some additional background to the musical dice game, some musical fragments, and a series of short compositions composed from them. What I’d like you to do is listen through to both the fragments and the finished compositions. Firstly, can you identify the fragments in the final compositions?
How different or similar are those final compositions? What are the changes between them? Is it the musical structure or is it the fine detail of the work? What, then, do you think are the strengths and the limitations of this sort of rule-driven approach to composition? Post your thoughts to the discussion group, and, if you’re feeling adventurous, you might like to have a go at writing some new musical fragments and playing your own game of musical dice.

In the 1990s and early 2000s, interactive music once again started to appear in video games.

Over time, specialist software packages known as middleware, or sound engines, were developed to provide a set of tools to help composers write nonlinear music that would adapt in real time to gameplay. In this video, we’ll explore how that fusion of the underlying technology and the produced sound has given us the polished video game music that can be heard today.

Once you’ve completed the video step, follow the link to the article step that follows, where we’ll look at an approach to writing adaptive music, Musikalisches Würfelspiel, or musical dice.

This approach allows a composer to write adaptive music by first writing a series of musical fragments that can be pieced back together in any order, and then using a random process, such as the roll of a die, to sequence the fragments into a longer composition.
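The dice process above can be sketched in a few lines of code. This is only an illustration of the sequencing idea, not the historical game: the fragment names are placeholders for real musical snippets, and a single lookup table stands in for the per-bar tables a published Würfelspiel would use.

```python
import random

# A sketch of the musical dice idea: two dice are rolled once per bar,
# and the total (2-12) selects which pre-composed fragment plays in
# that bar. Fragment names here are placeholders, not real music.
FRAGMENTS = {total: f"fragment_{total:02d}" for total in range(2, 13)}

def roll_two_dice(rng):
    """Sum of two six-sided dice: a value between 2 and 12."""
    return rng.randint(1, 6) + rng.randint(1, 6)

def compose(bars=8, seed=None):
    """Sequence one fragment per bar by rolling two dice per bar."""
    rng = random.Random(seed)  # seedable, so a piece can be re-rolled
    return [FRAGMENTS[roll_two_dice(rng)] for _ in range(bars)]

piece = compose(bars=8, seed=42)
print(piece)
```

Because two dice favour middle totals, some fragments will turn up more often than others – one of the quirks worth keeping in mind when you think about the strengths and limitations of rule-driven composition.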

You can find supporting documentation and a link to some music files on the next step.

This article is from the free online course Video Game Design and Development: A Bit-by-Bit History of Video Game Music: Video Game Sound and Music, created by FutureLearn.
