
Week 1 Recap and Look-Ahead to Week 2

Before we move on to look at this week’s activities, let’s just take a moment to recap last week’s material.

The Atari VCS

Atari’s VCS, launched in September 1977, was a very popular machine and did a lot to popularise home gaming. It was constrained by both cost and technology, and its combined video and sound chip, the TIA, tied sound generation to the video display’s timing, limiting its musical possibilities: the chip’s coarse frequency dividers could only approximate most musical pitches, so many notes came out audibly out of tune. Nevertheless, composers developed innovative workarounds to these limitations and created some genuinely interesting game soundtracks. This period also saw the very first professional video game composers emerge.
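
To see why, here’s a minimal Python sketch of the TIA’s tuning problem. It assumes the commonly cited NTSC audio clock of roughly 31,440 Hz and a tone setting where pitch = clock ÷ (6 × (AUDF + 1)); AUDF is the chip’s 5-bit frequency register, so only 32 pitches are available per setting, and the exact constants vary by source and video standard.

```python
# Sketch: why the TIA struggles to play in tune.
# Assumptions (illustrative, not definitive): an NTSC audio clock of
# ~31,440 Hz and a tone setting where pitch = clock / (6 * (AUDF + 1)),
# with AUDF being the TIA's 5-bit frequency register (0-31).

import math

AUDIO_CLOCK = 31_440.0   # approximate NTSC TIA audio clock, in Hz
DIVIDER = 6              # divider for the assumed tone setting

def tia_frequency(audf: int) -> float:
    """Pitch produced by a given AUDF register value (0-31)."""
    return AUDIO_CLOCK / (DIVIDER * (audf + 1))

def nearest_tia_pitch(target_hz: float) -> tuple[int, float, float]:
    """Closest available TIA pitch to a target frequency.
    Returns (audf, actual_hz, error_in_cents)."""
    audf = min(range(32), key=lambda a: abs(tia_frequency(a) - target_hz))
    actual = tia_frequency(audf)
    cents = 1200.0 * math.log2(actual / target_hz)
    return audf, actual, cents

# Compare equal-tempered notes with the nearest pitch the TIA can make.
for name, hz in [("G4", 392.00), ("A4", 440.00), ("C5", 523.25)]:
    audf, actual, cents = nearest_tia_pitch(hz)
    print(f"{name}: want {hz:6.2f} Hz, get {actual:6.2f} Hz "
          f"(AUDF={audf:2d}, {cents:+6.1f} cents off)")
```

Running this shows some notes landing within a few cents of true pitch while others miss by nearly a quarter-tone, which is exactly the problem VCS composers had to write around.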

The Sinclair ZX Spectrum

The ZX Spectrum was the machine that launched the home gaming craze in the UK. In terms of sound and music, the Spectrum was even more limited than the Atari, offering just a single channel of square-wave tones with no way of varying the loudness. Again, though, developers responded with some very innovative coding techniques that pushed the Spectrum’s beeper well beyond its limits. In particular, using a method known as pulse-width modulation, which toggles the speaker rapidly so that its average output level can be controlled, game developers were able to create sophisticated, multichannel music with percussion and effects.
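
As an illustration of the idea (not actual Z80 beeper-engine code), here’s a minimal Python sketch: two square-wave voices are summed, and the mixed level within each short frame sets the duty cycle of the 1-bit output. The sample rate and carrier period are arbitrary choices; real Spectrum routines did this with cycle-counted machine code driving the speaker port directly.

```python
# Sketch: two square-wave voices mixed onto a 1-bit "beeper" output
# via pulse-width modulation. Illustrative only: real Spectrum engines
# did this in hand-timed Z80 assembly, and the rates here are arbitrary.

SAMPLE_RATE = 44_100    # output samples per second
FRAME = 8               # output samples per PWM frame

def square(t: float, freq: float) -> float:
    """Naive square-wave voice with amplitude 0.0..1.0."""
    return 1.0 if (t * freq) % 1.0 < 0.5 else 0.0

def beeper_bits(freqs: list[float], seconds: float) -> list[int]:
    """Render a 1-bit stream whose average level within each PWM
    frame tracks the summed amplitude of all the voices."""
    bits = []
    for frame in range(int(seconds * SAMPLE_RATE / FRAME)):
        t = frame * FRAME / SAMPLE_RATE
        level = sum(square(t, f) for f in freqs) / len(freqs)  # 0.0..1.0
        high = round(level * FRAME)          # duty cycle for this frame
        bits.extend([1] * high + [0] * (FRAME - high))
    return bits

# A two-note chord (A4 + E5) from a single on/off output.
stream = beeper_bits([440.0, 659.26], seconds=0.01)
print(stream[:48])
```

Because the speaker and the listener’s ear act as a low-pass filter, the average level of each frame is what’s actually heard, which is how a single on/off output can carry more than one voice.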

The Commodore 64

The Spectrum’s big rival was Commodore’s 64. Boasting more colourful graphics, more memory, and a sound chip, the SID, that was in a completely different league from either the Atari’s or the Spectrum’s, the 64 transformed the sound of video game music. No longer was it enough to tack a bleepy dirge onto the title screen of a game. Composers like Rob Hubbard brought real musicianship to video games, creating soundtracks with catchy melodies, strong bass lines, and driving percussion. They riffed off popular music, something that was only really made possible by the synth-like capabilities of the 64’s SID chip: three independent voices with selectable waveforms, volume envelopes, and a filter. In turn, that allowed game music to become an important meta-layer of narrative supporting the gameplay.
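
To make “synth-like” a little more concrete, here’s a small Python sketch of one SID-style voice: a pulse-wave oscillator with a variable duty cycle, shaped by an ADSR volume envelope. This is an illustration of the concept rather than an emulation of the chip, and every number in it is arbitrary.

```python
# Sketch: one "SID-like" voice: a pulse-wave oscillator shaped by a
# piecewise-linear ADSR envelope. The real SID offered three such
# voices in hardware, plus a filter; all the numbers here are arbitrary.

SAMPLE_RATE = 44_100

def pulse(t: float, freq: float, duty: float) -> float:
    """Pulse oscillator: +1 or -1 depending on phase vs duty cycle."""
    return 1.0 if (t * freq) % 1.0 < duty else -1.0

def adsr(t: float, attack: float, decay: float, sustain: float,
         note_len: float, release: float) -> float:
    """Attack/decay/sustain/release volume envelope, 0.0..1.0."""
    if t < attack:
        return t / attack
    if t < attack + decay:
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_len:
        return sustain
    return max(0.0, sustain * (1.0 - (t - note_len) / release))

def render(freq: float, duty: float, seconds: float) -> list[float]:
    """One enveloped note as a list of samples."""
    return [pulse(i / SAMPLE_RATE, freq, duty)
            * adsr(i / SAMPLE_RATE, 0.01, 0.05, 0.6, seconds - 0.1, 0.1)
            for i in range(int(seconds * SAMPLE_RATE))]

note = render(freq=440.0, duty=0.25, seconds=0.5)  # a thin, reedy pulse tone
```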

If you have any thoughts or observations about what you’ve learned in week 1, please post them to the comments section below.

Looking ahead

Looking ahead to this week’s activities, we’re going to focus on how video game music changed from a relatively fixed and linear format into something non-linear that could adapt in real time to player input and better reflect the gameplay, an approach that really came into its own on Nintendo’s NES.

We’ll investigate how video game music started to move away from synthetic sounds towards more natural and expressive ones: first by triggering samples using the Paula chip in Commodore’s Amiga, and then by streaming pre-recorded music tracks directly from CD-ROM on consoles like Sony’s PlayStation.
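
The Paula approach is worth a quick sketch. Rather than resampling in software, Paula steps through a sample at clock ÷ period samples per second, so a tracker changes a note’s pitch simply by writing a different period value. In the minimal Python sketch below, the PAL clock constant and the period 428 (conventionally the note C-3 in ProTracker) are the commonly cited figures; the nearest-neighbour playback and the placeholder sample data are simplifications.

```python
# Sketch: Paula-style sample playback, where pitch comes from a period
# register rather than software resampling. The clock constant and the
# period-to-note mapping are the conventional figures; the sample data
# is a placeholder and nearest-neighbour stepping is a simplification.

PAULA_CLOCK_PAL = 3_546_895  # PAL Amiga Paula clock, in Hz
OUTPUT_RATE = 44_100         # rate we render to, for illustration

def play_sample(sample: list[float], period: int,
                seconds: float) -> list[float]:
    """Step through a one-shot sample at (clock / period) Hz."""
    step = (PAULA_CLOCK_PAL / period) / OUTPUT_RATE  # source samples per output sample
    out, pos = [], 0.0
    for _ in range(int(seconds * OUTPUT_RATE)):
        out.append(sample[int(pos)] if pos < len(sample) else 0.0)
        pos += step
    return out

# Period 428 is conventionally C-3 in ProTracker (a rate of about 8287 Hz).
drum = [0.0] * 1000                       # placeholder sample data
rendered = play_sample(drum, period=428, seconds=0.1)
```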

As technology continued to evolve, the scope of those soundtracks increased, and specialist tools, audio engines, and middleware were developed to allow music specialists to compose and implement complex, non-linear music sequences.

We’ll look at how middleware packages like FMOD and Wwise fused that professionally produced sound with the kind of adaptive features pioneered by games like Super Mario Bros.
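
As a flavour of what that middleware does, here’s a generic Python sketch of vertical layering, one common adaptive-music technique: the game feeds the audio engine a single intensity parameter, and individual music stems fade in and out across different bands of it. The stem names and thresholds are invented for illustration; this shows the concept, not the actual FMOD or Wwise API.

```python
# Sketch: "vertical layering" driven by a single gameplay intensity
# parameter. Stem names and fade bands are invented for illustration;
# this shows the concept, not the FMOD or Wwise API.

def layer_gains(intensity: float) -> dict[str, float]:
    """Map a 0.0..1.0 intensity value to per-stem volumes. Each stem
    fades in linearly over its own band, so layers stack up as the
    action ramps up."""
    bands = {                      # stem: (fade-in start, fade-in end)
        "pads":       (0.0, 0.0),  # always on
        "percussion": (0.2, 0.4),
        "bass":       (0.4, 0.6),
        "lead":       (0.7, 0.9),
    }
    gains = {}
    for stem, (lo, hi) in bands.items():
        if hi <= lo:               # degenerate band: always at full volume
            gains[stem] = 1.0
        else:
            gains[stem] = min(1.0, max(0.0, (intensity - lo) / (hi - lo)))
    return gains

print(layer_gains(0.1))   # calm exploration: only the pads are audible
print(layer_gains(0.85))  # heavy combat: all stems up, lead still fading in
```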

We’ll conclude by looking at how these specialist technologies have created niches in the industry. It’s not just that there are individuals who specialise in video game sound and music, but whole production studios – a new sector of the market – have grown up to service this area of the games industry.

This article is from the free online course Game Design and Development: A Bit-by-Bit History of Video Game Music, by Abertay University.
