This page covers game audio engineering — from audio fundamentals to professional middleware like FMOD and Wwise.
For engine-specific audio nodes see Godot, Unity, Unreal Engine.
For audio assets and free sounds see Free Assets.
For game design context see Game Design.
History
How: Game audio evolved from single-channel beeps (Pong, 1972) to full orchestral scores, spatial 3D audio, and adaptive music systems that react to gameplay in real time.
Who: Pioneers include Koji Kondo (Mario, Zelda), Nobuo Uematsu (Final Fantasy), and audio middleware companies like Firelight Technologies (FMOD) and Audiokinetic (Wwise).
Why: Audio is often described as half of the player experience. Poor audio breaks immersion instantly. Great audio makes players feel emotions they can’t explain — tension, joy, dread, triumph.
Timeline
timeline
title Game Audio Evolution
1972 : Pong
: Single beep sounds
: No music
1985 : NES Era
: Chiptune music
: 4-channel synthesis
1990s : CD-ROM Era
: Recorded audio
: Full orchestral scores
2000s : Middleware Era
: FMOD and Wwise emerge
: Adaptive music begins
2010s : Spatial Audio
: Dolby Atmos in games
: HRTF for headphones
2020s : Procedural and AI Audio
: Real-time synthesis
: AI-generated adaptive music
Introduction
Game audio is the discipline of creating, implementing, and optimizing all sound in a game — music, sound effects, voice acting, ambient sound, and UI audio.
Unlike film audio (linear, fixed), game audio must be interactive and adaptive — responding to player actions, game state, and environment in real time.
Game Audio Knowledge Map
mindmap
root((Game Audio))
Fundamentals
Digital Audio
Signal Flow
File Formats
Compression
Sound Design
SFX Creation
Foley
Voice Acting
Procedural Audio
Music
Adaptive Music
Vertical Remixing
Horizontal Resequencing
Stingers
Spatial Audio
3D Positioning
HRTF
Occlusion
Reverb Zones
Middleware
FMOD Studio
Wwise
Engine Native
Implementation
Audio Buses
Mixing
DSP Effects
Optimization
Audio Team Roles

| Role | Responsibilities |
| --- | --- |
| Composer | Writes and produces music — score, themes, adaptive systems |
| Audio Programmer | Implements audio in engine — middleware integration, DSP, optimization |
| Voice Director | Directs voice acting sessions, manages dialogue |
| Audio Lead | Oversees all audio, sets technical standards |
Why Audio Matters
| Audio Element | Player Emotion It Creates |
| --- | --- |
| Tense music + silence | Dread, anticipation |
| Satisfying hit sounds | Power, impact |
| Ambient environment | Immersion, presence |
| UI feedback sounds | Confidence, clarity |
| Adaptive music swell | Excitement, triumph |
| Silence at the right moment | Shock, weight |
Audio Fundamentals
Digital Audio Basics
graph LR
Sound["🔊 Sound Wave\nAnalog pressure wave"] -->|"ADC\nAnalog to Digital"| Digital["💾 Digital Audio\nSamples over time"]
Digital -->|"DAC\nDigital to Analog"| Speaker["🔈 Speaker\nBack to analog"]
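The ADC/DAC path above can be made concrete: digital audio is just a stream of numbered samples taken at a fixed rate. A minimal sketch (the function name `generateSine` is illustrative, not from any engine API) that synthesizes a 440 Hz tone as 16-bit PCM:

```cpp
#include <cstdint>
#include <cmath>
#include <vector>

// Generate `seconds` of a sine tone as 16-bit PCM samples.
// sampleRate: samples per second (44100 = CD quality).
std::vector<int16_t> generateSine(double freqHz, double seconds,
                                  int sampleRate = 44100, double amplitude = 0.5) {
    constexpr double kPi = 3.14159265358979323846;
    const size_t count = static_cast<size_t>(seconds * sampleRate);
    std::vector<int16_t> samples(count);
    for (size_t i = 0; i < count; ++i) {
        double t = static_cast<double>(i) / sampleRate;           // time in seconds
        double s = amplitude * std::sin(2.0 * kPi * freqHz * t);  // -1..1 range
        samples[i] = static_cast<int16_t>(s * 32767.0);           // quantize to 16-bit
    }
    return samples;
}
```

Higher sample rates capture higher frequencies (Nyquist: up to half the sample rate); more bits per sample mean lower quantization noise.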
Wwise Core Concepts

| Concept | Description |
| --- | --- |
| State | Global game state that affects audio (e.g., InCombat, Exploring) |
| RTPC | Real-Time Parameter Control — float value driving audio |
| Blend Container | Blend between sounds based on RTPC value |
| Music Segment | A section of interactive music |
| Music Playlist | Sequence or random selection of music segments |
| Music Switch | Transition between music states |
| Bus | Routing channel with effects |
| Aux Bus | Send effects bus (reverb, delay) |
Wwise C++ Integration
```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkMemoryMgr.h>
#include <AK/MusicEngine/Common/AkMusicEngine.h>

// Initialize Wwise
void AudioManager::Init() {
    // Memory manager
    AkMemSettings memSettings;
    AK::MemoryMgr::GetDefaultSettings(memSettings);
    AK::MemoryMgr::Init(&memSettings);

    // Sound engine
    AkInitSettings initSettings;
    AkPlatformInitSettings platformSettings;
    AK::SoundEngine::GetDefaultInitSettings(initSettings);
    AK::SoundEngine::GetDefaultPlatformInitSettings(platformSettings);
    AK::SoundEngine::Init(&initSettings, &platformSettings);

    // Music engine
    AkMusicSettings musicSettings;
    AK::MusicEngine::GetDefaultInitSettings(musicSettings);
    AK::MusicEngine::Init(&musicSettings);

    // Load init bank (always required)
    AkBankID bankID;
    AK::SoundEngine::LoadBank(AKTEXT("Init.bnk"), bankID);
}

// Register a game object (every sound source needs one)
void AudioManager::RegisterObject(AkGameObjectID id, const char* name) {
    AK::SoundEngine::RegisterGameObj(id, name);
}

// Post an event (play a sound)
void AudioManager::PostEvent(const char* eventName, AkGameObjectID objectID) {
    AK::SoundEngine::PostEvent(eventName, objectID);
}

// Set RTPC value (drive adaptive audio)
void AudioManager::SetRTPC(const char* rtpcName, float value,
                           AkGameObjectID objectID = AK_INVALID_GAME_OBJECT) {
    AK::SoundEngine::SetRTPCValue(rtpcName, value, objectID);
}

// Set Switch (e.g., surface type for footsteps)
void AudioManager::SetSwitch(const char* switchGroup, const char* switchState,
                             AkGameObjectID objectID) {
    AK::SoundEngine::SetSwitch(switchGroup, switchState, objectID);
}

// Set State (global game state)
void AudioManager::SetState(const char* stateGroup, const char* state) {
    AK::SoundEngine::SetState(stateGroup, state);
}

// Update 3D position
void AudioManager::SetPosition(AkGameObjectID id, float x, float y, float z) {
    AkSoundPosition pos;
    pos.SetPosition(x, y, z);
    pos.SetOrientation(0, 0, 1, 0, 1, 0); // forward, up
    AK::SoundEngine::SetPosition(id, pos);
}

// Update — call every frame
void AudioManager::Update() {
    AK::SoundEngine::RenderAudio();
}
```
Adaptive Music Systems
Adaptive Music Techniques
graph TD
subgraph Horizontal["🔄 Horizontal Resequencing"]
H1["Segment A\nExploration"] -->|"Enemy spotted"| H2["Segment B\nTension"]
H2 -->|"Combat starts"| H3["Segment C\nCombat"]
H3 -->|"Enemy dead"| H4["Segment D\nVictory sting"]
H4 -->|"Return to normal"| H1
end
subgraph Vertical["🎚️ Vertical Remixing"]
V1["Base Layer\nAlways playing"]
V2["Percussion Layer\nFades in during combat"]
V3["Melody Layer\nFades in at full intensity"]
V4["Choir Layer\nBoss fight only"]
V1 --- V2 --- V3 --- V4
end
subgraph Stingers["⚡ Stingers"]
S1["Short musical phrase\nplayed on top of current music"]
S2["Enemy spotted sting"]
S3["Pickup collected sting"]
S4["Level complete fanfare"]
end
| Technique | Description | Best For |
| --- | --- | --- |
| Horizontal Resequencing | Switch between pre-composed segments | Clear state changes (combat/explore) |
| Vertical Remixing | Layer tracks in/out based on intensity | Gradual tension building |
| Stingers | Short musical phrases on top of base music | Punctuating events |
| Generative Music | Procedurally generated based on parameters | Infinite variation, ambient |
| Tempo Sync | Music tempo matches gameplay speed | Racing games, rhythm games |
Implementing Adaptive Music in FMOD
FMOD Studio Setup:
1. Create a Multi Instrument or Transition Timeline event
2. Add music segments as audio tracks
3. Create a "CombatIntensity" parameter (0.0 = calm, 1.0 = intense)
4. Add transition markers between segments
5. Set transition conditions based on parameter value
6. Add volume automation curves per layer
```cpp
// In game code — update music based on gameplay state
void GameAudio::UpdateMusicState(float combatIntensity, bool bossActive, float playerHealth) {
    // Drive intensity parameter (0 = calm, 1 = full combat)
    musicInstance.setParameterByName("CombatIntensity", combatIntensity);

    // Switch to boss music state
    if (bossActive) {
        musicInstance.setParameterByName("BossActive", 1.0f);
    }

    // Low health — add tension layer
    float tension = 1.0f - (playerHealth / maxHealth);
    musicInstance.setParameterByName("PlayerTension", tension);
}
```
Music Transition Types
| Transition Type | Description | Use Case |
| --- | --- | --- |
| Immediate | Switch instantly | Sudden shock moments |
| Beat sync | Switch on next beat | Smooth musical transitions |
| Bar sync | Switch on next bar | Natural musical phrasing |
| Phrase sync | Switch on next phrase (4/8 bars) | Seamless music changes |
| Fade crossfade | Fade out old, fade in new | Gradual state changes |
| Stinger bridge | Play a connecting phrase then switch | Cinematic transitions |
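Beat and bar sync reduce to the same arithmetic: from the tempo and current playback position, compute the delay until the next musical boundary. Middleware like FMOD and Wwise handles this for you via quantized transitions; this sketch just shows the underlying math:

```cpp
#include <cmath>

// Time (seconds) to wait before triggering a transition on the next beat.
// For bar sync, multiply beatLen by beats-per-bar (e.g., 4 in 4/4 time).
double secondsUntilNextBeat(double positionSec, double bpm) {
    double beatLen = 60.0 / bpm;                    // one beat in seconds
    double beatsElapsed = positionSec / beatLen;
    double nextBeat = std::ceil(beatsElapsed);      // next whole beat index
    if (nextBeat == beatsElapsed) nextBeat += 1.0;  // exactly on a beat: wait one full beat
    return nextBeat * beatLen - positionSec;
}
```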
3D Spatial Audio
Spatial Audio Pipeline
graph LR
Source["🔊 Sound Source\n3D position in world"]
Atten["📉 Attenuation\nVolume by distance"]
Pan["↔️ Panning\nLeft/right by angle"]
Doppler["🚗 Doppler\nPitch by velocity"]
Occlude["🧱 Occlusion\nMuffled through walls"]
Reverb["🏛️ Reverb\nRoom acoustics"]
HRTF["🎧 HRTF\nHead-related transfer\nfor headphones"]
Output["🎧 Output"]
Source --> Atten --> Pan --> Doppler --> Occlude --> Reverb --> HRTF --> Output
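The panning stage in the pipeline above is typically implemented as constant-power panning, which keeps left² + right² ≈ 1 so sounds don't dip in loudness at center. Engines and middleware provide this built in; a standalone sketch of the math:

```cpp
#include <cmath>

// Constant-power panning: pan in [-1 (hard left), +1 (hard right)] maps to
// left/right gains along a quarter sine/cosine arc, so combined power
// (left^2 + right^2) stays constant across the stereo field.
void constantPowerPan(float pan, float& leftGain, float& rightGain) {
    const float kQuarterPi = 0.7853981634f;       // pi/4
    float angle = (pan + 1.0f) * kQuarterPi;      // 0 .. pi/2
    leftGain  = std::cos(angle);
    rightGain = std::sin(angle);
}
```

At center (pan = 0) both channels sit at ~0.707 (-3 dB) rather than 0.5, which is exactly what keeps perceived loudness constant.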
Attenuation Models
| Model | Formula | Behavior | Use Case |
| --- | --- | --- | --- |
| Linear | `volume = 1 - (dist / maxDist)` | Gradual fade | Simple games |
| Inverse | `volume = minDist / dist` | Realistic falloff | Most 3D games |
| Inverse Square | `volume = minDist² / dist²` | Physically accurate | Simulation |
| Custom curve | Designer-defined | Full control | AAA games |
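The first three models in the table translate directly into code; real engines expose them as selectable rolloff curves rather than raw functions, but the math is this simple:

```cpp
#include <algorithm>

// Linear: fades to silence at maxDist.
float linearAtten(float dist, float maxDist) {
    return std::max(0.0f, 1.0f - dist / maxDist);
}

// Inverse: full volume inside minDist, then falls off as minDist/dist.
float inverseAtten(float dist, float minDist) {
    return (dist <= minDist) ? 1.0f : minDist / dist;
}

// Inverse square: physically accurate free-field falloff (-6 dB per doubling).
float inverseSquareAtten(float dist, float minDist) {
    return (dist <= minDist) ? 1.0f : (minDist * minDist) / (dist * dist);
}
```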
```cpp
// FMOD — set 3D attributes on an event instance
FMOD_3D_ATTRIBUTES attributes = {};
attributes.position = { x, y, z };       // world position
attributes.velocity = { vx, vy, vz };    // for Doppler effect
attributes.forward = { 0, 0, 1 };        // facing direction
attributes.up = { 0, 1, 0 };             // up vector
eventInstance->set3DAttributes(&attributes);

// Set listener position (usually the camera/player)
FMOD_3D_ATTRIBUTES listenerAttribs = {};
listenerAttribs.position = { camX, camY, camZ };
listenerAttribs.forward = { camFwdX, camFwdY, camFwdZ };
listenerAttribs.up = { 0, 1, 0 };
studioSystem->setListenerAttributes(0, &listenerAttribs);
```
HRTF (Head-Related Transfer Function)
| HRTF Solution | Platform | Quality | Cost |
| --- | --- | --- | --- |
| Steam Audio | All | Excellent | Free |
| Resonance Audio (Google) | All | Very good | Free |
| FMOD Spatializer | All | Good | Included |
| Wwise Spatial Audio | All | Excellent | Included |
| Sony 360 Reality Audio | PlayStation | Excellent | Platform SDK |
| Dolby Atmos | All | Excellent | License required |
Occlusion & Obstruction
graph TD
subgraph Occlusion["🧱 Occlusion — Sound through solid wall"]
O1["Sound source"] -->|"Wall blocks direct path"| O2["Low-pass filter applied\nMuffled, bass-heavy sound"]
end
subgraph Obstruction["🪨 Obstruction — Partial blocking"]
OB1["Sound source"] -->|"Partial obstacle"| OB2["Reduced volume\nSome high frequencies lost"]
end
subgraph Reverb["🏛️ Reverb Zones"]
R1["Cave"] -->|"Long reverb tail"| R2["Echoey, deep sound"]
R3["Open field"] -->|"No reverb"| R4["Dry, direct sound"]
R5["Cathedral"] -->|"Very long reverb"| R6["Massive, ethereal sound"]
end
```csharp
// Steam Audio occlusion (Godot / Unity plugin)
// Raycast from listener to source — if blocked, apply occlusion
float occlusionFactor = 0.0f;
RaycastHit hit;
if (Physics.Linecast(listenerPos, sourcePos, out hit)) {
    occlusionFactor = 1.0f; // fully occluded
}
// Apply low-pass filter based on occlusion
// (exact method name depends on the spatializer plugin in use)
audioSource.SetOcclusionFactor(occlusionFactor);
```
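How strongly to filter once you have an occlusion factor is a design choice. One common approach (the cutoff values here are illustrative, not from Steam Audio) is to interpolate the low-pass cutoff in log-frequency space, which matches how pitch is perceived better than a linear Hz sweep:

```cpp
#include <cmath>

// Map an occlusion factor (0 = clear line of sight, 1 = fully occluded)
// to a low-pass cutoff frequency, interpolating in log-frequency space.
float occlusionToCutoffHz(float occlusion,
                          float openHz = 20000.0f, float occludedHz = 800.0f) {
    float t = std::fmin(std::fmax(occlusion, 0.0f), 1.0f); // clamp to [0, 1]
    return openHz * std::pow(occludedHz / openHz, t);      // log-space lerp
}
```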
DSP Effects
Essential DSP Effects
| Effect | What It Does | Game Use Case |
| --- | --- | --- |
| EQ (Equalizer) | Boost/cut specific frequencies | Muffled underwater sound, radio effect |
| Low-Pass Filter | Remove high frequencies | Occlusion, underwater, muffled |
| High-Pass Filter | Remove low frequencies | Thin, distant sounds |
| Reverb | Simulate room acoustics | Caves, cathedrals, open spaces |
| Delay/Echo | Repeat sound with time offset | Canyons, large spaces |
| Compression | Reduce dynamic range | Consistent volume, punch |
| Limiter | Hard ceiling on volume | Prevent clipping |
| Distortion | Harmonic saturation | Radio, damaged equipment |
| Chorus | Slight pitch/time variations | Thicken sounds, underwater |
| Flanger | Comb filtering effect | Sci-fi, metallic sounds |
| Pitch Shift | Change pitch without changing duration | Slow-motion, speed effects |
| Convolution Reverb | Real room impulse responses | Photorealistic acoustics |
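Several of these effects are only a few lines of math. As an example, a one-pole low-pass filter — the cheapest way to get the "muffled" sound used for occlusion and underwater states — is a sketch, not production DSP:

```cpp
#include <cmath>

// One-pole low-pass filter: the simplest DSP building block.
// cutoffHz sets where high frequencies start rolling off.
class OnePoleLowPass {
public:
    OnePoleLowPass(float cutoffHz, float sampleRate) {
        // Standard one-pole coefficient derivation
        float x = std::exp(-2.0f * 3.14159265f * cutoffHz / sampleRate);
        a0 = 1.0f - x;
        b1 = x;
    }
    // Process one sample: output smoothly tracks the input,
    // lagging behind fast (high-frequency) changes.
    float process(float in) {
        z1 = in * a0 + z1 * b1;
        return z1;
    }
private:
    float a0 = 0.0f, b1 = 0.0f, z1 = 0.0f; // coefficients + one sample of state
};
```

Feeding it white noise with a low cutoff produces exactly the bass-heavy rumble described in the occlusion diagram above.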
Reverb Design by Environment
| Environment | Pre-delay | Decay Time | Diffusion | Character |
| --- | --- | --- | --- | --- |
| Small room | 5–10ms | 0.3–0.5s | High | Tight, intimate |
| Large hall | 20–40ms | 1.5–3s | Medium | Grand, spacious |
| Cathedral | 40–80ms | 4–8s | Low | Massive, ethereal |
| Cave | 10–30ms | 1–3s | Low | Dark, echoey |
| Open field | 0–5ms | 0.1–0.3s | High | Dry, natural |
| Underwater | 0ms | 0.5–1s | Very high | Muffled, swirling |
| Metal room | 5ms | 0.2–0.4s | Low | Bright, ringy |
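Tables like this usually end up as data in code. A sketch encoding a few rows as presets that a reverb-zone trigger could look up (struct and field names are illustrative; numeric values are midpoints of the ranges above, with diffusion mapped High = 0.9, Medium = 0.5, Low = 0.2):

```cpp
#include <map>
#include <string>

// Reverb presets derived from the design table, as data a
// reverb-zone system could apply when the listener enters a zone.
struct ReverbPreset {
    float preDelayMs;    // gap before first reflection
    float decayTimeSec;  // how long the tail rings out
    float diffusion;     // 0..1, echo density
};

const std::map<std::string, ReverbPreset> kReverbPresets = {
    {"small_room", { 7.5f, 0.40f, 0.9f}},
    {"large_hall", {30.0f, 2.25f, 0.5f}},
    {"cathedral",  {60.0f, 6.00f, 0.2f}},
    {"cave",       {20.0f, 2.00f, 0.2f}},
    {"open_field", { 2.5f, 0.20f, 0.9f}},
};
```

Keeping presets as data rather than hard-coded branches lets sound designers tune them without touching gameplay code.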
DSP in FMOD (GDScript)
```gdscript
# Apply DSP effects to FMOD buses in Godot
# FMOD Studio handles most DSP in the designer tool
# But you can also apply effects via snapshots

func enter_cave() -> void:
    # Activate cave reverb snapshot
    FMODStudio.set_parameter_by_name("Environment", 1.0)  # 0 = outside, 1 = cave

func go_underwater() -> void:
    # Activate underwater snapshot (low-pass + chorus)
    FMODStudio.set_parameter_by_name("Underwater", 1.0)

func take_damage() -> void:
    # Activate low-health snapshot (muffled, heartbeat)
    FMODStudio.set_parameter_by_name("PlayerHealth", float(current_health) / float(max_health))
```
DSP in Godot (Native)
```gdscript
# Godot native audio effects on buses
# Project → Project Settings → Audio → Add buses
extends Node

func _ready() -> void:
    # Get the SFX bus index
    var sfx_bus = AudioServer.get_bus_index("SFX")

    # Add a reverb effect to the SFX bus
    var reverb = AudioEffectReverb.new()
    reverb.room_size = 0.8
    reverb.damping = 0.5
    reverb.wet = 0.3
    AudioServer.add_bus_effect(sfx_bus, reverb)

    # Add a low-pass filter (for occlusion)
    var lowpass = AudioEffectLowPassFilter.new()
    lowpass.cutoff_hz = 800.0  # muffled
    AudioServer.add_bus_effect(sfx_bus, lowpass)

func set_underwater(active: bool) -> void:
    var sfx_bus = AudioServer.get_bus_index("SFX")
    # Enable/disable the low-pass filter effect (index 1 = second effect added above)
    AudioServer.set_bus_effect_enabled(sfx_bus, 1, active)
```
Engine Native Audio
Godot Audio System
graph TD
subgraph Nodes["Audio Nodes"]
ASP["AudioStreamPlayer\nNon-positional\nMusic, UI sounds"]
ASP2["AudioStreamPlayer2D\nPositional 2D audio"]
ASP3["AudioStreamPlayer3D\nPositional 3D audio"]
end
subgraph Buses["Audio Bus Layout"]
Master["Master Bus"]
Music["Music Bus"]
SFX["SFX Bus"]
Voice["Voice Bus"]
Ambient["Ambient Bus"]
Music --> Master
SFX --> Master
Voice --> Master
Ambient --> Master
end
Voice Management

| Technique | Description |
| --- | --- |
| Voice limit | Set max simultaneous voices (e.g., 64 SFX, 4 music) |
| Voice stealing | Stop lowest priority voice when limit reached |
| Distance culling | Don’t play sounds beyond max audible distance |
| Priority system | High priority (player, boss) never stolen |
| Virtualization | Distant sounds tracked but not mixed — resume when close |
```cpp
// FMOD — limit simultaneous instances of an event
// In FMOD Studio: Event → Max Instances → set limit
// In code: check if event is already playing
FMOD::Studio::EventDescription* desc;
studioSystem->getEvent("event:/SFX/Footstep", &desc);

int instanceCount;
desc->getInstanceCount(&instanceCount);
if (instanceCount < 4) { // max 4 simultaneous footsteps
    // Create and play new instance
}
```
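Voice stealing itself is a small policy function. Middleware implements this internally; a sketch of the common "steal the lowest priority, quietest first" rule (types and names here are illustrative):

```cpp
#include <vector>
#include <cstddef>

struct Voice {
    int priority;  // higher = more important (player, boss, etc.)
    float volume;  // current audible volume, 0..1
};

// When all voices are in use, pick the one to steal: lowest priority,
// ties broken by quietest volume. Returns the index to reuse, or -1 if
// every active voice outranks or matches the new sound's priority.
int chooseVoiceToSteal(const std::vector<Voice>& voices, int newPriority) {
    int victim = -1;
    for (size_t i = 0; i < voices.size(); ++i) {
        if (voices[i].priority >= newPriority) continue; // never steal equal/higher
        if (victim < 0 ||
            voices[i].priority < voices[victim].priority ||
            (voices[i].priority == voices[victim].priority &&
             voices[i].volume < voices[victim].volume)) {
            victim = static_cast<int>(i);
        }
    }
    return victim;
}
```

Refusing to steal equal-or-higher priority voices is what guarantees the "player and boss sounds are never stolen" behavior from the table above.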