About This Page

This page covers game audio engineering — from audio fundamentals to professional middleware like FMOD and Wwise. For engine-specific audio nodes see Godot, Unity, Unreal Engine. For audio assets and free sounds see Free Assets. For game design context see Game Design.

History

  • How: Game audio evolved from single-channel beeps (Pong, 1972) to full orchestral scores, spatial 3D audio, and adaptive music systems that react to gameplay in real time.
  • Who: Pioneers include Koji Kondo (Mario, Zelda), Nobuo Uematsu (Final Fantasy), and audio middleware companies like Firelight Technologies (FMOD) and Audiokinetic (Wwise).
  • Why: Audio is often described as half of the player experience. Poor audio breaks immersion instantly. Great audio makes players feel emotions they can’t explain — tension, joy, dread, triumph.

Timeline

timeline
    title Game Audio Evolution
    1972 : Pong
         : Single beep sounds
         : No music
    1985 : NES Era
         : Chiptune music
         : 4-channel synthesis
    1990s : CD-ROM Era
         : Recorded audio
         : Full orchestral scores
    2000s : Middleware Era
         : FMOD and Wwise emerge
         : Adaptive music begins
    2010s : Spatial Audio
         : Dolby Atmos in games
         : HRTF for headphones
    2020s : Procedural and AI Audio
         : Real-time synthesis
         : AI-generated adaptive music

Introduction

  • Game audio is the discipline of creating, implementing, and optimizing all sound in a game — music, sound effects, voice acting, ambient sound, and UI audio.
  • Unlike film audio (linear, fixed), game audio must be interactive and adaptive — responding to player actions, game state, and environment in real time.

Game Audio Knowledge Map

mindmap
  root((Game Audio))
    Fundamentals
      Digital Audio
      Signal Flow
      File Formats
      Compression
    Sound Design
      SFX Creation
      Foley
      Voice Acting
      Procedural Audio
    Music
      Adaptive Music
      Vertical Remixing
      Horizontal Resequencing
      Stingers
    Spatial Audio
      3D Positioning
      HRTF
      Occlusion
      Reverb Zones
    Middleware
      FMOD Studio
      Wwise
      Engine Native
    Implementation
      Audio Buses
      Mixing
      DSP Effects
      Optimization

Audio vs Music vs Sound Design

| Role | Responsibility |
| --- | --- |
| Sound Designer | Creates SFX — weapons, footsteps, UI, ambient, foley |
| Composer | Writes and produces music — score, themes, adaptive systems |
| Audio Programmer | Implements audio in engine — middleware integration, DSP, optimization |
| Voice Director | Directs voice acting sessions, manages dialogue |
| Audio Lead | Oversees all audio, sets technical standards |

Why Audio Matters

| Audio Element | Player Emotion It Creates |
| --- | --- |
| Tense music + silence | Dread, anticipation |
| Satisfying hit sounds | Power, impact |
| Ambient environment | Immersion, presence |
| UI feedback sounds | Confidence, clarity |
| Adaptive music swell | Excitement, triumph |
| Silence at the right moment | Shock, weight |

Audio Fundamentals

Digital Audio Basics

graph LR
    Sound["🔊 Sound Wave\nAnalog pressure wave"] -->|"ADC\nAnalog to Digital"| Digital["💾 Digital Audio\nSamples over time"]
    Digital -->|"DAC\nDigital to Analog"| Speaker["🔈 Speaker\nBack to analog"]

| Concept | Description | Common Values |
| --- | --- | --- |
| Sample Rate | Samples per second | 44,100 Hz (CD), 48,000 Hz (game/film standard) |
| Bit Depth | Bits per sample — dynamic range | 16-bit (CD), 24-bit (production) |
| Channels | Number of audio streams | Mono (1), Stereo (2), 5.1 (6), 7.1 (8) |
| Bitrate | Data per second (compressed) | 128–320 kbps (MP3), lossless (WAV/FLAC) |
| Latency | Delay from trigger to sound | < 10 ms ideal for games |

Audio File Formats

| Format | Type | Quality | Size | Use Case |
| --- | --- | --- | --- | --- |
| WAV | Uncompressed | Lossless | Large | SFX, source files, short clips |
| AIFF | Uncompressed | Lossless | Large | Apple equivalent of WAV |
| OGG Vorbis | Compressed | Near-lossless | Small | Music, long ambient loops |
| MP3 | Compressed | Lossy | Small | Avoid — gap at loop point |
| FLAC | Compressed | Lossless | Medium | Archival, high-quality music |
| OPUS | Compressed | Excellent | Very small | Voice, streaming, web games |
| XMA | Compressed | Good | Small | Xbox platform audio |
| ADPCM | Compressed | Good | Small | Mobile, low-memory platforms |

Signal Flow

graph LR
    Source["🎵 Audio Source\nSound file / synthesis"]
    Effects["🎛️ DSP Effects\nEQ, reverb, compression"]
    Bus["🚌 Audio Bus\nGroup channel"]
    Master["🔊 Master Bus\nFinal mix"]
    Output["🎧 Output\nSpeakers / headphones"]
    Source --> Effects --> Bus --> Master --> Output
    subgraph Buses["Bus Hierarchy"]
        SFX["SFX Bus"]
        Music["Music Bus"]
        Voice["Voice Bus"]
        Ambient["Ambient Bus"]
        SFX --> Master
        Music --> Master
        Voice --> Master
        Ambient --> Master
    end

Decibels (dB) Reference

| dB Level | Perception | Use Case |
| --- | --- | --- |
| 0 dB | Maximum (clipping threshold) | Never exceed in final mix |
| -6 dB | Half amplitude | Headroom for peaks |
| -12 dB | Comfortable listening level | Music in gameplay |
| -18 dB | Background level | Ambient sounds |
| -24 dB | Quiet background | Distant sounds |
| -∞ dB | Silence | Muted |

FMOD Studio

FMOD Architecture

graph TD
    subgraph Studio["🎛️ FMOD Studio (Designer Tool)"]
        Events["Events\nSound containers with logic"]
        Banks["Banks\nPackaged audio files"]
        Params["Parameters\nGameplay variables"]
        Mixer["Mixer\nBuses, effects, routing"]
    end
    subgraph API["💻 FMOD API (Programmer)"]
        System["FMOD::Studio::System\nInitialize FMOD"]
        EventInst["EventInstance\nPlay an event"]
        SetParam["setParameterByName\nDrive adaptive audio"]
    end
    Banks -->|"Load at runtime"| System
    Events --> EventInst
    Params --> SetParam

FMOD Core Concepts

| Concept | Description |
| --- | --- |
| Event | A sound container — can hold multiple audio tracks, logic, and parameters |
| Bank | A compiled package of events and audio assets loaded at runtime |
| Parameter | A float value set by game code that drives audio behavior |
| Snapshot | A mixer state — apply reverb, ducking, or EQ changes globally |
| Bus | A routing channel — group sounds for volume control and effects |
| VCA | Voltage-Controlled Amplifier — scales the volume of a group without affecting routing |
| Timeline | Linear playback within an event — like a DAW timeline |
| Transition | Logic for switching between audio states based on parameters |

FMOD C++ Integration

#include "fmod_studio.hpp"
#include "fmod.hpp"
 
FMOD::Studio::System* studioSystem = nullptr;
 
// Initialize FMOD Studio
void AudioManager::Init() {
    FMOD::Studio::System::create(&studioSystem);
    studioSystem->initialize(
        512,                              // max channels
        FMOD_STUDIO_INIT_NORMAL,          // studio flags
        FMOD_INIT_NORMAL,                 // core flags
        nullptr                           // extra driver data
    );
 
    // Load master bank (always load this first)
    FMOD::Studio::Bank* masterBank = nullptr;
    studioSystem->loadBankFile("Master.bank",
        FMOD_STUDIO_LOAD_BANK_NORMAL, &masterBank);
 
    // Load strings bank (for event path lookup)
    FMOD::Studio::Bank* stringsBank = nullptr;
    studioSystem->loadBankFile("Master.strings.bank",
        FMOD_STUDIO_LOAD_BANK_NORMAL, &stringsBank);
}
 
// Play a one-shot sound event
void AudioManager::PlayOneShot(const char* eventPath) {
    FMOD::Studio::EventDescription* eventDesc = nullptr;
    studioSystem->getEvent(eventPath, &eventDesc);
 
    FMOD::Studio::EventInstance* instance = nullptr;
    eventDesc->createInstance(&instance);
 
    instance->start();
    instance->release(); // auto-release when done
}
 
// Play a persistent event (music, ambient)
FMOD::Studio::EventInstance* AudioManager::PlayPersistent(const char* eventPath) {
    FMOD::Studio::EventDescription* eventDesc = nullptr;
    studioSystem->getEvent(eventPath, &eventDesc);
 
    FMOD::Studio::EventInstance* instance = nullptr;
    eventDesc->createInstance(&instance);
    instance->start();
    return instance; // caller manages lifetime
}
 
// Set a parameter to drive adaptive audio
void AudioManager::SetParameter(FMOD::Studio::EventInstance* instance,
                                 const char* name, float value) {
    instance->setParameterByName(name, value);
}
 
// Update — call every frame
void AudioManager::Update() {
    studioSystem->update();
}
 
// Cleanup
void AudioManager::Shutdown() {
    studioSystem->unloadAll();
    studioSystem->release();
}

FMOD GDScript (Godot)

# Using FMOD GDNative plugin for Godot
# https://github.com/alessandrofama/fmod-for-godot
 
extends Node
 
var music_instance: FMODStudioEventInstance
 
func _ready() -> void:
    # Load banks
    FMODStudio.load_bank("res://audio/Master.bank",
        FMODStudio.LOAD_BANK_NORMAL)
    FMODStudio.load_bank("res://audio/Master.strings.bank",
        FMODStudio.LOAD_BANK_NORMAL)
 
    # Start persistent music event
    music_instance = FMODStudio.create_event_instance(
        "event:/Music/MainTheme")
    music_instance.start()
 
func play_sfx(event_path: String) -> void:
    FMODStudio.play_one_shot(event_path)
 
func set_music_intensity(value: float) -> void:
    # Drive adaptive music with gameplay parameter
    music_instance.set_parameter_by_name("Intensity", value)
 
func _process(_delta: float) -> void:
    FMODStudio.update()

FMOD C# (Unity)

using FMODUnity;
using FMOD.Studio;
 
public class AudioManager : MonoBehaviour
{
    [SerializeField] EventReference musicEvent;
    [SerializeField] EventReference footstepEvent;
 
    private EventInstance musicInstance;
 
    void Start() {
        // Start persistent music
        musicInstance = RuntimeManager.CreateInstance(musicEvent);
        musicInstance.start();
    }
 
    public void PlayFootstep(string surface) {
        // One-shot with parameter
        EventInstance sfx = RuntimeManager.CreateInstance(footstepEvent);
        sfx.setParameterByName("Surface", surface == "grass" ? 0f : 1f);
        sfx.start();
        sfx.release(); // auto-release
    }
 
    public void SetCombatIntensity(float intensity) {
        // Drive adaptive music (0 = calm, 1 = intense combat)
        musicInstance.setParameterByName("CombatIntensity", intensity);
    }
 
    void OnDestroy() {
        musicInstance.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        musicInstance.release();
    }
}

Wwise (Audiokinetic)

FMOD vs Wwise Comparison

| Feature | FMOD Studio | Wwise |
| --- | --- | --- |
| Learning curve | Medium | High |
| UI complexity | Simpler | More complex |
| Adaptive music | Good | Excellent |
| Interactive music | Good | Industry-leading |
| Profiling tools | Good | Excellent |
| Spatial audio | Good | Excellent |
| Platform support | All major | All major |
| Free tier | Under $200K revenue | Under $150K revenue |
| Best for | Indie to mid-size | Mid-size to AAA |
| Used by | Hollow Knight, Celeste | God of War, Witcher 3 |

Wwise Core Concepts

| Concept | Description |
| --- | --- |
| Sound Object | Basic audio container — holds audio files |
| Actor-Mixer | Hierarchy for organizing and routing sounds |
| Event | Triggered action — Play, Stop, Pause, Set Switch |
| Switch | State-based audio selection (e.g., surface type) |
| State | Global game state that affects audio (e.g., InCombat, Exploring) |
| RTPC | Real-Time Parameter Control — float value driving audio |
| Blend Container | Blend between sounds based on RTPC value |
| Music Segment | A section of interactive music |
| Music Playlist | Sequence or random selection of music segments |
| Music Switch | Transition between music states |
| Bus | Routing channel with effects |
| Aux Bus | Send effects bus (reverb, delay) |

Wwise C++ Integration

#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <AK/SoundEngine/Common/AkMemoryMgr.h>
#include <AK/MusicEngine/Common/AkMusicEngine.h>
 
// Initialize Wwise
void AudioManager::Init() {
    // Memory manager
    AkMemSettings memSettings;
    AK::MemoryMgr::GetDefaultSettings(memSettings);
    AK::MemoryMgr::Init(&memSettings);
 
    // Sound engine
    AkInitSettings initSettings;
    AkPlatformInitSettings platformSettings;
    AK::SoundEngine::GetDefaultInitSettings(initSettings);
    AK::SoundEngine::GetDefaultPlatformInitSettings(platformSettings);
    AK::SoundEngine::Init(&initSettings, &platformSettings);
 
    // Music engine
    AkMusicSettings musicSettings;
    AK::MusicEngine::GetDefaultInitSettings(musicSettings);
    AK::MusicEngine::Init(&musicSettings);
 
    // Load init bank (always required). The classic SDK signature takes a
    // memory pool ID and returns the bank ID via an out parameter.
    AkBankID bankID;
    AK::SoundEngine::LoadBank(AKTEXT("Init.bnk"), AK_DEFAULT_POOL_ID, bankID);
}
 
// Register a game object (every sound source needs one)
void AudioManager::RegisterObject(AkGameObjectID id, const char* name) {
    AK::SoundEngine::RegisterGameObj(id, name);
}
 
// Post an event (play a sound)
void AudioManager::PostEvent(const char* eventName, AkGameObjectID objectID) {
    AK::SoundEngine::PostEvent(eventName, objectID);
}
 
// Set RTPC value (drive adaptive audio)
void AudioManager::SetRTPC(const char* rtpcName, float value,
                            AkGameObjectID objectID = AK_INVALID_GAME_OBJECT) {
    AK::SoundEngine::SetRTPCValue(rtpcName, value, objectID);
}
 
// Set Switch (e.g., surface type for footsteps)
void AudioManager::SetSwitch(const char* switchGroup, const char* switchState,
                              AkGameObjectID objectID) {
    AK::SoundEngine::SetSwitch(switchGroup, switchState, objectID);
}
 
// Set State (global game state)
void AudioManager::SetState(const char* stateGroup, const char* state) {
    AK::SoundEngine::SetState(stateGroup, state);
}
 
// Update 3D position
void AudioManager::SetPosition(AkGameObjectID id, float x, float y, float z) {
    AkSoundPosition pos;
    pos.SetPosition(x, y, z);
    pos.SetOrientation(0, 0, 1, 0, 1, 0); // forward, up
    AK::SoundEngine::SetPosition(id, pos);
}
 
// Update — call every frame
void AudioManager::Update() {
    AK::SoundEngine::RenderAudio();
}

Adaptive Music Systems

Adaptive Music Techniques

graph TD
    subgraph Horizontal["🔄 Horizontal Resequencing"]
        H1["Segment A\nExploration"] -->|"Enemy spotted"| H2["Segment B\nTension"]
        H2 -->|"Combat starts"| H3["Segment C\nCombat"]
        H3 -->|"Enemy dead"| H4["Segment D\nVictory sting"]
        H4 -->|"Return to normal"| H1
    end
    subgraph Vertical["🎚️ Vertical Remixing"]
        V1["Base Layer\nAlways playing"]
        V2["Percussion Layer\nFades in during combat"]
        V3["Melody Layer\nFades in at full intensity"]
        V4["Choir Layer\nBoss fight only"]
        V1 --- V2 --- V3 --- V4
    end
    subgraph Stingers["⚡ Stingers"]
        S1["Short musical phrase\nplayed on top of current music"]
        S2["Enemy spotted sting"]
        S3["Pickup collected sting"]
        S4["Level complete fanfare"]
    end

| Technique | Description | Best For |
| --- | --- | --- |
| Horizontal Resequencing | Switch between pre-composed segments | Clear state changes (combat/explore) |
| Vertical Remixing | Layer tracks in/out based on intensity | Gradual tension building |
| Stingers | Short musical phrases on top of base music | Punctuating events |
| Generative Music | Procedurally generated based on parameters | Infinite variation, ambient |
| Tempo Sync | Music tempo matches gameplay speed | Racing games, rhythm games |

Implementing Adaptive Music in FMOD

FMOD Studio Setup:
1. Create a Multi Instrument or Transition Timeline event
2. Add music segments as audio tracks
3. Create a "CombatIntensity" parameter (0.0 = calm, 1.0 = intense)
4. Add transition markers between segments
5. Set transition conditions based on parameter value
6. Add volume automation curves per layer
// In game code — update music based on gameplay state
void GameAudio::UpdateMusicState(float combatIntensity,
                                  bool bossActive,
                                  float playerHealth) {
    // Drive intensity parameter (0 = calm, 1 = full combat)
    musicInstance->setParameterByName("CombatIntensity", combatIntensity);

    // Switch boss music state on or off
    musicInstance->setParameterByName("BossActive", bossActive ? 1.0f : 0.0f);

    // Low health — add tension layer (maxHealth is a member of GameAudio)
    float tension = 1.0f - (playerHealth / maxHealth);
    musicInstance->setParameterByName("PlayerTension", tension);
}

Music Transition Types

| Transition Type | Description | Use Case |
| --- | --- | --- |
| Immediate | Switch instantly | Sudden shock moments |
| Beat sync | Switch on next beat | Smooth musical transitions |
| Bar sync | Switch on next bar | Natural musical phrasing |
| Phrase sync | Switch on next phrase (4/8 bars) | Seamless music changes |
| Fade crossfade | Fade out old, fade in new | Gradual state changes |
| Stinger bridge | Play a connecting phrase then switch | Cinematic transitions |

3D Spatial Audio

Spatial Audio Pipeline

graph LR
    Source["🔊 Sound Source\n3D position in world"]
    Atten["📉 Attenuation\nVolume by distance"]
    Pan["↔️ Panning\nLeft/right by angle"]
    Doppler["🚗 Doppler\nPitch by velocity"]
    Occlude["🧱 Occlusion\nMuffled through walls"]
    Reverb["🏛️ Reverb\nRoom acoustics"]
    HRTF["🎧 HRTF\nHead-related transfer\nfor headphones"]
    Output["🎧 Output"]
    Source --> Atten --> Pan --> Doppler --> Occlude --> Reverb --> HRTF --> Output

Attenuation Models

| Model | Formula | Behavior | Use Case |
| --- | --- | --- | --- |
| Linear | volume = 1 - (dist / maxDist) | Gradual fade | Simple games |
| Inverse | volume = minDist / dist | Realistic falloff | Most 3D games |
| Inverse Square | volume = minDist² / dist² | Physically accurate | Simulation |
| Custom curve | Designer-defined | Full control | AAA games |
// FMOD — set 3D attributes on an event instance
FMOD_3D_ATTRIBUTES attributes = {};
attributes.position = { x, y, z };        // world position
attributes.velocity = { vx, vy, vz };     // for Doppler effect
attributes.forward  = { 0, 0, 1 };        // facing direction
attributes.up       = { 0, 1, 0 };        // up vector
 
eventInstance->set3DAttributes(&attributes);
 
// Set listener position (usually the camera/player)
FMOD_3D_ATTRIBUTES listenerAttribs = {};
listenerAttribs.position = { camX, camY, camZ };
listenerAttribs.forward  = { camFwdX, camFwdY, camFwdZ };
listenerAttribs.up       = { 0, 1, 0 };
 
studioSystem->setListenerAttributes(0, &listenerAttribs);

| HRTF Solution | Platform | Quality | Cost |
| --- | --- | --- | --- |
| Steam Audio | All | Excellent | Free |
| Resonance Audio (Google) | All | Very good | Free |
| FMOD Spatializer | All | Good | Included |
| Wwise Spatial Audio | All | Excellent | Included |
| Sony 360 Reality Audio | PlayStation | Excellent | Platform SDK |
| Dolby Atmos | All | Excellent | License required |

Occlusion & Obstruction

graph TD
    subgraph Occlusion["🧱 Occlusion — Sound through solid wall"]
        O1["Sound source"] -->|"Wall blocks direct path"| O2["Low-pass filter applied\nMuffled, bass-heavy sound"]
    end
    subgraph Obstruction["🪨 Obstruction — Partial blocking"]
        OB1["Sound source"] -->|"Partial obstacle"| OB2["Reduced volume\nSome high frequencies lost"]
    end
    subgraph Reverb["🏛️ Reverb Zones"]
        R1["Cave"] -->|"Long reverb tail"| R2["Echoey, deep sound"]
        R3["Open field"] -->|"No reverb"| R4["Dry, direct sound"]
        R5["Cathedral"] -->|"Very long reverb"| R6["Massive, ethereal sound"]
    end
// Occlusion check (Unity C# sketch) — raycast from listener to source;
// if blocked, apply full occlusion. SetOcclusionFactor stands in for
// whatever your spatializer plugin (e.g. Steam Audio) exposes.
float occlusionFactor = 0.0f;
if (Physics.Linecast(listenerPos, sourcePos, out RaycastHit hit)) {
    occlusionFactor = 1.0f; // fully occluded
}
// Apply low-pass filter based on occlusion
audioSource.SetOcclusionFactor(occlusionFactor);

DSP Effects

Essential DSP Effects

| Effect | What It Does | Game Use Case |
| --- | --- | --- |
| EQ (Equalizer) | Boost/cut specific frequencies | Muffled underwater sound, radio effect |
| Low-Pass Filter | Remove high frequencies | Occlusion, underwater, muffled |
| High-Pass Filter | Remove low frequencies | Thin, distant sounds |
| Reverb | Simulate room acoustics | Caves, cathedrals, open spaces |
| Delay/Echo | Repeat sound with time offset | Canyons, large spaces |
| Compression | Reduce dynamic range | Consistent volume, punch |
| Limiter | Hard ceiling on volume | Prevent clipping |
| Distortion | Harmonic saturation | Radio, damaged equipment |
| Chorus | Slight pitch/time variations | Thicken sounds, underwater |
| Flanger | Comb filtering effect | Sci-fi, metallic sounds |
| Pitch Shift | Change pitch without changing duration | Slow-motion, speed effects |
| Convolution Reverb | Real room impulse responses | Highly realistic acoustics |

Reverb Design by Environment

| Environment | Pre-delay | Decay Time | Diffusion | Character |
| --- | --- | --- | --- | --- |
| Small room | 5–10 ms | 0.3–0.5 s | High | Tight, intimate |
| Large hall | 20–40 ms | 1.5–3 s | Medium | Grand, spacious |
| Cathedral | 40–80 ms | 4–8 s | Low | Massive, ethereal |
| Cave | 10–30 ms | 1–3 s | Low | Dark, echoey |
| Open field | 0–5 ms | 0.1–0.3 s | High | Dry, natural |
| Underwater | 0 ms | 0.5–1 s | Very high | Muffled, swirling |
| Metal room | 5 ms | 0.2–0.4 s | Low | Bright, ringy |

DSP in FMOD (GDScript)

# Apply DSP effects to FMOD buses in Godot
# FMOD Studio handles most DSP in the designer tool
# But you can also apply effects via snapshots
 
func enter_cave() -> void:
    # Activate cave reverb snapshot
    FMODStudio.set_parameter_by_name("Environment", 1.0)  # 0=outside, 1=cave
 
func go_underwater() -> void:
    # Activate underwater snapshot (low-pass + chorus)
    FMODStudio.set_parameter_by_name("Underwater", 1.0)
 
func take_damage() -> void:
    # Activate low-health snapshot (muffled, heartbeat)
    FMODStudio.set_parameter_by_name("PlayerHealth",
        float(current_health) / float(max_health))

DSP in Godot (Native)

# Godot native audio effects on buses
# Project → Project Settings → Audio → Add buses
 
extends Node
 
func _ready() -> void:
    # Get the SFX bus index
    var sfx_bus = AudioServer.get_bus_index("SFX")
 
    # Add a reverb effect to the SFX bus
    var reverb = AudioEffectReverb.new()
    reverb.room_size = 0.8
    reverb.damping = 0.5
    reverb.wet = 0.3
    AudioServer.add_bus_effect(sfx_bus, reverb)
 
    # Add a low-pass filter (for occlusion)
    var lowpass = AudioEffectLowPassFilter.new()
    lowpass.cutoff_hz = 800.0  # muffled
    AudioServer.add_bus_effect(sfx_bus, lowpass)
 
func set_underwater(active: bool) -> void:
    var sfx_bus = AudioServer.get_bus_index("SFX")
    # Enable/disable the low-pass filter effect
    AudioServer.set_bus_effect_enabled(sfx_bus, 1, active)

Engine Native Audio

Godot Audio System

graph TD
    subgraph Nodes["Audio Nodes"]
        ASP["AudioStreamPlayer\nNon-positional\nMusic, UI sounds"]
        ASP2["AudioStreamPlayer2D\nPositional 2D audio"]
        ASP3["AudioStreamPlayer3D\nPositional 3D audio"]
    end
    subgraph Buses["Audio Bus Layout"]
        Master["Master Bus"]
        Music["Music Bus"]
        SFX["SFX Bus"]
        Voice["Voice Bus"]
        Ambient["Ambient Bus"]
        Music --> Master
        SFX --> Master
        Voice --> Master
        Ambient --> Master
    end
extends Node
 
@onready var music_player = $AudioStreamPlayer
@onready var sfx_player   = $SFXPlayer
 
func _ready() -> void:
    # Load and play music
    music_player.stream = preload("res://audio/music/main_theme.ogg")
    music_player.bus = "Music"
    music_player.volume_db = -6.0
    music_player.play()
 
func play_sfx(stream: AudioStream, pitch_variation: float = 0.1) -> void:
    sfx_player.stream = stream
    sfx_player.bus = "SFX"
    # Random pitch variation prevents repetitive sounds
    sfx_player.pitch_scale = randf_range(1.0 - pitch_variation,
                                          1.0 + pitch_variation)
    sfx_player.play()
 
func set_music_volume(db: float) -> void:
    AudioServer.set_bus_volume_db(
        AudioServer.get_bus_index("Music"), db)
 
func mute_sfx(muted: bool) -> void:
    AudioServer.set_bus_mute(
        AudioServer.get_bus_index("SFX"), muted)

Unity Audio System

using System.Collections;  // for IEnumerator coroutines
using UnityEngine;
using UnityEngine.Audio;
 
public class AudioManager : MonoBehaviour
{
    [SerializeField] AudioMixer masterMixer;
    [SerializeField] AudioSource musicSource;
    [SerializeField] AudioSource sfxSource;
 
    // Play music with crossfade
    public void PlayMusic(AudioClip clip, float fadeTime = 1f) {
        StartCoroutine(CrossfadeMusic(clip, fadeTime));
    }
 
    IEnumerator CrossfadeMusic(AudioClip newClip, float duration) {
        float startVol = musicSource.volume;
        // Fade out
        for (float t = 0; t < duration; t += Time.deltaTime) {
            musicSource.volume = Mathf.Lerp(startVol, 0, t / duration);
            yield return null;
        }
        musicSource.clip = newClip;
        musicSource.Play();
        // Fade in
        for (float t = 0; t < duration; t += Time.deltaTime) {
            musicSource.volume = Mathf.Lerp(0, startVol, t / duration);
            yield return null;
        }
    }
 
    // Play SFX at position (fire and forget)
    // Note: PlayClipAtPoint spawns a temporary AudioSource and supports
    // volume only — use a pooled AudioSource if you need pitch variation.
    public void PlaySFX(AudioClip clip, Vector3 position, float volume = 1f) {
        AudioSource.PlayClipAtPoint(clip, position, volume);
    }
 
    // Control mixer groups via exposed parameters
    public void SetMusicVolume(float normalizedVolume) {
        // Convert 0-1 to dB (-80 to 0)
        float db = normalizedVolume > 0.001f
            ? Mathf.Log10(normalizedVolume) * 20f
            : -80f;
        masterMixer.SetFloat("MusicVolume", db);
    }
}

Unreal Engine Audio

// Unreal Engine 5 — MetaSound and audio components
#include "Components/AudioComponent.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundBase.h"
 
// Play sound at location (fire and forget)
UGameplayStatics::PlaySoundAtLocation(
    this, FootstepSound, GetActorLocation(),
    FRotator::ZeroRotator,
    1.0f,   // volume multiplier
    FMath::RandRange(0.9f, 1.1f)  // pitch variation
);
 
// Spawn persistent audio component (music, ambient)
UAudioComponent* MusicComp = UGameplayStatics::SpawnSound2D(
    this, MusicAsset);
MusicComp->SetVolumeMultiplier(0.8f);
 
// Fade out music
MusicComp->FadeOut(2.0f, 0.0f); // 2 second fade to 0 volume
 
// Set audio parameter (for MetaSound)
MusicComp->SetFloatParameter(FName("CombatIntensity"), 0.8f);

Audio Optimization

Performance Bottlenecks

graph TD
    Bottleneck["Audio Performance Issue?"]
    CPU["CPU Bound\nToo many voices\nExpensive DSP"]
    Memory["Memory Bound\nToo many loaded banks\nUncompressed audio"]
    Streaming["Streaming Issues\nDisk I/O bottleneck\nHDD latency"]
    Bottleneck --> CPU
    Bottleneck --> Memory
    Bottleneck --> Streaming
    CPU -->|Fix| CPUFix["Voice limiting\nDSP optimization\nVoice stealing"]
    Memory -->|Fix| MemFix["Compress audio\nStream large files\nUnload unused banks"]
    Streaming -->|Fix| StreamFix["SSD storage\nPreload critical audio\nBuffer size tuning"]

Voice Management

| Strategy | Description |
| --- | --- |
| Voice limit | Set max simultaneous voices (e.g., 64 SFX, 4 music) |
| Voice stealing | Stop lowest-priority voice when the limit is reached |
| Distance culling | Don’t play sounds beyond max audible distance |
| Priority system | High priority (player, boss) never stolen |
| Virtualization | Distant sounds tracked but not mixed — resume when close |
// FMOD — set max voices per event
// In FMOD Studio: Event → Max Instances → set limit
// In code: check if event is already playing
FMOD::Studio::EventDescription* desc;
studioSystem->getEvent("event:/SFX/Footstep", &desc);
 
int instanceCount;
desc->getInstanceCount(&instanceCount);
 
if (instanceCount < 4) { // max 4 simultaneous footsteps
    // Create and play new instance
}

Compression Settings

| Audio Type | Recommended Format | Sample Rate | Channels | Notes |
| --- | --- | --- | --- | --- |
| Music | OGG (Vorbis Q6–8) | 44,100 Hz | Stereo | Stream from disk |
| Long ambient | OGG (Vorbis Q4–6) | 44,100 Hz | Stereo | Stream from disk |
| Short SFX | ADPCM or PCM | 44,100 Hz | Mono | Load into memory |
| Voice acting | OGG (Vorbis Q5–7) | 44,100 Hz | Mono | Stream from disk |
| UI sounds | PCM (uncompressed) | 44,100 Hz | Mono | Load into memory |
| Footsteps | ADPCM | 22,050 Hz | Mono | Load into memory |

Audio Optimization Checklist

Sound Design Techniques

Layering & Variation

| Technique | How | Example |
| --- | --- | --- |
| Pitch variation | ±5–15% random pitch | Footsteps, gunshots |
| Volume variation | ±2–6 dB random | Ambient sounds |
| Sample pool | 3–8 variations, random pick | Footsteps, impacts |
| Layering | Combine 2–4 sounds | Explosion = boom + debris + rumble |
| Randomization | Random start position in file | Ambient loops |

Procedural Audio

Use CaseApproach
Engine soundsSynthesize based on RPM parameter
WindFiltered noise based on speed
RainGranular synthesis based on intensity
FootstepsSynthesize based on surface material + weight
DestructionPhysical simulation drives audio parameters
# Simple procedural engine sound in Godot
extends AudioStreamPlayer
 
@export var min_pitch: float = 0.5
@export var max_pitch: float = 2.0
 
func update_engine_sound(rpm_normalized: float) -> void:
    # rpm_normalized: 0.0 = idle, 1.0 = redline
    pitch_scale = lerp(min_pitch, max_pitch, rpm_normalized)
    volume_db   = lerp(-12.0, 0.0, rpm_normalized)

Foley & SFX Categories

| Category | Examples | Design Notes |
| --- | --- | --- |
| Footsteps | Walk, run, jump, land | Surface-dependent, pitch variation |
| Weapons | Fire, reload, impact, shell casing | Layer: mechanical + impact + tail |
| UI | Button click, hover, confirm, error | Short, punchy, distinct |
| Ambient | Wind, rain, crowd, forest | Loop seamlessly, layer multiple |
| Character | Breathing, grunts, voice | Emotional state-dependent |
| Environment | Doors, switches, machinery | Mechanical feel, weight |
| Magic/Sci-fi | Spells, lasers, portals | Designed, not realistic |

Logseq Graph Connections

  • Related pages:
    • Game Development — audio concepts overview (spatial audio, audio middleware basics)
    • Game Design — how audio serves game design goals (emotion, feedback, immersion)
    • Godot — Godot AudioStreamPlayer, buses, and audio effects
    • Unity — Unity AudioSource, AudioMixer, and FMOD/Wwise integration
    • Unreal Engine — Unreal MetaSound, Sound Cues, and Wwise integration
    • Advanced Graphics — rendering pipeline that audio must sync with
    • Free Assets — free audio assets, SFX packs, and music resources

Learn More

Official Documentation

Free Learning Resources

Free Audio Assets