[Share] 3D Audio Solution for Cocos Creator 3.x


Hello everyone, I’m Mr. Mashed Potato, and I’ve recently been working on a 3D multiplayer tank battle game using Cocos Creator 3.8. Due to project requirements, I’ve implemented a 3D spatial audio solution in Cocos Creator, and I’m sharing it with you here, hoping it can be of help.


In 3D games, a well-chosen combination of sound effects can significantly enhance immersion. Dual-channel output combined with custom audio mixing to produce sound with a 3D spatial effect is used in most 3D games, especially FPS games.

By adding footsteps and gunshots with 3D effects to the scene, players can “locate by sound” with the help of professional headphones, thereby enriching the gameplay.

This tutorial is designed to guide you on how to use the browser’s built-in AudioContext API to implement 3D sound effects in Cocos.

For more details, please refer to the official documentation: Web audio spatialization basics - Web APIs | MDN


An audio context is a node graph: audio module nodes are connected together to form an audio processing pipeline, similar to a Shader Graph or an animation graph (as shown in the figure below).

No visual tool is provided here, but in code we can connect all the audio effect nodes through aNode.connect(bNode) to achieve a similar mixing and routing effect.

To process audio, we first need to create an instance of AudioContext, as follows:

this.ac = new AudioContext();
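One caveat worth noting: browsers often create the context in the "suspended" state until the user interacts with the page. Below is a minimal sketch of a guard you could call from an input handler; the function name resumeIfSuspended is my own, while the state/resume API is standard Web Audio. The context parameter is typed structurally so the sketch stays engine-agnostic.

```typescript
// Resume a suspended AudioContext; returns true if a resume was requested.
function resumeIfSuspended(ctx: { state: string; resume(): Promise<void> }): boolean {
  if (ctx.state === "suspended") {
    // Allowed because this runs inside a user-gesture callback
    ctx.resume();
    return true;
  }
  return false;
}
```

For example, call resumeIfSuspended(this.ac) from the first touch event before starting playback.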


In Cocos, we recommend obtaining the audio source as a buffer, to avoid creating HTML audio elements. We can use the Cocos AudioClip property to quickly get the AudioBuffer data of an audio asset in the asset library.


@property({
    type: AudioClip,
    displayName: "Audio Source",
})
audioClip: AudioClip;

this.source = this.ac.createBufferSource();
// Note: _player and _audioBuffer are private engine fields
// and may change between engine versions.
this.source.buffer = this.audioClip._player._player._audioBuffer;
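Since those private fields may change between engine versions, a more defensive sketch is to fetch the clip's source and decode it yourself. Assumptions here: the asset's nativeUrl is reachable at runtime, and the helper name loadClipBuffer is mine; fetch and decodeAudioData are standard APIs.

```typescript
// Fetch raw audio bytes and decode them through the audio context.
// The ctx parameter is typed structurally to keep the sketch self-contained.
async function loadClipBuffer<T>(
  ctx: { decodeAudioData(data: ArrayBuffer): Promise<T> },
  url: string
): Promise<T> {
  const resp = await fetch(url);
  return ctx.decodeAudioData(await resp.arrayBuffer());
}
```

Usage would then look like: this.source.buffer = await loadClipBuffer(this.ac, this.audioClip.nativeUrl);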


In game development, the camera is the eye, and the Listener is similar to the ear. In most cases, there is only one active Listener in our scene.

this.listener = this.ac.listener;

Like the camera, we can set the position and direction of the listener. (Please note that the forward and up vectors of the Cocos node are passed to setOrientation here).

Generally, the camera node can be assigned as the listenerNode here (because the positions of the eyes and ears are nearly identical).

In other words, the spatial audio channels and the attenuation model are calculated using the camera's position as the listener.

The code is as follows:

@property({
    type: Node,
    displayName: "Audio Listener",
})
listenerNode: Node;

this.listener.setOrientation(
    this.listenerNode.forward.x || 0,
    this.listenerNode.forward.y || 0,
    this.listenerNode.forward.z || 0,
    this.listenerNode.up.x || 0,
    this.listenerNode.up.y || 1,
    this.listenerNode.up.z || 0
);
this.listener.setPosition(
    this.listenerNode.worldPosition.x || 0,
    this.listenerNode.worldPosition.y || 0,
    this.listenerNode.worldPosition.z || 0
);


Opposite the listener, the panner represents the "sound source" in 3D space (such as a horn, a gunshot, or footsteps).

Common parameters we can set include the panner's position (position), the maximum distance (maxDistance), and so on, as follows:

@property({
    type: Node,
    displayName: "Sound Source",
})
pannerNode: Node;


this.panner = this.ac.createPanner();
this.panner.panningModel = "equalpower"; // Audio spatialization algorithm model
this.panner.distanceModel = "linear"; // Volume attenuation algorithm when moving away
this.panner.maxDistance = this.maxDistance; // Maximum distance
this.panner.refDistance = 5; // Reference distance for attenuation start
this.panner.rolloffFactor = 3; // Attenuation rate
this.panner.coneInnerAngle = 360; // Sound diffusion at 360 degrees
this.panner.orientationX.value = 1; // Sound source facing x component
this.panner.orientationY.value = 0;
this.panner.orientationZ.value = 0;
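To build intuition for the linear distanceModel above, here is the gain curve the Web Audio specification defines for it, sketched as a standalone function. Parameter names mirror the panner properties; note that, per the spec, the linear model clamps rolloffFactor into [0, 1], so the value 3 used above behaves like 1.

```typescript
// Gain applied by the "linear" distance model of a PannerNode
// (per the Web Audio spec; distance is clamped to [refDistance, maxDistance]).
function linearDistanceGain(
  distance: number,
  refDistance: number,
  maxDistance: number,
  rolloffFactor: number
): number {
  const rolloff = Math.min(Math.max(rolloffFactor, 0), 1);
  const d = Math.min(Math.max(distance, refDistance), maxDistance);
  return 1 - rolloff * (d - refDistance) / (maxDistance - refDistance);
}
```

With the values above (refDistance 5, maxDistance from the component), the gain falls off linearly from 1 at 5 units to 0 at the maximum distance.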


At the beginning of the article, we said that an AudioContext is a "node graph", so here we add a GainNode (gain node). The gain is a unitless value that multiplies all of the audio's input channels accordingly.

Here we use it to control the overall volume of the audio; the gain change feeds into the panner's result.

For example, if a radio's volume is very low, it cannot be heard clearly from far away; conversely, if its volume is very high, it can be heard even at a distance.

The code is as follows:

this.gainNode = this.ac.createGain();
//modify the volume
this.gainNode.gain.value = 0.2;
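Because the gain value is a bare multiplier, it can be handy to express volume in decibels instead. A small hypothetical helper (the name dbToGain is my own; the 20·log10 amplitude relationship is standard):

```typescript
// Convert a decibel value to a linear gain multiplier (amplitude scale).
function dbToGain(db: number): number {
  return Math.pow(10, db / 20);
}
```

For instance, this.gainNode.gain.value = dbToGain(-14) yields roughly 0.2, the value used above.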


Finally, connect the nodes we created with the connect method, in the order source → panner → gain → output:

this.source.connect(this.panner);
this.panner.connect(this.gainNode);
this.gainNode.connect(this.ac.destination);
// Begin playback of the buffer source
this.source.start();

Since we manually created the audio context nodes, Cocos will not automatically release these audio instances when the component is destroyed, which may cause the previous scene's audio to keep playing even after we have switched scenes.

To solve this problem, we just need to manually release the audio instances we created in the lifecycle hook function onDestroy.

The code is as follows:

onDestroy() {
  try {
    // Stop playback and release the manually created audio nodes,
    // otherwise they outlive the component and keep playing.
    this.source.stop();
    this.source.disconnect();
    this.panner.disconnect();
    this.gainNode.disconnect();
    this.ac.close();
  } catch (e) {
    console.warn(e);
  }
}
Source Code

audio3DComponent.zip (1.3 KB)

That’s all for applying 3D sound effects in Cocos. Feel free to point out anything I’ve missed and discuss in the comments. I hope this tutorial is helpful for your work.

About author

Mr. Mashed Potato is a front-end engineer from the ancient city of Xi’an who is passionate about game development and enjoys exploring all kinds of Web 3D effects. He is skilled in the development and design of e-commerce interactive mini-games and interactive advertisements.

Currently, I am developing 3D online games with Cocos Creator and will also share my experiences and insights from time to time. I hope to communicate more with everyone and make progress together.
