diff --git a/index.bs b/index.bs
index 4375c7e6b..2702ec085 100644
--- a/index.bs
+++ b/index.bs
@@ -1,46 +1,36 @@
 Title: Web Audio API 1.1
-Shortname: webaudio
-Level: 1.1
+Shortname: webaudio11
+Level: none
 Group: audiowg
-Status: ED
+Status: FPWD
+Prepare for TR: yes
+Date: 2024-10-17
 ED: https://webaudio.github.io/web-audio-api/
-TR: https://www.w3.org/TR/webaudio-11/
+TR: https://www.w3.org/TR/webaudio/
 Favicon: favicon.png
-Previous Version: https://www.w3.org/TR/2021/REC-webaudio-20210617/
-Previous Version: https://www.w3.org/TR/2021/CR-webaudio-20210114/
-Previous Version: https://www.w3.org/TR/2020/CR-webaudio-20200611/
-Previous Version: https://www.w3.org/TR/2018/CR-webaudio-20180918/
-Previous Version: https://www.w3.org/TR/2018/WD-webaudio-20180619/
-Previous Version: https://www.w3.org/TR/2015/WD-webaudio-20151208/
-Previous Version: https://www.w3.org/TR/2013/WD-webaudio-20131010/
-Previous Version: https://www.w3.org/TR/2012/WD-webaudio-20121213/
-Previous Version: https://www.w3.org/TR/2012/WD-webaudio-20120802/
-Previous Version: https://www.w3.org/TR/2012/WD-webaudio-20120315/
-Previous Version: https://www.w3.org/TR/2011/WD-webaudio-20111215/
 Editor: Paul Adenot, Mozilla (https://www.mozilla.org/), padenot@mozilla.com, w3cid 62410
 Editor: Hongchan Choi, Google (https://www.google.com/), hongchan@google.com, w3cid 74103
 Former Editor: Raymond Toy (until Oct 2018)
 Former Editor: Chris Wilson (Until Jan 2016)
 Former Editor: Chris Rogers (Until Aug 2013)
-Implementation Report: implementation-report.html
 Test Suite: https://github.com/web-platform-tests/wpt/tree/master/webaudio
 Repository: WebAudio/web-audio-api
 Abstract: This specification describes a high-level Web API
-    for processing and synthesizing audio in web applications.
-    The primary paradigm is of an audio routing graph,
-    where a number of {{AudioNode}} objects are connected together to define the overall audio rendering.
-    The actual processing will primarily take place in the underlying implementation
-    (typically optimized Assembly / C / C++ code),
-    but [[#AudioWorklet|direct script processing and synthesis]] is also supported.
-
-    The [[#introductory]] section covers the motivation behind this specification.
-
-    This API is designed to be used in conjunction with other APIs and elements on the web platform, notably:
-    XMLHttpRequest [[XHR]] (using the responseType and response attributes).
-    For games and interactive applications,
-    it is anticipated to be used with the canvas 2D [[2dcontext]]
-    and WebGL [[WEBGL]] 3D graphics APIs.
+	for processing and synthesizing audio in web applications.
+	The primary paradigm is of an audio routing graph,
+	where a number of {{AudioNode}} objects are connected together to define the overall audio rendering.
+	The actual processing will primarily take place in the underlying implementation
+	(typically optimized Assembly / C / C++ code),
+	but [[#audioworklet|direct script processing and synthesis]] is also supported.
+
+	The [[#introductory]] section covers the motivation behind this specification.
+
+	This API is designed to be used in conjunction with other APIs and elements on the web platform, notably:
+	XMLHttpRequest [[XHR]] (using the responseType and response attributes).
+	For games and interactive applications,
+	it is anticipated to be used with the canvas 2D [[2dcontext]]
+	and WebGL [[WEBGL]] 3D graphics APIs.
 Markup Shorthands: markdown on, dfn on, css off
 
@@ -48,207 +38,157 @@ Markup Shorthands: markdown on, dfn on, css off
 spec: ECMAScript; url: https://tc39.github.io/ecma262/#sec-data-blocks; type: dfn; text: data block
 url: https://www.w3.org/TR/mediacapture-streams/#dom-mediadevices-getusermedia; type: method; for: MediaDevices; text: getUserMedia()
-
-
+
+
+
+
+
+
+
@@ -289,68 +229,68 @@ Features
 The API supports these primary features:
 
 * [[#ModularRouting|Modular routing]] for simple or complex
-    mixing/effect architectures.
+	mixing/effect architectures.
 
 * High dynamic range, using 32-bit floats for internal processing.
 
 * [[#AudioParam|Sample-accurate scheduled sound playback]]
-    with low [[#latency|latency]] for musical applications
-    requiring a very high degree of rhythmic precision such as drum
-    machines and sequencers. This also includes the possibility of
-    [[#DynamicLifetime|dynamic creation]] of effects.
+	with low [[#latency|latency]] for musical applications
+	requiring a very high degree of rhythmic precision such as drum
+	machines and sequencers. This also includes the possibility of
+	[[#DynamicLifetime|dynamic creation]] of effects.
 
 * Automation of audio parameters for envelopes, fade-ins /
-    fade-outs, granular effects, filter sweeps, LFOs etc.
+	fade-outs, granular effects, filter sweeps, LFOs etc.
 
 * Flexible handling of channels in an audio stream, allowing them
-    to be split and merged.
+	to be split and merged.
 
 * Processing of audio sources from an <{audio}> or <{video}>
-    {{MediaElementAudioSourceNode|media element}}.
+	{{MediaElementAudioSourceNode|media element}}.
 
 * Processing live audio input using a {{MediaStreamTrackAudioSourceNode|MediaStream}} from
-    {{getUserMedia()}}.
+	{{getUserMedia()}}.
 
 * Integration with WebRTC
 
-    * Processing audio received from a remote peer using a
-        {{MediaStreamTrackAudioSourceNode}} and
-        [[!webrtc]].
+	* Processing audio received from a remote peer using a
+		{{MediaStreamTrackAudioSourceNode}} and
+		[[!webrtc]].
 
-    * Sending a generated or processed audio stream to a remote
-        peer using a {{MediaStreamAudioDestinationNode}}
-        and [[!webrtc]].
+	* Sending a generated or processed audio stream to a remote
+		peer using a {{MediaStreamAudioDestinationNode}}
+		and [[!webrtc]].
 
-* Audio stream synthesis and processing [[#AudioWorklet|directly using scripts]].
+* Audio stream synthesis and processing [[#audioworklet|directly using scripts]].
 
 * [[#Spatialization|Spatialized audio]] supporting a wide
-    range of 3D games and immersive environments:
+	range of 3D games and immersive environments:
 
-    * Panning models: equalpower, HRTF, pass-through
-    * Distance Attenuation
-    * Sound Cones
-    * Obstruction / Occlusion
-    * Source / Listener based
+	* Panning models: equalpower, HRTF, pass-through
+	* Distance Attenuation
+	* Sound Cones
+	* Obstruction / Occlusion
+	* Source / Listener based
 
 * A convolution engine for a wide
-    range of linear effects, especially very high-quality room effects.
-    Here are some examples of possible effects:
-
-    * Small / large room
-    * Cathedral
-    * Concert hall
-    * Cave
-    * Tunnel
-    * Hallway
-    * Forest
-    * Amphitheater
-    * Sound of a distant room through a doorway
-    * Extreme filters
-    * Strange backwards effects
-    * Extreme comb filter effects
+	range of linear effects, especially very high-quality room effects.
+	Here are some examples of possible effects:
+
+	* Small / large room
+	* Cathedral
+	* Concert hall
+	* Cave
+	* Tunnel
+	* Hallway
+	* Forest
+	* Amphitheater
+	* Sound of a distant room through a doorway
+	* Extreme filters
+	* Strange backwards effects
+	* Extreme comb filter effects
 
 * Dynamics compression for overall control and sweetening of the mix
 
-* Efficient [[#AnalyserNode|real-time time-domain and frequency-domain analysis / music visualizer support]].
+* Efficient [[#analysernode|real-time time-domain and frequency-domain analysis / music visualizer support]].
 
 * Efficient biquad filters for lowpass, highpass, and other common filters.
 
@@ -379,10 +319,10 @@ All routing occurs within an {{AudioContext}} containing a single {{AudioDestinationNode}}:
- modular routing -
- A simple example of modular routing. -
+ modular routing +
+ A simple example of modular routing. +
 Illustrating this simple routing, here's a simple example playing a single sound:
 
@@ -391,10 +331,10 @@ Illustrating this simple routing, here's a simple example playing a single sound
 const context = new AudioContext();
 
 function playSound() {
-    const source = context.createBufferSource();
-    source.buffer = dogBarkingBuffer;
-    source.connect(context.destination);
-    source.start(0);
+	const source = context.createBufferSource();
+	source.buffer = dogBarkingBuffer;
+	source.connect(context.destination);
+	source.start(0);
 }
 
@@ -402,10 +342,10 @@ Here's a more complex example with three sources and a convolution reverb
 send with a dynamics compressor at the final output stage:
- modular routing2 -
- A more complex example of modular routing. -
+ modular routing2 +
+ A more complex example of modular routing. +
@@ -426,69 +366,69 @@ let mainDry;
 let mainWet;
 
 function setupRoutingGraph () {
-    context = new AudioContext();
-
-    // Create the effects nodes.
-    lowpassFilter = context.createBiquadFilter();
-    waveShaper = context.createWaveShaper();
-    panner = context.createPanner();
-    compressor = context.createDynamicsCompressor();
-    reverb = context.createConvolver();
-
-    // Create main wet and dry.
-    mainDry = context.createGain();
-    mainWet = context.createGain();
-
-    // Connect final compressor to final destination.
-    compressor.connect(context.destination);
-
-    // Connect main dry and wet to compressor.
-    mainDry.connect(compressor);
-    mainWet.connect(compressor);
-
-    // Connect reverb to main wet.
-    reverb.connect(mainWet);
-
-    // Create a few sources.
-    source1 = context.createBufferSource();
-    source2 = context.createBufferSource();
-    source3 = context.createOscillator();
-
-    source1.buffer = manTalkingBuffer;
-    source2.buffer = footstepsBuffer;
-    source3.frequency.value = 440;
-
-    // Connect source1
-    dry1 = context.createGain();
-    wet1 = context.createGain();
-    source1.connect(lowpassFilter);
-    lowpassFilter.connect(dry1);
-    lowpassFilter.connect(wet1);
-    dry1.connect(mainDry);
-    wet1.connect(reverb);
-
-    // Connect source2
-    dry2 = context.createGain();
-    wet2 = context.createGain();
-    source2.connect(waveShaper);
-    waveShaper.connect(dry2);
-    waveShaper.connect(wet2);
-    dry2.connect(mainDry);
-    wet2.connect(reverb);
-
-    // Connect source3
-    dry3 = context.createGain();
-    wet3 = context.createGain();
-    source3.connect(panner);
-    panner.connect(dry3);
-    panner.connect(wet3);
-    dry3.connect(mainDry);
-    wet3.connect(reverb);
-
-    // Start the sources now.
-    source1.start(0);
-    source2.start(0);
-    source3.start(0);
+	context = new AudioContext();
+
+	// Create the effects nodes.
+	lowpassFilter = context.createBiquadFilter();
+	waveShaper = context.createWaveShaper();
+	panner = context.createPanner();
+	compressor = context.createDynamicsCompressor();
+	reverb = context.createConvolver();
+
+	// Create main wet and dry.
+	mainDry = context.createGain();
+	mainWet = context.createGain();
+
+	// Connect final compressor to final destination.
+	compressor.connect(context.destination);
+
+	// Connect main dry and wet to compressor.
+	mainDry.connect(compressor);
+	mainWet.connect(compressor);
+
+	// Connect reverb to main wet.
+	reverb.connect(mainWet);
+
+	// Create a few sources.
+	source1 = context.createBufferSource();
+	source2 = context.createBufferSource();
+	source3 = context.createOscillator();
+
+	source1.buffer = manTalkingBuffer;
+	source2.buffer = footstepsBuffer;
+	source3.frequency.value = 440;
+
+	// Connect source1
+	dry1 = context.createGain();
+	wet1 = context.createGain();
+	source1.connect(lowpassFilter);
+	lowpassFilter.connect(dry1);
+	lowpassFilter.connect(wet1);
+	dry1.connect(mainDry);
+	wet1.connect(reverb);
+
+	// Connect source2
+	dry2 = context.createGain();
+	wet2 = context.createGain();
+	source2.connect(waveShaper);
+	waveShaper.connect(dry2);
+	waveShaper.connect(wet2);
+	dry2.connect(mainDry);
+	wet2.connect(reverb);
+
+	// Connect source3
+	dry3 = context.createGain();
+	wet3 = context.createGain();
+	source3.connect(panner);
+	panner.connect(dry3);
+	panner.connect(wet3);
+	dry3.connect(mainDry);
+	wet3.connect(reverb);
+
+	// Start the sources now.
+	source1.start(0);
+	source2.start(0);
+	source3.start(0);
 }
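The three sources above all start immediately via `start(0)`. `start()` also accepts an absolute time in the context's `currentTime` coordinate system, which is what the sample-accurate scheduling listed under Features relies on. As a minimal sketch (plain JavaScript, no `AudioContext` needed; the helper name and the BPM figure are ours, not the spec's), converting musical beats to context time looks like this:

```javascript
// Convert a beat index into an absolute context time, so that
// source.start(time) can schedule events sample-accurately.
// `startTime` would normally come from context.currentTime.
function beatToContextTime(startTime, beatIndex, bpm) {
  const secondsPerBeat = 60 / bpm; // duration of one beat in seconds
  return startTime + beatIndex * secondsPerBeat;
}

// At 120 BPM each beat lasts 0.5 s, so beat 4 falls 2 s after the start.
console.log(beatToContextTime(10, 4, 120)); // 12
```

A sequencer would call `source.start(beatToContextTime(t0, n, bpm))` for each event rather than starting sources "now".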
 
@@ -500,36 +440,36 @@ output of a node can act as a modulation signal rather than an input signal.
- modular routing3 -
- Modular routing illustrating one Oscillator modulating the - frequency of another. -
+ modular routing3 +
+ Modular routing illustrating one Oscillator modulating the + frequency of another. +
 function setupRoutingGraph() {
-    const context = new AudioContext();
-
-    // Create the low frequency oscillator that supplies the modulation signal
-    const lfo = context.createOscillator();
-    lfo.frequency.value = 1.0;
-
-    // Create the high frequency oscillator to be modulated
-    const hfo = context.createOscillator();
-    hfo.frequency.value = 440.0;
-
-    // Create a gain node whose gain determines the amplitude of the modulation signal
-    const modulationGain = context.createGain();
-    modulationGain.gain.value = 50;
-
-    // Configure the graph and start the oscillators
-    lfo.connect(modulationGain);
-    modulationGain.connect(hfo.detune);
-    hfo.connect(context.destination);
-    hfo.start(0);
-    lfo.start(0);
+	const context = new AudioContext();
+
+	// Create the low frequency oscillator that supplies the modulation signal
+	const lfo = context.createOscillator();
+	lfo.frequency.value = 1.0;
+
+	// Create the high frequency oscillator to be modulated
+	const hfo = context.createOscillator();
+	hfo.frequency.value = 440.0;
+
+	// Create a gain node whose gain determines the amplitude of the modulation signal
+	const modulationGain = context.createGain();
+	modulationGain.gain.value = 50;
+
+	// Configure the graph and start the oscillators
+	lfo.connect(modulationGain);
+	modulationGain.connect(hfo.detune);
+	hfo.connect(context.destination);
+	hfo.start(0);
+	lfo.start(0);
 }
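In the graph above the LFO drives `hfo.detune`, which is expressed in cents (1200 cents per octave), so the audible effect is a pitch vibrato around 440 Hz. A small sketch of the underlying relationship (the function name is illustrative; the formula is the standard cents-to-frequency conversion the detune parameter uses):

```javascript
// detune is in cents; the effective oscillator frequency is
//   f = frequency * 2^(detune / 1200)
function effectiveFrequency(baseHz, detuneCents) {
  return baseHz * Math.pow(2, detuneCents / 1200);
}

// With modulationGain.gain.value = 50, the 1 Hz LFO sweeps detune
// between roughly -50 and +50 cents around the 440 Hz carrier.
console.log(effectiveFrequency(440, 0));    // 440
console.log(effectiveFrequency(440, 1200)); // 880 (one octave up)
```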
 
@@ -539,139 +479,139 @@ API Overview The interfaces defined are: * An AudioContext - interface, which contains an audio signal graph representing - connections between {{AudioNode}}s. + interface, which contains an audio signal graph representing + connections between {{AudioNode}}s. * An {{AudioNode}} interface, which represents - audio sources, audio outputs, and intermediate processing modules. - {{AudioNode}}s can be dynamically connected together - in a [[#ModularRouting|modular fashion]]. - {{AudioNode}}s exist in the context of an - {{AudioContext}}. + audio sources, audio outputs, and intermediate processing modules. + {{AudioNode}}s can be dynamically connected together + in a [[#ModularRouting|modular fashion]]. + {{AudioNode}}s exist in the context of an + {{AudioContext}}. * An {{AnalyserNode}} interface, an - {{AudioNode}} for use with music visualizers, or - other visualization applications. + {{AudioNode}} for use with music visualizers, or + other visualization applications. * An {{AudioBuffer}} interface, for working with - memory-resident audio assets. These can represent one-shot sounds, or - longer audio clips. + memory-resident audio assets. These can represent one-shot sounds, or + longer audio clips. * An {{AudioBufferSourceNode}} interface, an - {{AudioNode}} which generates audio from an - AudioBuffer. + {{AudioNode}} which generates audio from an + AudioBuffer. * An {{AudioDestinationNode}} interface, an - {{AudioNode}} subclass representing the final - destination for all rendered audio. + {{AudioNode}} subclass representing the final + destination for all rendered audio. * An {{AudioParam}} interface, for controlling an - individual aspect of an {{AudioNode}}'s functioning, - such as volume. + individual aspect of an {{AudioNode}}'s functioning, + such as volume. * An {{AudioListener}} interface, which works with - a {{PannerNode}} for spatialization. + a {{PannerNode}} for spatialization. 
* An {{AudioWorklet}} interface representing a - factory for creating custom nodes that can process audio directly - using scripts. + factory for creating custom nodes that can process audio directly + using scripts. * An {{AudioWorkletGlobalScope}} interface, the - context in which AudioWorkletProcessor processing scripts run. + context in which AudioWorkletProcessor processing scripts run. * An {{AudioWorkletNode}} interface, an - {{AudioNode}} representing a node processed in an - AudioWorkletProcessor. + {{AudioNode}} representing a node processed in an + AudioWorkletProcessor. * An {{AudioWorkletProcessor}} interface, - representing a single node instance inside an audio worker. + representing a single node instance inside an audio worker. * A {{BiquadFilterNode}} interface, an - {{AudioNode}} for common low-order filters such as: + {{AudioNode}} for common low-order filters such as: - * Low Pass - * High Pass - * Band Pass - * Low Shelf - * High Shelf - * Peaking - * Notch - * Allpass + * Low Pass + * High Pass + * Band Pass + * Low Shelf + * High Shelf + * Peaking + * Notch + * Allpass * A {{ChannelMergerNode}} interface, an - {{AudioNode}} for combining channels from multiple - audio streams into a single audio stream. + {{AudioNode}} for combining channels from multiple + audio streams into a single audio stream. * A {{ChannelSplitterNode}} interface, an {{AudioNode}} for accessing the individual channels of an - audio stream in the routing graph. + audio stream in the routing graph. * A {{ConstantSourceNode}} interface, an - {{AudioNode}} for generating a nominally constant output value - with an {{AudioParam}} to allow automation of the value. + {{AudioNode}} for generating a nominally constant output value + with an {{AudioParam}} to allow automation of the value. * A {{ConvolverNode}} interface, an - {{AudioNode}} for applying a - real-time linear effect (such as the sound of - a concert hall). 
+ {{AudioNode}} for applying a + real-time linear effect (such as the sound of + a concert hall). * A {{DelayNode}} interface, an - {{AudioNode}} which applies a dynamically adjustable - variable delay. + {{AudioNode}} which applies a dynamically adjustable + variable delay. * A {{DynamicsCompressorNode}} interface, an - {{AudioNode}} for dynamics compression. + {{AudioNode}} for dynamics compression. * A {{GainNode}} interface, an - {{AudioNode}} for explicit gain control. + {{AudioNode}} for explicit gain control. * An {{IIRFilterNode}} interface, an - {{AudioNode}} for a general IIR filter. + {{AudioNode}} for a general IIR filter. * A {{MediaElementAudioSourceNode}} interface, an - {{AudioNode}} which is the audio source from an - <{audio}>, <{video}>, or other media element. + {{AudioNode}} which is the audio source from an + <{audio}>, <{video}>, or other media element. * A {{MediaStreamAudioSourceNode}} interface, an - {{AudioNode}} which is the audio source from a - {{MediaStream}} such as live audio input, or from a remote peer. + {{AudioNode}} which is the audio source from a + {{MediaStream}} such as live audio input, or from a remote peer. * A {{MediaStreamTrackAudioSourceNode}} interface, - an {{AudioNode}} which is the audio source from a - {{MediaStreamTrack}}. + an {{AudioNode}} which is the audio source from a + {{MediaStreamTrack}}. * A {{MediaStreamAudioDestinationNode}} interface, - an {{AudioNode}} which is the audio destination to a - {{MediaStream}} sent to a remote peer. + an {{AudioNode}} which is the audio destination to a + {{MediaStream}} sent to a remote peer. * A {{PannerNode}} interface, an - {{AudioNode}} for spatializing / positioning audio in - 3D space. + {{AudioNode}} for spatializing / positioning audio in + 3D space. * A {{PeriodicWave}} interface for specifying - custom periodic waveforms for use by the - {{OscillatorNode}}. + custom periodic waveforms for use by the + {{OscillatorNode}}. 
* An {{OscillatorNode}} interface, an - {{AudioNode}} for generating a periodic waveform. + {{AudioNode}} for generating a periodic waveform. * A {{StereoPannerNode}} interface, an - {{AudioNode}} for equal-power positioning of audio - input in a stereo stream. + {{AudioNode}} for equal-power positioning of audio + input in a stereo stream. * A {{WaveShaperNode}} interface, an - {{AudioNode}} which applies a non-linear waveshaping - effect for distortion and other more subtle warming effects. + {{AudioNode}} which applies a non-linear waveshaping + effect for distortion and other more subtle warming effects. There are also several features that have been deprecated from the Web Audio API but not yet removed, pending implementation experience of their replacements: * A {{ScriptProcessorNode}} interface, an {{AudioNode}} for generating or processing audio directly - using scripts. + using scripts. * An {{AudioProcessingEvent}} interface, which is - an event type used with {{ScriptProcessorNode}} - objects. + an event type used with {{ScriptProcessorNode}} + objects.

The Audio API

@@ -686,7 +626,7 @@ The Audio API ████████ ██ ██ ██████ ████████ ██ ██ ██████ --> -

+

The {{BaseAudioContext}} Interface

This interface represents a set of {{AudioNode}} @@ -703,136 +643,90 @@ but is instead extended by the concrete interfaces [[pending promises]] that is an initially empty ordered list of promises. -Each {{BaseAudioContext}} has a unique - -media element event task source. -Additionally, a {{BaseAudioContext}} has several private slots [[rendering thread state]] and [[control thread state]] that take values from -{{AudioContextState}}, and that are both initially set to "suspended" -, and a private slot -[[render quantum size]] that is an unsigned integer.
 enum AudioContextState {
-    "suspended",
-    "running",
-    "closed"
+	"suspended",
+	"running",
+	"closed"
 };
 
- - - - - - - - -
{{AudioContextState}} enumeration description
Enum valueDescription
- "suspended" - - This context is currently suspended (context time is not - proceeding, audio hardware may be powered down/released). -
- "running" - - Audio is being processed. -
- "closed" - - This context has been released, and can no longer be used to - process audio. All system audio resources have been released. -
-
- -
- enum AudioContextRenderSizeCategory {
-    "default",
-    "hardware"
-};
-
- -
- - - - - - - - - - - + + + + + + +
- Enumeration description -
- "default" - - The AudioContext's render quantum size is the default value of 128 - frames. -
- "hardware" - - The User-Agent picks a render quantum size that is best for the - current configuration. - - Note: This exposes information about the host and can be used for fingerprinting. -
+ Enumeration description +
+ "suspended" + + This context is currently suspended (context time is not + proceeding, audio hardware may be powered down/released). +
+ "running" + + Audio is being processed. +
+ "closed" + + This context has been released, and can no longer be used to + process audio. All system audio resources have been released.
-callback DecodeErrorCallback = undefined (DOMException error); +callback DecodeErrorCallback = void (DOMException error); -callback DecodeSuccessCallback = undefined (AudioBuffer decodedData); +callback DecodeSuccessCallback = void (AudioBuffer decodedData); [Exposed=Window] interface BaseAudioContext : EventTarget { - readonly attribute AudioDestinationNode destination; - readonly attribute float sampleRate; - readonly attribute double currentTime; - readonly attribute AudioListener listener; - readonly attribute AudioContextState state; - readonly attribute unsigned long renderQuantumSize; - [SameObject, SecureContext] - readonly attribute AudioWorklet audioWorklet; - attribute EventHandler onstatechange; - - AnalyserNode createAnalyser (); - BiquadFilterNode createBiquadFilter (); - AudioBuffer createBuffer (unsigned long numberOfChannels, - unsigned long length, - float sampleRate); - AudioBufferSourceNode createBufferSource (); - ChannelMergerNode createChannelMerger (optional unsigned long numberOfInputs = 6); - ChannelSplitterNode createChannelSplitter ( - optional unsigned long numberOfOutputs = 6); - ConstantSourceNode createConstantSource (); - ConvolverNode createConvolver (); - DelayNode createDelay (optional double maxDelayTime = 1.0); - DynamicsCompressorNode createDynamicsCompressor (); - GainNode createGain (); - IIRFilterNode createIIRFilter (sequence<double> feedforward, - sequence<double> feedback); - OscillatorNode createOscillator (); - PannerNode createPanner (); - PeriodicWave createPeriodicWave (sequence<float> real, - sequence<float> imag, - optional PeriodicWaveConstraints constraints = {}); - ScriptProcessorNode createScriptProcessor( - optional unsigned long bufferSize = 0, - optional unsigned long numberOfInputChannels = 2, - optional unsigned long numberOfOutputChannels = 2); - StereoPannerNode createStereoPanner (); - WaveShaperNode createWaveShaper (); - - Promise<AudioBuffer> decodeAudioData ( - ArrayBuffer audioData, - optional 
DecodeSuccessCallback? successCallback, - optional DecodeErrorCallback? errorCallback); + readonly attribute AudioDestinationNode destination; + readonly attribute float sampleRate; + readonly attribute double currentTime; + readonly attribute AudioListener listener; + readonly attribute AudioContextState state; + [SameObject, SecureContext] + readonly attribute AudioWorklet audioWorklet; + attribute EventHandler onstatechange; + + AnalyserNode createAnalyser (); + BiquadFilterNode createBiquadFilter (); + AudioBuffer createBuffer (unsigned long numberOfChannels, + unsigned long length, + float sampleRate); + AudioBufferSourceNode createBufferSource (); + ChannelMergerNode createChannelMerger (optional unsigned long numberOfInputs = 6); + ChannelSplitterNode createChannelSplitter ( + optional unsigned long numberOfOutputs = 6); + ConstantSourceNode createConstantSource (); + ConvolverNode createConvolver (); + DelayNode createDelay (optional double maxDelayTime = 1.0); + DynamicsCompressorNode createDynamicsCompressor (); + GainNode createGain (); + IIRFilterNode createIIRFilter (sequence<double> feedforward, + sequence<double> feedback); + OscillatorNode createOscillator (); + PannerNode createPanner (); + PeriodicWave createPeriodicWave (sequence<float> real, + sequence<float> imag, + optional PeriodicWaveConstraints constraints = {}); + ScriptProcessorNode createScriptProcessor( + optional unsigned long bufferSize = 0, + optional unsigned long numberOfInputChannels = 2, + optional unsigned long numberOfOutputChannels = 2); + StereoPannerNode createStereoPanner (); + WaveShaperNode createWaveShaper (); + + Promise<AudioBuffer> decodeAudioData ( + ArrayBuffer audioData, + optional DecodeSuccessCallback? successCallback, + optional DecodeErrorCallback? errorCallback); }; @@ -840,489 +734,469 @@ interface BaseAudioContext : EventTarget { Attributes
- : audioWorklet - :: - Allows access to the Worklet object that can import - a script containing {{AudioWorkletProcessor}} - class definitions via the algorithms defined by [[!HTML]] - and {{AudioWorklet}}. - - : currentTime - :: - This is the time in seconds of the sample frame immediately - following the last sample-frame in the block of audio most - recently processed by the context's rendering graph. If the - context's rendering graph has not yet processed a block of - audio, then {{BaseAudioContext/currentTime}} has a value of - zero. - - In the time coordinate system of {{BaseAudioContext/currentTime}}, the value of - zero corresponds to the first sample-frame in the first block - processed by the graph. Elapsed time in this system corresponds - to elapsed time in the audio stream generated by the - {{BaseAudioContext}}, which may not be - synchronized with other clocks in the system. (For an - {{OfflineAudioContext}}, since the stream is - not being actively played by any device, there is not even an - approximation to real time.) - - All scheduled times in the Web Audio API are relative to the - value of {{BaseAudioContext/currentTime}}. - - When the {{BaseAudioContext}} is in the - "{{AudioContextState/running}}" state, the - value of this attribute is monotonically increasing and is - updated by the rendering thread in uniform increments, - corresponding to one render quantum. Thus, for a running - context, currentTime increases steadily as the - system processes audio blocks, and always represents the time - of the start of the next audio block to be processed. It is - also the earliest possible time when any change scheduled in - the current state might take effect. - - currentTime MUST be read atomically on the control thread before being - returned. - - : destination - :: - An {{AudioDestinationNode}} - with a single input representing the final destination for all - audio. Usually this will represent the actual audio hardware. 
- All {{AudioNode}}s actively rendering audio - will directly or indirectly connect to {{BaseAudioContext/destination}}. - - : listener - :: - An {{AudioListener}} - which is used for 3D spatialization. - - : onstatechange - :: - A property used to set an [=event handler=] for an - event that is dispatched to - {{BaseAudioContext}} when the state of the - AudioContext has changed (i.e. when the corresponding promise - would have resolved). The event type of this event handler is - statechange. An event that uses the - {{Event}} interface will be dispatched to the event - handler, which can query the AudioContext's state directly. A - newly-created AudioContext will always begin in the - suspended state, and a state change event will be - fired whenever the state changes to a different state. This - event is fired before the {{complete}} event - is fired. - - : sampleRate - :: - The sample rate (in sample-frames per second) at which the - {{BaseAudioContext}} handles audio. It is assumed that all - {{AudioNode}}s in the context run at this rate. In making this - assumption, sample-rate converters or "varispeed" processors are - not supported in real-time processing. - The Nyquist frequency is half this sample-rate value. - - : state - :: - Describes the current state of the {{BaseAudioContext}}. Getting this - attribute returns the contents of the {{[[control thread state]]}} slot. - - : renderQuantumSize - :: - Getting this attribute returns the value of {{BaseAudioContext/[[render - quantum size]]}} slot. + : audioWorklet + :: + Allows access to the Worklet object that can import + a script containing {{AudioWorkletProcessor}} + class definitions via the algorithms defined by [[!worklets-1]] + and {{AudioWorklet}}. + + : currentTime + :: + This is the time in seconds of the sample frame immediately + following the last sample-frame in the block of audio most + recently processed by the context's rendering graph. 
If the + context's rendering graph has not yet processed a block of + audio, then {{BaseAudioContext/currentTime}} has a value of + zero. + + In the time coordinate system of {{BaseAudioContext/currentTime}}, the value of + zero corresponds to the first sample-frame in the first block + processed by the graph. Elapsed time in this system corresponds + to elapsed time in the audio stream generated by the + {{BaseAudioContext}}, which may not be + synchronized with other clocks in the system. (For an + {{OfflineAudioContext}}, since the stream is + not being actively played by any device, there is not even an + approximation to real time.) + + All scheduled times in the Web Audio API are relative to the + value of {{BaseAudioContext/currentTime}}. + + When the {{BaseAudioContext}} is in the + "{{AudioContextState/running}}" state, the + value of this attribute is monotonically increasing and is + updated by the rendering thread in uniform increments, + corresponding to one render quantum. Thus, for a running + context, currentTime increases steadily as the + system processes audio blocks, and always represents the time + of the start of the next audio block to be processed. It is + also the earliest possible time when any change scheduled in + the current state might take effect. + + currentTime MUST be read atomically on the control thread before being + returned. + + : destination + :: + An {{AudioDestinationNode}} + with a single input representing the final destination for all + audio. Usually this will represent the actual audio hardware. + All {{AudioNode}}s actively rendering audio + will directly or indirectly connect to {{BaseAudioContext/destination}}. + + : listener + :: + An {{AudioListener}} + which is used for 3D spatialization. + + : onstatechange + :: + A property used to set the EventHandler for an + event that is dispatched to + {{BaseAudioContext}} when the state of the + AudioContext has changed (i.e. 
when the corresponding promise + would have resolved). An event of type + {{Event}} will be dispatched to the event + handler, which can query the AudioContext's state directly. A + newly-created AudioContext will always begin in the + suspended state, and a state change event will be + fired whenever the state changes to a different state. This + event is fired before the {{oncomplete}} event + is fired. + + : sampleRate + :: + The sample rate (in sample-frames per second) at which the + {{BaseAudioContext}} handles audio. It is assumed that all + {{AudioNode}}s in the context run at this rate. In making this + assumption, sample-rate converters or "varispeed" processors are + not supported in real-time processing. + The Nyquist frequency is half this sample-rate value. + + : state + :: + Describes the current state of the {{AudioContext}}. Its value is identical + to control thread state.
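The relationship between {{BaseAudioContext/currentTime}}, {{BaseAudioContext/sampleRate}}, and the render quantum can be sketched non-normatively. The helper below assumes the default render quantum of 128 sample-frames; it is illustrative and not part of the API.

```javascript
// Convert between the context's time coordinate (seconds) and sample-frames.
// Assumes the default render quantum of 128 sample-frames (illustrative).
const RENDER_QUANTUM = 128;

function secondsToFrames(seconds, sampleRate) {
  return Math.round(seconds * sampleRate);
}

function framesToSeconds(frames, sampleRate) {
  return frames / sampleRate;
}

// After one render quantum at 48 kHz, currentTime has advanced by
// 128 / 48000 seconds, i.e. one uniform increment.
const sampleRate = 48000;
const oneQuantum = framesToSeconds(RENDER_QUANTUM, sampleRate);
console.log(oneQuantum); // ≈ 0.002667 seconds per quantum
```

For a running context, {{BaseAudioContext/currentTime}} increases by exactly one such increment per processed block.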

Methods

- : createAnalyser() - :: - Factory method for an {{AnalyserNode}}. -
- No parameters. -
-
- Return type: {{AnalyserNode}} -
- - : createBiquadFilter() - :: - Factory method for a {{BiquadFilterNode}} - representing a second order filter which can be configured as one - of several common filter types. -
- No parameters. -
-
- Return type: {{BiquadFilterNode}} -
- - : createBuffer(numberOfChannels, length, sampleRate) - :: - Creates an AudioBuffer of the given size. The audio data in the - buffer will be zero-initialized (silent). - A {{NotSupportedError}} exception MUST be - thrown if any of the arguments is negative, zero, or outside its - nominal range. - -
-        numberOfChannels: Determines how many channels the buffer will have. An implementation MUST support at least 32 channels.
-        length: Determines the size of the buffer in sample-frames.  This MUST be at least 1.
-        sampleRate: Describes the sample-rate of the [=linear PCM=] audio data in the buffer in sample-frames per second. An implementation MUST support sample rates in at least the range 8000 to 96000.
-        
-
- Return type: {{AudioBuffer}} -
- - : createBufferSource() - :: - Factory method for a - {{AudioBufferSourceNode}}. -
- No parameters. -
-
- Return type: {{AudioBufferSourceNode}} -
- - : createChannelMerger(numberOfInputs) - :: - Factory method for a - {{ChannelMergerNode}} representing a channel - merger. An - {{IndexSizeError}} exception MUST be thrown if - {{BaseAudioContext/createChannelMerger(numberOfInputs)/numberOfInputs}} is less than 1 or is greater than the number of supported channels. - -
-        numberOfInputs: Determines the number of inputs. Values of up to 32 MUST be supported. If not specified, then `6` will be used.
-        
- -
- Return type: {{ChannelMergerNode}} -
- - : createChannelSplitter(numberOfOutputs) - :: - Factory method for a - {{ChannelSplitterNode}} representing a channel - splitter. An - {{IndexSizeError}} exception MUST be thrown if - {{BaseAudioContext/createChannelSplitter(numberOfOutputs)/numberOfOutputs}} is less than 1 or is greater than the number of supported channels. - -
-        numberOfOutputs: The number of outputs. Values of up to 32 MUST be supported. If not specified, then `6` will be used.
-        
- -
- Return type: {{ChannelSplitterNode}} -
- - : createConstantSource() - :: - Factory method for a - {{ConstantSourceNode}}. -
- No parameters. -
-
- Return type: {{ConstantSourceNode}} -
- - : createConvolver() - :: - Factory method for a {{ConvolverNode}}. -
- No parameters. -
-
- Return type: {{ConvolverNode}} -
- - : createDelay(maxDelayTime) - :: - Factory method for a {{DelayNode}}. The initial default - delay time will be 0 seconds. - -
-        maxDelayTime: Specifies the maximum delay time in seconds allowed for the delay line. If specified, this value MUST be greater than zero and less than three minutes or a {{NotSupportedError}} exception MUST be thrown. If not specified, then `1` will be used.
-        
- -
- Return type: {{DelayNode}} -
- - : createDynamicsCompressor() - :: - Factory method for a - {{DynamicsCompressorNode}}. -
- No parameters. -
-
- Return type: {{DynamicsCompressorNode}} -
- - : createGain() - :: - Factory method for {{GainNode}}. -
- No parameters. -
-
- Return type: {{GainNode}} -
- - : createIIRFilter(feedforward, feedback) - :: -
-        feedforward: An array of the feedforward (numerator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20. If all of the values are zero, an {{InvalidStateError}} MUST be thrown. A {{NotSupportedError}} MUST be thrown if the array length is 0 or greater than 20.
-        feedback: An array of the feedback (denominator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20. If the first element of the array is 0, an {{InvalidStateError}} MUST be thrown. A {{NotSupportedError}} MUST be thrown if the array length is 0 or greater than 20.
-        
- -
- Return type: {{IIRFilterNode}} -
- - : createOscillator() - :: - Factory method for an {{OscillatorNode}}. -
- No parameters. -
-
- Return type: {{OscillatorNode}} -
- - : createPanner() - :: - Factory method for a {{PannerNode}}. -
- No parameters. -
-
- Return type: {{PannerNode}} -
- - : createPeriodicWave(real, imag, constraints) - :: - Factory method to create a - {{PeriodicWave}}. - -
- When calling this method, - execute these steps: - - 1. If {{BaseAudioContext/createPeriodicWave()/real}} and {{BaseAudioContext/createPeriodicWave()/imag}} are not of the same - length, an {{IndexSizeError}} MUST be thrown. - - 2. Let o be a new object of type - {{PeriodicWaveOptions}}. - - 3. Respectively set the {{BaseAudioContext/createPeriodicWave()/real}} and - {{BaseAudioContext/createPeriodicWave()/imag}} parameters passed to this factory method to - the attributes of the same name on o. - - 4. Set the {{PeriodicWaveConstraints/disableNormalization}} attribute on - o to the value of the - {{PeriodicWaveConstraints/disableNormalization}} attribute of the - constraints attribute passed to the factory - method. - - 5. Construct a new {{PeriodicWave}} - p, passing the {{BaseAudioContext}} this factory - method has been called on as a first argument, and - o. - 6. Return p. -
- -
-        real: A sequence of cosine parameters. See its {{PeriodicWaveOptions/real}} constructor argument for a more detailed description.
-        imag: A sequence of sine parameters. See its {{PeriodicWaveOptions/imag}} constructor argument for a more detailed description.
-        constraints: If not given, the waveform is normalized. Otherwise, the waveform is normalized according the value given by constraints.
-        
- -
- Return type: {{PeriodicWave}} -
- - : createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels) - :: - Factory method for a {{ScriptProcessorNode}}. - This method is DEPRECATED, as it is intended to be replaced by {{AudioWorkletNode}}. - Creates a {{ScriptProcessorNode}} for direct audio processing using scripts. - An {{IndexSizeError}} exception MUST be thrown if - {{bufferSize!!argument}} or {{numberOfInputChannels}} or {{numberOfOutputChannels}} - are outside the valid range. - - It is invalid for both {{numberOfInputChannels}} and {{numberOfOutputChannels}} to be zero. - In this case an {{IndexSizeError}} MUST be thrown. - -
-        bufferSize: The {{ScriptProcessorNode/bufferSize}} parameter determines the buffer size in units of sample-frames. If it's not passed in, or if the value is 0, then the implementation will choose the best buffer size for the given environment, which will be constant power of 2 throughout the lifetime of the node. Otherwise if the author explicitly specifies the bufferSize, it MUST be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. This value controls how frequently the {{ScriptProcessorNode/audioprocess}} event is dispatched and how many sample-frames need to be processed each call. Lower values for {{ScriptProcessorNode/bufferSize}} will result in a lower (better) latency. Higher values will be necessary to avoid audio breakup and glitches. It is recommended for authors to not specify this buffer size and allow the implementation to pick a good buffer size to balance between latency and audio quality. If the value of this parameter is not one of the allowed power-of-2 values listed above, an {{IndexSizeError}} MUST be thrown.
-        numberOfInputChannels: This parameter determines the number of channels for this node's input. The default value is 2. Values of up to 32 must be supported. A {{NotSupportedError}} must be thrown if the number of channels is not supported.
-        numberOfOutputChannels: This parameter determines the number of channels for this node's output. The default value is 2. Values of up to 32 must be supported. A {{NotSupportedError}} must be thrown if the number of channels is not supported.
-        
- -
- Return type: {{ScriptProcessorNode}} -
- - : createStereoPanner() - :: - Factory method for a {{StereoPannerNode}}. -
- No parameters. -
-
- Return type: {{StereoPannerNode}} -
- - : createWaveShaper() - :: - Factory method for a {{WaveShaperNode}} - representing a non-linear distortion. -
- No parameters. -
-
- Return type: {{WaveShaperNode}} -
- - : decodeAudioData(audioData, successCallback, errorCallback) - :: - Asynchronously decodes the audio file data contained in the - {{ArrayBuffer}}. The {{ArrayBuffer}} can, for - example, be loaded from an XMLHttpRequest's - response attribute after setting the - responseType to "arraybuffer". Audio - file data can be in any of the formats supported by the - <{audio}> element. The buffer passed to - {{BaseAudioContext/decodeAudioData()}} has its - content-type determined by sniffing, as described in - [[!mimesniff]]. - - Although the primary method of interfacing with this function - is via its promise return value, the callback parameters are - provided for legacy reasons. - - Encourage implementation to warn authors in case of a corrupted file. It - isn't possible to throw because this would be a breaking change. - -
- Note: If the compressed audio data byte-stream is corrupted but the - decoding can otherwise proceed, implementations are encouraged to warn - authors for example via the developer tools. -
- -
- When decodeAudioData is - called, the following steps MUST be performed on the control - thread: - - 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}. - - 2. Let promise be a new Promise. - - 3. If {{BaseAudioContext/decodeAudioData(audioData, successCallback, errorCallback)/audioData!!argument}} - is [=BufferSource/detached=], execute the following steps: - - 1. Append promise to {{BaseAudioContext/[[pending promises]]}}. - - 2. [=ArrayBuffer/Detach=] - the {{BaseAudioContext/decodeAudioData(audioData, successCallback, errorCallback)/audioData!!argument}} {{ArrayBuffer}}. - If this operations throws, jump to the step 3. - - 3. Queue a decoding operation to be performed on another thread. - - 4. Else, execute the following error steps: - - 1. Let error be a {{DataCloneError}}. - 2. Reject promise with error, and remove it from - {{BaseAudioContext/[[pending promises]]}}. - - 3. - Queue a media element task to invoke - {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} with |error|. - - 5. Return promise. -
- -
- When queuing a decoding operation to be performed on another - thread, the following steps MUST happen on a thread that is not - the control thread nor the rendering thread, - called the decoding thread. - - Note: Multiple {{decoding thread}}s can run in parallel to - service multiple calls to decodeAudioData. - - 1. Let can decode be a boolean flag, initially set to true. - - 2. Attempt to determine the MIME type of - {{BaseAudioContext/decodeAudioData(audioData, successCallback, - errorCallback)/audioData!!argument}}, using - [[mimesniff#matching-an-audio-or-video-type-pattern]]. If the audio or - video type pattern matching algorithm returns {{undefined}}, - set can decode to false. - - 3. If can decode is true, attempt to decode the encoded - {{BaseAudioContext/decodeAudioData(audioData, successCallback, - errorCallback)/audioData!!argument}} into [=linear PCM=]. In case of - failure, set can decode to false. - - If the media byte-stream contains multiple audio tracks, only decode the - first track to [=linear pcm=]. - - Note: Authors who need more control over the decoding process can use - [[WEBCODECS]]. - - 4. If |can decode| is `false`, - - queue a media element task to execute the following steps: - - 1. Let error be a DOMException - whose name is {{EncodingError}}. - - 2. Reject promise with error, and remove it from - {{BaseAudioContext/[[pending promises]]}}. - - 3. If {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} is - not missing, invoke - {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} with - error. - - 5. Otherwise: - 1. Take the result, representing the decoded [=linear PCM=] - audio data, and resample it to the sample-rate of the - {{BaseAudioContext}} if it is different from - the sample-rate of {{BaseAudioContext/decodeAudioData(audioData, - successCallback, errorCallback)/audioData!!argument}}. - - 2. 
- queue a media element task to execute the following steps: + : createAnalyser() + :: + Factory method for an {{AnalyserNode}}. +
+ No parameters. +
+
+ Return type: {{AnalyserNode}} +
+ + : createBiquadFilter() + :: + Factory method for a {{BiquadFilterNode}} + representing a second order filter which can be configured as one + of several common filter types. +
+ No parameters. +
+
+ Return type: {{BiquadFilterNode}} +
+ + : createBuffer(numberOfChannels, length, sampleRate) + :: + Creates an AudioBuffer of the given size. The audio data in the + buffer will be zero-initialized (silent). + A {{NotSupportedError}} exception MUST be + thrown if any of the arguments is negative, zero, or outside its + nominal range. + +
+		numberOfChannels: Determines how many channels the buffer will have. An implementation MUST support at least 32 channels.
+		length: Determines the size of the buffer in sample-frames. This MUST be at least 1.
+		sampleRate: Describes the sample-rate of the [=linear PCM=] audio data in the buffer in sample-frames per second. An implementation MUST support sample rates in at least the range 8000 to 96000.
+		
+
+ Return type: {{AudioBuffer}} +
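A non-normative sketch of sizing a buffer from a duration. The length computation is a hypothetical helper; the {{AudioContext}} call is guarded so the snippet is inert outside a browser.

```javascript
// Compute a buffer length in sample-frames from a duration in seconds.
// createBuffer() requires length >= 1, so clamp upward (illustrative helper).
function bufferLength(durationSeconds, sampleRate) {
  return Math.max(1, Math.round(durationSeconds * sampleRate));
}

const sampleRate = 44100;
const frames = bufferLength(0.5, sampleRate); // 22050 sample-frames

// In a browser, the corresponding factory call would be:
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const buffer = ctx.createBuffer(2, frames, sampleRate); // stereo, silent
}
```

The returned buffer's audio data is zero-initialized, so it plays back as silence until written to.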
+ + : createBufferSource() + :: + Factory method for a + {{AudioBufferSourceNode}}. +
+ No parameters. +
+
+ Return type: {{AudioBufferSourceNode}} +
+ + : createChannelMerger(numberOfInputs) + :: + Factory method for a + {{ChannelMergerNode}} representing a channel + merger. An + {{IndexSizeError}} exception MUST be thrown if + {{BaseAudioContext/createChannelMerger(numberOfInputs)/numberOfInputs}} is less than 1 or is greater than the number of supported channels. + +
+		numberOfInputs: Determines the number of inputs. Values of up to 32 MUST be supported. If not specified, then `6` will be used.
+		
+ +
+ Return type: {{ChannelMergerNode}} +
+ + : createChannelSplitter(numberOfOutputs) + :: + Factory method for a + {{ChannelSplitterNode}} representing a channel + splitter. An + {{IndexSizeError}} exception MUST be thrown if + {{BaseAudioContext/createChannelSplitter(numberOfOutputs)/numberOfOutputs}} is less than 1 or is greater than the number of supported channels. + +
+		numberOfOutputs: The number of outputs. Values of up to 32 MUST be supported. If not specified, then `6` will be used.
+		
+ +
+ Return type: {{ChannelSplitterNode}} +
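Both createChannelMerger() and createChannelSplitter() reject counts outside [1, supported channels] with an {{IndexSizeError}}. The helper below mirrors that check as a non-normative sketch (it throws a plain RangeError, since DOMException construction is host-specific).

```javascript
// Mirror the argument check shared by createChannelMerger() and
// createChannelSplitter(): the count must be at least 1, and
// implementations must support values up to 32 (illustrative helper).
function checkChannelCount(count, maxSupported = 32) {
  if (!Number.isInteger(count) || count < 1 || count > maxSupported) {
    throw new RangeError("IndexSizeError: channel count out of range");
  }
  return count;
}

checkChannelCount(6);   // 6 is the default when the argument is omitted
// checkChannelCount(0)  would throw
// checkChannelCount(33) would throw on an implementation supporting 32
```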
+ + : createConstantSource() + :: + Factory method for a + {{ConstantSourceNode}}. +
+ No parameters. +
+
+ Return type: {{ConstantSourceNode}} +
+ + : createConvolver() + :: + Factory method for a {{ConvolverNode}}. +
+ No parameters. +
+
+ Return type: {{ConvolverNode}} +
+ + : createDelay(maxDelayTime) + :: + Factory method for a {{DelayNode}}. The initial default + delay time will be 0 seconds. + +
+		maxDelayTime: Specifies the maximum delay time in seconds allowed for the delay line. If specified, this value MUST be greater than zero and less than three minutes or a {{NotSupportedError}} exception MUST be thrown. If not specified, then `1` will be used.
+		
+ +
+ Return type: {{DelayNode}} +
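The constraint on {{BaseAudioContext/createDelay()/maxDelayTime}} can be restated as a small non-normative check: the value must be strictly greater than zero and strictly less than three minutes, defaulting to 1 second.

```javascript
// maxDelayTime must satisfy 0 < maxDelayTime < 180 seconds, otherwise
// createDelay() throws NotSupportedError (illustrative helper; a plain
// RangeError stands in for the DOMException here).
const MAX_DELAY_SECONDS = 180; // three minutes

function checkMaxDelayTime(maxDelayTime = 1) { // spec default is 1 second
  if (!(maxDelayTime > 0 && maxDelayTime < MAX_DELAY_SECONDS)) {
    throw new RangeError("NotSupportedError: maxDelayTime out of range");
  }
  return maxDelayTime;
}
```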
+ + : createDynamicsCompressor() + :: + Factory method for a + {{DynamicsCompressorNode}}. +
+ No parameters. +
+
+ Return type: {{DynamicsCompressorNode}} +
+ + : createGain() + :: + Factory method for {{GainNode}}. +
+ No parameters. +
+
+ Return type: {{GainNode}} +
+ + : createIIRFilter(feedforward, feedback) + :: +
+		feedforward: An array of the feedforward (numerator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20. If all of the values are zero, an {{InvalidStateError}} MUST be thrown. A {{NotSupportedError}} MUST be thrown if the array length is 0 or greater than 20.
+		feedback: An array of the feedback (denominator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20. If the first element of the array is 0, an {{InvalidStateError}} MUST be thrown. A {{NotSupportedError}} MUST be thrown if the array length is 0 or greater than 20.
+		
+ +
+ Return type: {{IIRFilterNode}} +
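The coefficient arrays map onto the transfer function H(z) = Σ feedforward[m]·z⁻ᵐ / Σ feedback[n]·z⁻ⁿ. As a non-normative sketch, a one-pole smoothing filter satisfies both argument constraints (feedback[0] nonzero, feedforward not all-zero); the exponential cutoff mapping is an assumption of this example, not spec text.

```javascript
// Coefficients for a one-pole lowpass ("leaky integrator"):
//   y[n] = (1 - a) * x[n] + a * y[n-1]
// which in IIRFilterNode terms is feedforward = [1 - a], feedback = [1, -a].
function onePoleCoefficients(cutoffHz, sampleRate) {
  const a = Math.exp(-2 * Math.PI * cutoffHz / sampleRate); // assumed design
  return { feedforward: [1 - a], feedback: [1, -a] };
}

const { feedforward, feedback } = onePoleCoefficients(1000, 48000);
// feedback[0] is 1 (nonzero, as required) and feedforward is not all-zero,
// so neither InvalidStateError condition applies.

// In a browser: ctx.createIIRFilter(feedforward, feedback)
```

The DC gain of this filter is feedforward[0] / (feedback[0] + feedback[1]) = 1, i.e. it passes steady signals unchanged.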
+ + : createOscillator() + :: + Factory method for an {{OscillatorNode}}. +
+ No parameters. +
+
+ Return type: {{OscillatorNode}} +
+ + : createPanner() + :: + Factory method for a {{PannerNode}}. +
+ No parameters. +
+
+ Return type: {{PannerNode}} +
+ + : createPeriodicWave(real, imag, constraints) + :: + Factory method to create a + {{PeriodicWave}}. + +
+ When calling this method, + execute these steps: + + 1. If {{BaseAudioContext/createPeriodicWave()/real}} and {{BaseAudioContext/createPeriodicWave()/imag}} are not of the same + length, an {{IndexSizeError}} MUST be thrown. + + 2. Let o be a new object of type + {{PeriodicWaveOptions}}. + + 3. Respectively set the {{BaseAudioContext/createPeriodicWave()/real}} and + {{BaseAudioContext/createPeriodicWave()/imag}} parameters passed to this factory method to + the attributes of the same name on o. + + 4. Set the {{PeriodicWaveConstraints/disableNormalization}} attribute on + o to the value of the + {{PeriodicWaveConstraints/disableNormalization}} attribute of the + constraints attribute passed to the factory + method. + + 5. Construct a new {{PeriodicWave}} + p, passing the {{BaseAudioContext}} this factory + method has been called on as a first argument, and + o. + 6. Return p. +
+ +
+		real: A sequence of cosine parameters. See its {{PeriodicWaveOptions/real}} constructor argument for a more detailed description.
+		imag: A sequence of sine parameters. See its {{PeriodicWaveOptions/imag}} constructor argument for a more detailed description.
+		constraints: If not given, the waveform is normalized. Otherwise, the waveform is normalized according to the value given by constraints.
+		
+ +
+ Return type: {{PeriodicWave}} +
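A non-normative sketch of building the {{BaseAudioContext/createPeriodicWave()/real}} and {{BaseAudioContext/createPeriodicWave()/imag}} arrays. The square-wave Fourier series (odd sine harmonics with amplitude 4/(kπ)) is an illustrative choice; any arrays of equal length are accepted.

```javascript
// Build real/imag arrays approximating a square wave: only odd sine
// harmonics, with amplitude 4 / (k * PI) (classic Fourier series).
function squareWaveComponents(harmonics) {
  const real = new Float32Array(harmonics + 1); // cosine terms, all zero
  const imag = new Float32Array(harmonics + 1); // sine terms
  for (let k = 1; k <= harmonics; k++) {
    imag[k] = k % 2 === 1 ? 4 / (k * Math.PI) : 0;
  }
  return { real, imag };
}

const { real, imag } = squareWaveComponents(8);
// In a browser: ctx.createPeriodicWave(real, imag)
// or, to skip normalization:
// ctx.createPeriodicWave(real, imag, { disableNormalization: true });
```

Because real and imag have the same length, the {{IndexSizeError}} condition in step 1 of the algorithm above does not apply.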
+ + : createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels) + :: + Factory method for a {{ScriptProcessorNode}}. + This method is DEPRECATED, as it is intended to be replaced by {{AudioWorkletNode}}. + Creates a {{ScriptProcessorNode}} for direct audio processing using scripts. + An {{IndexSizeError}} exception MUST be thrown if + {{bufferSize!!argument}} or {{numberOfInputChannels}} or {{numberOfOutputChannels}} + are outside the valid range. + + It is invalid for both {{numberOfInputChannels}} and {{numberOfOutputChannels}} to be zero. + In this case an {{IndexSizeError}} MUST be thrown. + +
+		bufferSize: The {{ScriptProcessorNode/bufferSize}} parameter determines the buffer size in units of sample-frames. If it is not passed in, or if the value is 0, then the implementation will choose the best buffer size for the given environment, which will be a constant power of 2 throughout the lifetime of the node. Otherwise, if the author explicitly specifies the bufferSize, it MUST be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. This value controls how frequently the {{ScriptProcessorNode/onaudioprocess}} event is dispatched and how many sample-frames need to be processed per call. Lower values for {{ScriptProcessorNode/bufferSize}} result in lower (better) latency; higher values are necessary to avoid audio breakup and glitches. It is recommended that authors not specify this buffer size and instead allow the implementation to pick a good buffer size that balances latency and audio quality. If the value of this parameter is not one of the allowed power-of-2 values listed above, an {{IndexSizeError}} MUST be thrown.
+		numberOfInputChannels: This parameter determines the number of channels for this node's input. The default value is 2. Values of up to 32 must be supported. A {{NotSupportedError}} must be thrown if the number of channels is not supported.
+		numberOfOutputChannels: This parameter determines the number of channels for this node's output. The default value is 2. Values of up to 32 must be supported. A {{NotSupportedError}} must be thrown if the number of channels is not supported.
+		
+ +
+ Return type: {{ScriptProcessorNode}} +
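The bufferSize constraint can be restated as a non-normative check: 0 delegates the choice to the implementation, and any explicit value must be one of the powers of two from 256 (2⁸) through 16384 (2¹⁴).

```javascript
// Allowed explicit bufferSize values are 256, 512, ..., 16384;
// 0 means "let the implementation choose" (illustrative helper; a plain
// RangeError stands in for the IndexSizeError DOMException).
function checkScriptProcessorBufferSize(bufferSize = 0) {
  if (bufferSize === 0) return 0; // implementation-chosen
  const allowed = [256, 512, 1024, 2048, 4096, 8192, 16384];
  if (!allowed.includes(bufferSize)) {
    throw new RangeError("IndexSizeError: invalid bufferSize");
  }
  return bufferSize;
}
```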
+ + : createStereoPanner() + :: + Factory method for a {{StereoPannerNode}}. +
+ No parameters. +
+
+ Return type: {{StereoPannerNode}} +
+ + : createWaveShaper() + :: + Factory method for a {{WaveShaperNode}} + representing a non-linear distortion. +
+ No parameters. +
+
+ Return type: {{WaveShaperNode}} +
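A non-normative sketch of a shaping curve for the node this factory creates. The tanh soft-clipping shape and the curve length are illustrative choices; the node maps an input sample in [-1, 1] onto the curve with interpolation.

```javascript
// A symmetric soft-clipping curve built with tanh. The curve's first
// element corresponds to input -1 and its last element to input +1
// (the tanh shape and length are assumptions of this example).
function makeDistortionCurve(amount = 4, length = 1024) {
  const curve = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const x = (i / (length - 1)) * 2 - 1; // map index to [-1, 1]
    curve[i] = Math.tanh(amount * x);
  }
  return curve;
}

const curve = makeDistortionCurve();
// In a browser:
// const shaper = ctx.createWaveShaper();
// shaper.curve = curve;
```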
+ + : decodeAudioData(audioData, successCallback, errorCallback) + :: + Asynchronously decodes the audio file data contained in the + {{ArrayBuffer}}. The {{ArrayBuffer}} can, for + example, be loaded from an XMLHttpRequest's + response attribute after setting the + responseType to "arraybuffer". Audio + file data can be in any of the formats supported by the + <{audio}> element. The buffer passed to + {{BaseAudioContext/decodeAudioData()}} has its + content-type determined by sniffing, as described in + [[mimesniff]]. + + Although the primary method of interfacing with this function + is via its promise return value, the callback parameters are + provided for legacy reasons. + +
+ When decodeAudioData is + called, the following steps MUST be performed on the control + thread: + + 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}. + + 2. Let promise be a new Promise. + + 3. If the operation IsDetachedBuffer + (described in [[!ECMASCRIPT]]) on {{BaseAudioContext/decodeAudioData(audioData, successCallback, errorCallback)/audioData!!argument}} is + false, execute the following steps: + + 1. Append promise to {{BaseAudioContext/[[pending promises]]}}. + + 2. + Detach the {{BaseAudioContext/decodeAudioData(audioData, + successCallback, errorCallback)/audioData!!argument}} {{ArrayBuffer}}. + This operation is described in [[!ECMASCRIPT]]. If this operations + throws, jump to the step 3. + + 3. Queue a decoding operation to be performed on another thread. + + 4. Else, execute the following error steps: + + 1. Let error be a {{DataCloneError}}. + 2. Reject promise with error, and remove it from + {{BaseAudioContext/[[pending promises]]}}. + 3. Queue a task to invoke {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} with error. + + 5. Return promise. +
+ +
+ When queuing a decoding operation to be performed on another + thread, the following steps MUST happen on a thread that is not + the control thread nor the rendering thread, + called the decoding thread. + + Note: Multiple {{decoding thread}}s can run in parallel to + service multiple calls to decodeAudioData. + + 1. Let can decode be a boolean flag, initially set to true. + + 2. Attempt to determine the MIME type of + {{BaseAudioContext/decodeAudioData(audioData, successCallback, + errorCallback)/audioData!!argument}}, using + [[mimesniff#matching-an-audio-or-video-type-pattern]]. If the audio or + video type pattern matching algorithm returns undefined, + set can decode to false. + + 3. If can decode is true, attempt to decode the encoded + {{BaseAudioContext/decodeAudioData(audioData, successCallback, + errorCallback)/audioData!!argument}} into [=linear PCM=]. In case of + failure, set can decode to false. + + 4. If can decode is false, queue a task to + execute the following step, on the control thread's + event loop: + + 1. Let error be a DOMException + whose name is {{EncodingError}}. + + 2. Reject promise with error, and remove it from + {{BaseAudioContext/[[pending promises]]}}. + + 3. If {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} is + not missing, invoke + {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} with + error. + + 5. Otherwise: + 1. Take the result, representing the decoded [=linear PCM=] + audio data, and resample it to the sample-rate of the + {{AudioContext}} if it is different from + the sample-rate of {{BaseAudioContext/decodeAudioData(audioData, + successCallback, errorCallback)/audioData!!argument}}. + + 2. Queue a task on the control thread's event loop + to execute the following steps: + + 1. Let buffer be an + {{AudioBuffer}} containing the final result + (after possibly performing sample-rate conversion). + + 2. Resolve promise with buffer. + + 3. 
If {{BaseAudioContext/decodeAudioData()/successCallback!!argument}} + is not missing, invoke + {{BaseAudioContext/decodeAudioData()/successCallback!!argument}} + with buffer. +
- 1. Let buffer be an - {{AudioBuffer}} containing the final result - (after possibly performing sample-rate conversion). +
+		audioData: An ArrayBuffer containing compressed audio data.
+		successCallback: A callback function which will be invoked when the decoding is finished. The single argument to this callback is an AudioBuffer representing the decoded PCM audio data.
+		errorCallback: A callback function which will be invoked if there is an error decoding the audio file.
+		
- 2. Resolve promise with buffer. - - 3. If {{BaseAudioContext/decodeAudioData()/successCallback!!argument}} - is not missing, invoke - {{BaseAudioContext/decodeAudioData()/successCallback!!argument}} - with buffer. -
- -
-        audioData: An ArrayBuffer containing compressed audio data.
-        successCallback: A callback function which will be invoked when the decoding is finished. The single argument to this callback is an AudioBuffer representing the decoded PCM audio data.
-        errorCallback: A callback function which will be invoked if there is an error decoding the audio file.
-        
- -
- Return type: {{Promise}}<{{AudioBuffer}}> -
+
+ Return type: {{Promise}}<{{AudioBuffer}}> +
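Since the promise is the primary interface and the callbacks exist for legacy reasons, a thin wrapper can serve both styles. This is a hypothetical convenience helper, not part of the API; it works against any object exposing a promise-returning decodeAudioData.

```javascript
// Prefer the promise form of decodeAudioData, while still invoking
// legacy-style callbacks if the caller supplied them (hypothetical helper).
function decode(context, audioData, successCallback, errorCallback) {
  return context.decodeAudioData(audioData).then(
    (buffer) => {
      if (successCallback) successCallback(buffer);
      return buffer;
    },
    (error) => {
      if (errorCallback) errorCallback(error);
      throw error; // keep the promise chain rejected as well
    }
  );
}

// Typical browser usage (sketch):
// const response = await fetch("clip.ogg");
// const buffer = await decode(ctx, await response.arrayBuffer());
```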
@@ -1330,16 +1204,16 @@ Methods Callback {{DecodeSuccessCallback()}} Parameters
- : {{DecodeSuccessCallback/decodedData!!argument}}, of type {{AudioBuffer}} - :: The AudioBuffer containing the decoded audio data. + : decodedData, of type {{AudioBuffer}} + :: The AudioBuffer containing the decoded audio data.

Callback {{DecodeErrorCallback()}} Parameters

- : {{DecodeErrorCallback/error!!argument}}, of type {{DOMException}} - :: The error that occurred while decoding. + : error, of type {{DOMException}} + :: The error that occurred while decoding.

@@ -1403,7 +1277,7 @@ processing. ██ ██ ██████ --> -

+

The {{AudioContext}} Interface

This interface represents an audio graph whose @@ -1414,966 +1288,595 @@ document.
 enum AudioContextLatencyCategory {
-        "balanced",
-        "interactive",
-        "playback"
+		"balanced",
+		"interactive",
+		"playback"
 };
 
- - - - - - - - - + + + + + +
{{AudioContextLatencyCategory}} enumeration description
Enum valueDescription
- "balanced" - - Balance audio output latency and power consumption. -
- "interactive" - - Provide the lowest audio output latency possible without - glitching. This is the default. -
- "playback" - - Prioritize sustained playback without interruption over audio - output latency. Lowest power consumption. +
+ Enumeration description +
+ "balanced" + + Balance audio output latency and power consumption. +
+ "interactive" + + Provide the lowest audio output latency possible without + glitching. This is the default. +
+ "playback" + + Prioritize sustained playback without interruption over audio + output latency. Lowest power consumption.
-
-enum AudioSinkType {
-    "none"
-};
-
- -
- - - - - - - -
{{AudioSinkType}} Enumeration description
Enum ValueDescription
- "none" - - The audio graph will be processed without being played - through an audio output device. -
-
- [Exposed=Window] interface AudioContext : BaseAudioContext { - constructor (optional AudioContextOptions contextOptions = {}); - readonly attribute double baseLatency; - readonly attribute double outputLatency; - [SecureContext] readonly attribute (DOMString or AudioSinkInfo) sinkId; - [SecureContext] readonly attribute AudioRenderCapacity renderCapacity; - attribute EventHandler onsinkchange; - attribute EventHandler onerror; - AudioTimestamp getOutputTimestamp (); - Promise<undefined> resume (); - Promise<undefined> suspend (); - Promise<undefined> close (); - [SecureContext] Promise<undefined> setSinkId ((DOMString or AudioSinkOptions) sinkId); - MediaElementAudioSourceNode createMediaElementSource (HTMLMediaElement mediaElement); - MediaStreamAudioSourceNode createMediaStreamSource (MediaStream mediaStream); - MediaStreamTrackAudioSourceNode createMediaStreamTrackSource ( - MediaStreamTrack mediaStreamTrack); - MediaStreamAudioDestinationNode createMediaStreamDestination (); + constructor (optional AudioContextOptions contextOptions = {}); + readonly attribute double baseLatency; + readonly attribute double outputLatency; + AudioTimestamp getOutputTimestamp (); + Promise<void> resume (); + Promise<void> suspend (); + Promise<void> close (); + MediaElementAudioSourceNode createMediaElementSource (HTMLMediaElement mediaElement); + MediaStreamAudioSourceNode createMediaStreamSource (MediaStream mediaStream); + MediaStreamTrackAudioSourceNode createMediaStreamTrackSource ( + MediaStreamTrack mediaStreamTrack); + MediaStreamAudioDestinationNode createMediaStreamDestination (); }; An {{AudioContext}} is said to be allowed to start if the user agent allows the context state to transition from "{{AudioContextState/suspended}}" to -"{{AudioContextState/running}}". A user agent may disallow this initial transition, -and to allow it only when the {{AudioContext}}'s [=relevant global object=] has +"{{AudioContextState/running}}". 
A user agent may delay this initial transition, +to allow it only when the {{AudioContext}}'s [=relevant global object=] has [=sticky activation=]. -{{AudioContext}} has following internal slots: +{{AudioContext}} has an internal slot:
- : [[suspended by user]] - :: - A boolean flag representing whether the context is suspended by user code. - The initial value is false. - - : [[sink ID]] - :: - A {{DOMString}} or an {{AudioSinkInfo}} representing the identifier - or the information of the current audio output device respectively. The - initial value is "", which means the default audio output - device. - - : [[pending resume promises]] - :: - An ordered list to store pending {{Promise}}s created by - {{AudioContext/resume()}}. It is initially empty. + : [[suspended by user]] + :: + A boolean flag representing whether the context is suspended by user code. + The initial value is false.

Constructors

- : AudioContext(contextOptions) - :: -
- -

- If the [=current settings object=]'s [=relevant global object=]'s - [=associated Document=] is NOT [=fully active=], throw an - "{{InvalidStateError}}" and abort these steps. -

- When creating an {{AudioContext}}, - execute these steps: - - 1. Let |context| be a new {{AudioContext}} object. - - 1. Set a {{[[control thread state]]}} to suspended on - |context|. - - 1. Set a {{[[rendering thread state]]}} to suspended on - |context|. - - 1. Let |messageChannel| be a new {{MessageChannel}}. - - 1. Let |controlSidePort| be the value of - |messageChannel|'s {{MessageChannel/port1}} attribute. - - 1. Let |renderingSidePort| be the value of - |messageChannel|'s {{MessageChannel/port2}} attribute. - - 1. Let |serializedRenderingSidePort| be the result of - [$StructuredSerializeWithTransfer$](|renderingSidePort|, - « |renderingSidePort| »). - - 1. Set this {{BaseAudioContext/audioWorklet}}'s {{AudioWorklet/port}} - to |controlSidePort|. - - 1. Queue a control message to set the - MessagePort on the AudioContextGlobalScope, with - |serializedRenderingSidePort|. - - 1. If contextOptions is given, perform the following - substeps: - - 1. If {{AudioContextOptions/sinkId}} is specified, let |sinkId| be - the value of - contextOptions.{{AudioContextOptions/sinkId}} and - run the following substeps: - - 1. If both |sinkId| and {{AudioContext/[[sink ID]]}} are a type of - {{DOMString}}, and they are equal to each other, abort these - substeps. - - 1. If |sinkId| is a type of {{AudioSinkOptions}} and - {{AudioContext/[[sink ID]]}} is a type of {{AudioSinkInfo}}, and - {{AudioSinkOptions/type}} in |sinkId| and {{AudioSinkInfo/type}} - in {{AudioContext/[[sink ID]]}} are equal, abort these substeps. - - 1. Let |validationResult| be the return value of - sink identifier validation - of |sinkId|. - - 1. If |validationResult| is a type of {{DOMException}}, throw an - exception with |validationResult| and abort these substeps. - - 1. If |sinkId| is a type of {{DOMString}}, set - {{AudioContext/[[sink ID]]}} to |sinkId| and abort these - substeps. - - 1. 
If |sinkId| is a type of {{AudioSinkOptions}}, set - {{AudioContext/[[sink ID]]}} to a new instance of - {{AudioSinkInfo}} created with the value of - {{AudioSinkOptions/type}} of |sinkId|. - - 1. Set the internal latency of |context| according to - contextOptions.{{AudioContextOptions/latencyHint}}, - as described in {{AudioContextOptions/latencyHint}}. - - 1. If contextOptions.{{AudioContextOptions/sampleRate}} - is specified, set the {{BaseAudioContext/sampleRate}} of - |context| to this value. Otherwise, follow these substeps: - - 1. If |sinkId| is the empty string or a type of - {{AudioSinkOptions}}, use the sample rate of the default output - device. Abort these substeps. - - 1. If |sinkId| is a {{DOMString}}, use the sample rate of the - output device identified by |sinkId|. Abort these substeps. - - If contextOptions.{{AudioContextOptions/sampleRate}} - differs from the sample rate of the output device, the user agent - MUST resample the audio output to match the sample rate of the - output device. - - Note: If resampling is required, the latency of |context| may be - affected, possibly by a large amount. - - 1. If |context| is allowed to start, send a - control message to start processing. - - 1. Return |context|. -
- -
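The sample-rate selection in the constructor substeps above can be sketched as a plain function. This is an illustrative sketch only: `selectSampleRate`, `defaultDeviceRate`, and `deviceRateFor` are hypothetical names, and the real selection happens inside the user agent, not in script.

```javascript
// Hypothetical sketch of the constructor's sample-rate decision.
// `contextOptions` mirrors AudioContextOptions; `defaultDeviceRate` and
// `deviceRateFor` stand in for device information only the user agent has.
function selectSampleRate(contextOptions, sinkId, defaultDeviceRate, deviceRateFor) {
  // If sampleRate is specified, it wins; output is resampled if it
  // differs from the device rate.
  if (contextOptions && contextOptions.sampleRate !== undefined) {
    return contextOptions.sampleRate;
  }
  // Empty string or an AudioSinkOptions value: use the default output
  // device's sample rate.
  if (sinkId === "" || typeof sinkId === "object") {
    return defaultDeviceRate;
  }
  // A DOMString sink identifier: use that device's sample rate.
  return deviceRateFor(sinkId);
}
```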
- Sending a control message to start processing means - executing the following steps: - - 1. Let |document| be the [=current settings object=]'s [=relevant global object=]'s - [=associated Document=]. - - 1. Attempt to [=acquire system resources=] to use one of the following audio output devices - based on {{AudioContext/[[sink ID]]}} for rendering: - - * The default audio output device for the empty string. - * An audio output device identified by {{AudioContext/[[sink ID]]}}. - - 1. If resource acquisition fails, execute the following steps: - - 1. If |document| is not allowed to use the feature identified by - "speaker-selection", abort these substeps. - - 1. [=Queue a media element task=] to [=fire an event=] named - {{AudioContext/error}} at the {{AudioContext}}, and abort the following - steps. - - 1. Set [=this=] {{[[rendering thread state]]}} to running on the - {{AudioContext}}. - - 1. [=Queue a media element task=] to execute the following steps: - - 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} - to "{{AudioContextState/running}}". - - 1. [=fire an event=] named {{BaseAudioContext/statechange}} at the - {{AudioContext}}. -
- - NOTE: In cases where an {{AudioContext}} is constructed with no arguments and resource - acquisition fails, the User-Agent will attempt to silently render the audio graph using - a mechanism that emulates an audio output device. - -
- Sending a control message to set the {{MessagePort}} on the - {{AudioWorkletGlobalScope}} means executing the following steps, on - the rendering thread, with - |serializedRenderingSidePort|, which has been transferred to - the {{AudioWorkletGlobalScope}}: - - 1. Let |deserializedPort| be the result of - [$StructuredDeserialize$](|serializedRenderingSidePort|, - the current Realm). - - 1. Set {{AudioWorkletGlobalScope/port}} to - |deserializedPort|. -
- -
-contextOptions: User-specified options controlling how the {{AudioContext}} should be constructed.
-
- -
- -

-Attributes

- -
- : baseLatency - :: - This represents the number of seconds of processing latency - incurred by the {{AudioContext}} passing the audio from the - {{AudioDestinationNode}} to the audio subsystem. It does not - include any additional latency that might be caused by any - other processing between the output of the - {{AudioDestinationNode}} and the audio hardware and - specifically does not include any latency incurred by the audio - graph itself. - - For example, if the audio context is running at 44.1 kHz with default render - quantum size, and the {{AudioDestinationNode}} implements double buffering - internally and can process and output audio each render quantum, then - the processing latency is \((2\cdot128)/44100 = 5.805 \mathrm{ ms}\), - approximately. - - : outputLatency - :: - The estimation in seconds of audio output latency, i.e., the - interval between the time the UA requests the host system to - play a buffer and the time at which the first sample in the - buffer is actually processed by the audio output device. For - devices such as speakers or headphones that produce an acoustic - signal, this latter time refers to the time when a sample's - sound is produced. - - The {{outputLatency}} attribute value depends on the platform and - the connected audio output device hardware. The - {{outputLatency}} attribute value may change while the context - is running or when the associated audio output device changes. It is - useful to query this value frequently when accurate - synchronization is required. - - : renderCapacity - :: - Returns an {{AudioRenderCapacity}} instance associated with - an {{AudioContext}}. - - : sinkId - :: - Returns the value of the {{AudioContext/[[sink ID]]}} internal slot. This - attribute is cached upon update, and it returns the same object after - caching. - - : onsinkchange - :: - An [=event handler=] for {{AudioContext/setSinkId()}}. The event type of - this event handler is sinkchange. 
This event will be - dispatched when the change of the output device has completed. - - NOTE: This is not dispatched for the initial device selection in the - construction of {{AudioContext}}. The {{BaseAudioContext/statechange}} event - is available to check the readiness of the initial output device. - - : onerror - :: - An [=event handler=] for the {{Event}} dispatched from an {{AudioContext}}. The event - type of this handler is error and the user agent can - dispatch this event in the following cases: - - * When initialization or activation of the selected audio device fails. - * When the audio output device associated with an {{AudioContext}} is disconnected while - the context is {{AudioContextState/running}}. - * When the operating system reports an audio device malfunction. - -
- -
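The double-buffering figure quoted for {{AudioContext/baseLatency}} can be checked with a one-line computation, assuming the default 128-frame render quantum:

```javascript
// Processing latency for a double-buffered destination at 44.1 kHz:
// two render quanta of 128 frames each.
const renderQuantumSize = 128;
const sampleRate = 44100;
const baseLatencySeconds = (2 * renderQuantumSize) / sampleRate;
console.log((baseLatencySeconds * 1000).toFixed(3)); // "5.805" ms
```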

-Methods

- -
- : close() - :: - Closes the {{AudioContext}}, [=release system resources|releasing the system - resources=] being used. This will not automatically release - all {{AudioContext}}-created objects, but will suspend the - progression of the {{AudioContext}}'s - {{BaseAudioContext/currentTime}}, and stop - processing audio data. - -
- When close is called, execute these steps: - - 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}. - - 1. Let promise be a new Promise. - - 1. If the {{[[control thread state]]}} flag on the - {{AudioContext}} is closed, reject the promise - with {{InvalidStateError}}, abort these steps, - returning promise. - - 1. Set the {{[[control thread state]]}} flag on the {{AudioContext}} to - closed. - - 1. Queue a control message to close the {{AudioContext}}. - - 1. Return promise. -
- -
- Running a control message to close an - {{AudioContext}} means running these steps on the - rendering thread: - - 1. Attempt to release system resources. - - 2. Set the {{[[rendering thread state]]}} to suspended. -
- This will stop rendering. -
- - 3. If this control message is being run in a reaction to the - document being unloaded, abort this algorithm. -
- There is no need to notify the control thread in this case. -
- - 4. - queue a media element task to execute the following steps: - - 1. Resolve promise. - 2. If the {{BaseAudioContext/state}} attribute of the {{AudioContext}} is not already "{{AudioContextState/closed}}": - 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/closed}}". - - 1. - queue a media element task to fire - an event named {{BaseAudioContext/statechange}} at the {{AudioContext}}. -
- - When an {{AudioContext}} is closed, any - {{MediaStream}}s and {{HTMLMediaElement}}s - that were connected to an {{AudioContext}} will have their - output ignored. That is, these will no longer cause any output - to speakers or other output devices. For more flexibility in - behavior, consider using - HTMLMediaElement.captureStream(). - - Note: When an {{AudioContext}} has been closed, the implementation can - choose to aggressively release more resources than when - suspending. - -
- No parameters. -
-
- Return type: {{Promise}}<{{undefined}}> -
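The control-thread state transitions driven by close(), suspend(), and resume() can be modeled as a small state machine. This is an illustrative sketch only; a real context also involves the rendering thread, control messages, and promise resolution.

```javascript
// Illustrative model of the [[control thread state]] slot.
// States mirror AudioContextState: "suspended", "running", "closed".
class ContextStateModel {
  constructor() { this.state = "suspended"; } // contexts start suspended
  resume() {
    if (this.state === "closed") throw new Error("InvalidStateError");
    this.state = "running";
  }
  suspend() {
    if (this.state === "closed") throw new Error("InvalidStateError");
    this.state = "suspended";
  }
  close() {
    if (this.state === "closed") throw new Error("InvalidStateError");
    this.state = "closed"; // closing is irreversible
  }
}
```

Note in particular that every method rejects once the state is "closed", matching the InvalidStateError steps in the algorithms above.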
- - : createMediaElementSource(mediaElement) - :: - Creates a {{MediaElementAudioSourceNode}} - given an {{HTMLMediaElement}}. As a consequence of calling this - method, audio playback from the {{HTMLMediaElement}} will be - re-routed into the processing graph of the - {{AudioContext}}. - -
-        mediaElement: The media element that will be re-routed.
-        
- -
- Return type: {{MediaElementAudioSourceNode}} -
- - : createMediaStreamDestination() - :: - Creates a {{MediaStreamAudioDestinationNode}} - -
- No parameters. -
-
- Return type: - {{MediaStreamAudioDestinationNode}} -
- - : createMediaStreamSource(mediaStream) - :: - Creates a {{MediaStreamAudioSourceNode}}. - -
-        mediaStream: The media stream that will act as source.
-        
- -
- Return type: {{MediaStreamAudioSourceNode}} -
- - : createMediaStreamTrackSource(mediaStreamTrack) - :: - Creates a {{MediaStreamTrackAudioSourceNode}}. - -
-        mediaStreamTrack: The {{MediaStreamTrack}} that will act as source. The value of its kind attribute must be equal to "audio", or an {{InvalidStateError}} exception MUST be thrown.
-        
- -
- Return type: - {{MediaStreamTrackAudioSourceNode}} -
- - : getOutputTimestamp() - :: - Returns a new {{AudioTimestamp}} instance - containing two related audio stream position - values for the context: the {{AudioTimestamp/contextTime}} member contains - the time of the sample frame which is currently being rendered - by the audio output device (i.e., output audio stream - position), in the same units and origin as context's - {{BaseAudioContext/currentTime}}; the - {{AudioTimestamp/performanceTime}} member - contains the time estimating the moment when the sample frame - corresponding to the stored contextTime value was - rendered by the audio output device, in the same units and - origin as performance.now() (described in - [[!hr-time-3]]). - - If the context's rendering graph has not yet processed a block - of audio, then {{getOutputTimestamp}} call - returns an {{AudioTimestamp}} instance with both - members containing zero. - - After the context's rendering graph has started processing of - blocks of audio, its {{BaseAudioContext/currentTime}} attribute value - always exceeds the {{AudioTimestamp/contextTime}} value obtained - from {{AudioContext/getOutputTimestamp}} method call. - -
- The value returned from {{getOutputTimestamp}} - method can be used to get performance time estimation for the - slightly later context's time value: - -
-                function outputPerformanceTime(contextTime) {
-                    const timestamp = context.getOutputTimestamp();
-                    const elapsedTime = contextTime - timestamp.contextTime;
-                    return timestamp.performanceTime + elapsedTime * 1000;
-                }
-            
- - In the above example the accuracy of the estimation depends on - how close the argument value is to the current output audio - stream position: the closer the given contextTime - is to timestamp.contextTime, the better the - accuracy of the obtained estimation. -
- - Note: The difference between the values of the context's - {{BaseAudioContext/currentTime}} and the - {{AudioTimestamp/contextTime}} - obtained from {{AudioContext/getOutputTimestamp}} method call - cannot be considered as a reliable output latency estimation - because {{BaseAudioContext/currentTime}} may be - incremented at non-uniform time intervals, so {{AudioContext/outputLatency}} attribute should - be used instead. - -
- No parameters. -
-
- Return type: {{AudioTimestamp}} -
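The estimation pattern from the outputPerformanceTime() example can be exercised with a mock timestamp; the numbers below are invented for illustration and do not come from a real device.

```javascript
// Same arithmetic as the outputPerformanceTime() example, but taking the
// timestamp as an argument so it can run without an AudioContext.
function outputPerformanceTime(timestamp, contextTime) {
  const elapsedTime = contextTime - timestamp.contextTime; // seconds
  return timestamp.performanceTime + elapsedTime * 1000;   // milliseconds
}

// Mock values: the device is rendering contextTime 1.0 s at 2000 ms on
// the performance.now() clock.
const mock = { contextTime: 1.0, performanceTime: 2000 };
console.log(outputPerformanceTime(mock, 1.25)); // 2250
```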
- - : resume() - :: - Resumes the progression of the {{AudioContext}}'s - {{BaseAudioContext/currentTime}} when it has - been suspended. - -
- When resume is called, - execute these steps: - - 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}. - - 1. Let promise be a new Promise. - - 2. If the {{[[control thread state]]}} on the - {{AudioContext}} is closed, reject the - promise with {{InvalidStateError}}, abort these steps, - returning promise. - - 3. Set {{[[suspended by user]]}} to false. - - 4. If the context is not allowed to start, append - promise to {{BaseAudioContext/[[pending promises]]}} and - {{AudioContext/[[pending resume promises]]}} and abort these steps, returning - promise. - - 5. Set the {{[[control thread state]]}} on the - {{AudioContext}} to running. - - 6. Queue a control message to resume the {{AudioContext}}. - - 7. Return promise. -
- -
- Running a control message to resume an - {{AudioContext}} means running these steps on the - rendering thread: - - 1. Attempt to acquire system resources. + : AudioContext(contextOptions) + :: +
- 2. Set the {{[[rendering thread state]]}} on the {{AudioContext}} to running. +

+ If the current + settings object's associated Document + is NOT + fully active, throw an InvalidStateError and + abort these steps. +

+ When creating an {{AudioContext}}, + execute these steps: - 3. Start rendering the audio graph. + 1. Set a control thread state to suspended on the {{AudioContext}}. - 4. In case of failure, - - queue a media element task to execute the following steps: + 2. Set a rendering thread state to suspended on the {{AudioContext}}. - 1. Reject all promises from {{AudioContext/[[pending resume promises]]}} - in order, then clear {{AudioContext/[[pending resume promises]]}}. + 3. Let [[pending resume promises]] be a + slot on this {{AudioContext}}, that is an initially empty ordered list of + promises. - 2. Additionally, remove those promises from {{BaseAudioContext/[[pending - promises]]}}. + 4. If contextOptions is given, apply the options: - 5. - queue a media element task to execute the following steps: + 1. Set the internal latency of this {{AudioContext}} + according to contextOptions.{{AudioContextOptions/latencyHint}}, as described + in {{AudioContextOptions/latencyHint}}. - 1. Resolve all promises from {{AudioContext/[[pending resume promises]]}} in order. - 1. Clear {{AudioContext/[[pending resume promises]]}}. Additionally, remove those - promises from {{BaseAudioContext/[[pending promises]]}}. + 2. If contextOptions.{{AudioContextOptions/sampleRate}} is specified, + set the {{BaseAudioContext/sampleRate}} + of this {{AudioContext}} to this value. Otherwise, use + the sample rate of the default output device. If the + selected sample rate differs from the sample rate of the + output device, this {{AudioContext}} MUST resample the + audio output to match the sample rate of the output device. - 2. Resolve promise. + Note: If resampling is required, the latency of the + AudioContext may be affected, possibly by a large + amount. - 3. If the {{BaseAudioContext/state}} attribute of the {{AudioContext}} is not already "{{AudioContextState/running}}": + 5. If the context is allowed to start, send a + control message to start processing. - 1. 
Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/running}}". + 6. Return this {{AudioContext}} object. +
- 1. [=Queue a media element task=] to [=fire an event=] named - {{BaseAudioContext/statechange}} at the {{AudioContext}}. -
+
+ Sending a control message to start processing means + executing the following steps: -
- No parameters. -
+ 1. Attempt to acquire system resources. + In case of failure, abort the following steps. -
- Return type: {{Promise}}<{{undefined}}> -
+ 3. Set the rendering thread state to running on the {{AudioContext}}. - : suspend() - :: - Suspends the progression of {{AudioContext}}'s - {{BaseAudioContext/currentTime}}, allows any - current context processing blocks that are already processed to - be played to the destination, and then allows the system to - release its claim on audio hardware. This is generally useful - when the application knows it will not need the - {{AudioContext}} for some time, and wishes to temporarily - release system resource associated with the - {{AudioContext}}. The promise resolves when the frame buffer - is empty (has been handed off to the hardware), or immediately - (with no other effect) if the context is already - suspended. The promise is rejected if the context - has been closed. + 4. Queue a task on the control thread event loop, to execute these steps: -
- When suspend is called, execute these steps: + 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/running}}". - 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}. + 2. Queue a task to fire a simple event named statechange at the {{AudioContext}}. +
- 1. Let promise be a new Promise. + Note: It is unfortunately not possible to programmatically notify + authors that the creation of the {{AudioContext}} failed. + User-Agents are encouraged to log an informative message if + they have access to a logging mechanism, such as a developer + tools console. - 2. If the {{[[control thread state]]}} on the - {{AudioContext}} is closed, reject the promise - with {{InvalidStateError}}, abort these steps, - returning promise. +
+		contextOptions: User-specified options controlling how the {{AudioContext}} should be constructed.
+		
+
- 3. Append promise to {{BaseAudioContext/[[pending promises]]}}. +

+Attributes

- 4. Set {{[[suspended by user]]}} to true. +
+ : baseLatency + :: + This represents the number of seconds of processing latency + incurred by the {{AudioContext}} passing the audio from the + {{AudioDestinationNode}} to the audio subsystem. It does not + include any additional latency that might be caused by any + other processing between the output of the + {{AudioDestinationNode}} and the audio hardware and + specifically does not include any latency incurred by the audio + graph itself. + + For example, if the audio context is running at 44.1 kHz and + the {{AudioDestinationNode}} implements double buffering + internally and can process and output audio each render + quantum, then the processing latency is \((2\cdot128)/44100 + = 5.805 \mathrm{ ms}\), approximately. + + : outputLatency + :: + The estimation in seconds of audio output latency, i.e., the + interval between the time the UA requests the host system to + play a buffer and the time at which the first sample in the + buffer is actually processed by the audio output device. For + devices such as speakers or headphones that produce an acoustic + signal, this latter time refers to the time when a sample's + sound is produced. + + The {{outputLatency}} attribute value depends + on the platform and the connected hardware audio output device. + The {{outputLatency}} attribute value does not + change for the context's lifetime as long as the connected + audio output device remains the same. If the audio output + device is changed, the {{outputLatency}} + attribute value will be updated accordingly. +
- 5. Set the {{[[control thread state]]}} on the {{AudioContext}} to suspended. +

+Methods

- 6. Queue a control message to suspend the {{AudioContext}}. +
+ : close() + :: + Closes the {{AudioContext}}, [=release system resources|releasing the system + resources=] being used. This will not automatically release + all {{AudioContext}}-created objects, but will suspend the + progression of the {{AudioContext}}'s + {{BaseAudioContext/currentTime}}, and stop + processing audio data. + +
+ When close is called, execute these steps: + + 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}. + + 1. Let promise be a new Promise. + + 1. If the control thread state flag on the + {{AudioContext}} is closed reject the promise + with {{InvalidStateError}}, abort these steps, + returning promise. + + 1. Set the control thread state flag on the {{AudioContext}} to closed. + + 1. Queue a control message to close the {{AudioContext}}. + + 1. Return promise. +
+ +
+ Running a control message to close an + {{AudioContext}} means running these steps on the + rendering thread: + + 1. Attempt to release system resources. + + 2. Set the rendering thread state to suspended. +
+ This will stop rendering. +
+ + 3. If this control message is being run in a reaction to the + document being unloaded, abort this algorithm. +
+ There is no need to notify the control thread in this case. +
+ + 4. Queue a task on the control thread's event loop, to execute these steps: + 1. Resolve promise. + 2. If the {{BaseAudioContext/state}} attribute of the {{AudioContext}} is not already "{{AudioContextState/closed}}": + 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/closed}}". + 2. Queue a task to fire a simple event named statechange at the {{AudioContext}}. +
+ + When an {{AudioContext}} is closed, any + {{MediaStream}}s and {{HTMLMediaElement}}s + that were connected to an {{AudioContext}} will have their + output ignored. That is, these will no longer cause any output + to speakers or other output devices. For more flexibility in + behavior, consider using + HTMLMediaElement.captureStream(). + + Note: When an {{AudioContext}} has been closed, implementation can + choose to aggressively release more resources than when + suspending. + +
+ No parameters. +
+
+ Return type: {{Promise}}<{{void}}> +
+ + : createMediaElementSource(mediaElement) + :: + Creates a {{MediaElementAudioSourceNode}} + given an {{HTMLMediaElement}}. As a consequence of calling this + method, audio playback from the {{HTMLMediaElement}} will be + re-routed into the processing graph of the + {{AudioContext}}. + +
+		mediaElement: The media element that will be re-routed.
+		
+ +
+ Return type: {{MediaElementAudioSourceNode}} +
+ + : createMediaStreamDestination() + :: + Creates a {{MediaStreamAudioDestinationNode}} + +
+ No parameters. +
+
+ Return type: + {{MediaStreamAudioDestinationNode}} +
+ + : createMediaStreamSource(mediaStream) + :: + Creates a {{MediaStreamAudioSourceNode}}. + +
+		mediaStream: The media stream that will act as source.
+		
+ +
+ Return type: {{MediaStreamAudioSourceNode}} +
+ + : createMediaStreamTrackSource(mediaStreamTrack) + :: + Creates a {{MediaStreamTrackAudioSourceNode}}. + +
+		mediaStreamTrack: The {{MediaStreamTrack}} that will act as source. The value of its kind attribute must be equal to "audio", or an {{InvalidStateError}} exception MUST be thrown.
+		
+ +
+ Return type: + {{MediaStreamTrackAudioSourceNode}} +
+ + : getOutputTimestamp() + :: + Returns a new {{AudioTimestamp}} instance + containing two related audio stream position + values for the context: the {{AudioTimestamp/contextTime}} member contains + the time of the sample frame which is currently being rendered + by the audio output device (i.e., output audio stream + position), in the same units and origin as context's + {{BaseAudioContext/currentTime}}; the + {{AudioTimestamp/performanceTime}} member + contains the time estimating the moment when the sample frame + corresponding to the stored contextTime value was + rendered by the audio output device, in the same units and + origin as performance.now() (described in + [[!hr-time-2]]). + + If the context's rendering graph has not yet processed a block + of audio, then {{getOutputTimestamp}} call + returns an {{AudioTimestamp}} instance with both + members containing zero. + + After the context's rendering graph has started processing of + blocks of audio, its {{BaseAudioContext/currentTime}} attribute value + always exceeds the {{AudioTimestamp/contextTime}} value obtained + from {{AudioContext/getOutputTimestamp}} method call. + +
+ The value returned from {{getOutputTimestamp}} + method can be used to get performance time estimation for the + slightly later context's time value: + +
+				function outputPerformanceTime(contextTime) {
+					const timestamp = context.getOutputTimestamp();
+					const elapsedTime = contextTime - timestamp.contextTime;
+					return timestamp.performanceTime + elapsedTime * 1000;
+				}
+			
+ + In the above example the accuracy of the estimation depends on + how close the argument value is to the current output audio + stream position: the closer the given contextTime + is to timestamp.contextTime, the better the + accuracy of the obtained estimation. +
+ + Note: The difference between the values of the context's + {{BaseAudioContext/currentTime}} and the + {{AudioTimestamp/contextTime}} + obtained from {{AudioContext/getOutputTimestamp}} method call + cannot be considered as a reliable output latency estimation + because {{BaseAudioContext/currentTime}} may be + incremented at non-uniform time intervals, so {{AudioContext/outputLatency}} attribute should + be used instead. + +
+ No parameters. +
+
+ Return type: {{AudioTimestamp}} +
+ + : resume() + :: + Resumes the progression of the {{AudioContext}}'s + {{BaseAudioContext/currentTime}} when it has + been suspended. + +
+ When resume is called, + execute these steps: + + 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}. + + 1. Let promise be a new Promise. + + 2. If the control thread state on the + {{AudioContext}} is closed reject the + promise with {{InvalidStateError}}, abort these steps, + returning promise. + + 3. Set {{[[suspended by user]]}} to false. + + 4. If the context is not allowed to start, append + promise to {{BaseAudioContext/[[pending promises]]}} and + {{AudioContext/[[pending resume promises]]}} and abort these steps, returning + promise. - 7. Return promise. -
+ 5. Set the control thread state on the + {{AudioContext}} to running. -
- Running a control message to suspend an - {{AudioContext}} means running these steps on the - rendering thread: + 6. Queue a control message to resume the {{AudioContext}}. - 1. Attempt to release system resources. + 7. Return promise. +
- 2. Set the {{[[rendering thread state]]}} on the {{AudioContext}} to suspended. +
+ Running a control message to resume an + {{AudioContext}} means running these steps on the + rendering thread: - 3. - queue a media element task to execute the following steps: + 1. Attempt to acquire system resources. - 1. Resolve promise. + 2. Set the rendering thread state on the {{AudioContext}} to running. - 2. If the {{BaseAudioContext/state}} - attribute of the {{AudioContext}} is not already "{{AudioContextState/suspended}}": + 3. Start rendering the audio graph. - 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/suspended}}". + 4. In case of failure, queue a task on the control thread to execute the following, + and abort these steps: - 1. [=Queue a media element task=] to [=fire an event=] named - {{BaseAudioContext/statechange}} at the {{AudioContext}}. -
+ 1. Reject all promises from {{AudioContext/[[pending resume promises]]}} + in order, then clear {{AudioContext/[[pending resume promises]]}}. - While an {{AudioContext}} is suspended, - {{MediaStream}}s will have their output ignored; that - is, data will be lost by the real time nature of media streams. - {{HTMLMediaElement}}s will similarly have their output - ignored until the system is resumed. {{AudioWorkletNode}}s - and {{ScriptProcessorNode}}s will cease to have their - processing handlers invoked while suspended, but will resume - when the context is resumed. For the purpose of - {{AnalyserNode}} window functions, the data is considered as - a continuous stream - i.e. the - resume()/suspend() does not cause - silence to appear in the {{AnalyserNode}}'s stream of data. - In particular, calling {{AnalyserNode}} functions repeatedly - when a {{AudioContext}} is suspended MUST return the same - data. + 2. Additionally, remove those promises from {{BaseAudioContext/[[pending + promises]]}}. -
- No parameters. -
-
- Return type: {{Promise}}<{{undefined}}> -
+ 5. Queue a task on the control thread's event loop, to + execute these steps: - : setSinkId((DOMString or AudioSinkOptions) sinkId) - :: - Sets the identifier of an output device. When this method is invoked, the - user agent MUST run the following steps: -
- 1. Let |sinkId| be the method's first argument. + 1. Clear {{AudioContext/[[pending resume promises]]}}. Additionally, remove those + promises from {{BaseAudioContext/[[pending promises]]}}. - 1. If |sinkId| is equal to {{AudioContext/[[sink ID]]}}, return a - promise, resolve it immediately and abort these steps. + 2. Resolve promise. - 1. Let |validationResult| be the return value of - sink identifier validation - of |sinkId|. - - 1. If |validationResult| is not null, return a promise - rejected with |validationResult|. Abort these steps. + 3. If the {{BaseAudioContext/state}} attribute of the {{AudioContext}} is not already "{{AudioContextState/running}}": - 1. Let |p| be a new promise. + 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/running}}". - 1. Send a control message with |p| and |sinkId| to start - processing. + 2. Queue a task to fire a simple event named statechange at the {{AudioContext}}. +
- 1. Return |p|. - +
+ No parameters. +
-
- Sending a control message to start processing during - {{AudioContext/setSinkId()}} means executing the following steps: +
+ Return type: {{Promise}}<{{void}}> +
- 1. Let |p| be the promise passed into this algorithm. + : suspend() + :: + Suspends the progression of {{AudioContext}}'s + {{BaseAudioContext/currentTime}}, allows any + current context processing blocks that are already processed to + be played to the destination, and then allows the system to + release its claim on audio hardware. This is generally useful + when the application knows it will not need the + {{AudioContext}} for some time, and wishes to temporarily + release system resource associated with the + {{AudioContext}}. The promise resolves when the frame buffer + is empty (has been handed off to the hardware), or immediately + (with no other effect) if the context is already + suspended. The promise is rejected if the context + has been closed. - 1. Let |sinkId| be the sink identifier passed into this algorithm. +
+ When suspend is called, execute these steps: - 1. If both |sinkId| and {{AudioContext/[[sink ID]]}} are a type of - {{DOMString}}, and they are equal to each other, - - queue a media element task to resolve |p| and abort these steps. + 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}. - 1. If |sinkId| is a type of {{AudioSinkOptions}} and - {{AudioContext/[[sink ID]]}} is a type of {{AudioSinkInfo}}, and - {{AudioSinkOptions/type}} in |sinkId| and {{AudioSinkInfo/type}} in - {{AudioContext/[[sink ID]]}} are equal, - - queue a media element task to resolve |p| and abort these steps. + 1. Let promise be a new Promise. - 1. Let |wasRunning| be true. + 2. If the control thread state on the + {{AudioContext}} is closed reject the promise + with {{InvalidStateError}}, abort these steps, + returning promise. - 1. Set |wasRunning| to false if the {{[[rendering thread state]]}} on - the {{AudioContext}} is "suspended". + 3. Append promise to {{BaseAudioContext/[[pending promises]]}}. - 1. Pause the renderer after processing the current render quantum. + 4. Set {{[[suspended by user]]}} to true. - 1. Attempt to release system resources. + 5. Set the control thread state on the {{AudioContext}} to suspended. - 1. If |wasRunning| is true: + 6. Queue a control message to suspend the {{AudioContext}}. - 1. Set the {{[[rendering thread state]]}} on the {{AudioContext}} to - "suspended". - - 1. - Queue a media element task to execute the following steps: + 7. Return promise. +
- 1. If the {{BaseAudioContext/state}} attribute of the - {{AudioContext}} is not already "{{AudioContextState/suspended}}": - - 1. Set the {{BaseAudioContext/state}} attribute of the - {{AudioContext}} to "{{AudioContextState/suspended}}". - - 1. [=Fire an event=] named {{BaseAudioContext/statechange}} at the - associated {{AudioContext}}. - - 1. Attempt to acquire system resources to use - one of the following audio output devices based on {{AudioContext/[[sink ID]]}} - for rendering: - - * The default audio output device for the empty string. - * An audio output device identified by {{AudioContext/[[sink ID]]}}. - - In case of failure, reject |p| with "{{InvalidAccessError}}" and abort - the following steps. - - 1. - Queue a media element task to execute the following steps: - - 1. If |sinkId| is a type of {{DOMString}}, set - {{AudioContext/[[sink ID]]}} to |sinkId|. Abort these steps. +
+ Running a control message to suspend an + {{AudioContext}} means running these steps on the + rendering thread: - 1. If |sinkId| is a type of {{AudioSinkOptions}} and - {{AudioContext/[[sink ID]]}} is a type of {{DOMString}}, - set {{AudioContext/[[sink ID]]}} to a new instance of - {{AudioSinkInfo}} created with the value of - {{AudioSinkOptions/type}} of |sinkId|. + 1. Attempt to release system resources. - 1. If |sinkId| is a type of {{AudioSinkOptions}} and - {{AudioContext/[[sink ID]]}} is a type of {{AudioSinkInfo}}, - set {{AudioSinkInfo/type}} of {{AudioContext/[[sink ID]]}} to - the {{AudioSinkOptions/type}} value of |sinkId|. + 2. Set the rendering thread state on the {{AudioContext}} to suspended. - 1. Resolve |p|. - - 1. [=Fire an event=] named {{AudioContext/sinkchange}} at the - associated {{AudioContext}}. + 3. Queue a task on the control thread's event loop, to execute these steps: - 1. If |wasRunning| is true: + 1. Resolve promise. - 1. Set the {{[[rendering thread state]]}} on the {{AudioContext}} to - "running". + 2. If the {{BaseAudioContext/state}} + attribute of the {{AudioContext}} is not already "{{AudioContextState/suspended}}": - 1. - Queue a media element task to execute the following steps: + 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/suspended}}". + 2. Queue a task to fire a simple event named statechange at the {{AudioContext}}. +
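The ordering of the control-message steps above can be sketched as a toy, single-threaded state machine. This is illustrative JavaScript only, not part of the specification: real implementations run these steps across a control thread and a rendering thread, and the property names used here (`controlThreadState`, `renderingThreadState`, `suspendedByUser`) are hypothetical stand-ins for the internal slots.

```javascript
// Toy model of suspend(): the control thread transitions first, a control
// message then suspends the rendering thread, and a task queued back on the
// control thread finally updates the observable state attribute.
function suspendContext(ctx) {
  if (ctx.controlThreadState === "closed") {
    // In the spec, the returned promise is rejected instead.
    throw new Error("InvalidStateError");
  }
  ctx.suspendedByUser = true;
  ctx.controlThreadState = "suspended";
  // Control message handled on the rendering thread:
  // attempt to release system resources, then...
  ctx.renderingThreadState = "suspended";
  // Task queued on the control thread's event loop:
  if (ctx.state !== "suspended") {
    ctx.state = "suspended"; // followed by a "statechange" event
  }
  return ctx;
}
```

Note that the observable {{BaseAudioContext/state}} attribute only changes after the rendering thread has acted, which is why a context can briefly report "running" after `suspend()` returns its promise.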
- 1. If the {{BaseAudioContext/state}} attribute of the - {{AudioContext}} is not already "{{AudioContextState/running}}": + While an {{AudioContext}} is suspended, + {{MediaStream}}s will have their output ignored; that + is, data will be lost owing to the real-time nature of media streams. + {{HTMLMediaElement}}s will similarly have their output + ignored until the system is resumed. {{AudioWorkletNode}}s + and {{ScriptProcessorNode}}s will cease to have their + processing handlers invoked while suspended, but will resume + when the context is resumed. For the purpose of + {{AnalyserNode}} window functions, the data is considered as + a continuous stream - i.e. calling + resume()/suspend() does not cause + silence to appear in the {{AnalyserNode}}'s stream of data. + In particular, calling {{AnalyserNode}} functions repeatedly + when an {{AudioContext}} is suspended MUST return the same + data. - 1. Set the {{BaseAudioContext/state}} attribute of the - {{AudioContext}} to "{{AudioContextState/running}}". - - 1. [=Fire an event=] named {{BaseAudioContext/statechange}} at the - associated {{AudioContext}}. -
+
+ No parameters. +
+
+ Return type: {{Promise}}<{{void}}> +
-

-Validating {{AudioContext/sinkId}}

- -This algorithm is used to validate the information provided to modify -{{AudioContext/sinkId}}: - -
- 1. Let |document| be the current settings object's [=associated Document=]. - - 1. Let |sinkIdArg| be the value passed into this algorithm. - - 1. If |document| is not allowed to use the feature identified by - "speaker-selection", return a new {{DOMException}} whose name - is "{{NotAllowedError}}". - - 1. If |sinkIdArg| is a type of {{DOMString}} but it is not equal to the empty - string and it does not match any audio output device identified by the - result that would be provided by {{MediaDevices/enumerateDevices()}}, - return a new {{DOMException}} whose name is "{{NotFoundError}}". - - 1. Return null. -
- -

+

{{AudioContextOptions}}

The {{AudioContextOptions}} dictionary is used to specify user-specified options for an {{AudioContext}}.
-    dictionary AudioContextOptions {
-        (AudioContextLatencyCategory or double) latencyHint = "interactive";
-        float sampleRate;
-        (DOMString or AudioSinkOptions) sinkId;
-        (AudioContextRenderSizeCategory or unsigned long) renderSizeHint = "default";
-    };
+dictionary AudioContextOptions {
+	(AudioContextLatencyCategory or double) latencyHint = "interactive";
+	float sampleRate;
+};
 
Dictionary {{AudioContextOptions}} Members
- : latencyHint - :: - Identify the type of playback, which affects tradeoffs - between audio output latency and power consumption. - - The preferred value of the latencyHint is a - value from {{AudioContextLatencyCategory}}. However, a - double can also be specified for the number of seconds of - latency for finer control to balance latency and power - consumption. It is at the browser's discretion to interpret - the number appropriately. The actual latency used is given by - AudioContext's {{AudioContext/baseLatency}} attribute. - - : sampleRate - :: - Set the {{BaseAudioContext/sampleRate}} to this value - for the {{AudioContext}} that will be created. The - supported values are the same as the sample rates for an - {{AudioBuffer}}. A - {{NotSupportedError}} exception MUST be thrown if - the specified sample rate is not supported. - - If {{AudioContextOptions/sampleRate}} is not - specified, the preferred sample rate of the output device for - this {{AudioContext}} is used. - - : sinkId - :: - The identifier or associated information of the audio output device. - See {{AudioContext/sinkId}} for more details. - - : renderSizeHint - :: - This allows users to ask for a particular render quantum size when an - integer is passed, to use the default of 128 frames if nothing or - "default" is passed, or to ask the User-Agent to pick a good - render quantum size if "hardware" is specified. - - It is a hint that might not be honored. -
- -

-{{AudioSinkOptions}}

- -The {{AudioSinkOptions}} dictionary is used to specify options for -{{AudioContext/sinkId}}. - -
-dictionary AudioSinkOptions {
-    required AudioSinkType type;
-};
-
- -
-Dictionary {{AudioSinkOptions}} Members
- -
- : type - :: - A value of {{AudioSinkType}} to specify the type of the device. -
- -

-{{AudioSinkInfo}}

- -The {{AudioSinkInfo}} interface is used to get information on the current -audio output device via {{AudioContext/sinkId}}. - -
-[Exposed=Window]
-interface AudioSinkInfo {
-    readonly attribute AudioSinkType type;
-};
-
- -
-Attributes
- -
- : type - :: - A value of {{AudioSinkType}} that represents the type of the device. + : latencyHint + :: + Identify the type of playback, which affects tradeoffs + between audio output latency and power consumption. + + The preferred value of the latencyHint is a + value from {{AudioContextLatencyCategory}}. However, a + double can also be specified for the number of seconds of + latency for finer control to balance latency and power + consumption. It is at the browser's discretion to interpret + the number appropriately. The actual latency used is given by + AudioContext's {{AudioContext/baseLatency}} attribute. + + : sampleRate + :: + Set the {{BaseAudioContext/sampleRate}} to this value + for the {{AudioContext}} that will be created. The + supported values are the same as the sample rates for an + {{AudioBuffer}}. A + {{NotSupportedError}} exception MUST be thrown if + the specified sample rate is not supported. + + If {{AudioContextOptions/sampleRate}} is not + specified, the preferred sample rate of the output device for + this {{AudioContext}} is used.
-

+

{{AudioTimestamp}}

 dictionary AudioTimestamp {
-    double contextTime;
-    DOMHighResTimeStamp performanceTime;
+	double contextTime;
+	DOMHighResTimeStamp performanceTime;
 };
 
@@ -2381,142 +1884,16 @@ dictionary AudioTimestamp { Dictionary {{AudioTimestamp}} Members
- : contextTime - :: - Represents a point in the time coordinate system of - BaseAudioContext's {{BaseAudioContext/currentTime}}. - - : performanceTime - :: - Represents a point in the time coordinate system of a - Performance interface implementation (described in - [[!hr-time-3]]). -
- -

-{{AudioRenderCapacity}}

- -
-[Exposed=Window]
-interface AudioRenderCapacity : EventTarget {
-    undefined start(optional AudioRenderCapacityOptions options = {});
-        undefined stop();
-        attribute EventHandler onupdate;
-};
-
- -This interface provides rendering performance metrics of an -{{AudioContext}}. In order to calculate them, the renderer collects a -load value per system-level audio callback. - -
-Attributes
- -
- : onupdate - :: - The event type of this event handler is update. Events - dispatched to the event handler will use the - {{AudioRenderCapacityEvent}} interface. -
- -
-Methods
- -
- : start(options) - :: - Starts metric collection and analysis. This will repeatedly [=fire an event=] named - {{AudioRenderCapacity/update}} at the {{AudioRenderCapacity}}, using - {{AudioRenderCapacityEvent}}, with the given update interval in - {{AudioRenderCapacityOptions}}. - - : stop() - :: - Stops metric collection and analysis. It also stops dispatching - {{AudioRenderCapacity/update}} events. -
- -

-{{AudioRenderCapacityOptions}}

- -The {{AudioRenderCapacityOptions}} dictionary can be used to provide user -options for an {{AudioRenderCapacity}}. - -
-dictionary AudioRenderCapacityOptions {
-        double updateInterval = 1;
-};
-
- -
-Dictionary {{AudioRenderCapacityOptions}} Members
- -
- : updateInterval - :: - An update interval (in seconds) for dispatching - {{AudioRenderCapacityEvent}}s. A load value is calculated - per system-level audio callback, and multiple load values will - be collected over the specified interval period. For example, if - the renderer runs at a 48 kHz sample rate and the system-level - audio callback's buffer size is 192 frames, 250 load values - will be collected over a 1 second interval. - - If the given value is smaller than the duration of - the system-level audio callback, {{NotSupportedError}} is - thrown. -
- -
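The arithmetic behind the 48 kHz / 192-frame example above can be made concrete. The helper below is illustrative only (its name and signature are not part of the API); it also mirrors the {{NotSupportedError}} condition for intervals shorter than one audio callback.

```javascript
// How many load values an implementation would collect per update interval:
// one load value per system-level audio callback.
function loadValuesPerInterval(sampleRate, callbackBufferSize, updateInterval = 1) {
  const callbackDuration = callbackBufferSize / sampleRate; // seconds per callback
  if (updateInterval < callbackDuration) {
    // Mirrors the NotSupportedError condition described above.
    throw new Error("NotSupportedError");
  }
  return Math.floor((updateInterval * sampleRate) / callbackBufferSize);
}

// 48 kHz renderer, 192-frame system callback, default 1 s interval:
console.log(loadValuesPerInterval(48000, 192)); // → 250
```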

-{{AudioRenderCapacityEvent}}

- -
-[Exposed=Window]
-interface AudioRenderCapacityEvent : Event {
-    constructor (DOMString type, optional AudioRenderCapacityEventInit eventInitDict = {});
-        readonly attribute double timestamp;
-        readonly attribute double averageLoad;
-        readonly attribute double peakLoad;
-        readonly attribute double underrunRatio;
-};
-
-dictionary AudioRenderCapacityEventInit : EventInit {
-    double timestamp = 0;
-    double averageLoad = 0;
-    double peakLoad = 0;
-    double underrunRatio = 0;
-};
-
- -
-Attributes
- -
- : timestamp - :: - The start time of the data collection period in terms of the - associated {{AudioContext}}'s {{BaseAudioContext/currentTime}}. - : averageLoad - :: - An average of collected load values over the given update - interval. The precision is limited to 1/100th. - : peakLoad - :: - A maximum value from collected load values over the given update - interval. The precision is also limited to 1/100th. - : underrunRatio - :: - A ratio between the number of buffer underruns (when a - load value is greater than 1.0) and the total number of - system-level audio callbacks over the given update interval. - - Where \(u\) is the number of buffer underruns and \(N\) is the - number of system-level audio callbacks over the given update - interval, the buffer underrun ratio is: - - 0.0 if \(u\) = 0. - - Otherwise, compute \(u/N\) and round it up to the - nearest 100th. + : contextTime + :: + Represents a point in the time coordinate system of + BaseAudioContext's {{BaseAudioContext/currentTime}}. + + : performanceTime + :: + Represents a point in the time coordinate system of a + Performance interface implementation (described in + [[!hr-time-2]]).
-
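The underrun-ratio formula above (zero when \(u = 0\), otherwise \(u/N\) rounded up to the nearest 1/100) can be sketched in a few lines. This is an illustrative helper, not part of the API:

```javascript
// u: number of buffer underruns; N: number of system-level audio callbacks
// over the update interval. Result is ceiled to two decimal places.
function underrunRatio(u, N) {
  if (u === 0) return 0.0;
  return Math.ceil((u / N) * 100) / 100;
}

console.log(underrunRatio(0, 250)); // → 0
console.log(underrunRatio(1, 250)); // → 0.01
console.log(underrunRatio(3, 8));   // → 0.38 (3/8 = 0.375, rounded up)
```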

+

The {{OfflineAudioContext}} Interface

{{OfflineAudioContext}} is a particular type of @@ -2542,13 +1919,13 @@ returned promise with the rendered result as an [Exposed=Window] interface OfflineAudioContext : BaseAudioContext { - constructor(OfflineAudioContextOptions contextOptions); - constructor(unsigned long numberOfChannels, unsigned long length, float sampleRate); - Promise<AudioBuffer> startRendering(); - Promise<undefined> resume(); - Promise<undefined> suspend(double suspendTime); - readonly attribute unsigned long length; - attribute EventHandler oncomplete; + constructor(OfflineAudioContextOptions contextOptions); + constructor(unsigned long numberOfChannels, unsigned long length, float sampleRate); + Promise<AudioBuffer> startRendering(); + Promise<void> resume(); + Promise<void> suspend(double suspendTime); + readonly attribute unsigned long length; + attribute EventHandler oncomplete; }; @@ -2556,296 +1933,266 @@ interface OfflineAudioContext : BaseAudioContext { Constructors
- : OfflineAudioContext(contextOptions) - :: -
- -

- If the [=current settings object=]'s [=relevant global object=]'s - [=associated Document=] is NOT [=fully active=], - throw an {{InvalidStateError}} and abort these steps. -

- Let |c| be a new {{OfflineAudioContext}} object. - Initialize |c| as follows: - - 1. Set the {{[[control thread state]]}} for |c| - to "suspended". - - 1. Set the {{[[rendering thread state]]}} for - |c| to "suspended". - - 1. Determine the {{[[render quantum size]]}} for this {{OfflineAudioContext}}, - based on the value of the {{OfflineAudioContextOptions/renderSizeHint}}: - - 1. If it has the default value of "default" or - "hardware", set the {{[[render quantum size]]}} private - slot to 128. - - 1. Else, if an integer has been passed, the User-Agent can decide to - honour this value by setting it to the {{[[render quantum size]]}} - private slot. - - 1. Construct an {{AudioDestinationNode}} with its - {{AudioNode/channelCount}} set to - contextOptions.numberOfChannels. - - 1. Let |messageChannel| be a new {{MessageChannel}}. - - 1. Let |controlSidePort| be the value of - |messageChannel|'s {{MessageChannel/port1}} attribute. - - 1. Let |renderingSidePort| be the value of - |messageChannel|'s {{MessageChannel/port2}} attribute. - - 1. Let |serializedRenderingSidePort| be the result of - [$StructuredSerializeWithTransfer$](|renderingSidePort|, - « |renderingSidePort| »). - - 1. Set this {{BaseAudioContext/audioWorklet}}'s {{AudioWorklet/port}} to - |controlSidePort|. - - 1. Queue a control message to set the - MessagePort on the AudioContextGlobalScope, with - |serializedRenderingSidePort|. -
- -
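The render-quantum-size determination in the constructor steps above can be summarized as a small selection function. This sketch is illustrative only; whether an integer hint is honoured is left to the User-Agent, and here we assume it is.

```javascript
// "default" and "hardware" both yield 128 for an OfflineAudioContext,
// since there is no audio hardware to query; a positive integer hint
// may be honoured at the UA's discretion.
function renderQuantumSize(renderSizeHint) {
  if (renderSizeHint === "default" || renderSizeHint === "hardware") {
    return 128;
  }
  if (Number.isInteger(renderSizeHint) && renderSizeHint > 0) {
    return renderSizeHint; // assumed: the UA honours the requested size
  }
  return 128;
}

console.log(renderQuantumSize("default")); // → 128
console.log(renderQuantumSize(256));       // → 256
```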
-        contextOptions: The initial parameters needed to construct this context.
-        
- - : OfflineAudioContext(numberOfChannels, length, sampleRate) - :: - The {{OfflineAudioContext}} can be constructed with the same arguments - as AudioContext.createBuffer. A - {{NotSupportedError}} exception MUST be thrown if any - of the arguments is negative, zero, or outside its nominal - range. - - The OfflineAudioContext is constructed as if - -
-            new OfflineAudioContext({
-                    numberOfChannels: numberOfChannels,
-                    length: length,
-                    sampleRate: sampleRate
-            })
-        
- - were called instead. - -
-        numberOfChannels: Determines how many channels the buffer will have. See {{BaseAudioContext/createBuffer()}} for the supported number of channels.
-        length: Determines the size of the buffer in sample-frames.
-        sampleRate: Describes the sample-rate of the [=linear PCM=] audio data in the buffer in sample-frames per second. See {{BaseAudioContext/createBuffer()}} for valid sample rates.
-        
+ : OfflineAudioContext(contextOptions) + :: +
+ +

+ If the current + settings object's associated + Document is NOT + fully active, throw an InvalidStateError and + abort these steps. +

+ Let c be a new {{OfflineAudioContext}} object. + Initialize c as follows: + + 1. Set the control thread state for c + to "suspended". + + 2. Set the rendering thread state for + c to "suspended". + + 3. Construct an {{AudioDestinationNode}} with its + {{AudioNode/channelCount}} set to + contextOptions.numberOfChannels. +
+ +
+		contextOptions: The initial parameters needed to construct this context.
+		
+ + : OfflineAudioContext(numberOfChannels, length, sampleRate) + :: + The {{OfflineAudioContext}} can be constructed with the same arguments + as AudioContext.createBuffer. A + {{NotSupportedError}} exception MUST be thrown if any + of the arguments is negative, zero, or outside its nominal + range. + + The OfflineAudioContext is constructed as if + +
+			new OfflineAudioContext({
+					numberOfChannels: numberOfChannels,
+					length: length,
+					sampleRate: sampleRate
+			})
+		
+ + were called instead. + +
+		numberOfChannels: Determines how many channels the buffer will have. See {{BaseAudioContext/createBuffer()}} for the supported number of channels.
+		length: Determines the size of the buffer in sample-frames.
+		sampleRate: Describes the sample-rate of the [=linear PCM=] audio data in the buffer in sample-frames per second. See {{BaseAudioContext/createBuffer()}} for valid sample rates.
+		
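Informally, the legacy three-argument constructor's "negative, zero, or outside its nominal range" rejection can be sketched as a predicate. The specific bounds below (1 to 32 channels, 8000 to 96000 Hz) are assumptions for illustration; the actual nominal ranges are those of {{BaseAudioContext/createBuffer()}}.

```javascript
// Illustrative validity check for (numberOfChannels, length, sampleRate);
// when this returns false, the constructor would throw NotSupportedError.
function offlineContextArgsValid(numberOfChannels, length, sampleRate) {
  if (!Number.isInteger(numberOfChannels) || numberOfChannels < 1 || numberOfChannels > 32) {
    return false; // assumed channel range
  }
  if (!Number.isInteger(length) || length < 1) {
    return false; // zero or negative lengths are rejected
  }
  if (!(sampleRate >= 8000 && sampleRate <= 96000)) {
    return false; // assumed sample-rate range
  }
  return true;
}

console.log(offlineContextArgsValid(2, 44100, 44100)); // → true
console.log(offlineContextArgsValid(0, 44100, 44100)); // → false
```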

Attributes

- : length - :: - The size of the buffer in sample-frames. This is the same as the - value of the length parameter for the constructor. - - : oncomplete - :: - The event type of this event handler is complete. The event - dispatched to the event handler will use the {{OfflineAudioCompletionEvent}} - interface. It is the last event fired on an {{OfflineAudioContext}}. + : length + :: + The size of the buffer in sample-frames. This is the same as the + value of the length parameter for the constructor. + + : oncomplete + :: + An EventHandler of type OfflineAudioCompletionEvent. + It is the last event fired on an {{OfflineAudioContext}}.

Methods

- : startRendering() - :: - Given the current connections and scheduled changes, starts - rendering audio. - - Although the primary method of getting the rendered audio data - is via its promise return value, the instance will also fire an - event named complete for legacy reasons. - -
- Let [[rendering started]] be an internal slot of this {{OfflineAudioContext}}. Initialize this slot to false. - - When startRendering is - called, the following steps MUST be performed on the control - thread: - -
    -
  1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}. - -
  2. If the {{[[rendering started]]}} slot on the - {{OfflineAudioContext}} is true, return a rejected - promise with {{InvalidStateError}}, and abort these - steps. + : startRendering() + :: + Given the current connections and scheduled changes, starts + rendering audio. + + Although the primary method of getting the rendered audio data + is via its promise return value, the instance will also fire an + event named complete for legacy reasons. + +
    + Let [[rendering started]] be an internal slot of this {{OfflineAudioContext}}. Initialize this slot to false. + + When startRendering is + called, the following steps MUST be performed on the control + thread: + +
      +
    1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}. -
    2. Set the {{[[rendering started]]}} slot of the - {{OfflineAudioContext}} to true. +
    3. If the {{[[rendering started]]}} slot on the + {{OfflineAudioContext}} is true, return a rejected + promise with {{InvalidStateError}}, and abort these + steps. + +
    4. Set the {{[[rendering started]]}} slot of the + {{OfflineAudioContext}} to true. -
    5. Let promise be a new promise. +
    6. Let promise be a new promise. -
    7. Create a new {{AudioBuffer}}, with a number of - channels, length and sample rate equal respectively to the - numberOfChannels, length and - sampleRate values passed to this instance's - constructor in the contextOptions parameter. - Assign this buffer to an internal slot - [[rendered buffer]] in the {{OfflineAudioContext}}. +
    8. Create a new {{AudioBuffer}}, with a number of + channels, length and sample rate equal respectively to the + numberOfChannels, length and + sampleRate values passed to this instance's + constructor in the contextOptions parameter. + Assign this buffer to an internal slot + [[rendered buffer]] in the {{OfflineAudioContext}}. -
    9. If an exception was thrown during the preceding - {{AudioBuffer}} constructor call, reject - promise with this exception. +
    10. If an exception was thrown during the preceding + {{AudioBuffer}} constructor call, reject + promise with this exception. -
    11. Otherwise, in the case that the buffer was successfully - constructed, begin offline rendering. +
    12. Otherwise, in the case that the buffer was successfully + constructed, begin offline rendering. -
    13. Append promise to {{BaseAudioContext/[[pending promises]]}}. +
    14. Append promise to {{BaseAudioContext/[[pending promises]]}}. -
    15. Return promise. -
    -
    +
  3. Return promise. +
+
-
- To begin offline rendering, the following steps MUST - happen on a rendering thread that is created for the - occasion. +
+ To begin offline rendering, the following steps MUST + happen on a rendering thread that is created for the + occasion. -
    -
  1. Given the current connections and scheduled changes, start - rendering length sample-frames of audio into - {{[[rendered buffer]]}} +
      +
    1. Given the current connections and scheduled changes, start + rendering length sample-frames of audio into + {{[[rendered buffer]]}} -
    2. For every render quantum, check and - {{OfflineAudioContext/suspend()|suspend}} - rendering if necessary. +
    3. For every render quantum, check and + {{OfflineAudioContext/suspend()|suspend}} + rendering if necessary. -
    4. If a suspended context is resumed, continue to render the - buffer. +
    5. If a suspended context is resumed, continue to render the + buffer. -
    6. Once the rendering is complete, - - queue a media element task to execute the following steps: +
    7. Once the rendering is complete, queue a task on the + control thread's event loop to perform the following + steps: +
        +
      1. Resolve the promise created by {{startRendering()}} with {{[[rendered buffer]]}}. -
          -
        1. Resolve the promise created by {{startRendering()}} - with {{[[rendered buffer]]}}. +
        2. Queue a task to fire an event named + complete at this instance, using an instance + of {{OfflineAudioCompletionEvent}} whose + renderedBuffer property is set to + {{[[rendered buffer]]}}. -
        3. [=Queue a media element task=] to [=fire an event=] named - {{OfflineAudioContext/complete}} at the {{OfflineAudioContext}} using - {{OfflineAudioCompletionEvent}} whose `renderedBuffer` property is set to - {{[[rendered buffer]]}}. +
        -
      +
    +
- -
+
+ No parameters. +
+
+ Return type: {{Promise}}<{{AudioBuffer}}> +
-
- No parameters. -
-
- Return type: {{Promise}}<{{AudioBuffer}}> -
+ : resume() + :: + Resumes the progression of the {{OfflineAudioContext}}'s + {{BaseAudioContext/currentTime}} when it has + been suspended. - : resume() - :: - Resumes the progression of the {{OfflineAudioContext}}'s - {{BaseAudioContext/currentTime}} when it has - been suspended. +
+ When resume is called, + execute these steps: -
- When resume is called, - execute these steps: + 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is + not [=fully active=] then return [=a promise rejected with=] + "{{InvalidStateError}}" {{DOMException}}. - 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is - not [=fully active=] then return [=a promise rejected with=] - "{{InvalidStateError}}" {{DOMException}}. + 1. Let promise be a new Promise. - 1. Let promise be a new Promise. + 1. Abort these steps and reject promise with + {{InvalidStateError}} when any of following conditions is true: + - The control thread state on the {{OfflineAudioContext}} + is closed. + - The {{[[rendering started]]}} slot on the {{OfflineAudioContext}} + is false. - 1. Abort these steps and reject promise with - {{InvalidStateError}} when any of following conditions is true: - - The {{[[control thread state]]}} on the {{OfflineAudioContext}} - is closed. - - The {{[[rendering started]]}} slot on the {{OfflineAudioContext}} - is false. + 1. Set the control thread state flag on the + {{OfflineAudioContext}} to running. - 1. Set the {{[[control thread state]]}} flag on the - {{OfflineAudioContext}} to running. + 1. Queue a control message to resume the {{OfflineAudioContext}}. - 1. Queue a control message to resume the {{OfflineAudioContext}}. + 1. Return promise. +
- 1. Return promise. -
+
+ Running a control message to resume an + {{OfflineAudioContext}} means running these steps on the + rendering thread: -
- Running a control message to resume an - {{OfflineAudioContext}} means running these steps on the - rendering thread: + 1. Set the rendering thread state on the {{OfflineAudioContext}} to running. - 1. Set the {{[[rendering thread state]]}} on the {{OfflineAudioContext}} to running. + 2. Start rendering the audio graph. + + 3. In case of failure, queue a task on the control thread to + reject promise and abort these steps: - 2. Start rendering the audio graph. + 4. Queue a task on the control thread's event loop, to + execute these steps: - 3. In case of failure, - - queue a media element task to reject |promise| and abort the remaining steps. + 1. Resolve promise. - 4. - queue a media element task to execute the following steps: + 2. If the {{BaseAudioContext/state}} attribute of the + {{OfflineAudioContext}} is not already "{{AudioContextState/running}}": - 1. Resolve promise. + 1. Set the {{BaseAudioContext/state}} attribute of the + {{OfflineAudioContext}} to "{{AudioContextState/running}}". - 2. If the {{BaseAudioContext/state}} attribute of the - {{OfflineAudioContext}} is not already "{{AudioContextState/running}}": + 2. Queue a task to fire a simple event named statechange + at the {{OfflineAudioContext}}. +
- 1. Set the {{BaseAudioContext/state}} attribute of the - {{OfflineAudioContext}} to "{{AudioContextState/running}}". +
+ No parameters. +
- 1. [=Queue a media element task=] to [=fire an event=] named - {{BaseAudioContext/statechange}} at the {{OfflineAudioContext}}. +
+ Return type: {{Promise}}<{{void}}> +
-
+ : suspend(suspendTime) + :: + Schedules a suspension of the time progression in the audio + context at the specified time and returns a promise. This is + generally useful when manipulating the audio graph + synchronously on {{OfflineAudioContext}}. -
- No parameters. -
+ Note that the maximum precision of suspension is the size of + the render quantum and the specified suspension time + will be rounded up to the nearest render quantum + boundary. For this reason, it is not allowed to schedule + multiple suspends at the same quantized frame. Also, scheduling + should be done while the context is not running to ensure + precise suspension. -
- Return type: {{Promise}}<{{undefined}}> -
+
+		suspendTime: Schedules a suspension of the rendering at the specified time, which is quantized and rounded up to the render quantum size. If the quantized frame number 
  1. is negative or
  2. is less than or equal to the current time or
  3. is greater than or equal to the total render duration or
  4. is scheduled by another suspend for the same time,
then the promise is rejected with {{InvalidStateError}}. +
- : suspend(suspendTime) - :: - Schedules a suspension of the time progression in the audio - context at the specified time and returns a promise. This is - generally useful when manipulating the audio graph - synchronously on {{OfflineAudioContext}}. - - Note that the maximum precision of suspension is the size of - the render quantum and the specified suspension time - will be rounded up to the nearest render quantum - boundary. For this reason, it is not allowed to schedule - multiple suspends at the same quantized frame. Also, scheduling - should be done while the context is not running to ensure - precise suspension. - -
-        suspendTime: Schedules a suspension of the rendering at the specified time, which is quantized and rounded up to the render quantum size. If the quantized frame number 
  1. is negative or
  2. is less than or equal to the current time or
  3. is greater than or equal to the total render duration or
  4. is scheduled by another suspend for the same time,
then the promise is rejected with {{InvalidStateError}}. -
- -
- Return type: {{Promise}}<{{undefined}}> -
+
+ Return type: {{Promise}}<{{void}}> +
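The rounding behaviour described for suspendTime can be sketched numerically: the requested time is converted to frames and rounded up to the next render quantum boundary. This helper is illustrative only, and assumes the 128-frame render quantum.

```javascript
// Quantize a suspend time (seconds) up to the next render quantum boundary,
// returning the quantized frame number.
function quantizedSuspendFrame(suspendTime, sampleRate, quantumSize = 128) {
  return Math.ceil((suspendTime * sampleRate) / quantumSize) * quantumSize;
}

// 0.01 s at 44100 Hz is 441 frames, which rounds up to frame 512:
console.log(quantizedSuspendFrame(0.01, 44100)); // → 512
```

Because distinct suspendTime values can quantize to the same frame, scheduling two suspends that collapse onto one boundary is what triggers the {{InvalidStateError}} rejection above.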
-

+

{{OfflineAudioContextOptions}}

This specifies the options to use in constructing an @@ -2853,10 +2200,9 @@ This specifies the options to use in constructing an
 dictionary OfflineAudioContextOptions {
-    unsigned long numberOfChannels = 1;
-    required unsigned long length;
-    required float sampleRate;
-    (AudioContextRenderSizeCategory or unsigned long) renderSizeHint = "default";
+	unsigned long numberOfChannels = 1;
+	required unsigned long length;
+	required float sampleRate;
 };
 
@@ -2864,25 +2210,20 @@ dictionary OfflineAudioContextOptions { Dictionary {{OfflineAudioContextOptions}} Members
- : length - :: - The length of the rendered {{AudioBuffer}} in sample-frames. - - : numberOfChannels - :: - The number of channels for this {{OfflineAudioContext}}. - - : sampleRate - :: - The sample rate for this {{OfflineAudioContext}}. - - : renderSizeHint - :: - A hint for the render quantum size of this - {{OfflineAudioContext}}. + : length + :: + The length of the rendered {{AudioBuffer}} in sample-frames. + + : numberOfChannels + :: + The number of channels for this {{OfflineAudioContext}}. + + : sampleRate + :: + The sample rate for this {{OfflineAudioContext}}.
-

+

The {{OfflineAudioCompletionEvent}} Interface

This is an {{Event}} object which is dispatched to @@ -2891,8 +2232,8 @@ This is an {{Event}} object which is dispatched to
 [Exposed=Window]
 interface OfflineAudioCompletionEvent : Event {
-    constructor (DOMString type, OfflineAudioCompletionEventInit eventInitDict);
-    readonly attribute AudioBuffer renderedBuffer;
+	constructor (DOMString type, OfflineAudioCompletionEventInit eventInitDict);
+	readonly attribute AudioBuffer renderedBuffer;
 };
 
@@ -2900,17 +2241,17 @@ interface OfflineAudioCompletionEvent : Event { Attributes
- : renderedBuffer - :: - An {{AudioBuffer}} containing the rendered audio data. + : renderedBuffer + :: + An {{AudioBuffer}} containing the rendered audio data.
-
+
{{OfflineAudioCompletionEventInit}}
 dictionary OfflineAudioCompletionEventInit : EventInit {
-    required AudioBuffer renderedBuffer;
+	required AudioBuffer renderedBuffer;
 };
 
@@ -2918,9 +2259,9 @@ dictionary OfflineAudioCompletionEventInit : EventInit { Dictionary {{OfflineAudioCompletionEventInit}} Members
- : renderedBuffer - :: - Value to be assigned to the {{OfflineAudioCompletionEvent/renderedBuffer}} attribute of the event. + : renderedBuffer + :: + Value to be assigned to the {{OfflineAudioCompletionEvent/renderedBuffer}} attribute of the event.
-

+

The {{AudioBuffer}} Interface

-This interface represents a memory-resident audio asset. It can contain one or -more channels with each channel appearing to be 32-bit floating-point -[=linear PCM=] values with a nominal range of \([-1,1]\) but the -values are not limited to this range. Typically, it would be expected -that the length of the +This interface represents a memory-resident audio asset. Its format is non-interleaved +32-bit floating-point [=linear PCM=] values with a normal range of \([-1, +1]\), but values are not limited to this range. It can contain one or +more channels. Typically, it would be expected that the length of the PCM data would be fairly short (usually somewhat less than a minute). For longer sounds, such as music soundtracks, streaming should be used with the <{audio}> element and @@ -2954,38 +2294,38 @@ An {{AudioBuffer}} may be used by one or more {{AudioBuffer}} has four internal slots:
-    : [[number of channels]]
-    ::
-        The number of audio channels for this {{AudioBuffer}}, which is an unsigned long.
+	: [[number of channels]]
+	::
+		The number of audio channels for this {{AudioBuffer}}, which is an unsigned long.
-    : \[[length]]
-    ::
-        The length of each channel of this {{AudioBuffer}}, which is an unsigned long.
+	: \[[length]]
+	::
+		The length of each channel of this {{AudioBuffer}}, which is an unsigned long.
-    : [[sample rate]]
-    ::
-        The sample-rate, in Hz, of this {{AudioBuffer}}, a float.
+	: [[sample rate]]
+	::
+		The sample-rate, in Hz, of this {{AudioBuffer}}, a float.
-    : [[internal data]]
-    ::
-        A [=data block=] holding the audio sample data.
+	: [[internal data]]
+	::
+		A [=data block=] holding the audio sample data.
 [Exposed=Window]
 interface AudioBuffer {
-    constructor (AudioBufferOptions options);
-    readonly attribute float sampleRate;
-    readonly attribute unsigned long length;
-    readonly attribute double duration;
-    readonly attribute unsigned long numberOfChannels;
-    Float32Array getChannelData (unsigned long channel);
-    undefined copyFromChannel (Float32Array destination,
-                               unsigned long channelNumber,
-                               optional unsigned long bufferOffset = 0);
-    undefined copyToChannel (Float32Array source,
-                             unsigned long channelNumber,
-                             optional unsigned long bufferOffset = 0);
+	constructor (AudioBufferOptions options);
+	readonly attribute float sampleRate;
+	readonly attribute unsigned long length;
+	readonly attribute double duration;
+	readonly attribute unsigned long numberOfChannels;
+	Float32Array getChannelData (unsigned long channel);
+	void copyFromChannel (Float32Array destination,
+	                      unsigned long channelNumber,
+	                      optional unsigned long bufferOffset = 0);
+	void copyToChannel (Float32Array source,
+	                    unsigned long channelNumber,
+	                    optional unsigned long bufferOffset = 0);
 };
 
@@ -2993,136 +2333,136 @@ interface AudioBuffer {
 Constructors
-    : AudioBuffer(options)
-    ::
-
-        1. If any of the values in {{AudioBuffer/constructor()/options!!argument}} lie outside its nominal range, throw a {{NotSupportedError}} exception and abort the following steps.
-
-        1. Let b be a new {{AudioBuffer}} object.
-        1. Respectively assign the values of the attributes
-            {{AudioBufferOptions/numberOfChannels}}, {{AudioBufferOptions/length}},
-            {{AudioBufferOptions/sampleRate}} of the {{AudioBufferOptions}} passed
-            in the constructor to the internal slots {{[[number of channels]]}}, {{[[length]]}}, {{[[sample rate]]}}.
-
-        1. Set the internal slot {{[[internal data]]}} of this
-            {{AudioBuffer}} to the result of calling
-            CreateByteDataBlock({{[[length]]}} * {{[[number of channels]]}}).
-
-            Note: This initializes the underlying storage to zero.
-
-        1. Return b.
-
-        options: An {{AudioBufferOptions}} that determine the properties for this {{AudioBuffer}}.
-
+	: AudioBuffer(options)
+	::
+
+		1. If any of the values in {{AudioBuffer/AudioBuffer()/options}} lie outside its nominal range, throw a {{NotSupportedError}} exception and abort the following steps.
+
+		1. Let b be a new {{AudioBuffer}} object.
+		1. Respectively assign the values of the attributes
+			{{AudioBufferOptions/numberOfChannels}}, {{AudioBufferOptions/length}},
+			{{AudioBufferOptions/sampleRate}} of the {{AudioBufferOptions}} passed
+			in the constructor to the internal slots {{[[number of channels]]}}, {{[[length]]}}, {{[[sample rate]]}}.
+
+		1. Set the internal slot {{[[internal data]]}} of this
+			{{AudioBuffer}} to the result of calling
+			CreateByteDataBlock({{[[length]]}} * {{[[number of channels]]}}).
+
+			Note: This initializes the underlying storage to zero.
+
+		1. Return b.
+
+		options: An {{AudioBufferOptions}} that determine the properties for this {{AudioBuffer}}.
+
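The construction steps above can be sketched as a plain-JavaScript simulation (illustrative only; `SimAudioBuffer` is an invented stand-in for the real {{AudioBuffer}}, and the nominal-range check shown is simplified):

```javascript
// Sketch of the AudioBuffer constructor algorithm. SimAudioBuffer is a
// hypothetical stand-in, not the Web Audio API class.
class SimAudioBuffer {
  constructor({ numberOfChannels = 1, length, sampleRate }) {
    // Step 1: reject option values outside their nominal range.
    if (!(numberOfChannels >= 1) || !(length >= 1) || !(sampleRate > 0)) {
      const err = new Error("option out of nominal range");
      err.name = "NotSupportedError";
      throw err;
    }
    // Assign the option values to the internal slots.
    this.numberOfChannels = numberOfChannels;
    this.length = length;
    this.sampleRate = sampleRate;
    // Allocate length * numberOfChannels samples; Float32Array is
    // zero-filled, mirroring CreateByteDataBlock's zero initialization.
    this.internalData = new Float32Array(length * numberOfChannels);
  }
}
```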

Attributes

-    : duration
-    ::
-        Duration of the PCM audio data in seconds.
-
-        This is computed from the {{[[sample rate]]}} and the
-        {{[[length]]}} of the {{AudioBuffer}} by performing
-        a division between the {{AudioBuffer/[[length]]}} and the
-        {{[[sample rate]]}}.
-
-    : length
-    ::
-        Length of the PCM audio data in sample-frames. This MUST return
-        the value of {{AudioBuffer/[[length]]}}.
-
-    : numberOfChannels
-    ::
-        The number of discrete audio channels. This MUST return the value
-        of {{[[number of channels]]}}.
-
-    : sampleRate
-    ::
-        The sample-rate for the PCM audio data in samples per second.
-        This MUST return the value of {{[[sample rate]]}}.
+	: duration
+	::
+		Duration of the PCM audio data in seconds.
+
+		This is computed from the {{[[sample rate]]}} and the
+		{{[[length]]}} of the {{AudioBuffer}} by performing
+		a division between the {{AudioBuffer/[[length]]}} and the
+		{{[[sample rate]]}}.
+
+	: length
+	::
+		Length of the PCM audio data in sample-frames. This MUST return
+		the value of {{AudioBuffer/[[length]]}}.
+
+	: numberOfChannels
+	::
+		The number of discrete audio channels. This MUST return the value
+		of {{[[number of channels]]}}.
+
+	: sampleRate
+	::
+		The sample-rate for the PCM audio data in samples per second.
+		This MUST return the value of {{[[sample rate]]}}.
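The {{AudioBuffer/duration}} attribute is a single division of the internal slots; for example (plain JavaScript, sample values chosen arbitrarily):

```javascript
// duration (seconds) = [[length]] (sample-frames) / [[sample rate]] (Hz)
function bufferDuration(length, sampleRate) {
  return length / sampleRate;
}
```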

Methods

-    : copyFromChannel(destination, channelNumber, bufferOffset)
-    ::
-        The {{AudioBuffer/copyFromChannel()}} method copies the samples from
-        the specified channel of the {{AudioBuffer}} to the
-        destination array.
-
-        Let buffer be the {{AudioBuffer}} with
-        \(N_b\) frames, let \(N_f\) be the number of elements in the
-        {{AudioBuffer/copyFromChannel()/destination}} array, and \(k\) be the value of
-        {{AudioBuffer/copyFromChannel()/bufferOffset}}. Then the number of frames copied
-        from buffer to {{AudioBuffer/copyFromChannel()/destination}} is
-        \(\max(0, \min(N_b - k, N_f))\). If this is less than \(N_f\), then the
-        remaining elements of {{AudioBuffer/copyFromChannel()/destination}} are not
-        modified.
-
-        destination: The array the channel data will be copied to.
-        channelNumber: The index of the channel to copy the data from. If channelNumber is greater or equal than the number of channels of the {{AudioBuffer}}, an {{IndexSizeError}} MUST be thrown.
-        bufferOffset: An optional offset, defaulting to 0.  Data from the {{AudioBuffer}} starting at this offset is copied to the {{AudioBuffer/copyFromChannel()/destination}}.
-
-        Return type: {{undefined}}
-
-    : copyToChannel(source, channelNumber, bufferOffset)
-    ::
-        The {{AudioBuffer/copyToChannel()}} method copies the samples to
-        the specified channel of the {{AudioBuffer}} from the
-        source array.
-
-        A {{UnknownError}} may be thrown if
-        {{AudioBuffer/copyToChannel()/source}} cannot be
-        copied to the buffer.
-
-        Let buffer be the {{AudioBuffer}} with
-        \(N_b\) frames, let \(N_f\) be the number of elements in the
-        {{AudioBuffer/copyToChannel()/source}} array, and \(k\) be the value of
-        {{AudioBuffer/copyToChannel()/bufferOffset}}. Then the number of frames copied
-        from {{AudioBuffer/copyToChannel()/source}} to the buffer is
-        \(\max(0, \min(N_b - k, N_f))\). If this is less than \(N_f\), then the
-        remaining elements of buffer are not
-        modified.
-
-        source: The array the channel data will be copied from.
-        channelNumber: The index of the channel to copy the data to. If channelNumber is greater or equal than the number of channels of the {{AudioBuffer}}, an {{IndexSizeError}} MUST be thrown.
-        bufferOffset: An optional offset, defaulting to 0.  Data from the {{AudioBuffer/copyToChannel()/source}} is copied to the {{AudioBuffer}} starting at this offset.
-
-        Return type: {{undefined}}
-
-    : getChannelData(channel)
-    ::
-        According to the rules described in acquire the content
-        either allow [=ArrayBufferView/write|writing=] into
-        or [=get a copy of the buffer source|getting a copy of=]
-        the bytes stored in {{[[internal data]]}} in a new
-        {{Float32Array}}
-
-        A {{UnknownError}} may be thrown if the {{[[internal
-        data]]}} or the new {{Float32Array}} cannot be
-        created.
-
-        channel: This parameter is an index representing the particular channel to get data for. An index value of 0 represents the first channel. This index value MUST be less than {{[[number of channels]]}} or an {{IndexSizeError}} exception MUST be thrown.
-
-        Return type: {{Float32Array}}
+	: copyFromChannel(destination, channelNumber, bufferOffset)
+	::
+		The {{AudioBuffer/copyFromChannel()}} method copies the samples from
+		the specified channel of the {{AudioBuffer}} to the
+		destination array.
+
+		Let buffer be the {{AudioBuffer}} with
+		\(N_b\) frames, let \(N_f\) be the number of elements in the
+		{{AudioBuffer/copyFromChannel()/destination}} array, and \(k\) be the value of
+		{{AudioBuffer/copyFromChannel()/bufferOffset}}. Then the number of frames copied
+		from buffer to {{AudioBuffer/copyFromChannel()/destination}} is
+		\(\max(0, \min(N_b - k, N_f))\). If this is less than \(N_f\), then the
+		remaining elements of {{AudioBuffer/copyFromChannel()/destination}} are not
+		modified.
+
+		destination: The array the channel data will be copied to.
+		channelNumber: The index of the channel to copy the data from. If channelNumber is greater or equal than the number of channels of the {{AudioBuffer}}, an {{IndexSizeError}} MUST be thrown.
+		bufferOffset: An optional offset, defaulting to 0.  Data from the {{AudioBuffer}} starting at this offset is copied to the {{AudioBuffer/copyFromChannel()/destination}}.
+
+		Return type: void
+
+	: copyToChannel(source, channelNumber, bufferOffset)
+	::
+		The {{AudioBuffer/copyToChannel()}} method copies the samples to
+		the specified channel of the {{AudioBuffer}} from the
+		source array.
+
+		A {{UnknownError}} may be thrown if
+		{{AudioBuffer/copyToChannel()/source}} cannot be
+		copied to the buffer.
+
+		Let buffer be the {{AudioBuffer}} with
+		\(N_b\) frames, let \(N_f\) be the number of elements in the
+		{{AudioBuffer/copyToChannel()/source}} array, and \(k\) be the value of
+		{{AudioBuffer/copyToChannel()/bufferOffset}}. Then the number of frames copied
+		from {{AudioBuffer/copyToChannel()/source}} to the buffer is
+		\(\max(0, \min(N_b - k, N_f))\). If this is less than \(N_f\), then the
+		remaining elements of buffer are not
+		modified.
+
+		source: The array the channel data will be copied from.
+		channelNumber: The index of the channel to copy the data to. If channelNumber is greater or equal than the number of channels of the {{AudioBuffer}}, an {{IndexSizeError}} MUST be thrown.
+		bufferOffset: An optional offset, defaulting to 0.  Data from the {{AudioBuffer/copyToChannel()/source}} is copied to the {{AudioBuffer}} starting at this offset.
+
+		Return type: void
+
+	: getChannelData(channel)
+	::
+		According to the rules described in acquire the content
+		either [=get a reference to the buffer source|get a reference to=]
+		or [=get a copy of the buffer source|get a copy of=]
+		the bytes stored in {{[[internal data]]}} in a new
+		{{Float32Array}}
+
+		A {{UnknownError}} may be thrown if the {{[[internal
+		data]]}} or the new {{Float32Array}} cannot be
+		created.
+
+		channel: This parameter is an index representing the particular channel to get data for. An index value of 0 represents the first channel. This index value MUST be less than {{[[number of channels]]}} or an {{IndexSizeError}} exception MUST be thrown.
+
+		Return type: {{Float32Array}}
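The frame count \(\max(0, \min(N_b - k, N_f))\) shared by both copy methods can be modeled in plain JavaScript (a sketch; `copyFromChannelSim` is an invented helper, not the real API):

```javascript
// Copies max(0, min(Nb - k, Nf)) frames from channelData (one channel of
// a buffer with Nb frames) into destination (Nf elements), starting at
// frame offset k, leaving any remaining destination elements untouched,
// as specified for AudioBuffer.copyFromChannel().
function copyFromChannelSim(destination, channelData, bufferOffset = 0) {
  const Nb = channelData.length;   // frames in the buffer channel
  const Nf = destination.length;   // capacity of the destination array
  const k = bufferOffset;
  const framesToCopy = Math.max(0, Math.min(Nb - k, Nf));
  for (let i = 0; i < framesToCopy; i++) {
    destination[i] = channelData[k + i];
  }
  return framesToCopy;
}
```

The same clamping applies in the other direction for `copyToChannel()`, with the buffer as the destination.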
 Note: The methods {{AudioBuffer/copyToChannel()}} and
@@ -3141,50 +2481,51 @@ implementation.
 This operation returns immutable channel data to the invoker.
-    When an acquire the content
-    operation occurs on an {{AudioBuffer}}, run the following steps:
+	When an acquire the content
+	operation occurs on an {{AudioBuffer}}, run the following steps:
-
-    1. If any of the {{AudioBuffer}}'s {{ArrayBuffer}}s are
-        [=BufferSource/detached=], return `true`, abort these steps, and
-        return a zero-length channel data buffer to the invoker.
+	1. If the operation IsDetachedBuffer
+		on any of the {{AudioBuffer}}'s {{ArrayBuffer}}s return
+		`true`, abort these steps, and return a zero-length
+		channel data buffer to the invoker.
-    2. [=ArrayBuffer/Detach=] all {{ArrayBuffer}}s for arrays previously returned
-        by {{AudioBuffer/getChannelData()}} on this {{AudioBuffer}}.
+	2. Detach
+		all {{ArrayBuffer}}s for arrays previously returned by
+		{{AudioBuffer/getChannelData()}} on this {{AudioBuffer}}.
-        Note: Because {{AudioBuffer}} can only be created via
-        {{BaseAudioContext/createBuffer()}} or via the {{AudioBuffer}} constructor, this
-        cannot throw.
+		Note: Because {{AudioBuffer}} can only be created via
+		{{BaseAudioContext/createBuffer()}} or via the {{AudioBuffer}} constructor, this
+		cannot throw.
-    3. Retain the underlying {{[[internal data]]}} from those
-        {{ArrayBuffer}}s and return references to them to the
-        invoker.
+	3. Retain the underlying {{[[internal data]]}} from those
+		{{ArrayBuffer}}s and return references to them to the
+		invoker.
-    4. Attach {{ArrayBuffer}}s containing copies of the data to
-        the {{AudioBuffer}}, to be returned by the next call to
-        {{AudioBuffer/getChannelData()}}.
+	4. Attach {{ArrayBuffer}}s containing copies of the data to
+		the {{AudioBuffer}}, to be returned by the next call to
+		{{AudioBuffer/getChannelData()}}.
 The [=acquire the contents of an AudioBuffer=] operation is invoked
 in the following cases:

 * When {{AudioBufferSourceNode/start()|AudioBufferSourceNode.start}} is called, it
-    acquires the contents of the
-    node's {{AudioBufferSourceNode/buffer}}. If the operation fails, nothing is
-    played.
+	acquires the contents of the
+	node's {{AudioBufferSourceNode/buffer}}. If the operation fails, nothing is
+	played.

 * When the {{AudioBufferSourceNode/buffer}} of an {{AudioBufferSourceNode}}
-    is set and {{AudioBufferSourceNode/start()|AudioBufferSourceNode.start}} has been
-    previously called, the setter acquires
-    the content of the {{AudioBuffer}}. If the operation fails,
-    nothing is played.
+	is set and {{AudioBufferSourceNode/start()|AudioBufferSourceNode.start}} has been
+	previously called, the setter acquires
+	the content of the {{AudioBuffer}}. If the operation fails,
+	nothing is played.

 * When a {{ConvolverNode}}'s {{ConvolverNode/buffer}} is set to an
-    {{AudioBuffer}} it acquires the content of
-    the {{AudioBuffer}}.
+	{{AudioBuffer}} it acquires the content of
+	the {{AudioBuffer}}.

 * When the dispatch of an {{AudioProcessingEvent}} completes, it
-    acquires the contents of its
-    {{AudioProcessingEvent/outputBuffer}}.
+	acquires the contents of its
+	{{AudioProcessingEvent/outputBuffer}}.

 Note: This means that {{AudioBuffer/copyToChannel()}} cannot be used to change
 the content of an {{AudioBuffer}} currently in use by an
@@ -3192,7 +2533,7 @@ the content of an {{AudioBuffer}} currently in use by an
 since the {{AudioNode}} will continue to use the data previously
 acquired.

-
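The observable effect of the acquire operation can be modeled in a few lines of plain JavaScript (a toy simulation; `acquireContents` and the `channelData` field are invented for illustration, not part of the API): the invoker takes the current data, and the buffer keeps an independent copy, so later script-side writes cannot affect audio already being rendered.

```javascript
// Toy model of "acquire the contents of an AudioBuffer": the invoker
// (e.g. the rendering thread) receives references to the current data,
// while the buffer attaches fresh copies for future getChannelData()
// calls. All names here are hypothetical.
function acquireContents(buffer) {
  const acquired = buffer.channelData;               // handed to the renderer
  buffer.channelData = acquired.map(c => c.slice()); // independent copies
  return acquired;
}
```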

+

{{AudioBufferOptions}}

 This specifies the options to use in constructing an
@@ -3201,9 +2542,9 @@ required.
 dictionary AudioBufferOptions {
-    unsigned long numberOfChannels = 1;
-    required unsigned long length;
-    required float sampleRate;
+	unsigned long numberOfChannels = 1;
+	required unsigned long length;
+	required float sampleRate;
 };
 
@@ -3213,17 +2554,17 @@ Dictionary {{AudioBufferOptions}} Members
 The allowed values for the members of this dictionary are constrained. See {{BaseAudioContext/createBuffer()}}.
-    : length
-    ::
-        The length in sample frames of the buffer. See {{BaseAudioContext/createBuffer()/length}} for constraints.
+	: length
+	::
+		The length in sample frames of the buffer. See {{BaseAudioContext/createBuffer()/length}} for constraints.
-    : numberOfChannels
-    ::
-        The number of channels for the buffer. See {{BaseAudioContext/createBuffer()/numberOfChannels}} for constraints.
+	: numberOfChannels
+	::
+		The number of channels for the buffer. See {{BaseAudioContext/createBuffer()/numberOfChannels}} for constraints.
-    : sampleRate
-    ::
-        The sample rate in Hz for the buffer. See {{BaseAudioContext/createBuffer()/sampleRate}} for constraints.
+	: sampleRate
+	::
+		The sample rate in Hz for the buffer. See {{BaseAudioContext/createBuffer()/sampleRate}} for constraints.
@@ -3238,7 +2579,7 @@ The allowed values for the members of this dictionary are constrained. See {{Ba
 -->
-

+

The {{AudioNode}} Interface

 {{AudioNode}}s are the building blocks of an {{AudioContext}}. This interface
@@ -3283,25 +2624,25 @@ reach an {{AudioContext}}'s {{AudioDestinationNode}}.
 [Exposed=Window]
 interface AudioNode : EventTarget {
-    AudioNode connect (AudioNode destinationNode,
-                       optional unsigned long output = 0,
-                       optional unsigned long input = 0);
-    undefined connect (AudioParam destinationParam, optional unsigned long output = 0);
-    undefined disconnect ();
-    undefined disconnect (unsigned long output);
-    undefined disconnect (AudioNode destinationNode);
-    undefined disconnect (AudioNode destinationNode, unsigned long output);
-    undefined disconnect (AudioNode destinationNode,
-                          unsigned long output,
-                          unsigned long input);
-    undefined disconnect (AudioParam destinationParam);
-    undefined disconnect (AudioParam destinationParam, unsigned long output);
-    readonly attribute BaseAudioContext context;
-    readonly attribute unsigned long numberOfInputs;
-    readonly attribute unsigned long numberOfOutputs;
-    attribute unsigned long channelCount;
-    attribute ChannelCountMode channelCountMode;
-    attribute ChannelInterpretation channelInterpretation;
+	AudioNode connect (AudioNode destinationNode,
+	                   optional unsigned long output = 0,
+	                   optional unsigned long input = 0);
+	void connect (AudioParam destinationParam, optional unsigned long output = 0);
+	void disconnect ();
+	void disconnect (unsigned long output);
+	void disconnect (AudioNode destinationNode);
+	void disconnect (AudioNode destinationNode, unsigned long output);
+	void disconnect (AudioNode destinationNode,
+	                 unsigned long output,
+	                 unsigned long input);
+	void disconnect (AudioParam destinationParam);
+	void disconnect (AudioParam destinationParam, unsigned long output);
+	readonly attribute BaseAudioContext context;
+	readonly attribute unsigned long numberOfInputs;
+	readonly attribute unsigned long numberOfOutputs;
+	attribute unsigned long channelCount;
+	attribute ChannelCountMode channelCountMode;
+	attribute ChannelInterpretation channelInterpretation;
 };
 
@@ -3321,53 +2662,53 @@ method, the associated BaseAudioContext of the
 is called on.
-    To create a new {{AudioNode}} of a particular type n
-    using its factory method, called on a
-    {{BaseAudioContext}} c, execute these steps:
+	To create a new {{AudioNode}} of a particular type n
+	using its factory method, called on a
+	{{BaseAudioContext}} c, execute these steps:
-    1. Let node be a new object of type n.
+	1. Let node be a new object of type n.
-    2. Let option be a dictionary of the type associated to the interface
-        associated to this factory method.
+	2. Let option be a dictionary of the type associated to the interface
+		associated to this factory method.
-    3. For each parameter passed to the factory method, set the
-        dictionary member of the same name on option to the
-        value of this parameter.
+	3. For each parameter passed to the factory method, set the
+		dictionary member of the same name on option to the
+		value of this parameter.
-    4. Call the constructor for n on node with
-        c and option as arguments.
+	4. Call the constructor for n on node with
+		c and option as arguments.
-    5. Return node
+	5. Return node
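The factory-method algorithm amounts to packing the positional parameters into an options dictionary and delegating to the node's constructor. A sketch, with invented `SimDelayNode`/`createDelaySim` stand-ins (the real equivalence is `context.createDelay(t)` versus `new DelayNode(context, {maxDelayTime: t})`):

```javascript
// Sketch of the factory-method algorithm: gather the parameters into an
// option dictionary, then call the associated constructor with the
// context and that dictionary. Names are illustrative, not the real API.
class SimDelayNode {
  constructor(context, options = {}) {
    this.context = context;
    this.maxDelayTime = options.maxDelayTime ?? 1;
  }
}

function createDelaySim(context, maxDelayTime = 1) {
  // Steps 2-3: build the option dictionary from the parameters.
  const options = { maxDelayTime };
  // Step 4: invoke the constructor for the node type.
  return new SimDelayNode(context, options);
}
```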
-    Initializing an object
-    o that inherits from {{AudioNode}} means executing the following
-    steps, given the arguments context and dict passed to
-    the constructor of this interface.
-
-    1. Set o's associated {{BaseAudioContext}} to context.
-
-    2. Set its value for {{AudioNode/numberOfInputs}},
-        {{AudioNode/numberOfOutputs}}, {{AudioNode/channelCount}},
-        {{AudioNode/channelCountMode}}, {{AudioNode/channelInterpretation}} to the
-        default value for this
-        specific interface outlined in the section for each {{AudioNode}}.
-
-    3. For each member of dict passed in, execute these steps, with
-        k the key of the member, and v its value. If any
-        exceptions is thrown when executing these steps, abort the iteration and
-        propagate the exception to the caller of the algorithm (constructor or
-        factory method).
-
-        1. If k is the name of an {{AudioParam}} on this
-            interface, set the {{AudioParam/value}}
-            attribute of this {{AudioParam}} to v.
-
-        2. Else if k is the name of an attribute on this
-            interface, set the object associated with this attribute to
-            v.
+	Initializing an object
+	o that inherits from {{AudioNode}} means executing the following
+	steps, given the arguments context and dict passed to
+	the constructor of this interface.
+
+	1. Set o's associated {{BaseAudioContext}} to context.
+
+	2. Set its value for {{AudioNode/numberOfInputs}},
+		{{AudioNode/numberOfOutputs}}, {{AudioNode/channelCount}},
+		{{AudioNode/channelCountMode}}, {{AudioNode/channelInterpretation}} to the
+		default value for this
+		specific interface outlined in the section for each {{AudioNode}}.
+
+	3. For each member of dict passed in, execute these steps, with
+		k the key of the member, and v its value. If any
+		exceptions is thrown when executing these steps, abort the iteration and
+		propagate the exception to the caller of the algorithm (constructor or
+		factory method).
+
+		1. If k is the name of an {{AudioParam}} on this
+			interface, set the {{AudioParam/value}}
+			attribute of this {{AudioParam}} to v.
+
+		2. Else if k is the name of an attribute on this
+			interface, set the object associated with this attribute to
+			v.
 The associated interface for a factory method is the
@@ -3382,9 +2723,9 @@ accept events.
 enum ChannelCountMode {
-    "max",
-    "clamped-max",
-    "explicit"
+	"max",
+	"clamped-max",
+	"explicit"
 };
 
@@ -3397,57 +2738,57 @@ mixing is to be done.
-
-    {{ChannelCountMode}} enumeration description
-
-    Enum value
-    Description
-
-    "max"
-        computedNumberOfChannels is the maximum of the number of
-        channels of all connections to an input. In this mode
-        {{AudioNode/channelCount}} is ignored.
-
-    "clamped-max"
-        computedNumberOfChannels is determined as for "{{ChannelCountMode/max}}"
-        and then clamped to a maximum value of the given
-        {{AudioNode/channelCount}}.
-
-    "explicit"
-        computedNumberOfChannels is the exact value as specified
-        by the {{AudioNode/channelCount}}.
+
+	Enumeration description
+
+	"max"
+		computedNumberOfChannels is the maximum of the number of
+		channels of all connections to an input. In this mode
+		{{AudioNode/channelCount}} is ignored.
+
+	"clamped-max"
+		computedNumberOfChannels is determined as for "{{ChannelCountMode/max}}"
+		and then clamped to a maximum value of the given
+		{{AudioNode/channelCount}}.
+
+	"explicit"
+		computedNumberOfChannels is the exact value as specified
+		by the {{AudioNode/channelCount}}.
 enum ChannelInterpretation {
-    "speakers",
-    "discrete"
+	"speakers",
+	"discrete"
 };
 
-
-    {{ChannelInterpretation}} enumeration description
-
-    Enum value
-    Description
-
-    "speakers"
-        use up-mix equations or down-mix equations. In cases where the number of
-        channels do not match any of these basic speaker layouts, revert
-        to "{{ChannelInterpretation/discrete}}".
-
-    "discrete"
-        Up-mix by filling channels until they run out then zero out
-        remaining channels. Down-mix by filling as many channels as
-        possible, then dropping remaining channels.
+
+	Enumeration description
+
+	"speakers"
+		use up-mix equations or down-mix equations. In cases where the number of
+		channels do not match any of these basic speaker layouts, revert
+		to "{{ChannelInterpretation/discrete}}".
+
+	"discrete"
+		Up-mix by filling channels until they run out then zero out
+		remaining channels. Down-mix by filling as many channels as
+		possible, then dropping remaining channels.
@@ -3467,29 +2808,29 @@ after the input transitions from non-silent to silent.
 during a render quantum, if any of the following conditions hold.

 - An {{AudioScheduledSourceNode}} is [=actively processing=] if and only if it
-    is [=playing=] for at least part of the current rendering quantum.
+	is [=playing=] for at least part of the current rendering quantum.

 - A {{MediaElementAudioSourceNode}} is [=actively processing=] if and only if its
-    {{MediaElementAudioSourceNode/mediaElement}} is playing for at least part of
-    the current rendering quantum.
+	{{MediaElementAudioSourceNode/mediaElement}} is playing for at least part of
+	the current rendering quantum.

 - A {{MediaStreamAudioSourceNode}} or a {{MediaStreamTrackAudioSourceNode}} are
-    [=actively processing=] when the associated
-    {{MediaStreamTrack}} object has a
-    readyState attribute equal to "live", a
-    muted attribute equal to false and an
-    enabled attribute equal to true.
+	[=actively processing=] when the associated
+	{{MediaStreamTrack}} object has a
+	readyState attribute equal to "live", a
+	muted attribute equal to false and an
+	enabled attribute equal to true.

 - A {{DelayNode}} in a cycle is [=actively processing=] only when the absolute value
-    of any output sample for the current [=render quantum=] is greater than or equal
-    to \( 2^{-126} \).
+	of any output sample for the current [=render quantum=] is greater than or equal
+	to \( 2^{-126} \).

 - A {{ScriptProcessorNode}} is [=actively processing=] when its input or output is
-    connected.
+	connected.

 - An {{AudioWorkletNode}} is [=actively processing=] when its
-    {{AudioWorkletProcessor}}'s {{[[callable process]]}} returns true
-    and either its [=active source=] flag is true or any
-    {{AudioNode}} connected to one of its inputs is [=actively processing=].
+	{{AudioWorkletProcessor}}'s {{[[callable process]]}} returns true
+	and either its [=active source=] flag is true or any
+	{{AudioNode}} connected to one of its inputs is [=actively processing=].

 - All other {{AudioNode}}s start [=actively processing=] when any
-    {{AudioNode}} connected to one of its inputs is [=actively processing=], and
-    stops [=actively processing=] when the input that was received from other
-    [=actively processing=] {{AudioNode}} no longer affects the output.
+	{{AudioNode}} connected to one of its inputs is [=actively processing=], and
+	stops [=actively processing=] when the input that was received from other
+	[=actively processing=] {{AudioNode}} no longer affects the output.

 Note: This takes into account {{AudioNode}}s that have a [=tail-time=].

@@ -3501,400 +2842,400 @@ silence.
 Attributes
- : channelCount - :: - {{AudioNode/channelCount}} is the number of channels used when - up-mixing and down-mixing connections to any inputs to the - node. The default value is 2 except for specific nodes where - its value is specially determined. This attribute has no effect - for nodes with no inputs. If this - value is set to zero or to a value greater than the - implementation's maximum number of channels the implementation - MUST throw a {{NotSupportedError}} exception. - - In addition, some nodes have additional channelCount - constraints on the possible values for the channel count: - - : {{AudioDestinationNode}} - :: - The behavior depends on whether the destination node is the - destination of an {{AudioContext}} or - {{OfflineAudioContext}}: - - : {{AudioContext}} - :: - The channel count MUST be between 1 and - {{AudioDestinationNode/maxChannelCount}}. An {{IndexSizeError}} exception MUST - be thrown for any attempt to set the count outside this - range. - : {{OfflineAudioContext}} - :: - The channel count cannot be changed. An {{InvalidStateError}} exception - MUST be thrown for any attempt to change the - value. - - : {{AudioWorkletNode}} - :: - See [[#configuring-channels-with-audioworkletnodeoptions]] - Configuring Channels with AudioWorkletNodeOptions. - - : {{ChannelMergerNode}} - :: - The channel count cannot be changed, and an {{InvalidStateError}} exception MUST - be thrown for any attempt to change the value. - - : {{ChannelSplitterNode}} - :: - The channel count cannot be changed, and an {{InvalidStateError}} exception MUST - be thrown for any attempt to change the value. - - : {{ConvolverNode}} - :: - The channel count cannot be greater than two, and a - {{NotSupportedError}} - exception MUST be thrown for any attempt to change it to a - value greater than two. 
- - : {{DynamicsCompressorNode}} - :: - The channel count cannot be greater than two, and a - {{NotSupportedError}} - exception MUST be thrown for any attempt to change it to a - value greater than two. - - : {{PannerNode}} - :: - The channel count cannot be greater than two, and a - {{NotSupportedError}} - exception MUST be thrown for any attempt to change it to a - value greater than two. - - : {{ScriptProcessorNode}} - :: - The channel count cannot be changed, and an {{NotSupportedError}} exception MUST - be thrown for any attempt to change the value. - - : {{StereoPannerNode}} - :: - The channel count cannot be greater than two, and a - {{NotSupportedError}} - exception MUST be thrown for any attempt to change it to a - value greater than two. - - See [[#channel-up-mixing-and-down-mixing]] for more information on this attribute. - - : channelCountMode - :: - {{AudioNode/channelCountMode}} determines how channels will be counted - when up-mixing and down-mixing connections to any inputs to the - node. The default value is "{{ChannelCountMode/max}}". This attribute has no effect for nodes with no inputs. - - In addition, some nodes have additional channelCountMode - constraints on the possible values for the channel count - mode: - - : {{AudioDestinationNode}} - :: - If the {{AudioDestinationNode}} is the {{BaseAudioContext/destination}} node of an - {{OfflineAudioContext}}, then the channel count mode - cannot be changed. An {{InvalidStateError}} exception MUST - be thrown for any attempt to change the value. - : {{ChannelMergerNode}} - :: - The channel count mode cannot be changed from "{{ChannelCountMode/explicit}}" and - an {{InvalidStateError}} - exception MUST be thrown for any attempt to change the - value. - - - : {{ChannelSplitterNode}} - :: - The channel count mode cannot be changed from "{{ChannelCountMode/explicit}}" and - an {{InvalidStateError}} - exception MUST be thrown for any attempt to change the - value. 
- - : {{ConvolverNode}} - :: - The channel count mode cannot be set to "{{ChannelCountMode/max}}", and a - {{NotSupportedError}} - exception MUST be thrown for any attempt to set it to - "{{ChannelCountMode/max}}". - - : {{DynamicsCompressorNode}} - :: - The channel count mode cannot be set to "{{ChannelCountMode/max}}", and a - {{NotSupportedError}} - exception MUST be thrown for any attempt to set it to - "{{ChannelCountMode/max}}". - - : {{PannerNode}} - :: - The channel count mode cannot be set to "{{ChannelCountMode/max}}", and a - {{NotSupportedError}} - exception MUST be thrown for any attempt to set it to - "{{ChannelCountMode/max}}". - - : {{ScriptProcessorNode}} - :: - The channel count mode cannot be changed from "{{ChannelCountMode/explicit}}" and - an {{NotSupportedError}} - exception MUST be thrown for any attempt to change the - value. - - : {{StereoPannerNode}} - :: - The channel count mode cannot be set to "{{ChannelCountMode/max}}", and a - {{NotSupportedError}} - exception MUST be thrown for any attempt to set it to - "{{ChannelCountMode/max}}". - - See the - section for more information on this attribute. - - : channelInterpretation - :: - {{AudioNode/channelInterpretation}} determines how individual channels - will be treated when up-mixing and down-mixing connections to - any inputs to the node. The default value is "{{ChannelInterpretation/speakers}}". This attribute has no effect for nodes - with no inputs. - - In addition, some nodes have additional - channelInterpretation constraints on the possible - values for the channel interpretation: - - : {{ChannelSplitterNode}} - :: - The channel intepretation can not be changed from - "{{ChannelInterpretation/discrete}}" and a - {{InvalidStateError}} - exception MUST be thrown for any attempt to change the - value. - - See [[#channel-up-mixing-and-down-mixing]] for more information on this attribute. - - : context - :: - The {{BaseAudioContext}} which owns this {{AudioNode}}. 
- - : numberOfInputs - :: - The number of inputs feeding into the {{AudioNode}}. For - source nodes, this will be 0. This attribute is - predetermined for many {{AudioNode}} types, but some - {{AudioNode}}s, like the {{ChannelMergerNode}} and the - {{AudioWorkletNode}}, have variable number of inputs. - - : numberOfOutputs - :: - The number of outputs coming out of the {{AudioNode}}. This - attribute is predetermined for some {{AudioNode}} types, but - can be variable, like for the {{ChannelSplitterNode}} and the - {{AudioWorkletNode}}. + : channelCount + :: + {{AudioNode/channelCount}} is the number of channels used when + up-mixing and down-mixing connections to any inputs to the + node. The default value is 2 except for specific nodes where + its value is specially determined. This attribute has no effect + for nodes with no inputs. If this + value is set to zero or to a value greater than the + implementation's maximum number of channels, the implementation + MUST throw a {{NotSupportedError}} exception. + + In addition, some nodes have additional channelCount + constraints on the possible values for the channel count: + + : {{AudioDestinationNode}} + :: + The behavior depends on whether the destination node is the + destination of an {{AudioContext}} or + {{OfflineAudioContext}}: + + : {{AudioContext}} + :: + The channel count MUST be between 1 and + {{AudioDestinationNode/maxChannelCount}}. An {{IndexSizeError}} exception MUST + be thrown for any attempt to set the count outside this + range. + : {{OfflineAudioContext}} + :: + The channel count cannot be changed. An {{InvalidStateError}} exception + MUST be thrown for any attempt to change the + value. + + : {{AudioWorkletNode}} + :: + See [[#configuring-channels-with-audioworkletnodeoptions]] + Configuring Channels with AudioWorkletNodeOptions. + + : {{ChannelMergerNode}} + :: + The channel count cannot be changed, and an {{InvalidStateError}} exception MUST + be thrown for any attempt to change the value. 
+ + : {{ChannelSplitterNode}} + :: + The channel count cannot be changed, and an {{InvalidStateError}} exception MUST + be thrown for any attempt to change the value. + + : {{ConvolverNode}} + :: + The channel count cannot be greater than two, and a + {{NotSupportedError}} + exception MUST be thrown for any attempt to change it to a + value greater than two. + + : {{DynamicsCompressorNode}} + :: + The channel count cannot be greater than two, and a + {{NotSupportedError}} + exception MUST be thrown for any attempt to change it to a + value greater than two. + + : {{PannerNode}} + :: + The channel count cannot be greater than two, and a + {{NotSupportedError}} + exception MUST be thrown for any attempt to change it to a + value greater than two. + + : {{ScriptProcessorNode}} + :: + The channel count cannot be changed, and a {{NotSupportedError}} exception MUST + be thrown for any attempt to change the value. + + : {{StereoPannerNode}} + :: + The channel count cannot be greater than two, and a + {{NotSupportedError}} + exception MUST be thrown for any attempt to change it to a + value greater than two. + + See [[#channel-up-mixing-and-down-mixing]] for more information on this attribute. + + : channelCountMode + :: + {{AudioNode/channelCountMode}} determines how channels will be counted + when up-mixing and down-mixing connections to any inputs to the + node. The default value is "{{ChannelCountMode/max}}". This attribute has no effect for nodes with no inputs. + + In addition, some nodes have additional channelCountMode + constraints on the possible values for the channel count + mode: + + : {{AudioDestinationNode}} + :: + If the {{AudioDestinationNode}} is the {{BaseAudioContext/destination}} node of an + {{OfflineAudioContext}}, then the channel count mode + cannot be changed. An {{InvalidStateError}} exception MUST + be thrown for any attempt to change the value. 
+ : {{ChannelMergerNode}} + :: + The channel count mode cannot be changed from "{{ChannelCountMode/explicit}}" and + an {{InvalidStateError}} + exception MUST be thrown for any attempt to change the + value. + + + : {{ChannelSplitterNode}} + :: + The channel count mode cannot be changed from "{{ChannelCountMode/explicit}}" and + an {{InvalidStateError}} + exception MUST be thrown for any attempt to change the + value. + + : {{ConvolverNode}} + :: + The channel count mode cannot be set to "{{ChannelCountMode/max}}", and a + {{NotSupportedError}} + exception MUST be thrown for any attempt to set it to + "{{ChannelCountMode/max}}". + + : {{DynamicsCompressorNode}} + :: + The channel count mode cannot be set to "{{ChannelCountMode/max}}", and a + {{NotSupportedError}} + exception MUST be thrown for any attempt to set it to + "{{ChannelCountMode/max}}". + + : {{PannerNode}} + :: + The channel count mode cannot be set to "{{ChannelCountMode/max}}", and a + {{NotSupportedError}} + exception MUST be thrown for any attempt to set it to + "{{ChannelCountMode/max}}". + + : {{ScriptProcessorNode}} + :: + The channel count mode cannot be changed from "{{ChannelCountMode/explicit}}" and + a {{NotSupportedError}} + exception MUST be thrown for any attempt to change the + value. + + : {{StereoPannerNode}} + :: + The channel count mode cannot be set to "{{ChannelCountMode/max}}", and a + {{NotSupportedError}} + exception MUST be thrown for any attempt to set it to + "{{ChannelCountMode/max}}". + + See [[#channel-up-mixing-and-down-mixing]] for more information on this attribute. + + : channelInterpretation + :: + {{AudioNode/channelInterpretation}} determines how individual channels + will be treated when up-mixing and down-mixing connections to + any inputs to the node. The default value is "{{ChannelInterpretation/speakers}}". This attribute has no effect for nodes + with no inputs. 
+ + In addition, some nodes have additional + channelInterpretation constraints on the possible + values for the channel interpretation: + + : {{ChannelSplitterNode}} + :: + The channel interpretation cannot be changed from + "{{ChannelInterpretation/discrete}}" and an + {{InvalidStateError}} + exception MUST be thrown for any attempt to change the + value. + + See [[#channel-up-mixing-and-down-mixing]] for more information on this attribute. + + : context + :: + The {{BaseAudioContext}} which owns this {{AudioNode}}. + + : numberOfInputs + :: + The number of inputs feeding into the {{AudioNode}}. For + source nodes, this will be 0. This attribute is + predetermined for many {{AudioNode}} types, but some + {{AudioNode}}s, like the {{ChannelMergerNode}} and the + {{AudioWorkletNode}}, have a variable number of inputs. + + : numberOfOutputs + :: + The number of outputs coming out of the {{AudioNode}}. This + attribute is predetermined for some {{AudioNode}} types, but + can be variable, as for the {{ChannelSplitterNode}} and the + {{AudioWorkletNode}}.
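The validation rule for {{AudioNode/channelCount}} above (zero, or a value above the implementation's maximum, must throw {{NotSupportedError}}) can be sketched as a plain-JavaScript model. This is an illustration, not the browser implementation; the maximum of 32 channels is an assumed value, since the real limit is implementation-defined.

```javascript
// Illustrative model of the channelCount validation rule.
// MAX_CHANNELS is a hypothetical implementation maximum (assumption).
const MAX_CHANNELS = 32;

function validateChannelCount(count) {
  if (count === 0 || count > MAX_CHANNELS) {
    // The spec requires a NotSupportedError for these values.
    const err = new Error(`channelCount ${count} is out of range`);
    err.name = 'NotSupportedError';
    throw err;
  }
  return count;
}
```

Values inside the range, including node-specific constraints such as the two-channel limit on {{ConvolverNode}}, are checked separately by each node type.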

Methods

- : connect(destinationNode, output, input) - :: - There can only be one connection between a given output of one - specific node and a given input of another specific node. - Multiple connections with the same termini are ignored. - -
- For example: - -
-                nodeA.connect(nodeB);
-                nodeA.connect(nodeB);
-            
- - will have the same effect as - -
-                nodeA.connect(nodeB);
-            
-
- - This method returns destination - {{AudioNode}} object. - -
-            destinationNode: The destination parameter is the {{AudioNode}} to connect to. If the destination parameter is an {{AudioNode}} that has been created using another {{AudioContext}}, an {{InvalidAccessError}} MUST be thrown. That is, {{AudioNode}}s cannot be shared between {{AudioContext}}s. Multiple {{AudioNode}}s can be connected to the same {{AudioNode}}, this is described in [[#channel-up-mixing-and-down-mixing|Channel Upmixing and down mixing]] section.
-            output: The output parameter is an index describing which output of the {{AudioNode}} from which to connect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. It is possible to connect an {{AudioNode}} output to more than one input with multiple calls to connect(). Thus, "fan-out" is supported.
-            input:  The input parameter is an index describing which input of the destination {{AudioNode}} to connect to. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. It is possible to connect an {{AudioNode}} to another {{AudioNode}} which creates a cycle: an {{AudioNode}} may connect to another {{AudioNode}}, which in turn connects back to the input or {{AudioParam}} of the first {{AudioNode}}.
-        
- -
- Return type: {{AudioNode}} -
- - : connect(destinationParam, output) - :: - Connects the {{AudioNode}} to an - {{AudioParam}}, controlling the parameter value - with an a-rate signal. - - It is possible to connect an {{AudioNode}} - output to more than one {{AudioParam}} with - multiple calls to connect(). Thus, "fan-out" is supported. - - It is possible to connect more than one - {{AudioNode}} output to a single - {{AudioParam}} with multiple calls to - connect(). Thus, "fan-in" is supported. - - An {{AudioParam}} will take the rendered audio - data from any {{AudioNode}} output connected to - it and convert it to mono by - down-mixing if it is not already mono, then mix it together - with other such outputs and finally will mix with the - intrinsic parameter value (the value the - {{AudioParam}} would normally have without any - audio connections), including any timeline changes scheduled - for the parameter. - - The down-mixing to mono is equivalent to the down-mixing for an - {{AudioNode}} with {{AudioNode/channelCount}} = 1, - {{AudioNode/channelCountMode}} = "{{ChannelCountMode/explicit}}", and - {{AudioNode/channelInterpretation}} = "{{ChannelInterpretation/speakers}}". - - There can only be one connection between a given output of one - specific node and a specific {{AudioParam}}. - Multiple connections with the same termini are ignored. - -
- For example: - -
-                nodeA.connect(param);
-                nodeA.connect(param);
-            
- - will have the same effect as - -
-                nodeA.connect(param);
-            
-
- -
-            destinationParam: The destination parameter is the {{AudioParam}} to connect to. This method does not return the destination {{AudioParam}} object. If {{AudioNode/connect(destinationParam, output)/destinationParam}} belongs to an {{AudioNode}} that belongs to a {{BaseAudioContext}} that is different from the {{BaseAudioContext}} that has created the {{AudioNode}} on which this method was called, an {{InvalidAccessError}} MUST be thrown.
-            output: The output parameter is an index describing which output of the {{AudioNode}} from which to connect. If the parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
-        
- -
- Return type: {{undefined}} -
- - : disconnect() - :: - Disconnects all outgoing connections from the - {{AudioNode}}. - -
- No parameters. -
-
- Return type: {{undefined}} -
- - : disconnect(output) - :: - Disconnects a single output of the - {{AudioNode}} from any other - {{AudioNode}} or {{AudioParam}} - objects to which it is connected. - -
-            output:  This parameter is an index describing which output of the {{AudioNode}} to disconnect. It disconnects all outgoing connections from the given output. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
-        
- -
- Return type: {{undefined}} -
- - : disconnect(destinationNode) - :: - Disconnects all outputs of the {{AudioNode}} - that go to a specific destination - {{AudioNode}}. - -
-            destinationNode: The destinationNode parameter is the {{AudioNode}} to disconnect. It disconnects all outgoing connections to the given destinationNode. If there is no connection to the destinationNode, an {{InvalidAccessError}} exception MUST be thrown.
-        
-
- Return type: {{undefined}} -
- - : disconnect(destinationNode, output) - :: - Disconnects a specific output of the - {{AudioNode}} from any and all inputs of some - destination {{AudioNode}}. - -
-            destinationNode: The destinationNode parameter is the {{AudioNode}} to disconnect. If there is no connection to the destinationNode from the given output, an {{InvalidAccessError}} exception MUST be thrown.
-            output: The output parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
-        
- -
- Return type: {{undefined}} -
- - : disconnect(destinationNode, output, input) - :: - Disconnects a specific output of the - {{AudioNode}} from a specific input of some - destination {{AudioNode}}. - -
-            destinationNode: The destinationNode parameter is the {{AudioNode}} to disconnect. If there is no connection to the destinationNode from the given output to the given input, an {{InvalidAccessError}} exception MUST be thrown.
-            output: The output parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
-            input: The input parameter is an index describing which input of the destination {{AudioNode}} to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
-        
- -
- Return type: {{undefined}} -
- - : disconnect(destinationParam) - :: - Disconnects all outputs of the {{AudioNode}} - that go to a specific destination - {{AudioParam}}. The contribution of this - {{AudioNode}} to the computed parameter value - goes to 0 when this operation takes effect. The intrinsic - parameter value is not affected by this operation. - -
-            destinationParam: The destinationParam parameter is the {{AudioParam}} to disconnect. If there is no connection to the destinationParam, an {{InvalidAccessError}} exception MUST be thrown.
-        
-
- Return type: {{undefined}} -
- - : disconnect(destinationParam, output) - :: - Disconnects a specific output of the - {{AudioNode}} from a specific destination - {{AudioParam}}. The contribution of this - {{AudioNode}} to the computed parameter value - goes to 0 when this operation takes effect. The intrinsic - parameter value is not affected by this operation. - -
-            destinationParam: The destinationParam parameter is the {{AudioParam}} to disconnect. If there is no connection to the destinationParam, an {{InvalidAccessError}} exception MUST be thrown.
-            output: The output parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If the parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
-        
-
- Return type: {{undefined}} -
+ : connect(destinationNode, output, input) + :: + There can only be one connection between a given output of one + specific node and a given input of another specific node. + Multiple connections with the same termini are ignored. + +
+ For example: + +
+				nodeA.connect(nodeB);
+				nodeA.connect(nodeB);
+			
+ + will have the same effect as + +
+				nodeA.connect(nodeB);
+			
+
+ + This method returns the destination + {{AudioNode}} object. + +
+			destinationNode: The destination parameter is the {{AudioNode}} to connect to. If the destination parameter is an {{AudioNode}} that has been created using another {{AudioContext}}, an {{InvalidAccessError}} MUST be thrown. That is, {{AudioNode}}s cannot be shared between {{AudioContext}}s. Multiple {{AudioNode}}s can be connected to the same {{AudioNode}}; this is described in the [[#channel-up-mixing-and-down-mixing|Channel Up-Mixing and Down-Mixing]] section.
+			output: The output parameter is an index describing which output of the {{AudioNode}} from which to connect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. It is possible to connect an {{AudioNode}} output to more than one input with multiple calls to connect(). Thus, "fan-out" is supported.
+			input:  The input parameter is an index describing which input of the destination {{AudioNode}} to connect to. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. It is possible to connect an {{AudioNode}} to another {{AudioNode}} which creates a cycle: an {{AudioNode}} may connect to another {{AudioNode}}, which in turn connects back to the input or {{AudioParam}} of the first {{AudioNode}}.
+		
+ +
+ Return type: {{AudioNode}} +
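The duplicate-connection rule and the chaining return value above can be sketched with a small plain-JavaScript model (an illustration, not the browser implementation): a connection is identified by its termini, so repeating the same `connect()` call adds nothing, while a different output or input index makes a distinct connection.

```javascript
// Illustrative model of AudioNode.connect(destinationNode, output, input).
let nextId = 0;

class GraphNode {
  constructor() {
    this.id = nextId++;
    this.outgoing = new Set(); // one entry per unique (output, destination, input)
  }
  connect(dest, output = 0, input = 0) {
    // Multiple connections with the same termini are ignored:
    // the Set keeps at most one entry per triple.
    this.outgoing.add(`${output}->${dest.id}:${input}`);
    return dest; // mirrors connect(destinationNode) returning the destination
  }
}
```

Because the destination is returned, graph wiring can be chained, e.g. `source.connect(gain).connect(destination)`.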
+ + : connect(destinationParam, output) + :: + Connects the {{AudioNode}} to an + {{AudioParam}}, controlling the parameter value + with an a-rate signal. + + It is possible to connect an {{AudioNode}} + output to more than one {{AudioParam}} with + multiple calls to connect(). Thus, "fan-out" is supported. + + It is possible to connect more than one + {{AudioNode}} output to a single + {{AudioParam}} with multiple calls to + connect(). Thus, "fan-in" is supported. + + An {{AudioParam}} will take the rendered audio + data from any {{AudioNode}} output connected to + it and convert it to mono by + down-mixing if it is not already mono, then mix it together + with other such outputs and finally will mix with the + intrinsic parameter value (the value the + {{AudioParam}} would normally have without any + audio connections), including any timeline changes scheduled + for the parameter. + + The down-mixing to mono is equivalent to the down-mixing for an + {{AudioNode}} with {{AudioNode/channelCount}} = 1, + {{AudioNode/channelCountMode}} = "{{ChannelCountMode/explicit}}", and + {{AudioNode/channelInterpretation}} = "{{ChannelInterpretation/speakers}}". + + There can only be one connection between a given output of one + specific node and a specific {{AudioParam}}. + Multiple connections with the same termini are ignored. + +
+ For example: + +
+				nodeA.connect(param);
+				nodeA.connect(param);
+			
+ + will have the same effect as + +
+				nodeA.connect(param);
+			
+
+ +
+			destinationParam: The destination parameter is the {{AudioParam}} to connect to. This method does not return the destination {{AudioParam}} object. If {{AudioNode/connect(destinationParam, output)/destinationParam}} belongs to an {{AudioNode}} that belongs to a {{BaseAudioContext}} that is different from the {{BaseAudioContext}} that has created the {{AudioNode}} on which this method was called, an {{InvalidAccessError}} MUST be thrown.
+			output: The output parameter is an index describing which output of the {{AudioNode}} from which to connect. If the parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
+		
+ +
+ Return type: {{undefined}} +
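The mono down-mix described above for audio feeding an {{AudioParam}} follows the standard "{{ChannelInterpretation/speakers}}" rule; for a stereo connection that is output = 0.5 × (left + right). A minimal numerical sketch (a model, not the browser implementation):

```javascript
// Illustrative model of the stereo-to-mono "speakers" down-mix applied to
// an AudioNode output connected to an AudioParam: out = 0.5 * (left + right).
function downMixToMono(left, right) {
  const out = new Float32Array(left.length);
  for (let i = 0; i < left.length; i++) {
    out[i] = 0.5 * (left[i] + right[i]);
  }
  return out;
}
```

The resulting mono signal is then summed with any other connected outputs and with the parameter's intrinsic value.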
+ + : disconnect() + :: + Disconnects all outgoing connections from the + {{AudioNode}}. + +
+ No parameters. +
+
+ Return type: {{undefined}} +
+ + : disconnect(output) + :: + Disconnects a single output of the + {{AudioNode}} from any other + {{AudioNode}} or {{AudioParam}} + objects to which it is connected. + +
+			output:  This parameter is an index describing which output of the {{AudioNode}} to disconnect. It disconnects all outgoing connections from the given output. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
+		
+ +
+ Return type: {{undefined}} +
+ + : disconnect(destinationNode) + :: + Disconnects all outputs of the {{AudioNode}} + that go to a specific destination + {{AudioNode}}. + +
+			destinationNode: The destinationNode parameter is the {{AudioNode}} to disconnect. It disconnects all outgoing connections to the given destinationNode. If there is no connection to the destinationNode, an {{InvalidAccessError}} exception MUST be thrown.
+		
+
+ Return type: {{undefined}} +
+ + : disconnect(destinationNode, output) + :: + Disconnects a specific output of the + {{AudioNode}} from any and all inputs of some + destination {{AudioNode}}. + +
+			destinationNode: The destinationNode parameter is the {{AudioNode}} to disconnect. If there is no connection to the destinationNode from the given output, an {{InvalidAccessError}} exception MUST be thrown.
+			output: The output parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
+		
+ +
+ Return type: {{undefined}} +
+ + : disconnect(destinationNode, output, input) + :: + Disconnects a specific output of the + {{AudioNode}} from a specific input of some + destination {{AudioNode}}. + +
+			destinationNode: The destinationNode parameter is the {{AudioNode}} to disconnect. If there is no connection to the destinationNode from the given output to the given input, an {{InvalidAccessError}} exception MUST be thrown.
+			output: The output parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
+			input: The input parameter is an index describing which input of the destination {{AudioNode}} to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
+		
+ +
+ Return type: {{undefined}} +
+ + : disconnect(destinationParam) + :: + Disconnects all outputs of the {{AudioNode}} + that go to a specific destination + {{AudioParam}}. The contribution of this + {{AudioNode}} to the computed parameter value + goes to 0 when this operation takes effect. The intrinsic + parameter value is not affected by this operation. + +
+			destinationParam: The destinationParam parameter is the {{AudioParam}} to disconnect. If there is no connection to the destinationParam, an {{InvalidAccessError}} exception MUST be thrown.
+		
+
+ Return type: {{undefined}} +
+ + : disconnect(destinationParam, output) + :: + Disconnects a specific output of the + {{AudioNode}} from a specific destination + {{AudioParam}}. The contribution of this + {{AudioNode}} to the computed parameter value + goes to 0 when this operation takes effect. The intrinsic + parameter value is not affected by this operation. + +
+			destinationParam: The destinationParam parameter is the {{AudioParam}} to disconnect. If there is no connection to the destinationParam, an {{InvalidAccessError}} exception MUST be thrown.
+			output: The output parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If the parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
+		
+
+ Return type: {{undefined}} +
-

+

{{AudioNodeOptions}}

This specifies the options that can be used in constructing all @@ -3903,9 +3244,9 @@ values used for each node depends on the actual node.
 dictionary AudioNodeOptions {
-    unsigned long channelCount;
-    ChannelCountMode channelCountMode;
-    ChannelInterpretation channelInterpretation;
+	unsigned long channelCount;
+	ChannelCountMode channelCountMode;
+	ChannelInterpretation channelInterpretation;
 };
 
@@ -3913,14 +3254,14 @@ dictionary AudioNodeOptions { Dictionary {{AudioNodeOptions}} Members
- : channelCount - :: Desired number of channels for the {{AudioNode/channelCount}} attribute. + : channelCount + :: Desired number of channels for the {{AudioNode/channelCount}} attribute. - : channelCountMode - :: Desired mode for the {{AudioNode/channelCountMode}} attribute. + : channelCountMode + :: Desired mode for the {{AudioNode/channelCountMode}} attribute. - : channelInterpretation - :: Desired mode for the {{AudioNode/channelInterpretation}} attribute. + : channelInterpretation + :: Desired mode for the {{AudioNode/channelInterpretation}} attribute.
-

+

The {{AudioParam}} Interface

{{AudioParam}} controls an individual aspect of an @@ -4012,28 +3353,28 @@ specific to the method: The following rules will apply when calling these methods: * Automation event times are - not quantized with respect to the prevailing sample rate. Formulas - for determining curves and ramps are applied to the exact numerical - times given when scheduling events. + not quantized with respect to the prevailing sample rate. Formulas + for determining curves and ramps are applied to the exact numerical + times given when scheduling events. * If one of these events is added at a time where there is already - one or more events, then it will be placed in the list after them, - but before events whose times are after the event. + one or more events, then it will be placed in the list after them, + but before events whose times are after the event. * If setValueCurveAtTime() is called for time \(T\) - and duration \(D\) and there are any events having a time strictly greater than - \(T\), but strictly less than \(T + D\), then a {{NotSupportedError}} exception - MUST be thrown. In other words, it's not ok to schedule a value curve - during a time period containing other events, but it's ok to schedule a value - curve exactly at the time of another event. + data-link-for="AudioParam">setValueCurveAtTime() is called for time \(T\) + and duration \(D\) and there are any events having a time strictly greater than + \(T\), but strictly less than \(T + D\), then a {{NotSupportedError}} exception + MUST be thrown. In other words, it's not ok to schedule a value curve + during a time period containing other events, but it's ok to schedule a value + curve exactly at the time of another event. * Similarly a - {{NotSupportedError}} exception MUST be thrown if any - automation method is called at - a time which is contained in \([T, T+D)\), \(T\) being the time of the curve - and \(D\) its duration. 
- + {{NotSupportedError}} exception MUST be thrown if any + automation method is called at + a time which is contained in \([T, T+D)\), \(T\) being the time of the curve + and \(D\) its duration. + Note: {{AudioParam}} attributes are read only, with the exception of the {{AudioParam/value}} attribute. @@ -4045,36 +3386,36 @@ automation rate can be changed.
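The ordering rule for automation events described above can be modeled as a stable insertion by time: a new event is placed after any existing events with the same time, but before events with later times. An illustrative sketch (not the real implementation):

```javascript
// Stable insert into a time-ordered automation event list: scan back past
// events strictly later than the new one, so the new event lands after any
// existing events with an equal time.
function insertAutomationEvent(events, event) {
  let i = events.length;
  while (i > 0 && events[i - 1].time > event.time) i--;
  events.splice(i, 0, event);
  return events;
}
```

This preserves call order among events scheduled at the same time, which is what makes, for example, two `setValueAtTime()` calls at the same time deterministic.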
 enum AutomationRate {
-    "a-rate",
-    "k-rate"
+	"a-rate",
+	"k-rate"
 };
 
- - - - - - - - - - - - - - - + + + + + + + + + + + + +
{{AutomationRate}} enumeration description
Enum valueDescription
- "a-rate" - - This {{AudioParam}} is set for [=a-rate=] processing. -
- "k-rate" - - This {{AudioParam}} is set for [=k-rate=] processing. -
+ Enumeration description +
+ "a-rate" + + This {{AudioParam}} is set for [=a-rate=] processing. +
+ "k-rate" + + This {{AudioParam}} is set for [=k-rate=] processing. +
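The difference between the two automation rates can be sketched numerically: an [=a-rate=] parameter is sampled for each sample-frame of a render quantum, while a [=k-rate=] parameter uses the value computed for the first frame of the quantum for the entire block. An illustrative model (128 frames is the default render quantum size; `valueAtFrame` is a hypothetical per-frame value function):

```javascript
// Illustrative model of a-rate vs k-rate sampling over one render quantum.
const RENDER_QUANTUM = 128; // default render quantum size

function sampleParam(rate, valueAtFrame, blockStart) {
  const out = new Float32Array(RENDER_QUANTUM);
  for (let i = 0; i < RENDER_QUANTUM; i++) {
    // k-rate: the value for the first frame is reused for the whole block.
    out[i] = rate === 'k-rate'
      ? valueAtFrame(blockStart)
      : valueAtFrame(blockStart + i);
  }
  return out;
}
```

k-rate processing trades per-sample resolution for lower cost, which is why some parameters (e.g. on {{DynamicsCompressorNode}}) are constrained to it.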
@@ -4084,20 +3425,20 @@ initially set to the {{AudioParam}}'s {{AudioParam/defaultValue}}.
 [Exposed=Window]
 interface AudioParam {
-    attribute float value;
-    attribute AutomationRate automationRate;
-    readonly attribute float defaultValue;
-    readonly attribute float minValue;
-    readonly attribute float maxValue;
-    AudioParam setValueAtTime (float value, double startTime);
-    AudioParam linearRampToValueAtTime (float value, double endTime);
-    AudioParam exponentialRampToValueAtTime (float value, double endTime);
-    AudioParam setTargetAtTime (float target, double startTime, float timeConstant);
-    AudioParam setValueCurveAtTime (sequence<float> values,
-                                    double startTime,
-                                    double duration);
-    AudioParam cancelScheduledValues (double cancelTime);
-    AudioParam cancelAndHoldAtTime (double cancelTime);
+	attribute float value;
+	attribute AutomationRate automationRate;
+	readonly attribute float defaultValue;
+	readonly attribute float minValue;
+	readonly attribute float maxValue;
+	AudioParam setValueAtTime (float value, double startTime);
+	AudioParam linearRampToValueAtTime (float value, double endTime);
+	AudioParam exponentialRampToValueAtTime (float value, double endTime);
+	AudioParam setTargetAtTime (float target, double startTime, float timeConstant);
+	AudioParam setValueCurveAtTime (sequence<float> values,
+	                                double startTime,
+	                                double duration);
+	AudioParam cancelScheduledValues (double cancelTime);
+	AudioParam cancelAndHoldAtTime (double cancelTime);
 };
 
@@ -4105,428 +3446,428 @@ interface AudioParam { Attributes
- : automationRate - :: - The automation rate for the {{AudioParam}}. The - default value depends on the actual {{AudioParam}}; - see the description of each individual {{AudioParam}} for the - default value. - - Some nodes have additional automation rate constraints as follows: - - : {{AudioBufferSourceNode}} - :: - The {{AudioParam}}s - {{AudioBufferSourceNode/playbackRate}} and - {{AudioBufferSourceNode/detune}} MUST be - "{{AutomationRate/k-rate}}". An {{InvalidStateError}} - must be thrown if the rate is changed to - "{{AutomationRate/a-rate}}". - - : {{DynamicsCompressorNode}} - :: - The {{AudioParam}}s - {{DynamicsCompressorNode/threshold}}, - {{DynamicsCompressorNode/knee}}, - {{DynamicsCompressorNode/ratio}}, - {{DynamicsCompressorNode/attack}}, and - {{DynamicsCompressorNode/release}} - MUST be "{{AutomationRate/k-rate}}". An {{InvalidStateError}} - must be thrown if the rate is changed to - "{{AutomationRate/a-rate}}". - - : {{PannerNode}} - :: - If the {{PannerNode/panningModel}} is - "{{PanningModelType/HRTF}}", the setting of - the {{AudioParam/automationRate}} for any - {{AudioParam}} of the {{PannerNode}} is ignored. - Likewise, the setting of the - {{AudioParam/automationRate}} for any {{AudioParam}} - of the {{AudioListener}} is ignored. In this - case, the {{AudioParam}} behaves as if the - {{AudioParam/automationRate}} were set to - "{{AutomationRate/k-rate}}". - - : defaultValue - :: - Initial value for the value attribute. - - : maxValue - :: - The nominal maximum value that the parameter can take. Together - with minValue, this forms the nominal range - for this parameter. - - : minValue - :: - The nominal minimum value that the parameter can take. Together - with maxValue, this forms the nominal range - for this parameter. - - : value - :: - The parameter's floating-point value. This attribute is - initialized to the defaultValue. - - Getting this attribute returns the contents of the - {{[[current value]]}} slot. 
See - [[#computation-of-value]] for the algorithm for the - value that is returned. - - Setting this attribute has the effect of assigning the - requested value to the {{[[current value]]}} slot, and - calling the setValueAtTime() - method with the current {{AudioContext}}'s - currentTime and {{[[current value]]}}. Any - exceptions that would be thrown by - setValueAtTime() will also be thrown by setting - this attribute. + : automationRate + :: + The automation rate for the {{AudioParam}}. The + default value depends on the actual {{AudioParam}}; + see the description of each individual {{AudioParam}} for the + default value. + + Some nodes have additional automation rate constraints as follows: + + : {{AudioBufferSourceNode}} + :: + The {{AudioParam}}s + {{AudioBufferSourceNode/playbackRate}} and + {{AudioBufferSourceNode/detune}} MUST be + "{{AutomationRate/k-rate}}". An {{InvalidStateError}} + must be thrown if the rate is changed to + "{{AutomationRate/a-rate}}". + + : {{DynamicsCompressorNode}} + :: + The {{AudioParam}}s + {{DynamicsCompressorNode/threshold}}, + {{DynamicsCompressorNode/knee}}, + {{DynamicsCompressorNode/ratio}}, + {{DynamicsCompressorNode/attack}}, and + {{DynamicsCompressorNode/release}} + MUST be "{{AutomationRate/k-rate}}". An {{InvalidStateError}} + must be thrown if the rate is changed to + "{{AutomationRate/a-rate}}". + + : {{PannerNode}} + :: + If the {{PannerNode/panningModel}} is + "{{PanningModelType/HRTF}}", the setting of + the {{AudioParam/automationRate}} for any + {{AudioParam}} of the {{PannerNode}} is ignored. + Likewise, the setting of the + {{AudioParam/automationRate}} for any {{AudioParam}} + of the {{AudioListener}} is ignored. In this + case, the {{AudioParam}} behaves as if the + {{AudioParam/automationRate}} were set to + "{{AutomationRate/k-rate}}". + + : defaultValue + :: + Initial value for the value attribute. + + : maxValue + :: + The nominal maximum value that the parameter can take. 
Together + with minValue, this forms the nominal range + for this parameter. + + : minValue + :: + The nominal minimum value that the parameter can take. Together + with maxValue, this forms the nominal range + for this parameter. + + : value + :: + The parameter's floating-point value. This attribute is + initialized to the defaultValue. + + Getting this attribute returns the contents of the + {{[[current value]]}} slot. See + [[#computation-of-value]] for the algorithm for the + value that is returned. + + Setting this attribute has the effect of assigning the + requested value to the {{[[current value]]}} slot, and + calling the setValueAtTime() + method with the current {{AudioContext}}'s + currentTime and {{[[current value]]}}. Any + exceptions that would be thrown by + setValueAtTime() will also be thrown by setting + this attribute.

Methods

- : cancelAndHoldAtTime(cancelTime) - :: - This is similar to {{AudioParam/cancelScheduledValues()}} in that it cancels all - scheduled parameter changes with times greater than or equal to - {{AudioParam/cancelAndHoldAtTime()/cancelTime!!argument}}. However, in addition, the automation - value that would have happened at {{AudioParam/cancelAndHoldAtTime()/cancelTime!!argument}} is - then proprogated for all future time until other automation - events are introduced. - - The behavior of the timeline in the face of - {{AudioParam/cancelAndHoldAtTime()}} when automations are running - and can be introduced at any time after calling - {{AudioParam/cancelAndHoldAtTime()}} and before - {{AudioParam/cancelAndHoldAtTime()/cancelTime!!argument}} is reached is quite complicated. The - behavior of {{AudioParam/cancelAndHoldAtTime()}} is therefore - specified in the following algorithm. - -
- Let \(t_c\) be the value of {{AudioParam/cancelAndHoldAtTime()/cancelTime!!argument}}. Then - - 1. Let \(E_1\) be the event (if any) at time \(t_1\) where - \(t_1\) is the largest number satisfying \(t_1 \le t_c\). - - 2. Let \(E_2\) be the event (if any) at time \(t_2\) where - \(t_2\) is the smallest number satisfying \(t_c \lt t_2\). - - 3. If \(E_2\) exists: - 1. If \(E_2\) is a linear or exponential ramp, - 1. Effectively rewrite \(E_2\) to be the same kind of - ramp ending at time \(t_c\) with an end value that - would be the value of the original ramp at time - \(t_c\). - Graphical representation of calling cancelAndHoldAtTime when linearRampToValueAtTime has been called at this time. - - 2. Go to step 5. - - 2. Otherwise, go to step 4. - - 4. If \(E_1\) exists: - 1. If \(E_1\) is a setTarget event, - 1. Implicitly insert a setValueAtTime - event at time \(t_c\) with the value that the - setTarget would have at time - \(t_c\). - Graphical representation of calling cancelAndHoldAtTime when setTargetAtTime has been called at this time - - 2. Go to step 5. - - 2. If \(E_1\) is a setValueCurve with a start - time of \(t_3\) and a duration of \(d\) - - 1. If \(t_c \gt t_3 + d\), go to step 5. - - 2. Otherwise, - 1. Effectively replace this event with a - setValueCurve event with a start time - of \(t_3\) and a new duration of \(t_c-t_3\). - However, this is not a true replacement; this - automation MUST take care to produce the same - output as the original, and not one computed using - a different duration. (That would cause sampling of - the value curve in a slightly different way, - producing different results.) - Graphical representation of calling cancelAndHoldAtTime when setValueCurve has been called at this time - - 2. Go to step 5. - - 5. Remove all events with time greater than \(t_c\). 
- - If no events are added, then the automation value after - {{AudioParam/cancelAndHoldAtTime()}} is the constant value that - the original timeline would have had at time \(t_c\). -
- -
-            cancelTime: The time after which any previously scheduled parameter changes will be cancelled. It is a time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute.  A {{RangeError}} exception MUST be thrown if cancelTime is negative. If {{AudioParam/cancelAndHoldAtTime()/cancelTime}} is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-        
- -
- Return type: {{AudioParam}} -
- - : cancelScheduledValues(cancelTime) - :: - Cancels all scheduled parameter changes with times greater than - or equal to {{AudioParam/cancelScheduledValues()/cancelTime!!argument}}. Cancelling a scheduled - parameter change means removing the scheduled event from the - event list. Any active automations whose automation event time is less - than {{AudioParam/cancelScheduledValues()/cancelTime!!argument}} are also cancelled, and such - cancellations may cause discontinuities because the original - value (from before such automation) is restored immediately. Any - hold values scheduled by {{AudioParam/cancelAndHoldAtTime()}} - are also removed if the hold time occurs after - {{AudioParam/cancelScheduledValues()/cancelTime!!argument}}. - - For a {{AudioParam/setValueCurveAtTime()}}, let \(T_0\) and \(T_D\) be the corresponding - {{AudioParam/setValueCurveAtTime()/startTime!!argument}} and {{AudioParam/setValueCurveAtTime()/duration!!argument}}, respectively of this event. - Then if {{AudioParam/cancelScheduledValues()/cancelTime!!argument}} - is in the range \([T_0, T_0 + T_D]\), the event is - removed from the timeline. - -
-            cancelTime: The time after which any previously scheduled parameter changes will be cancelled. It is a time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute.  A {{RangeError}} exception MUST be thrown if cancelTime is negative. If cancelTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-        
- -
- Return type: {{AudioParam}} -
- - : exponentialRampToValueAtTime(value, endTime) - :: - Schedules an exponential continuous change in parameter value - from the previous scheduled parameter value to the given value. - Parameters representing filter frequencies and playback rate - are best changed exponentially because of the way humans - perceive sound. - - The value during the time interval \(T_0 \leq t < T_1\) - (where \(T_0\) is the time of the previous event and \(T_1\) is - the {{AudioParam/exponentialRampToValueAtTime()/endTime!!argument}} parameter passed into this method) - will be calculated as: - -
-        $$
-            v(t) = V_0 \left(\frac{V_1}{V_0}\right)^\frac{t - T_0}{T_1 - T_0}
-        $$
-        
- - where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is - the {{AudioParam/exponentialRampToValueAtTime()/value!!argument}} parameter passed into this method. If - \(V_0\) and \(V_1\) have opposite signs or if \(V_0\) is zero, - then \(v(t) = V_0\) for \(T_0 \le t \lt T_1\). - - This also implies an exponential ramp to 0 is not possible. A - good approximation can be achieved using {{AudioParam/setTargetAtTime()}} with an appropriately chosen - time constant. - - If there are no more events after this ExponentialRampToValue - event then for \(t \geq T_1\), \(v(t) = V_1\). - - If there is no event preceding this event, the exponential ramp - behaves as if {{AudioParam/setValueAtTime()|setValueAtTime(value, currentTime)}} - were called where value is the current value of - the attribute and currentTime is the context - {{BaseAudioContext/currentTime}} at the time - {{AudioParam/exponentialRampToValueAtTime()}} is called. - - If the preceding event is a SetTarget event, \(T_0\) - and \(V_0\) are chosen from the current time and value of - SetTarget automation. That is, if the - SetTarget event has not started, \(T_0\) is the start - time of the event, and \(V_0\) is the value just before the - SetTarget event starts. In this case, the - ExponentialRampToValue event effectively replaces the - SetTarget event. If the SetTarget event has - already started, \(T_0\) is the current context time, and - \(V_0\) is the current SetTarget automation value at - time \(T_0\). In both cases, the automation curve is - continuous. - -
-            value: The value the parameter will exponentially ramp to at the given time. A {{RangeError}} exception MUST be thrown if this value is equal to 0.
-            endTime: The time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute where the exponential ramp ends. A {{RangeError}} exception MUST be thrown if endTime is negative or is not a finite number. If endTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-        
- -
- Return type: {{AudioParam}} -
- - : linearRampToValueAtTime(value, endTime) - :: - Schedules a linear continuous change in parameter value from - the previous scheduled parameter value to the given value. - - The value during the time interval \(T_0 \leq t < T_1\) - (where \(T_0\) is the time of the previous event and \(T_1\) is - the {{AudioParam/linearRampToValueAtTime()/endTime!!argument}} parameter passed into this method) - will be calculated as: - -
-        $$
-            v(t) = V_0 + (V_1 - V_0) \frac{t - T_0}{T_1 - T_0}
-        $$
-        
- - where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is - the {{AudioParam/linearRampToValueAtTime()/value!!argument}} parameter passed into this method. - - If there are no more events after this LinearRampToValue event - then for \(t \geq T_1\), \(v(t) = V_1\). - - If there is no event preceding this event, the linear ramp - behaves as if {{AudioParam/setValueAtTime()|setValueAtTime(value, currentTime)}} - were called where value is the current value of - the attribute and currentTime is the context - {{BaseAudioContext/currentTime}} at the time - {{AudioParam/linearRampToValueAtTime()}} is called. - - If the preceding event is a SetTarget event, \(T_0\) - and \(V_0\) are chosen from the current time and value of - SetTarget automation. That is, if the - SetTarget event has not started, \(T_0\) is the start - time of the event, and \(V_0\) is the value just before the - SetTarget event starts. In this case, the - LinearRampToValue event effectively replaces the - SetTarget event. If the SetTarget event has - already started, \(T_0\) is the current context time, and - \(V_0\) is the current SetTarget automation value at - time \(T_0\). In both cases, the automation curve is - continuous. - -
-            value: The value the parameter will linearly ramp to at the given time.
-            endTime: The time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the automation ends. A {{RangeError}} exception MUST be thrown if endTime is negative or is not a finite number. If endTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-        
- -
- Return type: {{AudioParam}} -
- - : setTargetAtTime(target, startTime, timeConstant) - :: - Start exponentially approaching the target value at the given - time with a rate having the given time constant. Among other - uses, this is useful for implementing the "decay" and "release" - portions of an ADSR envelope. Please note that the parameter - value does not immediately change to the target value at the - given time, but instead gradually changes to the target value. - - During the time interval: \(T_0 \leq t\), where \(T_0\) is the - {{AudioParam/setTargetAtTime()/startTime!!argument}} parameter: - -
-        $$
-            v(t) = V_1 + (V_0 - V_1)\, e^{-\left(\frac{t - T_0}{\tau}\right)}
-        $$
-        
- - where \(V_0\) is the initial value (the {{[[current value]]}} - attribute) at \(T_0\) (the {{AudioParam/setTargetAtTime()/startTime!!argument}} parameter), - \(V_1\) is equal to the {{AudioParam/setTargetAtTime()/target!!argument}} parameter, and - \(\tau\) is the {{AudioParam/setTargetAtTime()/timeConstant!!argument}} parameter. - - If a LinearRampToValue or - ExponentialRampToValue event follows this event, the - behavior is described in {{AudioParam/linearRampToValueAtTime()}} or - {{AudioParam/exponentialRampToValueAtTime()}}, - respectively. For all other events, the SetTarget - event ends at the time of the next event. - -
-            target: The value the parameter will start changing to at the given time.
-            startTime: The time at which the exponential approach will begin, in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. A {{RangeError}} exception MUST be thrown if start is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-            timeConstant: The time-constant value of first-order filter (exponential) approach to the target value. The larger this value is, the slower the transition will be. The value MUST be non-negative or a {{RangeError}} exception MUST be thrown. If timeConstant is zero, the output value jumps immediately to the final value. More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value \(1 - 1/e\) (around 63.2%) given a step input response (transition from 0 to 1 value).
-        
- -
- Return type: {{AudioParam}} -
- - : setValueAtTime(value, startTime) - :: - Schedules a parameter value change at the given time. - - If there are no more events after this SetValue event, - then for \(t \geq T_0\), \(v(t) = V\), where \(T_0\) is the - {{AudioParam/setValueAtTime()/startTime!!argument}} parameter and \(V\) is the - {{AudioParam/setValueAtTime()/value!!argument}} parameter. In other words, the value will - remain constant. - - If the next event (having time \(T_1\)) after this - SetValue event is not of type - LinearRampToValue or ExponentialRampToValue, - then, for \(T_0 \leq t < T_1\): - -
-        $$
-            v(t) = V
-        $$
-        
- - In other words, the value will remain constant during this time - interval, allowing the creation of "step" functions. - - If the next event after this SetValue event is of type - LinearRampToValue or ExponentialRampToValue - then please see {{AudioParam/linearRampToValueAtTime()}} or - {{AudioParam/exponentialRampToValueAtTime()}}, - respectively. - -
-            value: The value the parameter will change to at the given time.
-            startTime: The time in the same time coordinate system as the {{BaseAudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the parameter changes to the given value. A {{RangeError}} exception MUST be thrown if startTime is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-        
- -
- Return type: {{AudioParam}} -
- - : setValueCurveAtTime(values, startTime, duration) - :: - Sets an array of arbitrary parameter values starting at the - given time for the given duration. The number of values will be - scaled to fit into the desired duration. - - Let \(T_0\) be {{AudioParam/setValueCurveAtTime()/startTime!!argument}}, \(T_D\) be - {{AudioParam/setValueCurveAtTime()/duration!!argument}}, \(V\) be the {{AudioParam/setValueCurveAtTime()/values!!argument}} array, - and \(N\) be the length of the {{AudioParam/setValueCurveAtTime()/values!!argument}} array. Then, - during the time interval: \(T_0 \le t < T_0 + T_D\), let - -
-        $$
-            \begin{align*} k &= \left\lfloor \frac{N - 1}{T_D}(t-T_0) \right\rfloor \\
-            \end{align*}
-        $$
-        
- - Then \(v(t)\) is computed by linearly interpolating between - \(V[k]\) and \(V[k+1]\), - - After the end of the curve time interval (\(t \ge T_0 + T_D\)), - the value will remain constant at the final curve value, until - there is another automation event (if any). - - An implicit call to {{AudioParam/setValueAtTime()}} is made at time \(T_0 + - T_D\) with value \(V\[N-1]\) so that following automations will - start from the end of the {{AudioParam/setValueCurveAtTime()}} event. - -
-            values: A sequence of float values representing a parameter value curve. These values will apply starting at the given time and lasting for the given duration. When this method is called, an internal copy of the curve is created for automation purposes. Subsequent modifications of the contents of the passed-in array therefore have no effect on the {{AudioParam}}. An {{InvalidStateError}} MUST be thrown if this attribute is a sequence<float> object that has a length less than 2.
-            startTime: The start time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the value curve will be applied. A {{RangeError}} exception MUST be thrown if startTime is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-            duration: The amount of time in seconds (after the startTime parameter) where values will be calculated according to the values parameter. A {{RangeError}} exception MUST be thrown if duration is not strictly positive or is not a finite number.
-        
- -
- Return type: {{AudioParam}} -
+	: cancelAndHoldAtTime(cancelTime)
+	::
+		This is similar to {{AudioParam/cancelScheduledValues()}} in that it cancels all
+		scheduled parameter changes with times greater than or equal to
+		{{AudioParam/cancelAndHoldAtTime()/cancelTime!!argument}}. However, in addition, the automation
+		value that would have happened at {{AudioParam/cancelAndHoldAtTime()/cancelTime!!argument}} is
+		then propagated for all future time until other automation
+		events are introduced.
+
+		The behavior of the timeline in the face of
+		{{AudioParam/cancelAndHoldAtTime()}} is subtle: automations may be
+		running, and new ones can be introduced at any time after calling
+		{{AudioParam/cancelAndHoldAtTime()}} and before
+		{{AudioParam/cancelAndHoldAtTime()/cancelTime!!argument}} is reached. The
+		behavior of {{AudioParam/cancelAndHoldAtTime()}} is therefore
+		specified precisely in the following algorithm.
+
+ Let \(t_c\) be the value of {{AudioParam/cancelAndHoldAtTime()/cancelTime!!argument}}. Then + + 1. Let \(E_1\) be the event (if any) at time \(t_1\) where + \(t_1\) is the largest number satisfying \(t_1 \le t_c\). + + 2. Let \(E_2\) be the event (if any) at time \(t_2\) where + \(t_2\) is the smallest number satisfying \(t_c \lt t_2\). + + 3. If \(E_2\) exists: + 1. If \(E_2\) is a linear or exponential ramp, + 1. Effectively rewrite \(E_2\) to be the same kind of + ramp ending at time \(t_c\) with an end value that + would be the value of the original ramp at time + \(t_c\). + Graphical representation of calling cancelAndHoldAtTime when linearRampToValueAtTime has been called at this time. + + 2. Go to step 5. + + 2. Otherwise, go to step 4. + + 4. If \(E_1\) exists: + 1. If \(E_1\) is a setTarget event, + 1. Implicitly insert a setValueAtTime + event at time \(t_c\) with the value that the + setTarget would have at time + \(t_c\). + Graphical representation of calling cancelAndHoldAtTime when setTargetAtTime has been called at this time + + 2. Go to step 5. + + 2. If \(E_1\) is a setValueCurve with a start + time of \(t_3\) and a duration of \(d\) + + 1. If \(t_c \gt t_3 + d\), go to step 5. + + 2. Otherwise, + 1. Effectively replace this event with a + setValueCurve event with a start time + of \(t_3\) and a new duration of \(t_c-t_3\). + However, this is not a true replacement; this + automation MUST take care to produce the same + output as the original, and not one computed using + a different duration. (That would cause sampling of + the value curve in a slightly different way, + producing different results.) + Graphical representation of calling cancelAndHoldAtTime when setValueCurve has been called at this time + + 2. Go to step 5. + + 5. Remove all events with time greater than \(t_c\). 
+ + If no events are added, then the automation value after + {{AudioParam/cancelAndHoldAtTime()}} is the constant value that + the original timeline would have had at time \(t_c\). +
+ +
+			cancelTime: The time after which any previously scheduled parameter changes will be cancelled. It is a time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. A {{RangeError}} exception MUST be thrown if cancelTime is negative or is not a finite number. If {{AudioParam/cancelAndHoldAtTime()/cancelTime}} is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+		
+ +
+ Return type: {{AudioParam}} +
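As a non-normative illustration of step 3.1 of the algorithm above, truncating a linear ramp at the cancel time \(t_c\) can be sketched as follows. The event shape (`{ t0, v0, t1, v1 }`) and the helper name are inventions of this example, not part of the API:

```javascript
// Non-normative sketch of step 3.1: rewrite a linear ramp that crosses
// the cancel time tc so it ends at tc, with the value the original
// ramp would have had there.
function truncateLinearRamp(ramp, tc) {
  if (tc >= ramp.t1) return ramp; // ramp already ends at or before tc
  const vAtTc =
    ramp.v0 + (ramp.v1 - ramp.v0) * ((tc - ramp.t0) / (ramp.t1 - ramp.t0));
  return { t0: ramp.t0, v0: ramp.v0, t1: tc, v1: vAtTc };
}

// A ramp from 0 to 1 over [0, 2] held at tc = 1 now ends at value 0.5,
// which then holds for all future time (until new events are added).
console.log(truncateLinearRamp({ t0: 0, v0: 0, t1: 2, v1: 1 }, 1));
```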
+
+	: cancelScheduledValues(cancelTime)
+	::
+		Cancels all scheduled parameter changes with times greater than
+		or equal to {{AudioParam/cancelScheduledValues()/cancelTime!!argument}}. Cancelling a scheduled
+		parameter change means removing the scheduled event from the
+		event list. Any active automations whose automation event time is less
+		than {{AudioParam/cancelScheduledValues()/cancelTime!!argument}} are also cancelled, and such
+		cancellations may cause discontinuities because the original
+		value (from before such automation) is restored immediately. Any
+		hold values scheduled by {{AudioParam/cancelAndHoldAtTime()}}
+		are also removed if the hold time occurs after
+		{{AudioParam/cancelScheduledValues()/cancelTime!!argument}}.
+
+		For a {{AudioParam/setValueCurveAtTime()}} event, let \(T_0\) and \(T_D\) be the corresponding
+		{{AudioParam/setValueCurveAtTime()/startTime!!argument}} and {{AudioParam/setValueCurveAtTime()/duration!!argument}}, respectively, of this event.
+		Then if {{AudioParam/cancelScheduledValues()/cancelTime!!argument}}
+		is in the range \([T_0, T_0 + T_D]\), the event is
+		removed from the timeline.
+
+			cancelTime: The time after which any previously scheduled parameter changes will be cancelled. It is a time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. A {{RangeError}} exception MUST be thrown if cancelTime is negative or is not a finite number. If cancelTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+		
+ +
+ Return type: {{AudioParam}} +
+ + : exponentialRampToValueAtTime(value, endTime) + :: + Schedules an exponential continuous change in parameter value + from the previous scheduled parameter value to the given value. + Parameters representing filter frequencies and playback rate + are best changed exponentially because of the way humans + perceive sound. + + The value during the time interval \(T_0 \leq t < T_1\) + (where \(T_0\) is the time of the previous event and \(T_1\) is + the {{AudioParam/exponentialRampToValueAtTime()/endTime!!argument}} parameter passed into this method) + will be calculated as: + +
+		$$
+			v(t) = V_0 \left(\frac{V_1}{V_0}\right)^\frac{t - T_0}{T_1 - T_0}
+		$$
+		
+ + where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is + the {{AudioParam/exponentialRampToValueAtTime()/value!!argument}} parameter passed into this method. If + \(V_0\) and \(V_1\) have opposite signs or if \(V_0\) is zero, + then \(v(t) = V_0\) for \(T_0 \le t \lt T_1\). + + This also implies an exponential ramp to 0 is not possible. A + good approximation can be achieved using {{AudioParam/setTargetAtTime()}} with an appropriately chosen + time constant. + + If there are no more events after this ExponentialRampToValue + event then for \(t \geq T_1\), \(v(t) = V_1\). + + If there is no event preceding this event, the exponential ramp + behaves as if {{AudioParam/setValueAtTime()|setValueAtTime(value, currentTime)}} + were called where value is the current value of + the attribute and currentTime is the context + {{BaseAudioContext/currentTime}} at the time + {{AudioParam/exponentialRampToValueAtTime()}} is called. + + If the preceding event is a SetTarget event, \(T_0\) + and \(V_0\) are chosen from the current time and value of + SetTarget automation. That is, if the + SetTarget event has not started, \(T_0\) is the start + time of the event, and \(V_0\) is the value just before the + SetTarget event starts. In this case, the + ExponentialRampToValue event effectively replaces the + SetTarget event. If the SetTarget event has + already started, \(T_0\) is the current context time, and + \(V_0\) is the current SetTarget automation value at + time \(T_0\). In both cases, the automation curve is + continuous. + +
+			value: The value the parameter will exponentially ramp to at the given time. A {{RangeError}} exception MUST be thrown if this value is equal to 0.
+			endTime: The time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute where the exponential ramp ends. A {{RangeError}} exception MUST be thrown if endTime is negative or is not a finite number. If endTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+		
+ +
+ Return type: {{AudioParam}} +
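The exponential-ramp formula above can be evaluated directly. This non-normative sketch (the helper name is illustrative, not part of the API) also applies the rule for a zero or opposite-sign \(V_0\):

```javascript
// Non-normative sketch: v(t) = V0 * (V1/V0)^((t - T0)/(T1 - T0)).
function exponentialRampValue(v0, v1, t0, t1, t) {
  // Per the text above, if V0 is zero or V0 and V1 have opposite
  // signs, the value holds at V0 over the whole interval.
  if (v0 === 0 || Math.sign(v0) !== Math.sign(v1)) return v0;
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}

// Halfway through a ramp from 440 Hz to 880 Hz the value is the
// geometric midpoint 440 * sqrt(2), not the arithmetic midpoint 660 --
// this is why exponential ramps suit frequency-like parameters.
console.log(exponentialRampValue(440, 880, 0, 1, 0.5));
```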
+ + : linearRampToValueAtTime(value, endTime) + :: + Schedules a linear continuous change in parameter value from + the previous scheduled parameter value to the given value. + + The value during the time interval \(T_0 \leq t < T_1\) + (where \(T_0\) is the time of the previous event and \(T_1\) is + the {{AudioParam/linearRampToValueAtTime()/endTime!!argument}} parameter passed into this method) + will be calculated as: + +
+		$$
+			v(t) = V_0 + (V_1 - V_0) \frac{t - T_0}{T_1 - T_0}
+		$$
+		
+ + where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is + the {{AudioParam/linearRampToValueAtTime()/value!!argument}} parameter passed into this method. + + If there are no more events after this LinearRampToValue event + then for \(t \geq T_1\), \(v(t) = V_1\). + + If there is no event preceding this event, the linear ramp + behaves as if {{AudioParam/setValueAtTime()|setValueAtTime(value, currentTime)}} + were called where value is the current value of + the attribute and currentTime is the context + {{BaseAudioContext/currentTime}} at the time + {{AudioParam/linearRampToValueAtTime()}} is called. + + If the preceding event is a SetTarget event, \(T_0\) + and \(V_0\) are chosen from the current time and value of + SetTarget automation. That is, if the + SetTarget event has not started, \(T_0\) is the start + time of the event, and \(V_0\) is the value just before the + SetTarget event starts. In this case, the + LinearRampToValue event effectively replaces the + SetTarget event. If the SetTarget event has + already started, \(T_0\) is the current context time, and + \(V_0\) is the current SetTarget automation value at + time \(T_0\). In both cases, the automation curve is + continuous. + +
+			value: The value the parameter will linearly ramp to at the given time.
+			endTime: The time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the automation ends. A {{RangeError}} exception MUST be thrown if endTime is negative or is not a finite number. If endTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+		
+ +
+ Return type: {{AudioParam}} +
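The linear-ramp formula above can likewise be evaluated directly; this non-normative sketch uses an illustrative helper name:

```javascript
// Non-normative sketch: v(t) = V0 + (V1 - V0) * (t - T0)/(T1 - T0).
function linearRampValue(v0, v1, t0, t1, t) {
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0));
}

// Halfway through a ramp from 100 to 200 the value is the arithmetic
// midpoint.
console.log(linearRampValue(100, 200, 0, 2, 1)); // → 150
```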
+ + : setTargetAtTime(target, startTime, timeConstant) + :: + Start exponentially approaching the target value at the given + time with a rate having the given time constant. Among other + uses, this is useful for implementing the "decay" and "release" + portions of an ADSR envelope. Please note that the parameter + value does not immediately change to the target value at the + given time, but instead gradually changes to the target value. + + During the time interval: \(T_0 \leq t\), where \(T_0\) is the + {{AudioParam/setTargetAtTime()/startTime!!argument}} parameter: + +
+		$$
+			v(t) = V_1 + (V_0 - V_1)\, e^{-\left(\frac{t - T_0}{\tau}\right)}
+		$$
+		
+ + where \(V_0\) is the initial value (the {{[[current value]]}} + attribute) at \(T_0\) (the {{AudioParam/setTargetAtTime()/startTime!!argument}} parameter), + \(V_1\) is equal to the {{AudioParam/setTargetAtTime()/target!!argument}} parameter, and + \(\tau\) is the {{AudioParam/setTargetAtTime()/timeConstant!!argument}} parameter. + + If a LinearRampToValue or + ExponentialRampToValue event follows this event, the + behavior is described in {{AudioParam/linearRampToValueAtTime()}} or + {{AudioParam/exponentialRampToValueAtTime()}}, + respectively. For all other events, the SetTarget + event ends at the time of the next event. + +
+			target: The value the parameter will start changing to at the given time.
+			startTime: The time at which the exponential approach will begin, in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. A {{RangeError}} exception MUST be thrown if startTime is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+			timeConstant: The time-constant value of a first-order filter (exponential) approach to the target value. The larger this value is, the slower the transition will be. The value MUST be non-negative or a {{RangeError}} exception MUST be thrown. If timeConstant is zero, the output value jumps immediately to the final value. More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system to reach \(1 - 1/e\) (around 63.2%) of the final value, given a step input (a transition in value from 0 to 1).
+		
+ +
+ Return type: {{AudioParam}} +
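The setTarget formula above can be evaluated as follows; this non-normative sketch (helper name illustrative) also demonstrates the \(1 - 1/e\) property of the time constant:

```javascript
// Non-normative sketch: v(t) = V1 + (V0 - V1) * exp(-(t - T0)/tau).
function setTargetValue(v0, v1, t0, tau, t) {
  return v1 + (v0 - v1) * Math.exp(-(t - t0) / tau);
}

// One time constant after startTime, the value has covered
// 1 - 1/e (about 63.2%) of the distance from V0 to V1 -- and it only
// approaches V1 asymptotically, never reaching it exactly.
console.log(setTargetValue(1, 0, 0, 0.5, 0.5)); // remaining gap: 1/e
```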
+ + : setValueAtTime(value, startTime) + :: + Schedules a parameter value change at the given time. + + If there are no more events after this SetValue event, + then for \(t \geq T_0\), \(v(t) = V\), where \(T_0\) is the + {{AudioParam/setValueAtTime()/startTime!!argument}} parameter and \(V\) is the + {{AudioParam/setValueAtTime()/value!!argument}} parameter. In other words, the value will + remain constant. + + If the next event (having time \(T_1\)) after this + SetValue event is not of type + LinearRampToValue or ExponentialRampToValue, + then, for \(T_0 \leq t < T_1\): + +
+		$$
+			v(t) = V
+		$$
+		
+ + In other words, the value will remain constant during this time + interval, allowing the creation of "step" functions. + + If the next event after this SetValue event is of type + LinearRampToValue or ExponentialRampToValue + then please see {{AudioParam/linearRampToValueAtTime()}} or + {{AudioParam/exponentialRampToValueAtTime()}}, + respectively. + +
+			value: The value the parameter will change to at the given time.
+			startTime: The time in the same time coordinate system as the {{BaseAudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the parameter changes to the given value. A {{RangeError}} exception MUST be thrown if startTime is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+		
+ +
+ Return type: {{AudioParam}} +
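The "step function" behavior above can be sketched non-normatively by evaluating a timeline made only of SetValue events; the event shape and helper name are inventions of this example:

```javascript
// Non-normative sketch: with only SetValue events, the parameter value
// at time t is that of the most recent event at or before t.
function stepValueAt(events, t, defaultValue) {
  // events: array of { time, value }, assumed sorted by time.
  let v = defaultValue;
  for (const e of events) {
    if (e.time <= t) v = e.value;
    else break;
  }
  return v;
}

const events = [{ time: 1, value: 10 }, { time: 2, value: 20 }];
console.log(stepValueAt(events, 1.5, 5)); // → 10 (holds until next event)
```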
+ + : setValueCurveAtTime(values, startTime, duration) + :: + Sets an array of arbitrary parameter values starting at the + given time for the given duration. The number of values will be + scaled to fit into the desired duration. + + Let \(T_0\) be {{AudioParam/setValueCurveAtTime()/startTime!!argument}}, \(T_D\) be + {{AudioParam/setValueCurveAtTime()/duration!!argument}}, \(V\) be the {{AudioParam/setValueCurveAtTime()/values!!argument}} array, + and \(N\) be the length of the {{AudioParam/setValueCurveAtTime()/values!!argument}} array. Then, + during the time interval: \(T_0 \le t < T_0 + T_D\), let + +
+		$$
+			\begin{align*} k &= \left\lfloor \frac{N - 1}{T_D}(t-T_0) \right\rfloor \\
+			\end{align*}
+		$$
+		
+
+		Then \(v(t)\) is computed by linearly interpolating between
+		\(V[k]\) and \(V[k+1]\).
+
+		After the end of the curve time interval (\(t \ge T_0 + T_D\)),
+		the value will remain constant at the final curve value, until
+		there is another automation event (if any).
+
+		An implicit call to {{AudioParam/setValueAtTime()}} is made at time \(T_0 +
+		T_D\) with value \(V\[N-1]\) so that following automations will
+		start from the end of the {{AudioParam/setValueCurveAtTime()}} event.
+
+			values: A sequence of float values representing a parameter value curve. These values will apply starting at the given time and lasting for the given duration. When this method is called, an internal copy of the curve is created for automation purposes. Subsequent modifications of the contents of the passed-in array therefore have no effect on the {{AudioParam}}. An {{InvalidStateError}} MUST be thrown if this attribute is a sequence<float> object that has a length less than 2.
+			startTime: The start time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the value curve will be applied. A {{RangeError}} exception MUST be thrown if startTime is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+			duration: The amount of time in seconds (after the startTime parameter) where values will be calculated according to the values parameter. A {{RangeError}} exception MUST be thrown if duration is not strictly positive or is not a finite number.
+		
+ +
+ Return type: {{AudioParam}} +
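The curve sampling defined above can be sketched in plain JavaScript. This is a non-normative model of the formula only (the function name and signature are illustrative, not part of the API); `t` is assumed to be at or after `startTime`:

```javascript
// Non-normative sketch of setValueCurveAtTime() sampling.
// values: the curve array V, t0: startTime, dur: duration, t: current time.
function valueCurveAt(values, t0, dur, t) {
  const N = values.length;
  if (t >= t0 + dur) return values[N - 1]; // held constant after the interval
  // k = floor((N - 1) / T_D * (t - T_0))
  const k = Math.floor(((N - 1) / dur) * (t - t0));
  // Linearly interpolate between V[k] and V[k+1].
  const segment = dur / (N - 1);        // time spanned by one curve segment
  const tk = t0 + k * segment;          // time at which V[k] applies
  const frac = (t - tk) / segment;      // position within the segment, in [0, 1)
  return values[k] + (values[k + 1] - values[k]) * frac;
}
```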

@@ -4547,38 +3888,38 @@ rendering time quantum.
The computation of the value of an {{AudioParam}} consists of two parts: - - the paramIntrinsicValue value that is computed from the {{AudioParam/value}} - attribute and any automation events. - - the paramComputedValue that is the final value controlling the - audio DSP and is computed by the audio rendering thread during each - render quantum. + - the paramIntrinsicValue value that is computed from the {{AudioParam/value}} + attribute and any automation events. + - the paramComputedValue that is the final value controlling the + audio DSP and is computed by the audio rendering thread during each + render quantum. These values MUST be computed as follows: - 1. paramIntrinsicValue will be calculated at - each time, which is either the value set directly to - the {{AudioParam/value}} attribute, or, if there are - any automation - events with times before or at this time, the - value as calculated from these events. If automation - events are removed from a given time range, then the - paramIntrinsicValue value will remain - unchanged and stay at its previous value until either - the {{AudioParam/value}} attribute is directly set, or - automation events are added for the time range. - - 1. Set {{[[current value]]}} to the value of - paramIntrinsicValue at the beginning of - this render quantum. - - 2. paramComputedValue is the sum of the paramIntrinsicValue - value and the value of the input - AudioParam buffer. If the sum is NaN, replace the sum with the {{AudioParam/defaultValue}}. - - 3. If this {{AudioParam}} is a compound parameter, - compute its final value with other {{AudioParam}}s. - - 4. Set computedValue to paramComputedValue. + 1. paramIntrinsicValue will be calculated at + each time, which is either the value set directly to + the {{AudioParam/value}} attribute, or, if there are + any automation + events with times before or at this time, the + value as calculated from these events. 
If automation + events are removed from a given time range, then the + paramIntrinsicValue value will remain + unchanged and stay at its previous value until either + the {{AudioParam/value}} attribute is directly set, or + automation events are added for the time range. + + 1. Set {{[[current value]]}} to the value of + paramIntrinsicValue at the beginning of + this render quantum. + + 2. paramComputedValue is the sum of the paramIntrinsicValue + value and the value of the input + AudioParam buffer. If the sum is NaN, replace the sum with the {{AudioParam/defaultValue}}. + + 3. If this {{AudioParam}} is a compound parameter, + compute its final value with other {{AudioParam}}s. + + 4. Set computedValue to paramComputedValue.
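The per-sample combination in the steps above can be modeled as follows. This is a non-normative sketch; the function and parameter names are illustrative:

```javascript
// Non-normative model of one sample of paramComputedValue: the
// intrinsic value plus the summed input AudioParam buffer, with a
// NaN result replaced by the parameter's defaultValue.
function computedValue(paramIntrinsicValue, inputBufferSample, defaultValue) {
  const sum = paramIntrinsicValue + inputBufferSample;
  return Number.isNaN(sum) ? defaultValue : sum;
}
```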
The nominal range for a computedValue are the @@ -4595,77 +3936,77 @@ Only when the automation values are to be applied to the output is the clamping done as specified above.
- For example, consider a node \(N\) has an AudioParam \(p\) with a - nominal range of \([0, 1]\), and following automation sequence - -
-        N.p.setValueAtTime(0, 0);
-        N.p.linearRampToValueAtTime(4, 1);
-        N.p.linearRampToValueAtTime(0, 2);
-    
- - The initial slope of the curve is 4, until it reaches the maximum - value of 1, at which time, the output is held constant. Finally, - near time 2, the slope of the curve is -4. This is illustrated in - the graph below where the dashed line indicates what would have - happened without clipping, and the solid line indicates the actual - expected behavior of the audioparam due to clipping to the nominal - range. - -
- - AudioParam automation clipping to nominal -
- An example of clipping of an AudioParam automation from the - nominal range. -
-
+ AudioParam automation clipping to nominal +
+ An example of clipping of an AudioParam automation from the + nominal range. +
+
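The clipping behavior shown in the figure can be sketched as a clamp applied to the unclipped automation value. This is a non-normative illustration of the example above (slope 4 up to \(t = 1\), slope -4 after), with illustrative function names:

```javascript
// Non-normative sketch: clamp an automation value to the nominal range.
function clampToNominalRange(value, minValue, maxValue) {
  return Math.min(maxValue, Math.max(minValue, value));
}

// The example ramps from 0 to 4 over [0, 1], then back to 0 over [1, 2];
// with a nominal range of [0, 1] the output is held at 1 once the
// unclipped value exceeds the maximum.
function rampValue(t) {
  const unclipped = t <= 1 ? 4 * t : 4 - 4 * (t - 1);
  return clampToNominalRange(unclipped, 0, 1);
}
```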

{{AudioParam}} Automation Example

- - AudioParam automation -
- An example of parameter automation. -
+ AudioParam automation +
+ An example of parameter automation. +
-    const curveLength = 44100;
-    const curve = new Float32Array(curveLength);
-    for (const i = 0; i < curveLength; ++i)
-        curve[i] = Math.sin(Math.PI * i / curveLength);
-
-    const t0 = 0;
-    const t1 = 0.1;
-    const t2 = 0.2;
-    const t3 = 0.3;
-    const t4 = 0.325;
-    const t5 = 0.5;
-    const t6 = 0.6;
-    const t7 = 0.7;
-    const t8 = 1.0;
-    const timeConstant = 0.1;
-
-    param.setValueAtTime(0.2, t0);
-    param.setValueAtTime(0.3, t1);
-    param.setValueAtTime(0.4, t2);
-    param.linearRampToValueAtTime(1, t3);
-    param.linearRampToValueAtTime(0.8, t4);
-    param.setTargetAtTime(.5, t4, timeConstant);
-    // Compute where the setTargetAtTime will be at time t5 so we can make
-    // the following exponential start at the right point so there's no
-    // jump discontinuity. From the spec, we have
-    // v(t) = 0.5 + (0.8 - 0.5)*exp(-(t-t4)/timeConstant)
-    // Thus v(t5) = 0.5 + (0.8 - 0.5)*exp(-(t5-t4)/timeConstant)
-    param.setValueAtTime(0.5 + (0.8 - 0.5)*Math.exp(-(t5 - t4)/timeConstant), t5);
-    param.exponentialRampToValueAtTime(0.75, t6);
-    param.exponentialRampToValueAtTime(0.05, t7);
-    param.setValueCurveAtTime(curve, t7, t8 - t7);
+	const curveLength = 44100;
+	const curve = new Float32Array(curveLength);
+	for (let i = 0; i < curveLength; ++i)
+		curve[i] = Math.sin(Math.PI * i / curveLength);
+
+	const t0 = 0;
+	const t1 = 0.1;
+	const t2 = 0.2;
+	const t3 = 0.3;
+	const t4 = 0.325;
+	const t5 = 0.5;
+	const t6 = 0.6;
+	const t7 = 0.7;
+	const t8 = 1.0;
+	const timeConstant = 0.1;
+
+	param.setValueAtTime(0.2, t0);
+	param.setValueAtTime(0.3, t1);
+	param.setValueAtTime(0.4, t2);
+	param.linearRampToValueAtTime(1, t3);
+	param.linearRampToValueAtTime(0.8, t4);
+	param.setTargetAtTime(.5, t4, timeConstant);
+	// Compute where the setTargetAtTime will be at time t5 so we can make
+	// the following exponential start at the right point so there's no
+	// jump discontinuity. From the spec, we have
+	// v(t) = 0.5 + (0.8 - 0.5)*exp(-(t-t4)/timeConstant)
+	// Thus v(t5) = 0.5 + (0.8 - 0.5)*exp(-(t5-t4)/timeConstant)
+	param.setValueAtTime(0.5 + (0.8 - 0.5)*Math.exp(-(t5 - t4)/timeConstant), t5);
+	param.exponentialRampToValueAtTime(0.75, t6);
+	param.exponentialRampToValueAtTime(0.05, t7);
+	param.setValueCurveAtTime(curve, t7, t8 - t7);
 
@@ -4679,7 +4020,7 @@ http://googlechrome.github.io/web-audio-samples/samples/audio/timeline.html --> ██ ██ ██████ ██████ ██ ██ ███████ ████████ ████████ --> -

+

The {{AudioScheduledSourceNode}} Interface

The interface represents the common features of source nodes such @@ -4706,9 +4047,9 @@ set to false.
 [Exposed=Window]
 interface AudioScheduledSourceNode : AudioNode {
-    attribute EventHandler onended;
-    undefined start(optional double when = 0);
-    undefined stop(optional double when = 0);
+	attribute EventHandler onended;
+	void start(optional double when = 0);
+	void stop(optional double when = 0);
 };
 
@@ -4716,112 +4057,112 @@ interface AudioScheduledSourceNode : AudioNode { Attributes
- : onended - :: - A property used to set an [=event handler=] for the ended - event type that is dispatched to {{AudioScheduledSourceNode}} node - types. When the source node has stopped playing (as determined - by the concrete node), an event that uses the {{Event}} interface will be - dispatched to the event handler. - - For all {{AudioScheduledSourceNode}}s, the - {{AudioScheduledSourceNode/ended}} event is dispatched when the stop time - determined by {{AudioScheduledSourceNode/stop()}} is reached. - For an {{AudioBufferSourceNode}}, the event is - also dispatched because the {{AudioBufferSourceNode/start(when, offset, duration)/duration}} has been - reached or if the entire {{AudioBufferSourceNode/buffer}} has been - played. + : onended + :: + A property used to set the EventHandler (described + in + HTML [[!HTML]]) for the ended event that is + dispatched for {{AudioScheduledSourceNode}} node + types. When the source node has stopped playing (as determined + by the concrete node), an event of type {{Event}} + (described in + HTML [[!HTML]]) will be dispatched to the event + handler. + + For all {{AudioScheduledSourceNode}}s, the + ended event is dispatched when the stop time + determined by {{AudioScheduledSourceNode/stop()}} is reached. + For an {{AudioBufferSourceNode}}, the event is + also dispatched because the {{AudioBufferSourceNode/start(when, offset, duration)/duration}} has been + reached or if the entire {{AudioBufferSourceNode/buffer}} has been + played.

Methods

- : start(when) - :: - Schedules a sound to playback at an exact time. - -
- When this method is called, execute - these steps: - - 1. If this {{AudioScheduledSourceNode}} internal - slot {{AudioScheduledSourceNode/[[source started]]}} is true, an - {{InvalidStateError}} exception MUST be thrown. - - 2. Check for any errors that must be thrown due to parameter - constraints described below. If any exception is thrown during this - step, abort those steps. - - 3. Set the internal slot {{AudioScheduledSourceNode/[[source started]]}} on - this {{AudioScheduledSourceNode}} to true. - - 4. Queue a control message to start the - {{AudioScheduledSourceNode}}, including the parameter - values in the message. - - 5. Send a control message to the associated {{AudioContext}} to - start running its rendering thread only when - all the following conditions are met: - 1. The context's {{[[control thread state]]}} is - "{{AudioContextState/suspended}}". - 1. The context is allowed to start. - 1. {{[[suspended by user]]}} flag is false. - - NOTE: This can allow {{AudioScheduledSourceNode/start()}} to start - an {{AudioContext}} that is currently allowed to start, - but has previously been prevented from starting. -
- -
-            when: The {{AudioScheduledSourceNode/start(when)/when}} parameter describes at what time (in seconds) the sound should start playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. When the signal emitted by the {{AudioScheduledSourceNode}} depends on the sound's start time, the exact value of when is always used without rounding to the nearest sample frame. If 0 is passed in for this value or if the value is less than {{BaseAudioContext/currentTime}}, then the sound will start playing immediately. A {{RangeError}} exception MUST be thrown if when is negative.
-        
- -
- Return type: {{undefined}} -
- - : stop(when) - :: - Schedules a sound to stop playback at an exact time. If - stop is called again after already having been - called, the last invocation will be the only one applied; stop - times set by previous calls will not be applied, unless the - buffer has already stopped prior to any subsequent calls. If - the buffer has already stopped, further calls to - stop will have no effect. If a stop time is - reached prior to the scheduled start time, the sound will not - play. - -
- When this method is called, execute these steps: - - 1. If this {{AudioScheduledSourceNode}} internal - slot {{AudioScheduledSourceNode/[[source started]]}} is not true, - an {{InvalidStateError}} exception MUST be thrown. - - 2. Check for any errors that must be thrown due to parameter - constraints described below. - - 3. Queue a control message to stop the - {{AudioScheduledSourceNode}}, including the parameter - values in the message. -
- -
- If the node is an {{AudioBufferSourceNode}}, - running a control message to stop the - {{AudioBufferSourceNode}} means invoking the - handleStop() function in the playback algorithm. -
- -
-            when: The {{AudioScheduledSourceNode/stop(when)/when}} parameter describes at what time (in seconds) the source should stop playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. If 0 is passed in for this value or if the value is less than {{BaseAudioContext/currentTime}}, then the sound will stop playing immediately. A {{RangeError}} exception MUST be thrown if when is negative.
-        
- -
- Return type: {{undefined}} -
+ : start(when) + :: + Schedules a sound to playback at an exact time. + +
+ + When this method is called, execute + these steps: + + 1. If this {{AudioScheduledSourceNode}} internal + slot {{AudioScheduledSourceNode/[[source started]]}} is true, an + {{InvalidStateError}} exception MUST be thrown. + + 2. Check for any errors that must be thrown due to parameter + constraints described below. If any exception is thrown during this + step, abort those steps. + + 3. Set the internal slot {{AudioScheduledSourceNode/[[source started]]}} on + this {{AudioScheduledSourceNode}} to true. + + 4. Queue a control message to start the + {{AudioScheduledSourceNode}}, including the parameter + values in the message. + + 5. Send a control message to the associated {{AudioContext}} to + start running its rendering thread only when + the following conditions are met: + 1. The context's control thread state is + suspended. + 2. The context is allowed to start. + 3. {{[[suspended by user]]}} flag is false. +
+ +
+			when: The {{AudioScheduledSourceNode/start(when)/when}} parameter describes at what time (in seconds) the sound should start playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. When the signal emitted by the {{AudioScheduledSourceNode}} depends on the sound's start time, the exact value of when is always used without rounding to the nearest sample frame. If 0 is passed in for this value or if the value is less than {{BaseAudioContext/currentTime}}, then the sound will start playing immediately. A {{RangeError}} exception MUST be thrown if when is negative.
+		
+ +
+ Return type: {{void}} +
+ + : stop(when) + :: + Schedules a sound to stop playback at an exact time. If + stop is called again after already having been + called, the last invocation will be the only one applied; stop + times set by previous calls will not be applied, unless the + buffer has already stopped prior to any subsequent calls. If + the buffer has already stopped, further calls to + stop will have no effect. If a stop time is + reached prior to the scheduled start time, the sound will not + play. + +
+ + When this method is called, execute these steps: + + 1. If this {{AudioScheduledSourceNode}} internal + slot {{AudioScheduledSourceNode/[[source started]]}} is not true, + an {{InvalidStateError}} exception MUST be thrown. + + 2. Check for any errors that must be thrown due to parameter + constraints described below. + + 3. Queue a control message to stop the + {{AudioScheduledSourceNode}}, including the parameter + values in the message. +
+ +
+ If the node is an {{AudioBufferSourceNode}}, + running a control message to stop the + {{AudioBufferSourceNode}} means invoking the + handleStop() function in the playback algorithm. +
+ +
+			when: The {{AudioScheduledSourceNode/stop(when)/when}} parameter describes at what time (in seconds) the source should stop playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. If 0 is passed in for this value or if the value is less than {{BaseAudioContext/currentTime}}, then the sound will stop playing immediately. A {{RangeError}} exception MUST be thrown if when is negative.
+		
+ +
+ Return type: {{void}} +
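The scheduling rules above — start() may only be called once, the last stop() invocation wins, and a stop time at or before the start time suppresses playback — can be modeled in plain JavaScript. This is a non-normative sketch; `SourceModel` is an illustrative name, not part of the API:

```javascript
// Non-normative model of AudioScheduledSourceNode start/stop scheduling.
class SourceModel {
  constructor() {
    this.startTime = null;
    this.stopTime = null;
  }
  start(when = 0) {
    if (this.startTime !== null) {
      throw new Error("InvalidStateError: start() may only be called once");
    }
    if (when < 0) throw new RangeError("when must be non-negative");
    this.startTime = when;
  }
  stop(when = 0) {
    if (this.startTime === null) {
      throw new Error("InvalidStateError: stop() called before start()");
    }
    if (when < 0) throw new RangeError("when must be non-negative");
    this.stopTime = when; // a later call replaces any earlier stop time
  }
  // A stop time at or before the start time means the source never
  // produces output.
  producesOutput() {
    return this.startTime !== null &&
      (this.stopTime === null || this.stopTime > this.startTime);
  }
}
```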
@@ -4835,7 +4176,7 @@ Methods ██ ██ ██ ██ ██ ██ ████████ ██ ██████ ████████ ██ ██ ██ ██ ███████ ████████ ████████ --> -

+

The {{AnalyserNode}} Interface

This interface represents a node which is able to provide real-time @@ -4845,28 +4186,28 @@ be passed un-processed from input to output.
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    noo-notes: This output may be left unconnected.
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: No
+	noi: 1
+	noo: 1
+	noo-notes: This output may be left unconnected.
+	cc: 2
+	cc-mode: max
+	cc-interp: speakers
+	tail-time: No
 
 [Exposed=Window]
 interface AnalyserNode : AudioNode {
-    constructor (BaseAudioContext context, optional AnalyserOptions options = {});
-    undefined getFloatFrequencyData (Float32Array array);
-    undefined getByteFrequencyData (Uint8Array array);
-    undefined getFloatTimeDomainData (Float32Array array);
-    undefined getByteTimeDomainData (Uint8Array array);
-    attribute unsigned long fftSize;
-    readonly attribute unsigned long frequencyBinCount;
-    attribute double minDecibels;
-    attribute double maxDecibels;
-    attribute double smoothingTimeConstant;
+	constructor (BaseAudioContext context, optional AnalyserOptions options = {});
+	void getFloatFrequencyData (Float32Array array);
+	void getByteFrequencyData (Uint8Array array);
+	void getFloatTimeDomainData (Float32Array array);
+	void getByteTimeDomainData (Uint8Array array);
+	attribute unsigned long fftSize;
+	readonly attribute unsigned long frequencyBinCount;
+	attribute double minDecibels;
+	attribute double maxDecibels;
+	attribute double smoothingTimeConstant;
 };
 
@@ -4874,196 +4215,209 @@ interface AnalyserNode : AudioNode { Constructors
- : AnalyserNode(context, options) - :: + : AnalyserNode(context, options) + :: -
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{AnalyserNode}} will be associated with.
-            options: Optional initial parameter value for this {{AnalyserNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{AnalyserNode}} will be associated with.
+			options: Optional initial parameter value for this {{AnalyserNode}}.
+		

Attributes

- : fftSize - :: - The size of the FFT used for frequency-domain analysis (in sample-frames). - This MUST be a power of two in the - range 32 to 32768, otherwise an {{IndexSizeError}} - exception MUST be thrown. The default value is 2048. - Note that large FFT sizes can be costly to compute. - - If the {{AnalyserNode/fftSize}} is changed to a different value, - then all state associated with smoothing of the frequency data - (for {{AnalyserNode/getByteFrequencyData()}} and - {{AnalyserNode/getFloatFrequencyData()}}) is - reset. That is the previous block, \(\hat{X}_{-1}[k]\), - used for smoothing over time - is set to 0 for all \(k\). - - Note that increasing {{AnalyserNode/fftSize}} does mean that the - current time-domain data must be expanded to include past - frames that it previously did not. This means that the - {{AnalyserNode}} effectively MUST keep around the last - 32768 sample-frames and the current time-domain - data is the most recent - {{AnalyserNode/fftSize}} sample-frames out of that. - - : frequencyBinCount - :: - Half the FFT size. - - : maxDecibels - :: - {{AnalyserNode/maxDecibels}} is the maximum power value in the scaling - range for the FFT analysis data for conversion to unsigned byte - values. The default value is -30. If - the value of this attribute is set to a value less than or equal - to {{AnalyserNode/minDecibels}}, an - {{IndexSizeError}} exception MUST be thrown. - - : minDecibels - :: - {{AnalyserNode/minDecibels}} is the minimum power value in the scaling - range for the FFT analysis data for conversion to unsigned byte - values. The default value is -100. If - the value of this attribute is set to a value more than or equal - to {{AnalyserNode/maxDecibels}}, an - {{IndexSizeError}} exception MUST be thrown. - - : smoothingTimeConstant - :: - A value from 0 -> 1 where 0 represents no time averaging with - the last analysis frame. The default value is 0.8. 
If the value of this attribute is set to a value - less than 0 or more than 1, an {{IndexSizeError}} - exception MUST be thrown. + : fftSize + :: + The size of the FFT used for frequency-domain analysis (in sample-frames). + This MUST be a power of two in the + range 32 to 32768, otherwise an {{IndexSizeError}} + exception MUST be thrown. The default value is 2048. + Note that large FFT sizes can be costly to compute. + + If the {{AnalyserNode/fftSize}} is changed to a different value, + then all state associated with smoothing of the frequency data + (for {{AnalyserNode/getByteFrequencyData()}} and + {{AnalyserNode/getFloatFrequencyData()}}) is + reset. That is the previous block, \(\hat{X}_{-1}[k]\), + used for smoothing over time + is set to 0 for all \(k\). + + Note that increasing {{AnalyserNode/fftSize}} does mean that the + current time-domain data must be expanded to include past + frames that it previously did not. This means that the + {{AnalyserNode}} effectively MUST keep around the last + 32768 sample-frames and the current time-domain + data is the most recent + {{AnalyserNode/fftSize}} sample-frames out of that. + + : frequencyBinCount + :: + Half the FFT size. + + : maxDecibels + :: + {{AnalyserNode/maxDecibels}} is the maximum power value in the scaling + range for the FFT analysis data for conversion to unsigned byte + values. The default value is -30. If + the value of this attribute is set to a value less than or equal + to {{AnalyserNode/minDecibels}}, an + {{IndexSizeError}} exception MUST be thrown. + + : minDecibels + :: + {{AnalyserNode/minDecibels}} is the minimum power value in the scaling + range for the FFT analysis data for conversion to unsigned byte + values. The default value is -100. If + the value of this attribute is set to a value more than or equal + to {{AnalyserNode/maxDecibels}}, an + {{IndexSizeError}} exception MUST be thrown. 
+ + : smoothingTimeConstant + :: + A value from 0 -> 1 where 0 represents no time averaging with + the last analysis frame. The default value is 0.8. If the value of this attribute is set to a value + less than 0 or more than 1, an {{IndexSizeError}} + exception MUST be thrown.

Methods

- : getByteFrequencyData(array) - :: - [=ArrayBufferView/Write=] the [=current frequency data=] into |array|. If - |array|'s [=BufferSource/byte length=] is less than {{frequencyBinCount}}, - the excess elements will be dropped. If |array|'s - [=BufferSource/byte length=] is greater than the {{frequencyBinCount}}, the - excess elements will be ignored. The most recent {{AnalyserNode/fftSize}} - frames are used in computing the frequency data. - - If another call to {{AnalyserNode/getByteFrequencyData()}} or - {{AnalyserNode/getFloatFrequencyData()}} occurs within the same - render quantum as a previous call, the current - frequency data is not updated with the same data. Instead, - the previously computed data is returned. - - The values stored in the unsigned byte array are computed in - the following way. Let \(Y[k]\) be the current frequency - data as described in FFT windowing and - smoothing. Then the byte value, \(b[k]\), is - -
-        $$
-            b[k] = \left\lfloor
-                    \frac{255}{\mbox{dB}_{max} - \mbox{dB}_{min}}
-                    \left(Y[k] - \mbox{dB}_{min}\right)
-                \right\rfloor
-        $$
-        
- - where \(\mbox{dB}_{min}\) is {{AnalyserNode/minDecibels}} - and \(\mbox{dB}_{max}\) is {{AnalyserNode/maxDecibels}}. If - \(b[k]\) lies outside the range of 0 to 255, \(b[k]\) is - clipped to lie in that range. - -
-            array: This parameter is where the frequency-domain analysis data will be copied.
-        
- -
- Return type: {{undefined}} -
- - : getByteTimeDomainData(array) - :: - [=ArrayBufferView/Write=] the [=current time-domain data=] (waveform data) - into |array|. If |array|'s [=BufferSource/byte length=] is less than - {{AnalyserNode/fftSize}}, the excess elements will be dropped. If |array|'s - [=BufferSource/byte length=] is greater than the {{AnalyserNode/fftSize}}, - the excess elements will be ignored. The most recent - {{AnalyserNode/fftSize}} frames are used in computing the byte data. - - The values stored in the unsigned byte array are computed in - the following way. Let \(x[k]\) be the time-domain data. Then - the byte value, \(b[k]\), is - -
-        $$
-            b[k] = \left\lfloor 128(1 + x[k]) \right\rfloor.
-        $$
-        
- - If \(b[k]\) lies outside the range 0 to 255, \(b[k]\) is - clipped to lie in that range. - -
-            array: This parameter is where the time-domain sample data will be copied.
-        
- -
- Return type: {{undefined}} -
- - : getFloatFrequencyData(array) - :: - [=ArrayBufferView/Write=] the [=current frequency data=] into |array|. If - |array| has fewer elements than the {{frequencyBinCount}}, the excess - elements will be dropped. If |array| has more elements than the - {{frequencyBinCount}}, the excess elements will be ignored. The most recent - {{AnalyserNode/fftSize}} frames are used in computing the frequency data. - - If another call to {{AnalyserNode/getFloatFrequencyData()}} or - {{AnalyserNode/getByteFrequencyData()}} occurs within the same - render quantum as a previous call, the current - frequency data is not updated with the same data. Instead, - the previously computed data is returned. - - The frequency data are in dB units. - -
-            array: This parameter is where the frequency-domain analysis data will be copied.
-        
- -
- Return type: {{undefined}} -
- - : getFloatTimeDomainData(array) - :: - [=ArrayBufferView/Write=] the [=current time-domain data=] (waveform data) - into |array|. If |array| has fewer elements than the value of - {{AnalyserNode/fftSize}}, the excess elements will be dropped. If |array| - has more elements than the value of {{AnalyserNode/fftSize}}, the excess - elements will be ignored. The most recent {{AnalyserNode/fftSize}} frames - are written (after downmixing). - -
-            array: This parameter is where the time-domain sample data will be copied.
-        
- -
- Return type: {{undefined}} -
+ : getByteFrequencyData(array) + :: + Get a + + reference to the bytes held by the {{Uint8Array}} + passed as an argument. Copies the current frequency data to those + bytes. If the array has fewer elements than the {{frequencyBinCount}}, the + excess elements will be dropped. If the array has more elements than the + {{frequencyBinCount}}, the excess elements will be ignored. The most + recent {{AnalyserNode/fftSize}} frames are used in computing the frequency + data. + + If another call to {{AnalyserNode/getByteFrequencyData()}} or + {{AnalyserNode/getFloatFrequencyData()}} occurs within the same + render quantum as a previous call, the current + frequency data is not updated with the same data. Instead, + the previously computed data is returned. + + The values stored in the unsigned byte array are computed in + the following way. Let \(Y[k]\) be the current frequency + data as described in FFT windowing and + smoothing. Then the byte value, \(b[k]\), is + +
+		$$
+			b[k] = \left\lfloor
+					\frac{255}{\mbox{dB}_{max} - \mbox{dB}_{min}}
+					\left(Y[k] - \mbox{dB}_{min}\right)
+				\right\rfloor
+		$$
+		
+ + where \(\mbox{dB}_{min}\) is {{AnalyserNode/minDecibels}} + and \(\mbox{dB}_{max}\) is {{AnalyserNode/maxDecibels}}. If + \(b[k]\) lies outside the range of 0 to 255, \(b[k]\) is + clipped to lie in that range. + +
+			array: This parameter is where the frequency-domain analysis data will be copied.
+		
+ +
+ Return type: void +
+ + : getByteTimeDomainData(array) + :: + Get a + + reference to the bytes held by the {{Uint8Array}} + passed as an argument. Copies the current time-domain data + (waveform data) into those bytes. If the array has fewer elements than the + value of {{AnalyserNode/fftSize}}, the excess elements will be dropped. If + the array has more elements than {{AnalyserNode/fftSize}}, the excess + elements will be ignored. The most recent {{AnalyserNode/fftSize}} frames + are used in computing the byte data. + + The values stored in the unsigned byte array are computed in + the following way. Let \(x[k]\) be the time-domain data. Then + the byte value, \(b[k]\), is + +
+		$$
+			b[k] = \left\lfloor 128(1 + x[k]) \right\rfloor.
+		$$
+		
+ + If \(b[k]\) lies outside the range 0 to 255, \(b[k]\) is + clipped to lie in that range. + +
+			array: This parameter is where the time-domain sample data will be copied.
+		
+ +
+ Return type: void +
+ + : getFloatFrequencyData(array) + :: + Get a + + reference to the bytes held by the + {{Float32Array}} passed as an argument. Copies + the current frequency data into those bytes. If the array has + fewer elements than the {{frequencyBinCount}}, the excess elements will be + dropped. If the array has more elements than the {{frequencyBinCount}}, + the excess elements will be ignored. The most recent + {{AnalyserNode/fftSize}} frames are used in computing the frequency data. + + If another call to {{AnalyserNode/getFloatFrequencyData()}} or + {{AnalyserNode/getByteFrequencyData()}} occurs within the same + render quantum as a previous call, the current + frequency data is not updated with the same data. Instead, + the previously computed data is returned. + + The frequency data are in dB units. + +
+			array: This parameter is where the frequency-domain analysis data will be copied.
+		
+ +
+ Return type: void +
+ + : getFloatTimeDomainData(array) + :: + Get a + + reference to the bytes held by the + {{Float32Array}} passed as an argument. Copies the + current time-domain data (waveform data) into those bytes. If the + array has fewer elements than the value of {{AnalyserNode/fftSize}}, the + excess elements will be dropped. If the array has more elements than + {{AnalyserNode/fftSize}}, the excess elements will be ignored. The most + recent {{AnalyserNode/fftSize}} frames are written (after downmixing). + +
+			array: This parameter is where the time-domain sample data will be copied.
+		
+ +
+ Return type: void +
-

+
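The dB-to-byte conversion used by getByteFrequencyData() above can be sketched as follows. This is a non-normative model of the formula; the function name is illustrative:

```javascript
// Non-normative sketch: scale a frequency value Y[k] (in dB) into the
// byte range [0, 255] between minDecibels and maxDecibels, clipping
// values that fall outside that range.
function toByte(ydB, minDecibels, maxDecibels) {
  const b = Math.floor((255 / (maxDecibels - minDecibels)) * (ydB - minDecibels));
  return Math.min(255, Math.max(0, b));
}
```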

{{AnalyserOptions}}

This specifies the options to be used when constructing an @@ -5073,10 +4427,10 @@ node.
 dictionary AnalyserOptions : AudioNodeOptions {
-    unsigned long fftSize = 2048;
-    double maxDecibels = -30;
-    double minDecibels = -100;
-    double smoothingTimeConstant = 0.8;
+	unsigned long fftSize = 2048;
+	double maxDecibels = -30;
+	double minDecibels = -100;
+	double smoothingTimeConstant = 0.8;
 };
 
@@ -5084,30 +4438,30 @@ dictionary AnalyserOptions : AudioNodeOptions { Dictionary {{AnalyserOptions}} Members
- : fftSize - :: The desired initial size of the FFT for frequency-domain analysis. + : fftSize + :: The desired initial size of the FFT for frequency-domain analysis. - : maxDecibels - :: The desired initial maximum power in dB for FFT analysis. + : maxDecibels + :: The desired initial maximum power in dB for FFT analysis. - : minDecibels - :: The desired initial minimum power in dB for FFT analysis. + : minDecibels + :: The desired initial minimum power in dB for FFT analysis. - : smoothingTimeConstant - :: The desired initial smoothing constant for the FFT analysis. + : smoothingTimeConstant + :: The desired initial smoothing constant for the FFT analysis.

Time-Domain Down-Mixing

- When the current time-domain data are computed, the - input signal must be down-mixed to mono as if - {{AudioNode/channelCount}} is 1, - {{AudioNode/channelCountMode}} is - "{{ChannelCountMode/max}}" and {{AudioNode/channelInterpretation}} is "{{ChannelInterpretation/speakers}}". This is independent of the - settings for the {{AnalyserNode}} itself. The most recent - {{AnalyserNode/fftSize}} frames are used for the - down-mixing operation. + When the current time-domain data are computed, the + input signal must be down-mixed to mono as if + {{AudioNode/channelCount}} is 1, + {{AudioNode/channelCountMode}} is + "{{ChannelCountMode/max}}" and {{AudioNode/channelInterpretation}} is "{{ChannelInterpretation/speakers}}". This is independent of the + settings for the {{AnalyserNode}} itself. The most recent + {{AnalyserNode/fftSize}} frames are used for the + down-mixing operation.

FFT Windowing and Smoothing over Time

@@ -5117,112 +4471,109 @@ data are computed, the following operations are to be performed:
- 1. Compute the current time-domain data. + 1. Compute the current time-domain data. - 2. Apply a Blackman window to the time domain input data. + 2. Apply a Blackman window to the time domain input data. - 3. Apply a Fourier transform to the - windowed time domain input data to get real and imaginary - frequency data. + 3. Apply a Fourier transform to the + windowed time domain input data to get real and imaginary + frequency data. - 4. Smooth over time the frequency domain data. + 4. Smooth over time the frequency domain data. - 5. Convert to dB. + 5. Convert to dB.
In the following, let \(N\) be the value of the {{AnalyserNode/fftSize}} attribute of this {{AnalyserNode}}.
- Applying a Blackman window consists - in the following operation on the input time domain data. Let - \(x[n]\) for \(n = 0, \ldots, N - 1\) be the time domain data. The - Blackman window is defined by - -
-    $$
-    \begin{align*}
-        \alpha &= \mbox{0.16} \\ a_0 &= \frac{1-\alpha}{2} \\
-        a_1 &= \frac{1}{2} \\
-        a_2 &= \frac{\alpha}{2} \\
-        w[n] &= a_0 - a_1 \cos\frac{2\pi n}{N} + a_2 \cos\frac{4\pi n}{N}, \mbox{ for } n = 0, \ldots, N - 1
-    \end{align*}
-    $$
-    
- - The windowed signal \(\hat{x}[n]\) is - -
-    $$
-        \hat{x}[n] = x[n] w[n], \mbox{ for } n = 0, \ldots, N - 1
-    $$
-    
+ Applying a Blackman window consists + in the following operation on the input time domain data. Let + \(x[n]\) for \(n = 0, \ldots, N - 1\) be the time domain data. The + Blackman window is defined by + +
+	$$
+	\begin{align*}
+		\alpha &= \mbox{0.16} \\ a_0 &= \frac{1-\alpha}{2} \\
+		a_1 &= \frac{1}{2} \\
+		a_2 &= \frac{\alpha}{2} \\
+		w[n] &= a_0 - a_1 \cos\frac{2\pi n}{N} + a_2 \cos\frac{4\pi n}{N}, \mbox{ for } n = 0, \ldots, N - 1
+	\end{align*}
+	$$
+	
+ + The windowed signal \(\hat{x}[n]\) is + +
+	$$
+		\hat{x}[n] = x[n] w[n], \mbox{ for } n = 0, \ldots, N - 1
+	$$
+	
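The Blackman window defined above can be computed directly. A minimal sketch (function names are illustrative, and a real analyser would precompute the window once per {{AnalyserNode/fftSize}} change):

```js
// Periodic Blackman window as defined above, with alpha = 0.16:
// w[n] = a0 - a1*cos(2*pi*n/N) + a2*cos(4*pi*n/N), n = 0..N-1.
function blackmanWindow(N) {
  const alpha = 0.16;
  const a0 = (1 - alpha) / 2; // 0.42
  const a1 = 1 / 2;           // 0.5
  const a2 = alpha / 2;       // 0.08
  const w = new Float64Array(N);
  for (let n = 0; n < N; n++) {
    w[n] = a0 - a1 * Math.cos((2 * Math.PI * n) / N)
              + a2 * Math.cos((4 * Math.PI * n) / N);
  }
  return w;
}

// The windowed signal: xhat[n] = x[n] * w[n].
function applyWindow(x, w) {
  return x.map((v, n) => v * w[n]);
}
```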
- Applying a Fourier transform - consists of computing the Fourier transform in the following way. - Let \(X[k]\) be the complex frequency domain data and - \(\hat{x}[n]\) be the windowed time domain data computed above. - Then - -
-    $$
-        X[k] = \frac{1}{N} \sum_{n = 0}^{N - 1} \hat{x}[n]\, W^{-kn}_{N}
-    $$
-    
- - for \(k = 0, \dots, N/2-1\) where \(W_N = e^{2\pi i/N}\). + Applying a Fourier transform + consists of computing the Fourier transform in the following way. + Let \(X[k]\) be the complex frequency domain data and + \(\hat{x}[n]\) be the windowed time domain data computed above. + Then + +
+	$$
+		X[k] = \frac{1}{N} \sum_{n = 0}^{N - 1} \hat{x}[n]\, W^{-kn}_{N}
+	$$
+	
+ + for \(k = 0, \dots, N/2-1\) where \(W_N = e^{2\pi i/N}\).
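The transform above, a DFT scaled by \(1/N\) keeping bins \(0\) through \(N/2-1\), can be sketched naively as follows. Real implementations use an FFT; this direct form only illustrates the definition:

```js
// Naive DFT per the definition above:
// X[k] = (1/N) * sum_n xhat[n] * exp(-2*pi*i*k*n/N), for k = 0..N/2-1.
// Returns the real and imaginary parts of each retained bin.
function dft(xhat) {
  const N = xhat.length;
  const re = new Float64Array(N / 2);
  const im = new Float64Array(N / 2);
  for (let k = 0; k < N / 2; k++) {
    for (let n = 0; n < N; n++) {
      const phi = (-2 * Math.PI * k * n) / N;
      re[k] += (xhat[n] * Math.cos(phi)) / N;
      im[k] += (xhat[n] * Math.sin(phi)) / N;
    }
  }
  return { re, im };
}
```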
- Smoothing over time frequency - data consists in the following operation: - - * Let \(\hat{X}_{-1}[k]\) be the result of this operation on the - previous block. The previous block is defined as - being the buffer computed by the previous smoothing over time operation, or an - array of \(N\) zeros if this is the first time we are smoothing over time. + Smoothing over time frequency + data consists in the following operation: - * Let \(\tau\) be the value of the {{AnalyserNode/smoothingTimeConstant}} attribute for - this {{AnalyserNode}}. + * Let \(\hat{X}_{-1}[k]\) be the result of this operation on the + previous block. The previous block is defined as + being the buffer computed by the previous smoothing over time operation, or an + array of \(N\) zeros if this is the first time we are smoothing over time. - * Let \(X[k]\) be the result of applying a Fourier transform of the current block. + * Let \(\tau\) be the value of the {{AnalyserNode/smoothingTimeConstant}} attribute for + this {{AnalyserNode}}. - Then the smoothed value, \(\hat{X}[k]\), is computed by + * Let \(X[k]\) be the result of applying a Fourier transform of the current block. -
-    $$
-        \hat{X}[k] = \tau\, \hat{X}_{-1}[k] + (1 - \tau)\, \left|X[k]\right|
-    $$
-    
+ Then the smoothed value, \(\hat{X}[k]\), is computed by - * If \(\hat{X}[k]\) is NaN, positive infinity or negative infinity, set \(\hat{X}[k]\) = 0. -
+
+	$$
+		\hat{X}[k] = \tau\, \hat{X}_{-1}[k] + (1 - \tau)\, \left|X[k]\right|
+	$$
+	
- for \(k = 0, \ldots, N - 1\). + for \(k = 0, \ldots, N - 1\).
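The smoothing step above can be sketched as a pure function over the previous smoothed block and the current complex bins; names are illustrative:

```js
// One smoothing-over-time step, per the formula above:
// Xs[k] = tau * prev[k] + (1 - tau) * |X[k]|,
// with NaN or infinite results replaced by 0.
function smooth(prev, re, im, tau) {
  return prev.map((p, k) => {
    const mag = Math.hypot(re[k], im[k]); // |X[k]|
    const v = tau * p + (1 - tau) * mag;
    return Number.isFinite(v) ? v : 0;
  });
}
```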
- Conversion to dB consists of the - following operation, where \(\hat{X}[k]\) is computed in smoothing over time: - -
-    $$
-        Y[k] = 20\log_{10}\hat{X}[k]
-    $$
-    
- - for \(k = 0, \ldots, N-1\). - - This array, \(Y[k]\), is copied to the output array for - {{AnalyserNode/getFloatFrequencyData()}}. For - {{AnalyserNode/getByteFrequencyData()}}, the \(Y[k]\) is clipped to lie - between {{AnalyserNode/minDecibels}} and - {{AnalyserNode/maxDecibels}} and then scaled to fit in an - unsigned byte such that {{AnalyserNode/minDecibels}} is - represented by the value 0 and {{AnalyserNode/maxDecibels}} is - represented by the value 255. + Conversion to dB consists of the + following operation, where \(\hat{X}[k]\) is computed in smoothing over time: + +
+	$$
+		Y[k] = 20\log_{10}\hat{X}[k]
+	$$
+	
+ + for \(k = 0, \ldots, N-1\). + + This array, \(Y[k]\), is copied to the output array for + {{AnalyserNode/getFloatFrequencyData()}}. For + {{AnalyserNode/getByteFrequencyData()}}, the \(Y[k]\) is clipped to lie + between {{AnalyserNode/minDecibels}} and + {{AnalyserNode/maxDecibels}} and then scaled to fit in an + unsigned byte such that {{AnalyserNode/minDecibels}} is + represented by the value 0 and {{AnalyserNode/maxDecibels}} is + represented by the value 255.
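The final conversion, including the byte scaling used by {{AnalyserNode/getByteFrequencyData()}}, can be sketched as below. The mapping endpoints follow the text above (minDecibels to 0, maxDecibels to 255); the use of `Math.floor` for intermediate values is an assumption about rounding:

```js
// Convert smoothed magnitudes to dB: Y[k] = 20 * log10(Xs[k]).
function toDecibels(xs) {
  return xs.map((v) => 20 * Math.log10(v));
}

// Scale a dB value into an unsigned byte so that minDecibels maps to 0
// and maxDecibels maps to 255, clipping out-of-range values.
function toByte(y, minDecibels, maxDecibels) {
  const scaled = Math.floor((255 / (maxDecibels - minDecibels)) * (y - minDecibels));
  return Math.min(255, Math.max(0, scaled));
}
```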
@@ -5260,12 +4611,12 @@ descriptions.
 path: audionode.include
 macros:
-    noi: 0
-    noo: 1
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: No
+	noi: 0
+	noo: 1
+	cc: 2
+	cc-mode: max
+	cc-interp: speakers
+	tail-time: No
 
The number of channels of the output equals the number of channels of the @@ -5277,9 +4628,9 @@ In addition, if the buffer has more than one channel, then the {{AudioBufferSourceNode}} output must change to a single channel of silence at the beginning of a render quantum after the time at which any one of the following conditions holds: - * the end of the {{AudioBufferSourceNode/buffer}} has been reached; - * the {{AudioBufferSourceNode/start(when, offset, duration)/duration}} has been reached; - * the {{AudioScheduledSourceNode/stop(when)/when|stop}} time has been reached. + * the end of the {{AudioBufferSourceNode/buffer}} has been reached; + * the {{AudioBufferSourceNode/start(when, offset, duration)/duration}} has been reached; + * the {{AudioScheduledSourceNode/stop(when)/when|stop}} time has been reached. A playhead position for an {{AudioBufferSourceNode}} is defined as any quantity representing a time offset in seconds, @@ -5308,192 +4659,180 @@ slot [[buffer set]], initially
 [Exposed=Window]
 interface AudioBufferSourceNode : AudioScheduledSourceNode {
-    constructor (BaseAudioContext context,
-                 optional AudioBufferSourceOptions options = {});
-    attribute AudioBuffer? buffer;
-    readonly attribute AudioParam playbackRate;
-    readonly attribute AudioParam detune;
-    attribute boolean loop;
-    attribute double loopStart;
-    attribute double loopEnd;
-    undefined start (optional double when = 0,
-                     optional double offset,
-                     optional double duration);
+	constructor (BaseAudioContext context,
+	             optional AudioBufferSourceOptions options = {});
+	attribute AudioBuffer? buffer;
+	readonly attribute AudioParam playbackRate;
+	readonly attribute AudioParam detune;
+	attribute boolean loop;
+	attribute double loopStart;
+	attribute double loopEnd;
+	undefined start (optional double when = 0,
+	                 optional double offset,
+	                 optional double duration);
 };
 

Constructors

-
- : AudioBufferSourceNode(context, options) - :: +
+ : AudioBufferSourceNode(context, options) + :: -
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{AudioBufferSourceNode}} will be associated with.
-            options: Optional initial parameter value for this {{AudioBufferSourceNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{AudioBufferSourceNode}} will be associated with.
+			options: Optional initial parameter value for this {{AudioBufferSourceNode}}.
+		

Attributes

- : buffer - :: - Represents the audio asset to be played. - -
- To set the {{AudioBufferSourceNode/buffer}} attribute, execute these steps: - - 1. Let new buffer be the {{AudioBuffer}} or - null value to be assigned to {{AudioBufferSourceNode/buffer}}. - - 2. If new buffer is not null and - {{AudioBufferSourceNode/[[buffer set]]}} is true, throw an - {{InvalidStateError}} and abort these steps. - - 3. If new buffer is not null, set - {{AudioBufferSourceNode/[[buffer set]]}} to true. - - 4. Assign new buffer to the {{AudioBufferSourceNode/buffer}} - attribute. - - 5. If start() has previously been called on this - node, perform the operation acquire the content on - {{AudioBufferSourceNode/buffer}}. -
- - : detune - :: - An additional parameter, in cents, to modulate the speed at - which is rendered the audio stream. This parameter is a - compound parameter with {{AudioBufferSourceNode/playbackRate}} to form a - computedPlaybackRate. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/k-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : loop - :: - Indicates if the region of audio data designated by - {{AudioBufferSourceNode/loopStart}} and {{AudioBufferSourceNode/loopEnd}} should be played continuously - in a loop. The default value is false. - - : loopEnd - :: - An optional playhead position where looping should end if - the {{AudioBufferSourceNode/loop}} attribute is true. Its value is exclusive of the - content of the loop. Its default value is 0, and it - may usefully be set to any value between 0 and the duration of - the buffer. If {{AudioBufferSourceNode/loopEnd}} is less than or equal to 0, or if - {{AudioBufferSourceNode/loopEnd}} is greater than the duration of the buffer, - looping will end at the end of the buffer. - - : loopStart - :: - An optional playhead position where looping should begin - if the {{AudioBufferSourceNode/loop}} attribute is true. Its default - value is 0, and it may usefully be set to any value - between 0 and the duration of the buffer. If {{AudioBufferSourceNode/loopStart}} is - less than 0, looping will begin at 0. If {{AudioBufferSourceNode/loopStart}} is - greater than the duration of the buffer, looping will begin at - the end of the buffer. - - : playbackRate - :: - The speed at which to render the audio stream. This is a - compound parameter with {{AudioBufferSourceNode/detune}} to form a - computedPlaybackRate. - -
-        path: audioparam.include
-        macros:
-            default: 1
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/k-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
+ : buffer + :: + Represents the audio asset to be played. + +
+ To set the {{AudioBufferSourceNode/buffer}} attribute, execute these steps: + + 1. Let new buffer be the {{AudioBuffer}} or + null value to be assigned to {{AudioBufferSourceNode/buffer}}. + + 2. If new buffer is not null and + {{AudioBufferSourceNode/[[buffer set]]}} is true, throw an + {{InvalidStateError}} and abort these steps. + + 3. If new buffer is not null, set + {{AudioBufferSourceNode/[[buffer set]]}} to true. + + 4. Assign new buffer to the {{AudioBufferSourceNode/buffer}} + attribute. + + 5. If start() has previously been called on this + node, perform the operation acquire the content on + {{AudioBufferSourceNode/buffer}}. +
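The one-shot behavior described by the steps above can be sketched with a plain accessor; `BufferSlot` is an illustrative name, and a plain `Error` stands in for the platform's {{InvalidStateError}}:

```js
// Sketch of the buffer setter steps: once a non-null buffer has been
// assigned (the [[buffer set]] slot is true), any later non-null
// assignment throws. Assigning null remains allowed.
class BufferSlot {
  #bufferSet = false; // the [[buffer set]] internal slot
  #buffer = null;
  set buffer(newBuffer) {
    if (newBuffer !== null && this.#bufferSet) {
      const err = new Error("buffer was already set");
      err.name = "InvalidStateError";
      throw err;
    }
    if (newBuffer !== null) this.#bufferSet = true;
    this.#buffer = newBuffer;
  }
  get buffer() { return this.#buffer; }
}
```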
+
+	: detune
+	::
+		An additional parameter, in cents, modulating the speed at
+		which the audio stream is rendered. This parameter is a
+		compound parameter with {{AudioBufferSourceNode/playbackRate}} to form a
+		computedPlaybackRate.
+
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/k-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : loop + :: + Indicates if the region of audio data designated by + {{AudioBufferSourceNode/loopStart}} and {{AudioBufferSourceNode/loopEnd}} should be played continuously + in a loop. The default value is false. + + : loopEnd + :: + An optional playhead position where looping should end if + the {{AudioBufferSourceNode/loop}} attribute is true. Its value is exclusive of the + content of the loop. Its default value is 0, and it + may usefully be set to any value between 0 and the duration of + the buffer. If {{AudioBufferSourceNode/loopEnd}} is less than or equal to 0, or if + {{AudioBufferSourceNode/loopEnd}} is greater than the duration of the buffer, + looping will end at the end of the buffer. + + : loopStart + :: + An optional playhead position where looping should begin + if the {{AudioBufferSourceNode/loop}} attribute is true. Its default + value is 0, and it may usefully be set to any value + between 0 and the duration of the buffer. If {{AudioBufferSourceNode/loopStart}} is + less than 0, looping will begin at 0. If {{AudioBufferSourceNode/loopStart}} is + greater than the duration of the buffer, looping will begin at + the end of the buffer. + + : playbackRate + :: + The speed at which to render the audio stream. This is a + compound parameter with {{AudioBufferSourceNode/detune}} to form a + computedPlaybackRate. + +
+		path: audioparam.include
+		macros:
+			default: 1
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/k-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
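The compound relationship between {{AudioBufferSourceNode/playbackRate}} and {{AudioBufferSourceNode/detune}} reduces to one expression, as used by the playback algorithm later in this section:

```js
// computedPlaybackRate combines the two k-rate parameters:
// the playback rate is scaled by the detune value expressed in cents,
// where 1200 cents is one octave (a factor of 2 in rate).
function computedPlaybackRate(playbackRate, detune) {
  return playbackRate * Math.pow(2, detune / 1200);
}
```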

Methods

- : start(when, offset, duration) - :: - Schedules a sound to playback at an exact time. - -
- When this method is called, execute these steps: - - 1. If this {{AudioBufferSourceNode}} internal - slot {{AudioScheduledSourceNode/[[source started]]}} is `true`, an - {{InvalidStateError}} exception MUST be thrown. - - 1. Check for any errors that must be thrown due to parameter - constraints described below. If any - exception is thrown during this step, - abort those steps. - - 1. Set the internal slot {{AudioScheduledSourceNode/[[source started]]}} on - this {{AudioBufferSourceNode}} to true. - - 1. Queue a control message to start the - {{AudioBufferSourceNode}}, including the parameter values - in the message. - - 1. Acquire the contents of the - {{AudioBufferSourceNode/buffer}} if the - {{AudioBufferSourceNode/buffer}} has been set. - 1. Send a control message to the associated {{AudioContext}} to - start running its rendering thread only when - all the following conditions are met: - 1. The context's {{[[control thread state]]}} is - {{AudioContextState/suspended}}. - 1. The context is allowed to start. - 1. {{[[suspended by user]]}} flag is false. - - NOTE: This can allow {{AudioBufferSourceNode/start()}} to start - an {{AudioContext}} that is currently allowed to start, - but has previously been prevented from starting. -
- -
- Running a control message to start the {{AudioBufferSourceNode}} - means invoking the handleStart() function in the - [[#playback-AudioBufferSourceNode|playback algorithm]] which follows. -
- -
-            when: The when parameter describes at what time (in seconds) the sound should start playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. If 0 is passed in for this value or if the value is less than currentTime, then the sound will start playing immediately. A {{RangeError}} exception MUST be thrown if when is negative.
-            offset: The offset parameter supplies a playhead position where playback will begin. If 0 is passed in for this value, then playback will start from the beginning of the buffer. A {{RangeError}} exception MUST be thrown if offset is negative. If offset is greater than {{AudioBufferSourceNode/loopEnd}}, {{AudioBufferSourceNode/playbackRate}} is positive or zero, and {{AudioBufferSourceNode/loop}} is true, playback will begin at {{AudioBufferSourceNode/loopEnd}}.  If offset is greater than {{AudioBufferSourceNode/loopStart}}, {{AudioBufferSourceNode/playbackRate}} is negative, and {{AudioBufferSourceNode/loop}} is true, playback will begin at {{AudioBufferSourceNode/loopStart}}. offset is silently clamped to [0, duration], when startTime is reached, where duration is the value of the duration attribute of the {{AudioBuffer}} set to the {{AudioBufferSourceNode/buffer}} attribute of this AudioBufferSourceNode.
-            duration: The {{AudioBufferSourceNode/start(when, offset, duration)/duration}} parameter describes the duration of sound to be played, expressed as seconds of total buffer content to be output, including any whole or partial loop iterations. The units of {{AudioBufferSourceNode/start(when, offset, duration)/duration}} are independent of the effects of {{AudioBufferSourceNode/playbackRate}}. For example, a {{AudioBufferSourceNode/start(when, offset, duration)/duration}} of 5 seconds with a playback rate of 0.5 will output 5 seconds of buffer content at half speed, producing 10 seconds of audible output. A {{RangeError}} exception MUST be thrown if duration is negative.
-        
- -
- Return type: {{undefined}} -
+ : start(when, offset, duration) + :: + Schedules a sound to playback at an exact time. + +
+
+		When this method is called, execute these steps:
+
+		1. If stop has been called on this node, or if an
+			earlier call to start has already occurred, an
+			{{InvalidStateError}} exception MUST be thrown.
+
+		2. Check for any errors that must be thrown due to parameter
+			constraints described below.
+
+		3. Queue a control message to start the
+			{{AudioBufferSourceNode}}, including the parameter values
+			in the message.
+
+		4. Send a control message to the associated {{AudioContext}} to
+			start running its rendering thread only when all of
+			the following conditions are met:
+			1. The context's {{[[control thread state]]}} is
+				{{AudioContextState/suspended}}.
+			2. The context is allowed to start.
+			3. The {{[[suspended by user]]}} flag is false.
+
+ +
+ Running a control message to start the {{AudioBufferSourceNode}} + means invoking the handleStart() function in the + [[#playback-AudioBufferSourceNode|playback algorithm]] which follows. +
+ +
+			when: The when parameter describes at what time (in seconds) the sound should start playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. If 0 is passed in for this value or if the value is less than currentTime, then the sound will start playing immediately. A {{RangeError}} exception MUST be thrown if when is negative.
+			offset: The offset parameter supplies a playhead position where playback will begin. If 0 is passed in for this value, then playback will start from the beginning of the buffer. A {{RangeError}} exception MUST be thrown if offset is negative. If offset is greater than {{AudioBufferSourceNode/loopEnd}}, {{AudioBufferSourceNode/playbackRate}} is positive or zero, and {{AudioBufferSourceNode/loop}} is true, playback will begin at {{AudioBufferSourceNode/loopEnd}}.  If offset is greater than {{AudioBufferSourceNode/loopStart}}, {{AudioBufferSourceNode/playbackRate}} is negative, and {{AudioBufferSourceNode/loop}} is true, playback will begin at {{AudioBufferSourceNode/loopStart}}. offset is silently clamped to [0, duration], when startTime is reached, where duration is the value of the duration attribute of the {{AudioBuffer}} set to the {{AudioBufferSourceNode/buffer}} attribute of this AudioBufferSourceNode.
+			duration: The {{AudioBufferSourceNode/start(when, offset, duration)/duration}} parameter describes the duration of sound to be played, expressed as seconds of total buffer content to be output, including any whole or partial loop iterations. The units of {{AudioBufferSourceNode/start(when, offset, duration)/duration}} are independent of the effects of {{AudioBufferSourceNode/playbackRate}}. For example, a {{AudioBufferSourceNode/start(when, offset, duration)/duration}} of 5 seconds with a playback rate of 0.5 will output 5 seconds of buffer content at half speed, producing 10 seconds of audible output. A {{RangeError}} exception MUST be thrown if duration is negative.
+		
+ +
+
+		Return type: {{undefined}}
+
-

+

{{AudioBufferSourceOptions}}

This specifies options for constructing a @@ -5503,12 +4842,12 @@ constructing the node.
 dictionary AudioBufferSourceOptions {
-    AudioBuffer? buffer;
-    float detune = 0;
-    boolean loop = false;
-    double loopEnd = 0;
-    double loopStart = 0;
-    float playbackRate = 1;
+	AudioBuffer? buffer;
+	float detune = 0;
+	boolean loop = false;
+	double loopEnd = 0;
+	double loopStart = 0;
+	float playbackRate = 1;
 };
 
@@ -5516,31 +4855,31 @@ dictionary AudioBufferSourceOptions { Dictionary {{AudioBufferSourceOptions}} Members
- : buffer - :: - The audio asset to be played. This is equivalent to assigning - {{AudioBufferSourceOptions/buffer}} to the {{AudioBufferSourceNode/buffer}} attribute of - the {{AudioBufferSourceNode}}. - - : detune - :: - The initial value for the {{AudioBufferSourceNode/detune}} AudioParam. - - : loop - :: - The initial value for the {{AudioBufferSourceNode/loop}} attribute. - - : loopEnd - :: - The initial value for the {{AudioBufferSourceNode/loopEnd}} attribute. - - : loopStart - :: - The initial value for the {{AudioBufferSourceNode/loopStart}} attribute. - - : playbackRate - :: - The initial value for the {{AudioBufferSourceNode/playbackRate}} AudioParam. + : buffer + :: + The audio asset to be played. This is equivalent to assigning + {{AudioBufferSourceOptions/buffer}} to the {{AudioBufferSourceNode/buffer}} attribute of + the {{AudioBufferSourceNode}}. + + : detune + :: + The initial value for the {{AudioBufferSourceNode/detune}} AudioParam. + + : loop + :: + The initial value for the {{AudioBufferSourceNode/loop}} attribute. + + : loopEnd + :: + The initial value for the {{AudioBufferSourceNode/loopEnd}} attribute. + + : loopStart + :: + The initial value for the {{AudioBufferSourceNode/loopStart}} attribute. + + : playbackRate + :: + The initial value for the {{AudioBufferSourceNode/playbackRate}} AudioParam.

@@ -5561,8 +4900,8 @@ looped playback will continue until one of the following occurs: * the scheduled stop time has been reached, * the duration has been exceeded, if - {{AudioBufferSourceNode/start()}} - was called with a duration value. + {{AudioBufferSourceNode/start()}} + was called with a duration value. The body of the loop is considered to occupy a region from {{AudioBufferSourceNode/loopStart}} up to, but @@ -5615,25 +4954,25 @@ the following factors working in combination: * A starting offset, which can be expressed with sub-sample precision. * Loop points, which can be expressed with sub-sample precision and can - vary dynamically during playback. + vary dynamically during playback. * Playback rate and detuning parameters, which combine to yield a - single computedPlaybackRate that can assume finite values - which may be positive or negative. + single computedPlaybackRate that can assume finite values + which may be positive or negative. The algorithm to be followed internally to generate output from an {{AudioBufferSourceNode}} conforms to the following principles: * Resampling of the buffer may be performed arbitrarily by the UA - at any desired point to increase the efficiency or quality of the - output. + at any desired point to increase the efficiency or quality of the + output. * Sub-sample start offsets or loop points may require additional - interpolation between sample frames. + interpolation between sample frames. * The playback of a looped buffer should behave identically to an - unlooped buffer containing consecutive occurrences of the looped - audio content, excluding any effects from interpolation. + unlooped buffer containing consecutive occurrences of the looped + audio content, excluding any effects from interpolation. 
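The playhead-wrapping arithmetic implied by the looping principles above can be isolated into a small helper; `wrapPlayhead` is an illustrative name mirroring the wrap step in the reference algorithm that follows:

```js
// Wrap a playhead position (in seconds) into the loop body
// [loopStart, loopEnd), matching the while-loop wrap step of the
// reference playback algorithm. Assumes loopStart < loopEnd.
function wrapPlayhead(bufferTime, loopStart, loopEnd) {
  const span = loopEnd - loopStart;
  while (bufferTime >= loopEnd) bufferTime -= span;
  while (bufferTime < loopStart) bufferTime += span;
  return bufferTime;
}
```

With a loop body of [1.0, 2.0) seconds, a playhead that has advanced to 2.5 s wraps to 1.5 s; one that has run backwards to 0.25 s wraps to 1.25 s.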
The description of the algorithm is as follows: @@ -5661,141 +5000,141 @@ let dt = 1 / context.sampleRate; // Handle invocation of start method call function handleStart(when, pos, dur) { - if (arguments.length >= 1) { - start = when; - } - offset = pos; - if (arguments.length >= 3) { - duration = dur; - } + if (arguments.length >= 1) { + start = when; + } + offset = pos; + if (arguments.length >= 3) { + duration = dur; + } } // Handle invocation of stop method call function handleStop(when) { - if (arguments.length >= 1) { - stop = when; - } else { - stop = context.currentTime; - } + if (arguments.length >= 1) { + stop = when; + } else { + stop = context.currentTime; + } } // Interpolate a multi-channel signal value for some sample frame. // Returns an array of signal values. function playbackSignal(position) { - /* - This function provides the playback signal function for buffer, which is a - function that maps from a playhead position to a set of output signal - values, one for each output channel. If |position| corresponds to the - location of an exact sample frame in the buffer, this function returns - that frame. Otherwise, its return value is determined by a UA-supplied - algorithm that interpolates sample frames in the neighborhood of - |position|. - - If |position| is greater than or equal to |loopEnd| and there is no subsequent - sample frame in buffer, then interpolation should be based on the sequence - of subsequent frames beginning at |loopStart|. - */ - ... + /* + This function provides the playback signal function for buffer, which is a + function that maps from a playhead position to a set of output signal + values, one for each output channel. If |position| corresponds to the + location of an exact sample frame in the buffer, this function returns + that frame. Otherwise, its return value is determined by a UA-supplied + algorithm that interpolates between sample frames in the neighborhood of + position. 
+ + If position is greater than or equal to loopEnd and there is no subsequent + sample frame in buffer, then interpolation should be based on the sequence + of subsequent frames beginning at loopStart. + */ + ... } // Generate a single render quantum of audio to be placed // in the channel arrays defined by output. Returns an array // of |numberOfFrames| sample frames to be output. function process(numberOfFrames) { - let currentTime = context.currentTime; // context time of next rendered frame - const output = []; // accumulates rendered sample frames - - // Combine the two k-rate parameters affecting playback rate - const computedPlaybackRate = playbackRate * Math.pow(2, detune / 1200); - - // Determine loop endpoints as applicable - let actualLoopStart, actualLoopEnd; - if (loop && buffer != null) { - if (loopStart >= 0 && loopEnd > 0 && loopStart < loopEnd) { - actualLoopStart = loopStart; - actualLoopEnd = Math.min(loopEnd, buffer.duration); - } else { - actualLoopStart = 0; - actualLoopEnd = buffer.duration; - } - } else { - // If the loop flag is false, remove any record of the loop having been entered - enteredLoop = false; - } - - // Handle null buffer case - if (buffer == null) { - stop = currentTime; // force zero output for all time - } - - // Render each sample frame in the quantum - for (let index = 0; index < numberOfFrames; index++) { - // Check that currentTime and bufferTimeElapsed are - // within allowable range for playback - if (currentTime < start || currentTime >= stop || bufferTimeElapsed >= duration) { - output.push(0); // this sample frame is silent - currentTime += dt; - continue; - } - - if (!started) { - // Take note that buffer has started playing and get initial - // playhead position. 
- if (loop && computedPlaybackRate >= 0 && offset >= actualLoopEnd) { - offset = actualLoopEnd; - } - if (computedPlaybackRate < 0 && loop && offset < actualLoopStart) { - offset = actualLoopStart; - } - bufferTime = offset; - started = true; - } - - // Handle loop-related calculations - if (loop) { - // Determine if looped portion has been entered for the first time - if (!enteredLoop) { - if (offset < actualLoopEnd && bufferTime >= actualLoopStart) { - // playback began before or within loop, and playhead is - // now past loop start - enteredLoop = true; - } - if (offset >= actualLoopEnd && bufferTime < actualLoopEnd) { - // playback began after loop, and playhead is now prior - // to the loop end - enteredLoop = true; - } - } - - // Wrap loop iterations as needed. Note that enteredLoop - // may become true inside the preceding conditional. - if (enteredLoop) { - while (bufferTime >= actualLoopEnd) { - bufferTime -= actualLoopEnd - actualLoopStart; - } - while (bufferTime < actualLoopStart) { - bufferTime += actualLoopEnd - actualLoopStart; - } - } - } - - if (bufferTime >= 0 && bufferTime < buffer.duration) { - output.push(playbackSignal(bufferTime)); - } else { - output.push(0); // past end of buffer, so output silent frame - } - - bufferTime += dt * computedPlaybackRate; - bufferTimeElapsed += dt * computedPlaybackRate; - currentTime += dt; - } // End of render quantum loop - - if (currentTime >= stop) { - // End playback state of this node. No further invocations of process() - // will occur. Schedule a change to set the number of output channels to 1. 
- } - - return output; + let currentTime = context.currentTime; // context time of next rendered frame + const output = []; // accumulates rendered sample frames + + // Combine the two k-rate parameters affecting playback rate + const computedPlaybackRate = playbackRate * Math.pow(2, detune / 1200); + + // Determine loop endpoints as applicable + let actualLoopStart, actualLoopEnd; + if (loop && buffer != null) { + if (loopStart >= 0 && loopEnd > 0 && loopStart < loopEnd) { + actualLoopStart = loopStart; + actualLoopEnd = Math.min(loopEnd, buffer.duration); + } else { + actualLoopStart = 0; + actualLoopEnd = buffer.duration; + } + } else { + // If the loop flag is false, remove any record of the loop having been entered + enteredLoop = false; + } + + // Handle null buffer case + if (buffer == null) { + stop = currentTime; // force zero output for all time + } + + // Render each sample frame in the quantum + for (let index = 0; index < numberOfFrames; index++) { + // Check that currentTime and bufferTimeElapsed are + // within allowable range for playback + if (currentTime < start || currentTime >= stop || bufferTimeElapsed >= duration) { + output.push(0); // this sample frame is silent + currentTime += dt; + continue; + } + + if (!started) { + // Take note that buffer has started playing and get initial + // playhead position. 
+ if (loop && computedPlaybackRate >= 0 && offset >= actualLoopEnd) { + offset = actualLoopEnd; + } + if (computedPlaybackRate < 0 && loop && offset < actualLoopStart) { + offset = actualLoopStart; + } + bufferTime = offset; + started = true; + } + + // Handle loop-related calculations + if (loop) { + // Determine if looped portion has been entered for the first time + if (!enteredLoop) { + if (offset < actualLoopEnd && bufferTime >= actualLoopStart) { + // playback began before or within loop, and playhead is + // now past loop start + enteredLoop = true; + } + if (offset >= actualLoopEnd && bufferTime < actualLoopEnd) { + // playback began after loop, and playhead is now prior + // to the loop end + enteredLoop = true; + } + } + + // Wrap loop iterations as needed. Note that enteredLoop + // may become true inside the preceding conditional. + if (enteredLoop) { + while (bufferTime >= actualLoopEnd) { + bufferTime -= actualLoopEnd - actualLoopStart; + } + while (bufferTime < actualLoopStart) { + bufferTime += actualLoopEnd - actualLoopStart; + } + } + } + + if (bufferTime >= 0 && bufferTime < buffer.duration) { + output.push(playbackSignal(bufferTime)); + } else { + output.push(0); // past end of buffer, so output silent frame + } + + bufferTime += dt * computedPlaybackRate; + bufferTimeElapsed += dt * computedPlaybackRate; + currentTime += dt; + } // End of render quantum loop + + if (currentTime >= stop) { + // End playback state of this node. No further invocations of process() + // will occur. Schedule a change to set the number of output channels to 1. + } + + return output; } @@ -5809,13 +5148,13 @@ apply: * context sample rate is 1000 Hz * {{AudioBuffer}} content is shown with the first sample frame - at the x origin. + at the x origin. * output signals are shown with the sample frame located at time - start at the x origin. + start at the x origin. * linear interpolation is depicted throughout, although a UA - could employ other interpolation techniques. 
+ could employ other interpolation techniques. * the duration values noted in the figures refer to the buffer, not arguments to {{AudioBufferSourceNode/start()}} @@ -5823,10 +5162,10 @@ This figure illustrates basic playback of a buffer, with a simple loop that ends after the last sample frame in the buffer:
- AudioBufferSourceNode basic playback -
- {{AudioBufferSourceNode}} basic playback -
+ AudioBufferSourceNode basic playback +
+ {{AudioBufferSourceNode}} basic playback +
This figure illustrates playbackRate interpolation, @@ -5836,10 +5175,10 @@ sample frame in the looped output, which is interpolated using the loop start point:
- AudioBufferSourceNode playbackRate interpolation -
- {{AudioBufferSourceNode}} playbackRate interpolation -
+ AudioBufferSourceNode playbackRate interpolation +
+ {{AudioBufferSourceNode}} playbackRate interpolation +
This figure illustrates sample rate interpolation, showing playback @@ -5850,10 +5189,10 @@ resulting output is the same as the preceding example, but for different reasons.
- AudioBufferSourceNode sample rate interpolation -
- {{AudioBufferSourceNode}} sample rate interpolation. -
+ AudioBufferSourceNode sample rate interpolation +
+ {{AudioBufferSourceNode}} sample rate interpolation. +
This figure illustrates subsample offset playback, in which the @@ -5861,10 +5200,10 @@ offset within the buffer begins at exactly half a sample frame. Consequently, every output frame is interpolated:
- AudioBufferSourceNode subsample offset playback -
- {{AudioBufferSourceNode}} subsample offset playback -
+ AudioBufferSourceNode subsample offset playback +
+ {{AudioBufferSourceNode}} subsample offset playback +
This figure illustrates subsample loop playback, showing how @@ -5873,10 +5212,10 @@ data points in the buffer that respect these offsets as if they were references to exact sample frames:
- AudioBufferSourceNode subsample loop playback -
- {{AudioBufferSourceNode}} subsample loop playback -
+ AudioBufferSourceNode subsample loop playback +
+ {{AudioBufferSourceNode}} subsample loop playback +
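The loop-wrapping step of the playback algorithm earlier in this section (the two `while` loops that fold `bufferTime` back into the region `[actualLoopStart, actualLoopEnd)`) can be sketched as a standalone helper. This is illustrative only; `wrapLoopTime` is a name of ours and not part of the API:

```javascript
// Fold a playhead position back into the loop region
// [actualLoopStart, actualLoopEnd), mirroring the while-loops
// of the playback algorithm. Illustrative helper, not part of the API.
function wrapLoopTime(bufferTime, actualLoopStart, actualLoopEnd) {
  const loopLength = actualLoopEnd - actualLoopStart;
  // Playhead ran past the loop end (forward playback): wrap backwards.
  while (bufferTime >= actualLoopEnd) {
    bufferTime -= loopLength;
  }
  // Playhead ran before the loop start (reverse playback): wrap forwards.
  while (bufferTime < actualLoopStart) {
    bufferTime += loopLength;
  }
  return bufferTime;
}
```

For example, with a loop spanning [1, 2) seconds, a playhead at 2.5 s wraps to 1.5 s, and a playhead at 0.5 s (reverse playback) also wraps to 1.5 s.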
@@ -5918,12 +5257,12 @@ For an {{AudioContext}}, the defaults are
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-mode: explicit
-    cc-interp: speakers
-    tail-time: No
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-mode: explicit
+	cc-interp: speakers
+	tail-time: No
 
The {{AudioNode/channelCount}} can be set to any @@ -5938,12 +5277,12 @@ For an {{OfflineAudioContext}}, the defaults are
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: numberOfChannels
-    cc-mode: explicit
-    cc-interp: speakers
-    tail-time: No
+	noi: 1
+	noo: 1
+	cc: numberOfChannels
+	cc-mode: explicit
+	cc-interp: speakers
+	tail-time: No
 
where numberOfChannels is the number of channels @@ -5956,7 +5295,7 @@ different value.
 [Exposed=Window]
 interface AudioDestinationNode : AudioNode {
-    readonly attribute unsigned long maxChannelCount;
+	readonly attribute unsigned long maxChannelCount;
 };
 
@@ -5964,14 +5303,14 @@ interface AudioDestinationNode : AudioNode { Attributes

- : maxChannelCount - :: - The maximum number of channels that the {{AudioNode/channelCount}} attribute can be set - to. An {{AudioDestinationNode}} representing the - audio hardware end-point (the normal case) can potentially output - more than 2 channels of audio if the audio hardware is - multi-channel. {{AudioDestinationNode/maxChannelCount}} is the maximum number - of channels that this hardware is capable of supporting. + : maxChannelCount + :: + The maximum number of channels that the {{AudioNode/channelCount}} attribute can be set + to. An {{AudioDestinationNode}} representing the + audio hardware end-point (the normal case) can potentially output + more than 2 channels of audio if the audio hardware is + multi-channel. {{AudioDestinationNode/maxChannelCount}} is the maximum number + of channels that this hardware is capable of supporting.
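Since {{AudioNode/channelCount}} cannot usefully exceed {{AudioDestinationNode/maxChannelCount}}, an application that wants as many hardware channels as possible can clamp its request. A minimal sketch under that assumption (the helper name `clampChannelCount` is ours, not part of the API):

```javascript
// Clamp a requested channel count to what the destination supports:
// at least 1 channel, at most maxChannelCount.
function clampChannelCount(requested, maxChannelCount) {
  return Math.max(1, Math.min(requested, maxChannelCount));
}

// Intended use (browser only, shown for context):
//   const ctx = new AudioContext();
//   ctx.destination.channelCount =
//       clampChannelCount(6, ctx.destination.maxChannelCount);
```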
@@ -5985,7 +5324,7 @@ Attributes ████████ ████ ██████ ██ ████████ ██ ██ ████████ ██ ██ --> -

+

The {{AudioListener}} Interface

This interface represents the position and orientation of the person @@ -6012,17 +5351,17 @@ interpreted, see the [[#Spatialization]] section.
 [Exposed=Window]
 interface AudioListener {
-    readonly attribute AudioParam positionX;
-    readonly attribute AudioParam positionY;
-    readonly attribute AudioParam positionZ;
-    readonly attribute AudioParam forwardX;
-    readonly attribute AudioParam forwardY;
-    readonly attribute AudioParam forwardZ;
-    readonly attribute AudioParam upX;
-    readonly attribute AudioParam upY;
-    readonly attribute AudioParam upZ;
-    undefined setPosition (float x, float y, float z);
-    undefined setOrientation (float x, float y, float z, float xUp, float yUp, float zUp);
+	readonly attribute AudioParam positionX;
+	readonly attribute AudioParam positionY;
+	readonly attribute AudioParam positionZ;
+	readonly attribute AudioParam forwardX;
+	readonly attribute AudioParam forwardY;
+	readonly attribute AudioParam forwardZ;
+	readonly attribute AudioParam upX;
+	readonly attribute AudioParam upY;
+	readonly attribute AudioParam upZ;
+	undefined setPosition (float x, float y, float z);
+	undefined setOrientation (float x, float y, float z, float xUp, float yUp, float zUp);
 };
 
@@ -6030,235 +5369,235 @@ interface AudioListener { Attributes
- : forwardX - :: - Sets the x coordinate component of the forward direction the - listener is pointing in 3D Cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : forwardY - :: - Sets the y coordinate component of the forward direction the - listener is pointing in 3D Cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : forwardZ - :: - Sets the z coordinate component of the forward direction the - listener is pointing in 3D Cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: -1
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : positionX - :: - Sets the x coordinate position of the audio listener in a 3D - Cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : positionY - :: - Sets the y coordinate position of the audio listener in a 3D - Cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : positionZ - :: - Sets the z coordinate position of the audio listener in a 3D - Cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : upX - :: - Sets the x coordinate component of the up direction the - listener is pointing in 3D Cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : upY - :: - Sets the y coordinate component of the up direction the - listener is pointing in 3D Cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 1
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : upZ - :: - Sets the z coordinate component of the up direction the - listener is pointing in 3D Cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
+ : forwardX + :: + Sets the x coordinate component of the forward direction the + listener is pointing in 3D Cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : forwardY + :: + Sets the y coordinate component of the forward direction the + listener is pointing in 3D Cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : forwardZ + :: + Sets the z coordinate component of the forward direction the + listener is pointing in 3D Cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: -1
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : positionX + :: + Sets the x coordinate position of the audio listener in a 3D + Cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : positionY + :: + Sets the y coordinate position of the audio listener in a 3D + Cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : positionZ + :: + Sets the z coordinate position of the audio listener in a 3D + Cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : upX + :: + Sets the x coordinate component of the up direction the + listener is pointing in 3D Cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : upY + :: + Sets the y coordinate component of the up direction the + listener is pointing in 3D Cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 1
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : upZ + :: + Sets the z coordinate component of the up direction the + listener is pointing in 3D Cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		

Methods

- : setOrientation(x, y, z, xUp, yUp, zUp) - :: - This method is DEPRECATED. It is equivalent to setting - {{forwardX}}.{{AudioParam/value}}, - {{forwardY}}.{{AudioParam/value}}, - {{forwardZ}}.{{AudioParam/value}}, - {{upX}}.{{AudioParam/value}}, - {{upY}}.{{AudioParam/value}}, and - {{upZ}}.{{AudioParam/value}} directly - with the given x, y, z, - xUp, yUp, and zUp - values, respectively. - - Consequently, if any of the {{forwardX}}, {{forwardY}}, - {{forwardZ}}, {{upX}}, {{upY}} and {{upZ}} - {{AudioParam}}s have an automation curve set using - {{AudioParam/setValueCurveAtTime()}} at the time - this method is called, a {{NotSupportedError}} MUST be - thrown. - - {{AudioListener/setOrientation()}} describes which direction the listener is pointing in the 3D - cartesian coordinate space. Both a [=forward=] vector and an - [=up=] vector are provided. In simple human terms, the - forward vector represents which direction the person's - nose is pointing. The up vector represents the direction - the top of a person's head is pointing. These two vectors are - expected to be linearly independent. For normative requirements - of how these values are to be interpreted, see the [[#Spatialization]]. - - The {{AudioListener/setOrientation()/x!!argument}}, {{AudioListener/setOrientation()/y!!argument}}, and {{AudioListener/setOrientation()/z!!argument}} parameters represent a forward - direction vector in 3D space, with the default value being - (0,0,-1). - - The {{AudioListener/setOrientation()/xUp!!argument}}, {{AudioListener/setOrientation()/yUp!!argument}}, and {{AudioListener/setOrientation()/zUp!!argument}} parameters represent an - up direction vector in 3D space, with the default value - being (0,1,0). - -
-            x: forward x direction fo the {{AudioListener}}
-            y: forward y direction fo the {{AudioListener}}
-            z: forward z direction fo the {{AudioListener}}
-            xUp: up x direction fo the {{AudioListener}}
-            yUp: up y direction fo the {{AudioListener}}
-            zUp: up z direction fo the {{AudioListener}}
-        
- -
- Return type: {{undefined}} -
- - : setPosition(x, y, z) - :: - This method is DEPRECATED. It is equivalent to setting - {{AudioListener/positionX}}.{{AudioParam/value}}, - {{AudioListener/positionY}}.{{AudioParam/value}}, and - {{AudioListener/positionZ}}.{{AudioParam/value}} - directly with the given x, y, and - z values, respectively. - - Consequently, any of the {{AudioListener/positionX}}, {{AudioListener/positionY}}, - and {{AudioListener/positionZ}} {{AudioParam}}s for this - {{AudioListener}} have an automation curve set using - {{AudioParam/setValueCurveAtTime()}} at the time - this method is called, a {{NotSupportedError}} MUST be - thrown. - - {{AudioListener/setPosition()}} sets the position of the listener in a 3D cartesian coordinate - space. {{PannerNode}} objects use this position - relative to individual audio sources for spatialization. - - The {{AudioListener/setPosition()/x!!argument}}, {{AudioListener/setPosition()/y!!argument}}, and {{AudioListener/setPosition()/z!!argument}} parameters represent the coordinates - in 3D space. - - The default value is (0,0,0). - -
-            x: x-coordinate of the position of the {{AudioListener}}
-            y: y-coordinate of the position of the {{AudioListener}}
-            z: z-coordinate of the position of the {{AudioListener}}
-        
+ : setOrientation(x, y, z, xUp, yUp, zUp) + :: + This method is DEPRECATED. It is equivalent to setting + {{forwardX}}.{{AudioParam/value}}, + {{forwardY}}.{{AudioParam/value}}, + {{forwardZ}}.{{AudioParam/value}}, + {{upX}}.{{AudioParam/value}}, + {{upY}}.{{AudioParam/value}}, and + {{upZ}}.{{AudioParam/value}} directly + with the given x, y, z, + xUp, yUp, and zUp + values, respectively. + + Consequently, if any of the {{forwardX}}, {{forwardY}}, + {{forwardZ}}, {{upX}}, {{upY}} and {{upZ}} + {{AudioParam}}s have an automation curve set using + {{AudioParam/setValueCurveAtTime()}} at the time + this method is called, a {{NotSupportedError}} MUST be + thrown. + + {{AudioListener/setOrientation()}} describes which direction the listener is pointing in the 3D + cartesian coordinate space. Both a [=forward=] vector and an + [=up=] vector are provided. In simple human terms, the + forward vector represents which direction the person's + nose is pointing. The up vector represents the direction + the top of a person's head is pointing. These two vectors are + expected to be linearly independent. For normative requirements + of how these values are to be interpreted, see the [[#Spatialization]]. + + The {{AudioListener/setOrientation()/x!!argument}}, {{AudioListener/setOrientation()/y!!argument}}, and {{AudioListener/setOrientation()/z!!argument}} parameters represent a forward + direction vector in 3D space, with the default value being + (0,0,-1). + + The {{AudioListener/setOrientation()/xUp!!argument}}, {{AudioListener/setOrientation()/yUp!!argument}}, and {{AudioListener/setOrientation()/zUp!!argument}} parameters represent an + up direction vector in 3D space, with the default value + being (0,1,0). + +
+			x: forward x direction of the {{AudioListener}}
+			y: forward y direction of the {{AudioListener}}
+			z: forward z direction of the {{AudioListener}}
+			xUp: up x direction of the {{AudioListener}}
+			yUp: up y direction of the {{AudioListener}}
+			zUp: up z direction of the {{AudioListener}}
+		
+ +
+ Return type: {{undefined}} +
+ + : setPosition(x, y, z) + :: + This method is DEPRECATED. It is equivalent to setting + {{AudioListener/positionX}}.{{AudioParam/value}}, + {{AudioListener/positionY}}.{{AudioParam/value}}, and + {{AudioListener/positionZ}}.{{AudioParam/value}} + directly with the given x, y, and + z values, respectively. + + Consequently, if any of the {{AudioListener/positionX}}, {{AudioListener/positionY}}, + and {{AudioListener/positionZ}} {{AudioParam}}s for this + {{AudioListener}} have an automation curve set using + {{AudioParam/setValueCurveAtTime()}} at the time + this method is called, a {{NotSupportedError}} MUST be + thrown. + + {{AudioListener/setPosition()}} sets the position of the listener in a 3D Cartesian coordinate + space. {{PannerNode}} objects use this position + relative to individual audio sources for spatialization. + + The {{AudioListener/setPosition()/x!!argument}}, {{AudioListener/setPosition()/y!!argument}}, and {{AudioListener/setPosition()/z!!argument}} parameters represent the coordinates + in 3D space. + + The default value is (0,0,0). +
+			x: x-coordinate of the position of the {{AudioListener}}
+			y: y-coordinate of the position of the {{AudioListener}}
+			z: z-coordinate of the position of the {{AudioListener}}
+		
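Because {{AudioListener/setOrientation()}} and {{AudioListener/setPosition()}} are deprecated, new code should write the individual {{AudioParam}}s directly, as described above. A minimal sketch of that replacement for the orientation case (the helper name `setListenerOrientation` is ours, not part of the API; it works on any object whose params expose a `value` property):

```javascript
// Equivalent of the deprecated setOrientation(): assign the six
// orientation AudioParams' .value fields directly.
// Illustrative helper, not part of the API.
function setListenerOrientation(listener, x, y, z, xUp, yUp, zUp) {
  listener.forwardX.value = x;
  listener.forwardY.value = y;
  listener.forwardZ.value = z;
  listener.upX.value = xUp;
  listener.upY.value = yUp;
  listener.upZ.value = zUp;
}
```

In a browser this would be called as `setListenerOrientation(ctx.listener, 1, 0, 0, 0, 1, 0)`; note that, unlike direct `.value` writes, automation methods such as {{AudioParam/setValueAtTime()}} on the individual params also allow sample-accurate, clickless changes.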

@@ -6280,7 +5619,7 @@ the graph have the {{AudioListener}} as input. ██ ██ ██ ███████ ██████ ████████ ██████ ██████ ████ ██ ██ ██████ --> -

+

The {{AudioProcessingEvent}} Interface - DEPRECATED

This is an {{Event}} object which is dispatched to @@ -6297,10 +5636,10 @@ synthesized data if there are no inputs) is then placed into the
 [Exposed=Window]
 interface AudioProcessingEvent : Event {
-    constructor (DOMString type, AudioProcessingEventInit eventInitDict);
-    readonly attribute double playbackTime;
-    readonly attribute AudioBuffer inputBuffer;
-    readonly attribute AudioBuffer outputBuffer;
+	constructor (DOMString type, AudioProcessingEventInit eventInitDict);
+	readonly attribute double playbackTime;
+	readonly attribute AudioBuffer inputBuffer;
+	readonly attribute AudioBuffer outputBuffer;
 };
 
@@ -6308,42 +5647,42 @@ interface AudioProcessingEvent : Event { Attributes
- : inputBuffer - :: - An AudioBuffer containing the input audio data. It will have a - number of channels equal to the - numberOfInputChannels parameter of the - createScriptProcessor() method. This AudioBuffer is only valid - while in the scope of the {{ScriptProcessorNode/audioprocess}} event handler functions. - Its values will be meaningless outside of this scope. - - : outputBuffer - :: - An AudioBuffer where the output audio data MUST be written. It - will have a number of channels equal to the - numberOfOutputChannels parameter of the - createScriptProcessor() method. Script code within the scope of - the {{ScriptProcessorNode/audioprocess}} event handler functions are - expected to modify the {{Float32Array}} arrays - representing channel data in this AudioBuffer. Any script - modifications to this AudioBuffer outside of this scope will not - produce any audible effects. - - : playbackTime - :: - The time when the audio will be played in the same time - coordinate system as the {{AudioContext}}'s - {{BaseAudioContext/currentTime}}. + : inputBuffer + :: + An AudioBuffer containing the input audio data. It will have a + number of channels equal to the + numberOfInputChannels parameter of the + createScriptProcessor() method. This AudioBuffer is only valid + while in the scope of the {{ScriptProcessorNode/onaudioprocess}} function. + Its values will be meaningless outside of this scope. + + : outputBuffer + :: + An AudioBuffer where the output audio data MUST be written. It + will have a number of channels equal to the + numberOfOutputChannels parameter of the + createScriptProcessor() method. Script code within the scope of + the {{ScriptProcessorNode/onaudioprocess}} function is + expected to modify the {{Float32Array}} arrays + representing channel data in this AudioBuffer. Any script + modifications to this AudioBuffer outside of this scope will not + produce any audible effects. 
+ + : playbackTime + :: + The time when the audio will be played in the same time + coordinate system as the {{AudioContext}}'s + {{BaseAudioContext/currentTime}}.
-

+
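The per-channel work an onaudioprocess handler performs on {{AudioProcessingEvent/outputBuffer}} can be shown as a pure function over {{Float32Array}}s. In a real handler the arrays come from `event.inputBuffer.getChannelData(c)` and `event.outputBuffer.getChannelData(c)`; the function name here is ours:

```javascript
// Illustrative per-channel body of an audioprocess handler:
// copy the input channel to the output channel with a fixed gain.
// Not part of the API; sketch only.
function processChannel(input, output, gain) {
  for (let i = 0; i < output.length; i++) {
    output[i] = input[i] * gain;
  }
}
```

A handler would call `processChannel(event.inputBuffer.getChannelData(c), event.outputBuffer.getChannelData(c), 0.5)` for each channel `c`; per the prose above, only writes made to the output arrays inside the handler's scope have any audible effect.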

{{AudioProcessingEventInit}}

 dictionary AudioProcessingEventInit : EventInit {
-    required double playbackTime;
-    required AudioBuffer inputBuffer;
-    required AudioBuffer outputBuffer;
+	required double playbackTime;
+	required AudioBuffer inputBuffer;
+	required AudioBuffer outputBuffer;
 };
 
@@ -6351,23 +5690,23 @@ dictionary AudioProcessingEventInit : EventInit { Dictionary {{AudioProcessingEventInit}} Members
- : inputBuffer - :: - Value to be assigned to the {{AudioProcessingEvent/inputBuffer}} attribute - of the event. - - : outputBuffer - :: - Value to be assigned to the {{AudioProcessingEvent/outputBuffer}} attribute - of the event. - - : playbackTime - :: - Value to be assigned to the {{AudioProcessingEvent/playbackTime}} attribute - of the event. + : inputBuffer + :: + Value to be assigned to the {{AudioProcessingEvent/inputBuffer}} attribute + of the event. + + : outputBuffer + :: + Value to be assigned to the {{AudioProcessingEvent/outputBuffer}} attribute + of the event. + + : playbackTime + :: + Value to be assigned to the {{AudioProcessingEvent/playbackTime}} attribute + of the event.
-

+

The {{BiquadFilterNode}} Interface

{{BiquadFilterNode}} is an @@ -6390,7 +5729,7 @@ a compound parameter and are both a-rate. They are used together to determine a computedFrequency value:
-    computedFrequency(t) = frequency(t) * pow(2, detune(t) / 1200)
+	computedFrequency(t) = frequency(t) * pow(2, detune(t) / 1200)
 
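The compound-parameter formula above can be evaluated directly; a sketch (the function name is ours, not part of the API):

```javascript
// computedFrequency(t) = frequency(t) * 2^(detune(t) / 1200),
// per the formula above. A detune of 1200 cents is exactly one octave.
function computedFrequency(frequency, detune) {
  return frequency * Math.pow(2, detune / 1200);
}
```

For example, frequency = 440 Hz with detune = 1200 cents yields 880 Hz, one octave up.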
The nominal range for this compound parameter is [0, @@ -6400,13 +5739,13 @@ The nominal range for this compound parameter is [0,
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: Yes
-    tail-time-notes:  Continues to output non-silent audio with zero input. Since this is an IIR filter, the filter produces non-zero input forever, but in practice, this can be limited after some finite time where the output is sufficiently close to zero. The actual time depends on the filter coefficients.
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-mode: max
+	cc-interp: speakers
+	tail-time: Yes
+	tail-time-notes:  Continues to output non-silent audio with zero input. Since this is an IIR filter, the filter produces non-zero output forever, but in practice, this can be limited after some finite time where the output is sufficiently close to zero. The actual time depends on the filter coefficients.
 
The number of channels of the output always equals the number of @@ -6414,150 +5753,150 @@ channels of the input.
 enum BiquadFilterType {
-    "lowpass",
-    "highpass",
-    "bandpass",
-    "lowshelf",
-    "highshelf",
-    "peaking",
-    "notch",
-    "allpass"
+	"lowpass",
+	"highpass",
+	"bandpass",
+	"lowshelf",
+	"highshelf",
+	"peaking",
+	"notch",
+	"allpass"
 };
 
- - - - - - - - - - - - - - + + + + + + + + + + +
{{BiquadFilterType}} enumeration description
Enum valueDescription
"lowpass" - - A lowpass - filter allows frequencies below the cutoff frequency to - pass through and attenuates frequencies above the cutoff. It - implements a standard second-order resonant lowpass filter with - 12dB/octave rolloff. - - : frequency - :: The cutoff frequency - : Q - :: Controls how peaked the response will be at the cutoff - frequency. A large value makes the response more peaked. - : gain - :: Not used in this filter type -
"highpass" - - A highpass - filter is the opposite of a lowpass filter. Frequencies - above the cutoff frequency are passed through, but frequencies - below the cutoff are attenuated. It implements a standard - second-order resonant highpass filter with 12dB/octave rolloff. - - : frequency - :: The cutoff frequency below which the frequencies are - attenuated - : Q - :: Controls how peaked the response will be at the cutoff - frequency. A large value makes the response more peaked. - : gain - :: Not used in this filter type -
"bandpass" - - A bandpass - filter allows a range of frequencies to pass through and - attenuates the frequencies below and above this frequency - range. It implements a second-order bandpass filter. - - : frequency - :: The center of the frequency band - : Q - :: Controls the width of the band. The width becomes narrower - as the Q value increases. - : gain - :: Not used in this filter type -
"lowshelf" - - The lowshelf filter allows all frequencies through, but adds a - boost (or attenuation) to the lower frequencies. It implements - a second-order lowshelf filter. - - : frequency - :: The upper limit of the frequences where the boost (or - attenuation) is applied. - : Q - :: Not used in this filter type. - : gain - :: The boost, in dB, to be applied. If the value is negative, - the frequencies are attenuated. -
"highshelf" - - The highshelf filter is the opposite of the lowshelf filter and - allows all frequencies through, but adds a boost to the higher - frequencies. It implements a second-order highshelf filter - - : frequency - :: The lower limit of the frequences where the boost (or - attenuation) is applied. - : Q - :: Not used in this filter type. - : gain - :: The boost, in dB, to be applied. If the value is negative, - the frequencies are attenuated. -
"peaking" - - The peaking filter allows all frequencies through, but adds a - boost (or attenuation) to a range of frequencies. - - : frequency - :: The center frequency of where the boost is applied. - : Q - :: Controls the width of the band of frequencies that are - boosted. A large value implies a narrow width. - : gain - :: The boost, in dB, to be applied. If the value is negative, - the frequencies are attenuated. -
"notch" - - The notch filter (also known as a band-stop or - band-rejection filter) is the opposite of a bandpass - filter. It allows all frequencies through, except for a set of - frequencies. - - : frequency - :: The center frequency of where the notch is applied. - : Q - :: Controls the width of the band of frequencies that are - attenuated. A large value implies a narrow width. - : gain - :: Not used in this filter type. -
"allpass" - - An - allpass filter allows all frequencies through, but changes - the phase relationship between the various frequencies. It - implements a second-order allpass filter - - : frequency - :: The frequency where the center of the phase transition - occurs. Viewed another way, this is the frequency with - maximal group - delay. - : Q - :: Controls how sharp the phase transition is at the center - frequency. A larger value implies a sharper transition and - a larger group delay. - : gain - :: Not used in this filter type. +
+ Enumeration description +
"lowpass" + + A lowpass + filter allows frequencies below the cutoff frequency to + pass through and attenuates frequencies above the cutoff. It + implements a standard second-order resonant lowpass filter with + 12dB/octave rolloff. + + : frequency + :: The cutoff frequency + : Q + :: Controls how peaked the response will be at the cutoff + frequency. A large value makes the response more peaked. + : gain + :: Not used in this filter type +
"highpass" + + A highpass + filter is the opposite of a lowpass filter. Frequencies + above the cutoff frequency are passed through, but frequencies + below the cutoff are attenuated. It implements a standard + second-order resonant highpass filter with 12dB/octave rolloff. + + : frequency + :: The cutoff frequency below which the frequencies are + attenuated + : Q + :: Controls how peaked the response will be at the cutoff + frequency. A large value makes the response more peaked. + : gain + :: Not used in this filter type +
"bandpass" + + A bandpass + filter allows a range of frequencies to pass through and + attenuates the frequencies below and above this frequency + range. It implements a second-order bandpass filter. + + : frequency + :: The center of the frequency band + : Q + :: Controls the width of the band. The width becomes narrower + as the Q value increases. + : gain + :: Not used in this filter type +
"lowshelf" + + The lowshelf filter allows all frequencies through, but adds a + boost (or attenuation) to the lower frequencies. It implements + a second-order lowshelf filter. + + : frequency + :: The upper limit of the frequencies where the boost (or + attenuation) is applied. + : Q + :: Not used in this filter type. + : gain + :: The boost, in dB, to be applied. If the value is negative, + the frequencies are attenuated. +
"highshelf" + + The highshelf filter is the opposite of the lowshelf filter and + allows all frequencies through, but adds a boost to the higher + frequencies. It implements a second-order highshelf filter. + + : frequency + :: The lower limit of the frequencies where the boost (or + attenuation) is applied. + : Q + :: Not used in this filter type. + : gain + :: The boost, in dB, to be applied. If the value is negative, + the frequencies are attenuated. +
"peaking" + + The peaking filter allows all frequencies through, but adds a + boost (or attenuation) to a range of frequencies. + + : frequency + :: The center frequency of where the boost is applied. + : Q + :: Controls the width of the band of frequencies that are + boosted. A large value implies a narrow width. + : gain + :: The boost, in dB, to be applied. If the value is negative, + the frequencies are attenuated. +
"notch" + + The notch filter (also known as a band-stop or + band-rejection filter) is the opposite of a bandpass + filter. It allows all frequencies through, except for a set of + frequencies. + + : frequency + :: The center frequency of where the notch is applied. + : Q + :: Controls the width of the band of frequencies that are + attenuated. A large value implies a narrow width. + : gain + :: Not used in this filter type. +
"allpass" + + An + allpass filter allows all frequencies through, but changes + the phase relationship between the various frequencies. It + implements a second-order allpass filter. + + : frequency + :: The frequency where the center of the phase transition + occurs. Viewed another way, this is the frequency with + maximal group + delay. + : Q + :: Controls how sharp the phase transition is at the center + frequency. A larger value implies a sharper transition and + a larger group delay. + : gain + :: Not used in this filter type.
@@ -6566,15 +5905,15 @@ All attributes of the {{BiquadFilterNode}} are a-rate {{AudioParam}}s.
 [Exposed=Window]
 interface BiquadFilterNode : AudioNode {
-    constructor (BaseAudioContext context, optional BiquadFilterOptions options = {});
-    attribute BiquadFilterType type;
-    readonly attribute AudioParam frequency;
-    readonly attribute AudioParam detune;
-    readonly attribute AudioParam Q;
-    readonly attribute AudioParam gain;
-    undefined getFrequencyResponse (Float32Array frequencyHz,
-                                    Float32Array magResponse,
-                                    Float32Array phaseResponse);
+	constructor (BaseAudioContext context, optional BiquadFilterOptions options = {});
+	attribute BiquadFilterType type;
+	readonly attribute AudioParam frequency;
+	readonly attribute AudioParam detune;
+	readonly attribute AudioParam Q;
+	readonly attribute AudioParam gain;
+	undefined getFrequencyResponse (Float32Array frequencyHz,
+	                                Float32Array magResponse,
+	                                Float32Array phaseResponse);
 };
 
@@ -6582,145 +5921,145 @@ interface BiquadFilterNode : AudioNode { Constructors
- : BiquadFilterNode(context, options) - :: + : BiquadFilterNode(context, options) + :: -
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{BiquadFilterNode}} will be associated with.
-            options: Optional initial parameter value for this {{BiquadFilterNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{BiquadFilterNode}} will be associated with.
+			options: Optional initial parameter value for this {{BiquadFilterNode}}.
+		

Attributes

- : Q - :: - The Q - factor of the filter. - - For {{BiquadFilterType/lowpass}} and - {{BiquadFilterType/highpass}} filters the - {{BiquadFilterNode/Q}} value is interpreted to be in - dB. For these filters the nominal range is - \([-Q_{lim}, Q_{lim}]\) where \(Q_{lim}\) is the largest - value for which \(10^{Q/20}\) does not overflow. This - is approximately \(770.63678\). - - For the {{BiquadFilterType/bandpass}}, - {{BiquadFilterType/notch}}, - {{BiquadFilterType/allpass}}, and - {{BiquadFilterType/peaking}} filters, this value is a - linear value. The value is related to the bandwidth - of the filter and hence should be a positive value. - The nominal range is \([0, 3.4028235e38]\), the upper - limit being the most-positive-single-float. - - This is not used for the {{BiquadFilterType/lowshelf}} - and {{BiquadFilterType/highshelf}} filters. - -
-        path: audioparam.include
-        macros:
-            default: 1
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38, but see above for the actual limits for different filters
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38, but see above for the actual limits for different filters
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : detune - :: - A detune value, in cents, for the frequency. It forms a - compound parameter with {{BiquadFilterNode/frequency}} to form the computedFrequency. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: \(\approx -153600\)
-            min-notes:
-            max: \(\approx 153600\)
-            max-notes: This value is approximately \(1200\ \log_2 \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value.
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : frequency - :: - The frequency at which the {{BiquadFilterNode}} - will operate, in Hz. It forms a compound parameter with - {{BiquadFilterNode/detune}} to form the computedFrequency. - -
-        path: audioparam.include
-        macros:
-            default: 350
-            min: 0
-            max: Nyquist frequency
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : gain - :: - The gain of the filter. Its value is in dB units. The gain is - only used for {{BiquadFilterType/lowshelf}}, - {{BiquadFilterType/highshelf}}, and - {{BiquadFilterType/peaking}} filters. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: \(\approx 1541\)
-            max-notes: This value is approximately \(40\ \log_{10} \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value.
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : type - :: - The type of this {{BiquadFilterNode}}. Its - default value is "{{BiquadFilterType/lowpass}}". The exact meaning of the other - parameters depend on the value of the {{BiquadFilterNode/type}} - attribute. + : Q + :: + The Q + factor of the filter. + + For {{BiquadFilterType/lowpass}} and + {{BiquadFilterType/highpass}} filters the + {{BiquadFilterNode/Q}} value is interpreted to be in + dB. For these filters the nominal range is + \([-Q_{lim}, Q_{lim}]\) where \(Q_{lim}\) is the largest + value for which \(10^{Q/20}\) does not overflow. This + is approximately \(770.63678\). + + For the {{BiquadFilterType/bandpass}}, + {{BiquadFilterType/notch}}, + {{BiquadFilterType/allpass}}, and + {{BiquadFilterType/peaking}} filters, this value is a + linear value. The value is related to the bandwidth + of the filter and hence should be a positive value. + The nominal range is \([0, 3.4028235e38]\), the upper + limit being the most-positive-single-float. + + This is not used for the {{BiquadFilterType/lowshelf}} + and {{BiquadFilterType/highshelf}} filters. + +
+		path: audioparam.include
+		macros:
+			default: 1
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38, but see above for the actual limits for different filters
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38, but see above for the actual limits for different filters
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : detune + :: + A detune value, in cents, for the frequency. It forms a + compound parameter with {{BiquadFilterNode/frequency}} to form the computedFrequency. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: \(\approx -153600\)
+			min-notes:
+			max: \(\approx 153600\)
+			max-notes: This value is approximately \(1200\ \log_2 \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value.
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : frequency + :: + The frequency at which the {{BiquadFilterNode}} + will operate, in Hz. It forms a compound parameter with + {{BiquadFilterNode/detune}} to form the computedFrequency. + +
+		path: audioparam.include
+		macros:
+			default: 350
+			min: 0
+			max: Nyquist frequency
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : gain + :: + The gain of the filter. Its value is in dB units. The gain is + only used for {{BiquadFilterType/lowshelf}}, + {{BiquadFilterType/highshelf}}, and + {{BiquadFilterType/peaking}} filters. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: \(\approx 1541\)
+			max-notes: This value is approximately \(40\ \log_{10} \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value.
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : type + :: + The type of this {{BiquadFilterNode}}. Its + default value is "{{BiquadFilterType/lowpass}}". The exact meaning of the other + parameters depends on the value of the {{BiquadFilterNode/type}} + attribute.
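The computedFrequency compound parameter mentioned for {{BiquadFilterNode/frequency}} and {{BiquadFilterNode/detune}} is defined elsewhere in this specification as the frequency scaled by the detune value in cents; a minimal sketch of that relation (an illustrative helper, not part of the API):

```javascript
// computedFrequency = frequency * 2^(detune / 1200); detune is in cents,
// and 1200 cents make one octave.
function computedFrequency(frequency, detuneCents) {
  return frequency * Math.pow(2, detuneCents / 1200);
}
```

A detune of +1200 cents therefore doubles the effective filter frequency.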

Methods

- : getFrequencyResponse(frequencyHz, magResponse, phaseResponse) - :: - Given the {{[[current value]]}} - from each of the filter parameters, synchronously - calculates the frequency response for - the specified frequencies. The three parameters MUST be - {{Float32Array}}s of the same length, or an - {{InvalidAccessError}} MUST be thrown. - - The frequency response returned MUST be computed with the - {{AudioParam}} sampled for the current - processing block. - -
-            frequencyHz: This parameter specifies an array of frequencies, in Hz, at which the response values will be calculated.
-            magResponse: This parameter specifies an output array receiving the linear magnitude response values. If a value in the frequencyHz parameter is not within [0, sampleRate/2], where sampleRate is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of the magResponse array MUST be NaN.
-            phaseResponse: This parameter specifies an output array receiving the phase response values in radians. If a value in the frequencyHz parameter is not within [0; sampleRate/2], where sampleRate is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of the phaseResponse array MUST be NaN.
-        
- -
- Return type: {{undefined}} -
+ : getFrequencyResponse(frequencyHz, magResponse, phaseResponse) + :: + Given the {{[[current value]]}} + from each of the filter parameters, synchronously + calculates the frequency response for + the specified frequencies. The three parameters MUST be + {{Float32Array}}s of the same length, or an + {{InvalidAccessError}} MUST be thrown. + + The frequency response returned MUST be computed with the + {{AudioParam}} sampled for the current + processing block. + +
+			frequencyHz: This parameter specifies an array of frequencies, in Hz, at which the response values will be calculated.
+			magResponse: This parameter specifies an output array receiving the linear magnitude response values. If a value in the frequencyHz parameter is not within [0, sampleRate/2], where sampleRate is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of the magResponse array MUST be NaN.
+			phaseResponse: This parameter specifies an output array receiving the phase response values in radians. If a value in the frequencyHz parameter is not within [0, sampleRate/2], where sampleRate is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of the phaseResponse array MUST be NaN.
+		
+ +
+ Return type: {{undefined}} +
-

+
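To illustrate the contract of getFrequencyResponse(), here is a plain-JS sketch that evaluates a biquad's transfer function \(H(z) = (b_0 + b_1 z^{-1} + b_2 z^{-2})/(a_0 + a_1 z^{-1} + a_2 z^{-2})\) on the unit circle, returning NaN for frequencies outside [0, sampleRate/2]. `frequencyResponse` is a hypothetical helper used for illustration, not the node method itself:

```javascript
// Evaluates a biquad's magnitude and phase response at the given frequencies,
// mirroring getFrequencyResponse()'s NaN behaviour for out-of-range inputs.
function frequencyResponse(b, a, frequencyHz, sampleRate) {
  const mag = new Float32Array(frequencyHz.length);
  const phase = new Float32Array(frequencyHz.length);
  for (let i = 0; i < frequencyHz.length; i++) {
    const f = frequencyHz[i];
    if (!(f >= 0 && f <= sampleRate / 2)) {
      mag[i] = NaN; // outside [0, Nyquist] => NaN, per the spec text above
      phase[i] = NaN;
      continue;
    }
    const w = (2 * Math.PI * f) / sampleRate;
    // Evaluate a second-order polynomial in z^-1 at z = e^(jw) as [re, im].
    const evalPoly = (c) => [
      c[0] + c[1] * Math.cos(w) + c[2] * Math.cos(2 * w),
      -(c[1] * Math.sin(w) + c[2] * Math.sin(2 * w)),
    ];
    const [nr, ni] = evalPoly(b);
    const [dr, di] = evalPoly(a);
    const d2 = dr * dr + di * di;
    const hr = (nr * dr + ni * di) / d2; // complex division H = N / D
    const hi = (ni * dr - nr * di) / d2;
    mag[i] = Math.hypot(hr, hi);
    phase[i] = Math.atan2(hi, hr);
  }
  return { mag, phase };
}
```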

{{BiquadFilterOptions}}

This specifies the options to be used when constructing a @@ -6730,11 +6069,11 @@ node.
 dictionary BiquadFilterOptions : AudioNodeOptions {
-    BiquadFilterType type = "lowpass";
-    float Q = 1;
-    float detune = 0;
-    float frequency = 350;
-    float gain = 0;
+	BiquadFilterType type = "lowpass";
+	float Q = 1;
+	float detune = 0;
+	float frequency = 350;
+	float gain = 0;
 };
 
@@ -6742,20 +6081,20 @@ dictionary BiquadFilterOptions : AudioNodeOptions { Dictionary {{BiquadFilterOptions}} Members
- : Q - :: The desired initial value for {{BiquadFilterNode/Q}}. + : Q + :: The desired initial value for {{BiquadFilterNode/Q}}. - : detune - :: The desired initial value for {{BiquadFilterNode/detune}}. + : detune + :: The desired initial value for {{BiquadFilterNode/detune}}. - : frequency - :: The desired initial value for {{BiquadFilterNode/frequency}}. + : frequency + :: The desired initial value for {{BiquadFilterNode/frequency}}. - : gain - :: The desired initial value for {{BiquadFilterNode/gain}}. + : gain + :: The desired initial value for {{BiquadFilterNode/gain}}. - : type - :: The desired initial type of the filter. + : type + :: The desired initial type of the filter.

@@ -6783,7 +6122,7 @@ which is equivalent to a time-domain equation of:
 $$
 a_0 y(n) + a_1 y(n-1) + a_2 y(n-2) =
-    b_0 x(n) + b_1 x(n-1) + b_2 x(n-2)
+	b_0 x(n) + b_1 x(n-1) + b_2 x(n-2)
 $$
 
@@ -6811,142 +6150,142 @@ their computation, based on the computedValue of the * Let \(Q\) be the value of the {{BiquadFilterNode/Q}} {{AudioParam}}. * Finally let - -
-    $$
-    \begin{align*}
-        A &= 10^{\frac{G}{40}} \\
-        \omega_0 &= 2\pi\frac{f_0}{F_s} \\
-        \alpha_Q &= \frac{\sin\omega_0}{2Q} \\
-        \alpha_{Q_{dB}} &= \frac{\sin\omega_0}{2 \cdot 10^{Q/20}} \\
-        S &= 1 \\
-        \alpha_S &= \frac{\sin\omega_0}{2}\sqrt{\left(A+\frac{1}{A}\right)\left(\frac{1}{S}-1\right)+2}
-    \end{align*}
-    $$
-    
+ +
+	$$
+	\begin{align*}
+		A &= 10^{\frac{G}{40}} \\
+		\omega_0 &= 2\pi\frac{f_0}{F_s} \\
+		\alpha_Q &= \frac{\sin\omega_0}{2Q} \\
+		\alpha_{Q_{dB}} &= \frac{\sin\omega_0}{2 \cdot 10^{Q/20}} \\
+		S &= 1 \\
+		\alpha_S &= \frac{\sin\omega_0}{2}\sqrt{\left(A+\frac{1}{A}\right)\left(\frac{1}{S}-1\right)+2}
+	\end{align*}
+	$$
+	
The six coefficients (\(b_0, b_1, b_2, a_0, a_1, a_2\)) for each filter type, are: : "{{lowpass}}" :: -
-    $$
-        \begin{align*}
-            b_0 &= \frac{1 - \cos\omega_0}{2} \\
-            b_1 &= 1 - \cos\omega_0 \\
-            b_2 &= \frac{1 - \cos\omega_0}{2} \\
-            a_0 &= 1 + \alpha_{Q_{dB}} \\
-            a_1 &= -2 \cos\omega_0 \\
-            a_2 &= 1 - \alpha_{Q_{dB}}
-        \end{align*}
-    $$
-    
+
+	$$
+		\begin{align*}
+			b_0 &= \frac{1 - \cos\omega_0}{2} \\
+			b_1 &= 1 - \cos\omega_0 \\
+			b_2 &= \frac{1 - \cos\omega_0}{2} \\
+			a_0 &= 1 + \alpha_{Q_{dB}} \\
+			a_1 &= -2 \cos\omega_0 \\
+			a_2 &= 1 - \alpha_{Q_{dB}}
+		\end{align*}
+	$$
+	
: "{{highpass}}" :: -
-    $$
-        \begin{align*}
-            b_0 &= \frac{1 + \cos\omega_0}{2} \\
-            b_1 &= -(1 + \cos\omega_0) \\
-            b_2 &= \frac{1 + \cos\omega_0}{2} \\
-            a_0 &= 1 + \alpha_{Q_{dB}} \\
-            a_1 &= -2 \cos\omega_0 \\
-            a_2 &= 1 - \alpha_{Q_{dB}}
-        \end{align*}
-    $$
-    
+
+	$$
+		\begin{align*}
+			b_0 &= \frac{1 + \cos\omega_0}{2} \\
+			b_1 &= -(1 + \cos\omega_0) \\
+			b_2 &= \frac{1 + \cos\omega_0}{2} \\
+			a_0 &= 1 + \alpha_{Q_{dB}} \\
+			a_1 &= -2 \cos\omega_0 \\
+			a_2 &= 1 - \alpha_{Q_{dB}}
+		\end{align*}
+	$$
+	
: "{{bandpass}}" :: -
-    $$
-        \begin{align*}
-            b_0 &= \alpha_Q \\
-            b_1 &= 0 \\
-            b_2 &= -\alpha_Q \\
-            a_0 &= 1 + \alpha_Q \\
-            a_1 &= -2 \cos\omega_0 \\
-            a_2 &= 1 - \alpha_Q
-        \end{align*}
-    $$
-    
+
+	$$
+		\begin{align*}
+			b_0 &= \alpha_Q \\
+			b_1 &= 0 \\
+			b_2 &= -\alpha_Q \\
+			a_0 &= 1 + \alpha_Q \\
+			a_1 &= -2 \cos\omega_0 \\
+			a_2 &= 1 - \alpha_Q
+		\end{align*}
+	$$
+	
: "{{notch}}" :: -
-    $$
-        \begin{align*}
-            b_0 &= 1 \\
-            b_1 &= -2\cos\omega_0 \\
-            b_2 &= 1 \\
-            a_0 &= 1 + \alpha_Q \\
-            a_1 &= -2 \cos\omega_0 \\
-            a_2 &= 1 - \alpha_Q
-        \end{align*}
-    $$
-    
+
+	$$
+		\begin{align*}
+			b_0 &= 1 \\
+			b_1 &= -2\cos\omega_0 \\
+			b_2 &= 1 \\
+			a_0 &= 1 + \alpha_Q \\
+			a_1 &= -2 \cos\omega_0 \\
+			a_2 &= 1 - \alpha_Q
+		\end{align*}
+	$$
+	
: "{{allpass}}" :: -
-    $$
-        \begin{align*}
-            b_0 &= 1 - \alpha_Q \\
-            b_1 &= -2\cos\omega_0 \\
-            b_2 &= 1 + \alpha_Q \\
-            a_0 &= 1 + \alpha_Q \\
-            a_1 &= -2 \cos\omega_0 \\
-            a_2 &= 1 - \alpha_Q
-        \end{align*}
-    $$
-    
+
+	$$
+		\begin{align*}
+			b_0 &= 1 - \alpha_Q \\
+			b_1 &= -2\cos\omega_0 \\
+			b_2 &= 1 + \alpha_Q \\
+			a_0 &= 1 + \alpha_Q \\
+			a_1 &= -2 \cos\omega_0 \\
+			a_2 &= 1 - \alpha_Q
+		\end{align*}
+	$$
+	
: "{{peaking}}" :: -
-    $$
-        \begin{align*}
-            b_0 &= 1 + \alpha_Q\, A \\
-            b_1 &= -2\cos\omega_0 \\
-            b_2 &= 1 - \alpha_Q\,A \\
-            a_0 &= 1 + \frac{\alpha_Q}{A} \\
-            a_1 &= -2 \cos\omega_0 \\
-            a_2 &= 1 - \frac{\alpha_Q}{A}
-        \end{align*}
-    $$
-    
+
+	$$
+		\begin{align*}
+			b_0 &= 1 + \alpha_Q\, A \\
+			b_1 &= -2\cos\omega_0 \\
+			b_2 &= 1 - \alpha_Q\,A \\
+			a_0 &= 1 + \frac{\alpha_Q}{A} \\
+			a_1 &= -2 \cos\omega_0 \\
+			a_2 &= 1 - \frac{\alpha_Q}{A}
+		\end{align*}
+	$$
+	
: "{{lowshelf}}" :: -
-    $$
-        \begin{align*}
-            b_0 &= A \left[ (A+1) - (A-1) \cos\omega_0 + 2 \alpha_S \sqrt{A})\right] \\
-            b_1 &= 2 A \left[ (A-1) - (A+1) \cos\omega_0 )\right] \\
-            b_2 &= A \left[ (A+1) - (A-1) \cos\omega_0 - 2 \alpha_S \sqrt{A}) \right] \\
-            a_0 &= (A+1) + (A-1) \cos\omega_0 + 2 \alpha_S \sqrt{A} \\
-            a_1 &= -2 \left[ (A-1) + (A+1) \cos\omega_0\right] \\
-            a_2 &= (A+1) + (A-1) \cos\omega_0 - 2 \alpha_S \sqrt{A})
-        \end{align*}
-    $$
-    
+
+	$$
+		\begin{align*}
+			b_0 &= A \left[ (A+1) - (A-1) \cos\omega_0 + 2 \alpha_S \sqrt{A})\right] \\
+			b_1 &= 2 A \left[ (A-1) - (A+1) \cos\omega_0 )\right] \\
+			b_2 &= A \left[ (A+1) - (A-1) \cos\omega_0 - 2 \alpha_S \sqrt{A}) \right] \\
+			a_0 &= (A+1) + (A-1) \cos\omega_0 + 2 \alpha_S \sqrt{A} \\
+			a_1 &= -2 \left[ (A-1) + (A+1) \cos\omega_0\right] \\
+			a_2 &= (A+1) + (A-1) \cos\omega_0 - 2 \alpha_S \sqrt{A})
+		\end{align*}
+	$$
+	
: "{{highshelf}}" :: -
-    $$
-        \begin{align*}
-            b_0 &= A\left[ (A+1) + (A-1)\cos\omega_0 + 2\alpha_S\sqrt{A} )\right] \\
-            b_1 &= -2A\left[ (A-1) + (A+1)\cos\omega_0 )\right] \\
-            b_2 &= A\left[ (A+1) + (A-1)\cos\omega_0 - 2\alpha_S\sqrt{A} )\right] \\
-            a_0 &= (A+1) - (A-1)\cos\omega_0 + 2\alpha_S\sqrt{A} \\
-            a_1 &= 2\left[ (A-1) - (A+1)\cos\omega_0\right] \\
-            a_2 &= (A+1) - (A-1)\cos\omega_0 - 2\alpha_S\sqrt{A}
-        \end{align*}
-    $$
-    
+
+	$$
+		\begin{align*}
+			b_0 &= A\left[ (A+1) + (A-1)\cos\omega_0 + 2\alpha_S\sqrt{A} )\right] \\
+			b_1 &= -2A\left[ (A-1) + (A+1)\cos\omega_0 )\right] \\
+			b_2 &= A\left[ (A+1) + (A-1)\cos\omega_0 - 2\alpha_S\sqrt{A} )\right] \\
+			a_0 &= (A+1) - (A-1)\cos\omega_0 + 2\alpha_S\sqrt{A} \\
+			a_1 &= 2\left[ (A-1) - (A+1)\cos\omega_0\right] \\
+			a_2 &= (A+1) - (A-1)\cos\omega_0 - 2\alpha_S\sqrt{A}
+		\end{align*}
+	$$
+	
-

+
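As a sanity check on the formulas above, this sketch computes the "{{lowpass}}" coefficients directly from the definitions of \(\omega_0\) and \(\alpha_{Q_{dB}}\) (an illustrative helper, not part of the API). By construction, this filter's DC gain \((b_0+b_1+b_2)/(a_0+a_1+a_2)\) is 1 for any cutoff:

```javascript
// Lowpass biquad coefficients per the formulas above. Q is in dB for
// lowpass/highpass, so alpha_QdB = sin(w0) / (2 * 10^(Q/20)).
function lowpassCoefficients(f0, qDb, sampleRate) {
  const w0 = (2 * Math.PI * f0) / sampleRate;
  const alpha = Math.sin(w0) / (2 * Math.pow(10, qDb / 20));
  const c = Math.cos(w0);
  return {
    b0: (1 - c) / 2, b1: 1 - c, b2: (1 - c) / 2,
    a0: 1 + alpha, a1: -2 * c, a2: 1 - alpha,
  };
}
```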

The {{ChannelMergerNode}} Interface

The {{ChannelMergerNode}} is for use in more advanced @@ -6977,15 +6316,15 @@ applications and would often be used in conjunction with
 path: audionode.include
 macros:
-    noi: see notes
-    noi-notes:  Defaults to 6, but is determined by {{ChannelMergerOptions}},{{ChannelMergerOptions/numberOfInputs}} or the value specified by {{BaseAudioContext/createChannelMerger}}.
-    noo: 1
-    cc: 1
-    cc-notes: Has channelCount constraints
-    cc-mode: explicit
-    cc-mode-notes: Has channelCountMode constraints
-    cc-interp: speakers
-    tail-time: No
+	noi: see notes
+	noi-notes:  Defaults to 6, but is determined by {{ChannelMergerOptions}},{{ChannelMergerOptions/numberOfInputs}} or the value specified by {{BaseAudioContext/createChannelMerger}}.
+	noo: 1
+	cc: 1
+	cc-notes: Has channelCount constraints
+	cc-mode: explicit
+	cc-mode-notes: Has channelCountMode constraints
+	cc-interp: speakers
+	tail-time: No
 
This interface represents an {{AudioNode}} for @@ -7003,56 +6342,56 @@ output. Changing input streams does not affect the order of output channels.
- For example, if a default {{ChannelMergerNode}} has - two connected stereo inputs, the first and second input will be - downmixed to mono respectively before merging. The output will be a - 6-channel stream whose first two channels are be filled with the - first two (downmixed) inputs and the rest of channels will be silent. - - Also the {{ChannelMergerNode}} can be used to arrange - multiple audio streams in a certain order for the multi-channel - speaker array such as 5.1 surround set up. The merger does not - interpret the channel identities (such as left, right, etc.), but - simply combines channels in the order that they are input. - -
- channel merger -
- A diagram of ChannelMerger -
-
+ For example, if a default {{ChannelMergerNode}} has + two connected stereo inputs, the first and second input will be + downmixed to mono respectively before merging. The output will be a + 6-channel stream whose first two channels are filled with the + first two (downmixed) inputs and the rest of the channels will be silent. + + Also the {{ChannelMergerNode}} can be used to arrange + multiple audio streams in a certain order for the multi-channel + speaker array such as a 5.1 surround setup. The merger does not + interpret the channel identities (such as left, right, etc.), but + simply combines channels in the order that they are input. + +
+ channel merger +
+ A diagram of ChannelMerger +
+
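The merging behaviour described above can be modelled with plain arrays (a toy sketch, not the API; each input is assumed to be already downmixed to mono):

```javascript
// Toy model of ChannelMergerNode: N mono inputs become one N-channel output,
// in input order, with silence for unconnected inputs.
function mergeChannels(inputs, numberOfInputs = 6, frames = 128) {
  return Array.from({ length: numberOfInputs }, (_, ch) =>
    inputs[ch] ?? new Float32Array(frames)); // unconnected input => silence
}
```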
 [Exposed=Window]
 interface ChannelMergerNode : AudioNode {
-    constructor (BaseAudioContext context, optional ChannelMergerOptions options = {});
+	constructor (BaseAudioContext context, optional ChannelMergerOptions options = {});
 };
 

Constructors

-
- : ChannelMergerNode(context, options) - :: +
+ : ChannelMergerNode(context, options) + :: -
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{ChannelMergerNode}} will be associated with.
-            options: Optional initial parameter value for this {{ChannelMergerNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{ChannelMergerNode}} will be associated with.
+			options: Optional initial parameter value for this {{ChannelMergerNode}}.
+		
-

+

{{ChannelMergerOptions}}

 dictionary ChannelMergerOptions : AudioNodeOptions {
-    unsigned long numberOfInputs = 6;
+	unsigned long numberOfInputs = 6;
 };
 
@@ -7060,8 +6399,8 @@ dictionary ChannelMergerOptions : AudioNodeOptions { Dictionary {{ChannelMergerOptions}} Members
- : numberOfInputs - :: The number inputs for the {{ChannelMergerNode}}. See {{BaseAudioContext/createChannelMerger()}} for constraints on this value. + : numberOfInputs + :: The number of inputs for the {{ChannelMergerNode}}. See {{BaseAudioContext/createChannelMerger()}} for constraints on this value.
@@ -7083,7 +6422,7 @@ Dictionary {{ChannelMergerOptions}} Members ██████ ██ ████████ ████ ██ ██ ████████ ██ ██ --> -

+

The {{ChannelSplitterNode}} Interface

The {{ChannelSplitterNode}} is for use in more advanced @@ -7093,16 +6432,16 @@ applications and would often be used in conjunction with
 path: audionode.include
 macros:
-    noi: 1
-    noo: see notes
-    noo-notes:  This defaults to 6, but is otherwise determined from {{ChannelSplitterOptions/numberOfOutputs|ChannelSplitterOptions.numberOfOutputs}} or the value specified by {{BaseAudioContext/createChannelSplitter}} or the {{ChannelSplitterOptions/numberOfOutputs}} member of the {{ChannelSplitterOptions}} dictionary for the {{ChannelSplitterNode/ChannelSplitterNode()|constructor}}.
-    cc: {{AudioNode/numberOfOutputs}}
-    cc-notes: Has channelCount constraints
-    cc-mode: explicit
-    cc-mode-notes: Has channelCountMode constraints
-    cc-interp: discrete
-    cc-interp-notes: Has channelInterpretation constraints
-    tail-time: No
+	noi: 1
+	noo: see notes
+	noo-notes:  This defaults to 6, but is otherwise determined from {{ChannelSplitterOptions/numberOfOutputs|ChannelSplitterOptions.numberOfOutputs}} or the value specified by {{BaseAudioContext/createChannelSplitter}} or the {{ChannelSplitterOptions/numberOfOutputs}} member of the {{ChannelSplitterOptions}} dictionary for the {{ChannelSplitterNode/ChannelSplitterNode()|constructor}}.
+	cc: {{AudioNode/numberOfOutputs}}
+	cc-notes: Has channelCount constraints
+	cc-mode: explicit
+	cc-mode-notes: Has channelCountMode constraints
+	cc-interp: discrete
+	cc-interp-notes: Has channelInterpretation constraints
+	tail-time: No
 
This interface represents an {{AudioNode}} for @@ -7120,16 +6459,16 @@ are not "active" will output silence and would typically not be connected to anything.
-
- channel splitter -
- A diagram of a ChannelSplitter -
-
- - Please note that in this example, the splitter does not - interpret the channel identities (such as left, right, etc.), but - simply splits out channels in the order that they are input. +
+ channel splitter +
+ A diagram of a ChannelSplitter +
+
+ + Please note that in this example, the splitter does not + interpret the channel identities (such as left, right, etc.), but + simply splits out channels in the order that they are input.
One application for {{ChannelSplitterNode}} is for doing @@ -7139,33 +6478,33 @@ desired.
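The splitting behaviour can likewise be modelled with plain arrays (a toy sketch, not the API): channels come out as mono outputs in input order, and outputs beyond the input's channel count carry silence:

```javascript
// Toy model of ChannelSplitterNode: one multi-channel input becomes
// numberOfOutputs mono outputs; "inactive" outputs are silent.
function splitChannels(inputChannels, numberOfOutputs = 6) {
  const frames = inputChannels[0]?.length ?? 0;
  return Array.from({ length: numberOfOutputs }, (_, ch) =>
    inputChannels[ch] ?? new Float32Array(frames));
}
```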
 [Exposed=Window]
 interface ChannelSplitterNode : AudioNode {
-    constructor (BaseAudioContext context, optional ChannelSplitterOptions options = {});
+	constructor (BaseAudioContext context, optional ChannelSplitterOptions options = {});
 };
 

Constructors

-
- : ChannelSplitterNode(context, options) - :: +
+ : ChannelSplitterNode(context, options) + :: -
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{ChannelSplitterNode}} will be associated with.
-            options: Optional initial parameter value for this {{ChannelSplitterNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{ChannelSplitterNode}} will be associated with.
+			options: Optional initial parameter value for this {{ChannelSplitterNode}}.
+		
-

+

{{ChannelSplitterOptions}}

 dictionary ChannelSplitterOptions : AudioNodeOptions {
-    unsigned long numberOfOutputs = 6;
+	unsigned long numberOfOutputs = 6;
 };
 
@@ -7173,8 +6512,8 @@ dictionary ChannelSplitterOptions : AudioNodeOptions { Dictionary {{ChannelSplitterOptions}} Members
- : numberOfOutputs - :: The number outputs for the {{ChannelSplitterNode}}. See {{BaseAudioContext/createChannelSplitter()}} for constraints on this value. + : numberOfOutputs + :: The number of outputs for the {{ChannelSplitterNode}}. See {{BaseAudioContext/createChannelSplitter()}} for constraints on this value.
@@ -7210,60 +6549,60 @@ The single output of this node consists of one channel (mono).
 path: audionode.include
 macros:
-    noi: 0
-    noo: 1
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: No
+	noi: 0
+	noo: 1
+	cc: 2
+	cc-mode: max
+	cc-interp: speakers
+	tail-time: No
 
 [Exposed=Window]
 interface ConstantSourceNode : AudioScheduledSourceNode {
-    constructor (BaseAudioContext context, optional ConstantSourceOptions options = {});
-    readonly attribute AudioParam offset;
+	constructor (BaseAudioContext context, optional ConstantSourceOptions options = {});
+	readonly attribute AudioParam offset;
 };
 

Constructors

-
- : ConstantSourceNode(context, options) - :: +
+ : ConstantSourceNode(context, options) + :: -
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{ConstantSourceNode}} will be associated with.
-            options: Optional initial parameter value for this {{ConstantSourceNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{ConstantSourceNode}} will be associated with.
+			options: Optional initial parameter value for this {{ConstantSourceNode}}.
+		

Attributes

- : offset - :: - The constant value of the source. - -
-        path: audioparam.include
-        macros:
-            default: 1
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
+ : offset + :: + The constant value of the source. + +
+		path: audioparam.include
+		macros:
+			default: 1
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		
-

+

{{ConstantSourceOptions}}

This specifies options for constructing a @@ -7273,7 +6612,7 @@ node.
 dictionary ConstantSourceOptions {
-    float offset = 1;
+	float offset = 1;
 };
 
@@ -7281,8 +6620,8 @@ dictionary ConstantSourceOptions { Dictionary {{ConstantSourceOptions}} Members
- : offset - :: The initial value for the {{ConstantSourceNode/offset}} AudioParam of this node. + : offset + :: The initial value for the {{ConstantSourceNode/offset}} AudioParam of this node.
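Conceptually, a {{ConstantSourceNode}} simply writes its offset value into every frame of its mono output while playing. A toy sketch of one render block (not the API; in the real node, {{ConstantSourceNode/offset}} is an a-rate {{AudioParam}} and can be automated):

```javascript
// Toy render of one block: every sample of the mono output equals offset.
function renderConstantBlock(offset, frames = 128) {
  return new Float32Array(frames).fill(offset);
}
```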
@@ -7305,15 +6644,15 @@ convolution effect given an impulse response.
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-notes: Has channelCount constraints
-    cc-mode: clamped-max
-    cc-mode-notes: Has channelCountMode constraints
-    cc-interp: speakers
-    tail-time: Yes
-    tail-time-notes: Continues to output non-silent audio with zero input for the length of the {{ConvolverNode/buffer}}.
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-notes: Has channelCount constraints
+	cc-mode: clamped-max
+	cc-mode-notes: Has channelCountMode constraints
+	cc-interp: speakers
+	tail-time: Yes
+	tail-time-notes: Continues to output non-silent audio with zero input for the length of the {{ConvolverNode/buffer}}.
 
The input of this node is either mono (1 channel) or stereo (2 @@ -7327,178 +6666,180 @@ input to the node is either mono or stereo.
 [Exposed=Window]
 interface ConvolverNode : AudioNode {
-    constructor (BaseAudioContext context, optional ConvolverOptions options = {});
-    attribute AudioBuffer? buffer;
-    attribute boolean normalize;
+	constructor (BaseAudioContext context, optional ConvolverOptions options = {});
+	attribute AudioBuffer? buffer;
+	attribute boolean normalize;
 };
 

Constructors

-
- : ConvolverNode(context, options) - :: - When the constructor is called with a {{BaseAudioContext}} context and - an option object options, execute these steps: - - 1. Set the attributes {{ConvolverNode/normalize}} to the inverse of the - value of {{ConvolverOptions/disableNormalization}}. - 2. If {{ConvolverNode/buffer}} - exists, set the - {{ConvolverNode/buffer}} attribute to its value. - - Note: This means that the buffer will be normalized according to the - value of the {{ConvolverNode/normalize}} - attribute. - - 3. Let o be new {{AudioNodeOptions}} dictionary. - 4. If {{AudioNodeOptions/channelCount}} - exists in - options, set {{AudioNodeOptions/channelCount}} on o - with the same value. - 5. If {{AudioNodeOptions/channelCountMode}} - exists in - options, set {{AudioNodeOptions/channelCountMode}} on - o with the same value. - 6. If {{AudioNodeOptions/channelInterpretation}} - exists in - options, set {{AudioNodeOptions/channelInterpretation}} on - o with the same value. - 7. Initialize the AudioNode - this, with c and o as argument. - -
-            context: The {{BaseAudioContext}} this new {{ConvolverNode}} will be associated with.
-            options: Optional initial parameter value for this {{ConvolverNode}}.
-        
+
+
+	: ConvolverNode(context, options)
+	::
+		When the constructor is called with a {{BaseAudioContext}} context and
+		an option object options, execute these steps:
+
+		1. Set the attribute {{ConvolverNode/normalize}} to the inverse of the
+			value of {{ConvolverOptions/disableNormalization}}.
+		2. If {{ConvolverOptions/buffer}} is
+			present, set the
+			{{ConvolverNode/buffer}} attribute to its value.
+
+			Note: This means that the buffer will be normalized according to the
+			value of the {{ConvolverNode/normalize}}
+			attribute.
+
+		3. Let o be a new {{AudioNodeOptions}} dictionary.
+		4. If {{AudioNodeOptions/channelCount}} is
+			present in
+			options, set {{AudioNodeOptions/channelCount}} on o
+			with the same value.
+		5. If {{AudioNodeOptions/channelCountMode}} is
+			present in
+			options, set {{AudioNodeOptions/channelCountMode}} on
+			o with the same value.
+		6. If {{AudioNodeOptions/channelInterpretation}} is present in
+			options, set {{AudioNodeOptions/channelInterpretation}} on
+			o with the same value.
+		7. Initialize the AudioNode
+			this, with context and o as arguments.
+
+			context: The {{BaseAudioContext}} this new {{ConvolverNode}} will be associated with.
+			options: Optional initial parameter value for this {{ConvolverNode}}.
+		

Attributes

- : buffer - :: - At the time when this attribute is set, the {{ConvolverNode/buffer}} and - the state of the {{normalize}} attribute will be used to - configure the {{ConvolverNode}} with this - impulse response having the given normalization. The initial - value of this attribute is null. - -
- When setting the buffer attribute, execute the following steps synchronously: - 1. If the buffer {{AudioBuffer/numberOfChannels|number of channels}} is not 1, 2, 4, or if the - {{AudioBuffer/sampleRate|sample-rate}} of the buffer is not the same as the - {{BaseAudioContext/sampleRate|sample-rate}} of its associated {{BaseAudioContext}}, a - {{NotSupportedError}} MUST be thrown. - 2. Acquire the content of the - {{AudioBuffer}}. -
- - Note: If the {{ConvolverNode/buffer}} is set to an new - buffer, audio may glitch. If this is undesirable, it - is recommended to create a new {{ConvolverNode}} to - replace the old, possibly cross-fading between the - two. - - Note: The {{ConvolverNode}} produces a mono output only in the - single case where there is a single input channel and a - single-channel {{ConvolverNode/buffer}}. In all other cases, the - output is stereo. In particular, when the {{ConvolverNode/buffer}} - has four channels and there are two input channels, the - {{ConvolverNode}} performs matrix "true" stereo - convolution. For normative information please see the - channel - configuration diagrams - - : normalize - :: - Controls whether the impulse response from the buffer will be - scaled by an equal-power normalization when the - {{ConvolverNode/buffer}} atttribute is set. Its default value is - `true` in order to achieve a more uniform output - level from the convolver when loaded with diverse impulse - responses. If {{normalize}} is set to - false, then the convolution will be rendered with - no pre-processing/scaling of the impulse response. Changes to - this value do not take effect until the next time the - {{ConvolverNode/buffer}} attribute is set. - - If the {{normalize}} attribute is false when the - {{ConvolverNode/buffer}} attribute is set then the - {{ConvolverNode}} will perform a linear - convolution given the exact impulse response contained within - the {{ConvolverNode/buffer}}. - - Otherwise, if the {{normalize}} attribute is true when the - {{ConvolverNode/buffer}} attribute is set then the - {{ConvolverNode}} will first perform a scaled - RMS-power analysis of the audio data contained within - {{ConvolverNode/buffer}} to calculate a normalizationScale - given this algorithm: - -
-        function calculateNormalizationScale(buffer) {
-            const GainCalibration = 0.00125;
-            const GainCalibrationSampleRate = 44100;
-            const MinPower = 0.000125;
-
-            // Normalize by RMS power.
-            const numberOfChannels = buffer.numberOfChannels;
-            const length = buffer.length;
-
-            let power = 0;
-
-            for (let i = 0; i < numberOfChannels; i++) {
-                let channelPower = 0;
-                const channelData = buffer.getChannelData(i);
-
-                for (let j = 0; j < length; j++) {
-                    const sample = channelData[j];
-                    channelPower += sample * sample;
-                }
-
-                power += channelPower;
-            }
-
-            power = Math.sqrt(power / (numberOfChannels * length));
-
-            // Protect against accidental overload.
-            if (!isFinite(power) || isNaN(power) || power < MinPower)
-                power = MinPower;
-
-            let scale = 1 / power;
-
-            // Calibrate to make perceived volume same as unprocessed.
-            scale *= GainCalibration;
-
-            // Scale depends on sample-rate.
-            if (buffer.sampleRate)
-                scale *= GainCalibrationSampleRate / buffer.sampleRate;
-
-            // True-stereo compensation.
-            if (numberOfChannels == 4)
-                scale *= 0.5;
-
-            return scale;
-        }
-        
- - During processing, the ConvolverNode will then take this - calculated normalizationScale value and multiply it by - the result of the linear convolution resulting from processing - the input with the impulse response (represented by the - {{ConvolverNode/buffer}}) to produce the final output. Or any - mathematically equivalent operation may be used, such as - pre-multiplying the input by normalizationScale, or - pre-multiplying a version of the impulse-response by - normalizationScale. + : buffer + :: + + At the time when this attribute is set, the {{ConvolverNode/buffer}} and + the state of the {{normalize}} attribute will be used to + configure the {{ConvolverNode}} with this + impulse response having the given normalization. The initial + value of this attribute is null. + + :: +
+ When setting the buffer attribute, execute the following steps synchronously: + 1. If the buffer {{AudioBuffer/numberOfChannels|number of channels}} is not 1, 2, 4, or if the + {{AudioBuffer/sampleRate|sample-rate}} of the buffer is not the same as the + {{BaseAudioContext/sampleRate|sample-rate}} of its associated {{BaseAudioContext}}, a + {{NotSupportedError}} MUST be thrown. + 2. Acquire the content of the + {{AudioBuffer}}. +
+	::
+		Note: If the {{ConvolverNode/buffer}} is set to a new
+		buffer, audio may glitch. If this is undesirable, it
+		is recommended to create a new {{ConvolverNode}} to
+		replace the old, possibly cross-fading between the
+		two.
+	::
+		Note: The {{ConvolverNode}} produces a mono output only in the
+		single case where there is a single input channel and a
+		single-channel {{ConvolverNode/buffer}}. In all other cases, the
+		output is stereo. In particular, when the {{ConvolverNode/buffer}}
+		has four channels and there are two input channels, the
+		{{ConvolverNode}} performs matrix "true" stereo
+		convolution. For normative information please see the
+		channel
+		configuration diagrams
+
+	: normalize
+	::
+		Controls whether the impulse response from the buffer will be
+		scaled by an equal-power normalization when the
+		{{ConvolverNode/buffer}} attribute is set. Its default value is
+		`true` in order to achieve a more uniform output
+		level from the convolver when loaded with diverse impulse
+		responses. If {{normalize}} is set to
+		false, then the convolution will be rendered with
+		no pre-processing/scaling of the impulse response. Changes to
+		this value do not take effect until the next time the
+		{{ConvolverNode/buffer}} attribute is set.
+
+		If the {{normalize}} attribute is false when the
+		{{ConvolverNode/buffer}} attribute is set then the
+		{{ConvolverNode}} will perform a linear
+		convolution given the exact impulse response contained within
+		the {{ConvolverNode/buffer}}.
+
+		Otherwise, if the {{normalize}} attribute is true when the
+		{{ConvolverNode/buffer}} attribute is set then the
+		{{ConvolverNode}} will first perform a scaled
+		RMS-power analysis of the audio data contained within
+		{{ConvolverNode/buffer}} to calculate a normalizationScale
+		given this algorithm:
+
+		function calculateNormalizationScale(buffer) {
+			const GainCalibration = 0.00125;
+			const GainCalibrationSampleRate = 44100;
+			const MinPower = 0.000125;
+
+			// Normalize by RMS power.
+			const numberOfChannels = buffer.numberOfChannels;
+			const length = buffer.length;
+
+			let power = 0;
+
+			for (let i = 0; i < numberOfChannels; i++) {
+				let channelPower = 0;
+				const channelData = buffer.getChannelData(i);
+
+				for (let j = 0; j < length; j++) {
+					const sample = channelData[j];
+					channelPower += sample * sample;
+				}
+
+				power += channelPower;
+			}
+
+			power = Math.sqrt(power / (numberOfChannels * length));
+
+			// Protect against accidental overload.
+			if (!isFinite(power) || isNaN(power) || power < MinPower)
+				power = MinPower;
+
+			let scale = 1 / power;
+
+			// Calibrate to make perceived volume same as unprocessed.
+			scale *= GainCalibration;
+
+			// Scale depends on sample-rate.
+			if (buffer.sampleRate)
+				scale *= GainCalibrationSampleRate / buffer.sampleRate;
+
+			// True-stereo compensation.
+			if (numberOfChannels == 4)
+				scale *= 0.5;
+
+			return scale;
+		}
+		
+
+		During processing, the ConvolverNode will then take this
+		calculated normalizationScale value and multiply it by
+		the result of the linear convolution resulting from processing
+		the input with the impulse response (represented by the
+		{{ConvolverNode/buffer}}) to produce the final output. Or any
+		mathematically equivalent operation may be used, such as
+		pre-multiplying the input by normalizationScale, or
+		pre-multiplying a version of the impulse-response by
+		normalizationScale.
-

+
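The Note above recommends replacing a {{ConvolverNode}} rather than reassigning its buffer in place. A non-normative sketch of that cross-fade, assuming `input` and `output` are existing {{AudioNode}}s on `context`:

```javascript
// Non-normative sketch of the recommended cross-fade: rather than
// reassigning .buffer on a live ConvolverNode (which may glitch), build a
// new ConvolverNode and fade it in while fading the old one out. The
// `input`, `output` and `oldFadeGain` arguments are assumed AudioNodes.
function crossfadeToImpulseResponse(context, input, output, oldFadeGain, newBuffer, fadeTime = 0.05) {
  const now = context.currentTime;
  const convolver = new ConvolverNode(context, { buffer: newBuffer });
  const fadeIn = new GainNode(context, { gain: 0 });
  input.connect(convolver).connect(fadeIn).connect(output);
  fadeIn.gain.linearRampToValueAtTime(1, now + fadeTime);
  if (oldFadeGain) {
    // Fade the previous convolver chain out over the same interval.
    oldFadeGain.gain.setValueAtTime(oldFadeGain.gain.value, now);
    oldFadeGain.gain.linearRampToValueAtTime(0, now + fadeTime);
  }
  return fadeIn; // pass back in as oldFadeGain on the next swap
}
```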

{{ConvolverOptions}}

This specifies options for constructing a
@@ -7507,8 +6848,8 @@ specified, the node is constructed using the normal defaults.
 dictionary ConvolverOptions : AudioNodeOptions {
-    AudioBuffer? buffer;
-    boolean disableNormalization = false;
+	AudioBuffer? buffer;
+	boolean disableNormalization = false;
 };
 
@@ -7516,17 +6857,17 @@ dictionary ConvolverOptions : AudioNodeOptions {
Dictionary {{ConvolverOptions}} Members
-    : buffer
-    ::
-        The desired buffer for the {{ConvolverNode}}.
-        This buffer will be normalized according to the value of
-        {{ConvolverOptions/disableNormalization}}.
-
-    : disableNormalization
-    ::
-        The opposite of the desired initial value for the
-        {{ConvolverNode/normalize}}
-        attribute of the {{ConvolverNode}}.
+	: buffer
+	::
+		The desired buffer for the {{ConvolverNode}}.
+		This buffer will be normalized according to the value of
+		{{ConvolverOptions/disableNormalization}}.
+
+	: disableNormalization
+	::
+		The opposite of the desired initial value for the
+		{{ConvolverNode/normalize}}
+		attribute of the {{ConvolverNode}}.
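A non-normative sketch of the inverse relationship between the dictionary member and the attribute; `context` and `impulseResponse` are assumed to exist:

```javascript
// Non-normative sketch: disableNormalization in ConvolverOptions is the
// logical inverse of the node's normalize attribute, so a node built this
// way uses the impulse response exactly as given, with no equal-power
// scaling. `context` is a BaseAudioContext, `impulseResponse` an AudioBuffer.
function makeUnnormalizedConvolver(context, impulseResponse) {
  return new ConvolverNode(context, {
    buffer: impulseResponse,
    disableNormalization: true, // the resulting node.normalize === false
  });
}
```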

@@ -7551,12 +6892,12 @@ channel of silence.
Note: The diagrams below show the outputs when [=actively processing=].
- reverb matrixing -
- A graphical representation of supported input and output channel - count possibilities when using a - {{ConvolverNode}}. -
+ reverb matrixing +
+ A graphical representation of supported input and output channel + count possibilities when using a + {{ConvolverNode}}. +
@@ -7580,13 +6921,13 @@ input and single output.
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: Yes
-    tail-time-notes: Continues to output non-silent audio with zero input up to the {{DelayOptions/maxDelayTime}} of the node.
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-mode: max
+	cc-interp: speakers
+	tail-time: Yes
+	tail-time-notes: Continues to output non-silent audio with zero input up to the {{DelayOptions/maxDelayTime}} of the node.
 
The number of channels of the output always equals the number of
@@ -7614,56 +6955,56 @@ latency equal to the amount of the delay.
 [Exposed=Window]
 interface DelayNode : AudioNode {
-    constructor (BaseAudioContext context, optional DelayOptions options = {});
-    readonly attribute AudioParam delayTime;
+	constructor (BaseAudioContext context, optional DelayOptions options = {});
+	readonly attribute AudioParam delayTime;
 };
 

Constructors

-
- : DelayNode(context, options) - :: +
+ : DelayNode(context, options) + :: -
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{DelayNode}} will be associated with.
-            options: Optional initial parameter value for this {{DelayNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{DelayNode}} will be associated with.
+			options: Optional initial parameter value for this {{DelayNode}}.
+		

Attributes

- : delayTime - :: - An {{AudioParam}} object representing the - amount of delay (in seconds) to apply. Its default - value is 0 (no delay). The minimum value is 0 and - the maximum value is determined by the - {{maxDelayTime!!argument}} argument to the - {{AudioContext}} method {{createDelay()}} or the {{DelayOptions/maxDelayTime}} member of the {{DelayOptions}} dictionary for the {{DelayNode/DelayNode()|constructor}}. - - If {{DelayNode}} is part of a cycle, - then the value of the {{DelayNode/delayTime}} attribute - is clamped to a minimum of one render quantum. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: 0
-            max: {{DelayOptions/maxDelayTime}}
-            rate: "{{AutomationRate/a-rate}}"
-        
+	: delayTime
+	::
+		An {{AudioParam}} object representing the
+		amount of delay (in seconds) to apply. Its default
+		value is 0 (no delay). The minimum value is 0 and
+		the maximum value is determined by the
+		{{maxDelayTime!!argument}} argument to the
+		{{AudioContext}} method {{createDelay()}} or the {{DelayOptions/maxDelayTime}} member of the {{DelayOptions}} dictionary for the {{DelayNode/DelayNode()|constructor}}.
+
+		If {{DelayNode}} is part of a cycle,
+		then the value of the {{DelayNode/delayTime}} attribute
+		is clamped to a minimum of one render quantum.
+
+		path: audioparam.include
+		macros:
+			default: 0
+			min: 0
+			max: {{DelayOptions/maxDelayTime}}
+			rate: "{{AutomationRate/a-rate}}"
+		
-

+
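A non-normative sketch of the cycle case described above: a feedback echo in which the {{DelayNode}} sits inside a cycle, so its delay time is effectively clamped to at least one render quantum. `input` and `output` are assumed {{AudioNode}}s:

```javascript
// Non-normative sketch of a feedback echo built around DelayNode. Because
// the delay sits inside a cycle, the processing model clamps delayTime to a
// minimum of one render quantum. `input` and `output` are assumed AudioNodes
// on the same BaseAudioContext as `context`.
function makeFeedbackEcho(context, input, output, { delayTime = 0.25, feedback = 0.4 } = {}) {
  const delay = new DelayNode(context, { maxDelayTime: 1, delayTime });
  const feedbackGain = new GainNode(context, { gain: feedback });
  input.connect(delay).connect(output);
  delay.connect(feedbackGain).connect(delay); // the cycle
  return delay;
}
```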

{{DelayOptions}}

This specifies options for constructing a
@@ -7672,8 +7013,8 @@ given, the node is constructed using the normal defaults.
 dictionary DelayOptions : AudioNodeOptions {
-    double maxDelayTime = 1;
-    double delayTime = 0;
+	double maxDelayTime = 1;
+	double delayTime = 0;
 };
 
@@ -7681,11 +7022,11 @@ dictionary DelayOptions : AudioNodeOptions {
Dictionary {{DelayOptions}} Members
-    : delayTime
-    :: The initial delay time for the node.
+	: delayTime
+	:: The initial delay time for the node.

-    : maxDelayTime
-    :: The maximum delay time for the node. See {{BaseAudioContext/createDelay(maxDelayTime)/maxDelayTime!!argument|createDelay(maxDelayTime)}} for constraints.
+	: maxDelayTime
+	:: The maximum delay time for the node. See {{BaseAudioContext/createDelay(maxDelayTime)/maxDelayTime!!argument|createDelay(maxDelayTime)}} for constraints.

Processing

@@ -7739,7 +7080,7 @@ has passed. ██████ ███████ ██ ██ ██ ██ ██ ████████ ██████ ██████ ███████ ██ ██ --> -

+

The {{DynamicsCompressorNode}} Interface

{{DynamicsCompressorNode}} is an
@@ -7758,137 +7099,137 @@ speakers.
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-notes: Has channelCount constraints
-    cc-mode: clamped-max
-    cc-mode-notes: Has channelCountMode constraints
-    cc-interp: speakers
-    tail-time: Yes
-    tail-time-notes:  This node has a tail-time such that this node continues to output non-silent audio with zero input due to the look-ahead delay.
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-notes: Has channelCount constraints
+	cc-mode: clamped-max
+	cc-mode-notes: Has channelCountMode constraints
+	cc-interp: speakers
+	tail-time: Yes
+	tail-time-notes:  This node has a tail-time such that this node continues to output non-silent audio with zero input due to the look-ahead delay.
 
 [Exposed=Window]
 interface DynamicsCompressorNode : AudioNode {
-    constructor (BaseAudioContext context,
-                 optional DynamicsCompressorOptions options = {});
-    readonly attribute AudioParam threshold;
-    readonly attribute AudioParam knee;
-    readonly attribute AudioParam ratio;
-    readonly attribute float reduction;
-    readonly attribute AudioParam attack;
-    readonly attribute AudioParam release;
+	constructor (BaseAudioContext context,
+	             optional DynamicsCompressorOptions options = {});
+	readonly attribute AudioParam threshold;
+	readonly attribute AudioParam knee;
+	readonly attribute AudioParam ratio;
+	readonly attribute float reduction;
+	readonly attribute AudioParam attack;
+	readonly attribute AudioParam release;
 };
 

Constructors

-
- : DynamicsCompressorNode(context, options) - :: +
+ : DynamicsCompressorNode(context, options) + :: -
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
- Let [[internal reduction]] - be a private slot on this, that holds a floating point number, in - decibels. Set {{[[internal reduction]]}} to 0.0. + Let [[internal reduction]] + be a private slot on this, that holds a floating point number, in + decibels. Set {{[[internal reduction]]}} to 0.0. -
-            context: The {{BaseAudioContext}} this new {{DynamicsCompressorNode}} will be associated with.
-            options: Optional initial parameter value for this {{DynamicsCompressorNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{DynamicsCompressorNode}} will be associated with.
+			options: Optional initial parameter value for this {{DynamicsCompressorNode}}.
+		

Attributes

- : attack - :: - The amount of time (in seconds) to reduce the gain by 10dB. - -
-        path: audioparam.include
-        macros:
-            default: .003
-            min: 0
-            max: 1
-            rate: "{{AutomationRate/k-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : knee - :: - A decibel value representing the range above the threshold - where the curve smoothly transitions to the "ratio" portion. - -
-        path: audioparam.include
-        macros:
-            default: 30
-            min: 0
-            max: 40
-            rate: "{{AutomationRate/k-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : ratio - :: - The amount of dB change in input for a 1 dB change in output. - -
-        path: audioparam.include
-        macros:
-            default: 12
-            min: 1
-            max: 20
-            rate: "{{AutomationRate/k-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : reduction - :: - A read-only decibel value for metering purposes, representing the - current amount of gain reduction that the compressor is applying - to the signal. If fed no signal the value will be 0 (no gain - reduction). When this attribute is read, return the value of the - private slot {{[[internal reduction]]}}. - - : release - :: - The amount of time (in seconds) to increase the gain by 10dB. - -
-        path: audioparam.include
-        macros:
-            default: .25
-            min: 0
-            max: 1
-            rate: "{{AutomationRate/k-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : threshold - :: - The decibel value above which the compression will start taking effect. - -
-        path: audioparam.include
-        macros:
-            default: -24
-            min: -100
-            max: 0
-            rate: "{{AutomationRate/k-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
+ : attack + :: + The amount of time (in seconds) to reduce the gain by 10dB. + +
+		path: audioparam.include
+		macros:
+			default: .003
+			min: 0
+			max: 1
+			rate: "{{AutomationRate/k-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : knee + :: + A decibel value representing the range above the threshold + where the curve smoothly transitions to the "ratio" portion. + +
+		path: audioparam.include
+		macros:
+			default: 30
+			min: 0
+			max: 40
+			rate: "{{AutomationRate/k-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : ratio + :: + The amount of dB change in input for a 1 dB change in output. + +
+		path: audioparam.include
+		macros:
+			default: 12
+			min: 1
+			max: 20
+			rate: "{{AutomationRate/k-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : reduction + :: + A read-only decibel value for metering purposes, representing the + current amount of gain reduction that the compressor is applying + to the signal. If fed no signal the value will be 0 (no gain + reduction). When this attribute is read, return the value of the + private slot {{[[internal reduction]]}}. + + : release + :: + The amount of time (in seconds) to increase the gain by 10dB. + +
+		path: audioparam.include
+		macros:
+			default: .25
+			min: 0
+			max: 1
+			rate: "{{AutomationRate/k-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : threshold + :: + The decibel value above which the compression will start taking effect. + +
+		path: audioparam.include
+		macros:
+			default: -24
+			min: -100
+			max: 0
+			rate: "{{AutomationRate/k-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
-

+

{{DynamicsCompressorOptions}}

This specifies the options to use in constructing a
@@ -7898,11 +7239,11 @@ constructing the node.
 dictionary DynamicsCompressorOptions : AudioNodeOptions {
-    float attack = 0.003;
-    float knee = 30;
-    float ratio = 12;
-    float release = 0.25;
-    float threshold = -24;
+	float attack = 0.003;
+	float knee = 30;
+	float ratio = 12;
+	float release = 0.25;
+	float threshold = -24;
 };
 
@@ -7910,20 +7251,20 @@ dictionary DynamicsCompressorOptions : AudioNodeOptions {
Dictionary {{DynamicsCompressorOptions}} Members
-    : attack
-    :: The initial value for the {{DynamicsCompressorNode/attack}} AudioParam.
+	: attack
+	:: The initial value for the {{DynamicsCompressorNode/attack}} AudioParam.

-    : knee
-    :: The initial value for the {{DynamicsCompressorNode/knee}} AudioParam.
+	: knee
+	:: The initial value for the {{DynamicsCompressorNode/knee}} AudioParam.

-    : ratio
-    :: The initial value for the {{DynamicsCompressorNode/ratio}} AudioParam.
+	: ratio
+	:: The initial value for the {{DynamicsCompressorNode/ratio}} AudioParam.

-    : release
-    :: The initial value for the {{DynamicsCompressorNode/release}} AudioParam.
+	: release
+	:: The initial value for the {{DynamicsCompressorNode/release}} AudioParam.

-    : threshold
-    :: The initial value for the {{DynamicsCompressorNode/threshold}} AudioParam.
+	: threshold
+	:: The initial value for the {{DynamicsCompressorNode/threshold}} AudioParam.
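A non-normative sketch of seeding all five members at construction time; the values chosen here are arbitrary illustrations, and `context` is an assumed `BaseAudioContext`:

```javascript
// Non-normative sketch: each DynamicsCompressorOptions member seeds the
// corresponding k-rate AudioParam of the node. The numbers below are
// arbitrary illustrative values, not recommendations.
function makeHeavyCompressor(context) {
  return new DynamicsCompressorNode(context, {
    threshold: -50, // dB level above which compression starts
    knee: 40,       // dB range over which the curve transitions
    ratio: 12,      // input-dB change per 1 dB of output change
    attack: 0.003,  // seconds to reduce the gain by 10 dB
    release: 0.25,  // seconds to increase the gain by 10 dB
  });
}
```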

@@ -7934,39 +7275,39 @@ Dynamics compression can be implemented in a variety of ways. The
has the following characteristics:

* Fixed look-ahead (this means that an
-    {{DynamicsCompressorNode}} adds a fixed latency to the signal
-    chain).
+	{{DynamicsCompressorNode}} adds a fixed latency to the signal
+	chain).

* Configurable attack speed, release speed, threshold, knee
-    hardness and ratio.
+	hardness and ratio.

* Side-chaining is not supported.

* The gain reduction is reported via the
-    reduction property on the
-    {{DynamicsCompressorNode}}.
+	reduction property on the
+	{{DynamicsCompressorNode}}.

* The compression curve has three parts:

-    * The first part is the identity: \(f(x) = x\).
+	* The first part is the identity: \(f(x) = x\).

-    * The second part is the soft-knee portion, which MUST be a
-    monotonically increasing function.
+	* The second part is the soft-knee portion, which MUST be a
+	monotonically increasing function.

-    * The third part is a linear function: \(f(x) =
-    \frac{1}{ratio} \cdot x \).
+	* The third part is a linear function: \(f(x) =
+	\frac{1}{ratio} \cdot x \).

-    This curve MUST be continuous and piece-wise differentiable,
-    and corresponds to a target output level, based on the input
-    level.
+	This curve MUST be continuous and piece-wise differentiable,
+	and corresponds to a target output level, based on the input
+	level.

Graphically, such a curve would look something like this:
- Graphical representation of a compression curve -
- A typical compression curve, showing the knee portion (soft or - hard) as well as the threshold. -
+ Graphical representation of a compression curve +
+ A typical compression curve, showing the knee portion (soft or + hard) as well as the threshold. +
Internally, the {{DynamicsCompressorNode}} is described with a
@@ -7982,21 +7323,21 @@ special object that behaves like an {{AudioNode}}, described below:
-    const delay = new DelayNode(context, {delayTime: 0.006});
-    const gain = new GainNode(context);
-    const compression = new EnvelopeFollower();
+	const delay = new DelayNode(context, {delayTime: 0.006});
+	const gain = new GainNode(context);
+	const compression = new EnvelopeFollower();
 
-    input.connect(delay).connect(gain).connect(output);
-    input.connect(compression).connect(gain.gain);
+	input.connect(delay).connect(gain).connect(output);
+	input.connect(compression).connect(gain.gain);
 
- Schema of
-    the internal graph used by the DynamicCompressorNode -
- The graph of internal {{AudioNode}}s used as part of the - {{DynamicsCompressorNode}} processing algorithm. -
+ Schema of
+	the internal graph used by the DynamicCompressorNode +
+ The graph of internal {{AudioNode}}s used as part of the + {{DynamicsCompressorNode}} processing algorithm. +
Note: This implements the pre-delay and the application of the
reduction gain.
@@ -8008,81 +7349,81 @@ signal to produce the gain reduction value. An
values. Those values persist across invocations of this algorithm.
* Let [[detector average]] be a floating point
-    number, initialized to 0.0.
+	number, initialized to 0.0.
* Let [[compressor gain]] be a floating point
-    number, initialized to 1.0.
+	number, initialized to 1.0.
-    The following algorithm allow determining a value for
-    reduction gain, for each sample of input, for a render
-    quantum of audio.
+	The following algorithm allows determining a value for
+	reduction gain, for each sample of input, for a render
+	quantum of audio.

-    1. Let attack and release have the values of
-        {{DynamicsCompressorNode/attack}} and {{DynamicsCompressorNode/release}}, respectively, sampled at the time of
-        processing (those are k-rate parameters), mutiplied by the
-        sample-rate of the {{BaseAudioContext}} this
-        {{DynamicsCompressorNode}} is associated with.
+	1. Let attack and release have the values of
+		{{DynamicsCompressorNode/attack}} and {{DynamicsCompressorNode/release}}, respectively, sampled at the time of
+		processing (those are k-rate parameters), multiplied by the
+		sample-rate of the {{BaseAudioContext}} this
+		{{DynamicsCompressorNode}} is associated with.

-    1. Let detector average be the value of the slot {{[[detector average]]}}.
+	1. Let detector average be the value of the slot {{[[detector average]]}}.

-    1. Let compressor gain be the value of the slot {{[[compressor gain]]}}.
+	1. Let compressor gain be the value of the slot {{[[compressor gain]]}}.

-    1. For each sample input of the render quantum to be
-        processed, execute the following steps:
+	1. For each sample input of the render quantum to be
+		processed, execute the following steps:

-        1. If the absolute value of input is less than
-            0.0001, let attenuation be 1.0. Else, let
-            shaped input be the value of applying the compression curve to the absolute
-            value of input. Let attenuation be
-            shaped input divided by the absolute value of input.
+		1. If the absolute value of input is less than
+			0.0001, let attenuation be 1.0. Else, let
+			shaped input be the value of applying the compression curve to the absolute
+			value of input. Let attenuation be
+			shaped input divided by the absolute value of input.

-        2. Let releasing be `true` if
-            attenuation is greater than compressor
-            gain, false otherwise.
+		2. Let releasing be `true` if
+			attenuation is greater than compressor
+			gain, false otherwise.

-        3. Let detector rate be the result of applying the
-            detector curve to
-            attenuation.
+		3. Let detector rate be the result of applying the
+			detector curve to
+			attenuation.

-        4. Subtract detector average from
-            attenuation, and multiply the result by
-            detector rate. Add this new result to detector average.
+		4. Subtract detector average from
+			attenuation, and multiply the result by
+			detector rate. Add this new result to detector average.

-        5. Clamp detector average to a maximum of 1.0.
+		5. Clamp detector average to a maximum of 1.0.

-        6. Let envelope rate be the result of computing the envelope rate based on values of attack and release.
+		6. Let envelope rate be the result of computing the envelope rate based on values of attack and release.

-        7. If releasing is `true`, set
-            compressor gain to be the product of
-            compressor gain and envelope rate, clamped
-            to a maximum of 1.0.
+		7. If releasing is `true`, set
+			compressor gain to be the product of
+			compressor gain and envelope rate, clamped
+			to a maximum of 1.0.

-        8. Else, if releasing is false, let
-            gain increment to be detector average
-            minus compressor gain. Multiply gain
-            increment by envelope rate, and add the result
-            to compressor gain.
+		8. Else, if releasing is false, let
+			gain increment be detector average
+			minus compressor gain. Multiply gain
+			increment by envelope rate, and add the result
+			to compressor gain.

-        9. Compute reduction gain to be compressor
-            gain multiplied by the return value of computing the
-            makeup gain.
+		9. Compute reduction gain to be compressor
+			gain multiplied by the return value of computing the
+			makeup gain.

-        10. Compute metering gain to be reduction gain, converted to
-            decibel.
+		10. Compute metering gain to be reduction gain, converted to
+			decibel.

-    1. Set {{[[compressor gain]]}} to compressor
-        gain.
+	1. Set {{[[compressor gain]]}} to compressor
+		gain.

-    1. Set {{[[detector average]]}} to detector
-        average.
+	1. Set {{[[detector average]]}} to detector
+		average.

-    1. Atomically set the internal slot {{[[internal reduction]]}}
-        to the value of metering gain.
+	1. Atomically set the internal slot {{[[internal reduction]]}}
+		to the value of metering gain.

-        Note: This step makes the metering gain update once per block, at the
-        end of the block processing.
+		Note: This step makes the metering gain update once per block, at the
+		end of the block processing.
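The per-render-quantum loop above can be sketched as plain JavaScript over a `Float32Array` block. This is non-normative: the compression curve, detector curve, and envelope-rate function below are placeholders, since the spec constrains their shapes but leaves the exact functions to the implementation.

```javascript
// Non-normative sketch of the per-render-quantum reduction-gain loop.
// compressionCurve, detectorCurve and envelopeRate are placeholder
// stand-ins for the implementation-defined functions.
function processReductionGain(block, state, {
  compressionCurve = (x) => Math.min(x, 1),                 // placeholder curve
  detectorCurve = (attenuation) => 0.003,                   // placeholder smoothing
  envelopeRate = (releasing) => (releasing ? 1.001 : 0.25), // placeholder
  makeupGain = 1,
} = {}) {
  let { detectorAverage, compressorGain } = state;
  let meteringGain = 0;
  for (const input of block) {
    // Per-sample steps 1-10 of the algorithm.
    const absInput = Math.abs(input);
    const attenuation =
      absInput < 0.0001 ? 1.0 : compressionCurve(absInput) / absInput;
    const releasing = attenuation > compressorGain;
    const detectorRate = detectorCurve(attenuation);
    detectorAverage += (attenuation - detectorAverage) * detectorRate;
    detectorAverage = Math.min(detectorAverage, 1.0);
    const rate = envelopeRate(releasing);
    if (releasing) {
      compressorGain = Math.min(compressorGain * rate, 1.0);
    } else {
      compressorGain += (detectorAverage - compressorGain) * rate;
    }
    const reductionGain = compressorGain * makeupGain;
    meteringGain = 20 * Math.log10(reductionGain); // converted to decibels
  }
  // Persist the slots; the metering value updates once per block.
  state.detectorAverage = detectorAverage;
  state.compressorGain = compressorGain;
  state.internalReduction = meteringGain;
  return state;
}
```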
The makeup gain is a fixed gain stage that only depends on ratio,

@@ -8091,40 +7432,40 @@ input signal. The intent here is to increase the output level of the compressor so it is comparable to the input level.
-    Computing the makeup gain means executing the following steps:
+	Computing the makeup gain means executing the following steps:
-    1. Let full range gain be the value returned by applying the compression curve to the value 1.0.
+	1. Let full range gain be the value returned by applying the compression curve to the value 1.0.
-    2. Let full range makeup gain be the inverse of full range gain.
+	2. Let full range makeup gain be the inverse of full range gain.
-    3. Return the result of taking the 0.6 power of full range makeup gain.
+	3. Return the result of taking the 0.6 power of full range makeup gain.
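The three makeup-gain steps transcribe directly. The sketch below is non-normative; the hard-knee compression curve used in the test is a hypothetical example, since the curve shape between threshold and knee end is implementation-chosen.

```javascript
// Non-normative: makeup gain from a given compression curve.
function makeupGain(compressionCurve) {
  const fullRangeGain = compressionCurve(1.0);    // step 1
  const fullRangeMakeupGain = 1 / fullRangeGain;  // step 2
  return Math.pow(fullRangeMakeupGain, 0.6);      // step 3
}
```

With the identity curve (no compression) the makeup gain is exactly 1; any curve that attenuates full-scale input yields a makeup gain greater than 1.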
-    Computing the envelope rate is done by applying a function to the ratio of the compressor gain and the detector average. User-agents are allowed to choose the shape of the envelope function. However, this function MUST respect the following constraints:
+	Computing the envelope rate is done by applying a function to the ratio of the compressor gain and the detector average. User-agents are allowed to choose the shape of the envelope function. However, this function MUST respect the following constraints:
-    * The envelope rate MUST be calculated from the ratio of the compressor gain and the detector average.
+	* The envelope rate MUST be calculated from the ratio of the compressor gain and the detector average.
-        Note: When attacking, this number is less than or equal to 1; when releasing, this number is strictly greater than 1.
+		Note: When attacking, this number is less than or equal to 1; when releasing, this number is strictly greater than 1.
-    * The attack curve MUST be a continuous, monotonically increasing function in the range \([0, 1]\). The shape of this curve MAY be controlled by {{DynamicsCompressorNode/attack}}.
+	* The attack curve MUST be a continuous, monotonically increasing function in the range \([0, 1]\). The shape of this curve MAY be controlled by {{DynamicsCompressorNode/attack}}.
-    * The release curve MUST be a continuous, monotonically decreasing function that is always greater than 1. The shape of this curve MAY be controlled by {{DynamicsCompressorNode/release}}.
+	* The release curve MUST be a continuous, monotonically decreasing function that is always greater than 1. The shape of this curve MAY be controlled by {{DynamicsCompressorNode/release}}.
-    This operation returns the value computed by applying this function to the ratio of compressor gain and detector average.
+	This operation returns the value computed by applying this function to the ratio of compressor gain and detector average.
Applying the detector curve to the

@@ -8142,59 +7483,50 @@ compression, or to have curves for attack and release that are not of the same shape.
-    Applying a compression curve to a value means computing the value of this sample when passed to a function, and returning the computed value. This function MUST respect the following characteristics:
-    1. Let threshold and knee have the values of {{DynamicsCompressorNode/threshold}} and {{DynamicsCompressorNode/knee}}, respectively, [=decibels to linear gain unit|converted to linear units=] and sampled at the time of processing of this block (as [=k-rate=] parameters).
-    1. Calculate the sum of {{DynamicsCompressorNode/threshold}} plus {{DynamicsCompressorNode/knee}} also sampled at the time of processing of this block (as [=k-rate=] parameters).
-    1. Let knee end threshold have the value of this sum [=decibels to linear gain unit|converted to linear units=].
-    1. Let ratio have the value of the {{DynamicsCompressorNode/ratio}}, sampled at the time of processing of this block (as a [=k-rate=] parameter).
-    1. This function is the identity up to the value of the linear threshold (i.e., \(f(x) = x\)).
-    1. From the threshold up to the knee end threshold, User-Agents can choose the curve shape. The whole function MUST be monotonically increasing and continuous.
-        Note: If the knee is 0, the {{DynamicsCompressorNode}} is called a hard-knee compressor.
-    1. This function is linear, based on the ratio, after the threshold and the soft knee (i.e., \(f(x) = \frac{1}{ratio} \cdot x \)).
+	Applying a compression curve to a value means computing the value of this sample when passed to a function, and returning the computed value. This function MUST respect the following characteristics:
+	1. Let threshold and knee have the values of {{DynamicsCompressorNode/threshold}} and {{DynamicsCompressorNode/knee}}, respectively, [=decibels to linear gain unit|converted to linear units=] and sampled at the time of processing of this block (as [=k-rate=] parameters).
+	1. Let ratio have the value of the {{DynamicsCompressorNode/ratio}}, sampled at the time of processing of this block (as a [=k-rate=] parameter).
+	1. This function is the identity up to the value of the linear threshold (i.e., \(f(x) = x\)).
+	1. From the threshold up to the threshold + knee, User-Agents can choose the curve shape. The whole function MUST be monotonically increasing and continuous.
+		Note: If the knee is 0, the {{DynamicsCompressorNode}} is called a hard-knee compressor.
+	1. This function is linear, based on the ratio, after the threshold and the soft knee (i.e., \(f(x) = \frac{1}{ratio} \cdot x \)).
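A non-normative sketch of one conforming compression curve is the hard-knee case (knee = 0). It is written with an offset above the threshold so that the curve stays continuous and monotonically increasing; `threshold` is assumed to already be in linear gain units here, whereas a real implementation converts from decibels first.

```javascript
// Non-normative hard-knee compression curve (knee = 0):
// identity below the linear threshold, slope 1/ratio above it.
function hardKneeCurve(x, threshold, ratio) {
  if (x <= threshold) return x;               // identity region: f(x) = x
  return threshold + (x - threshold) / ratio; // linear region with slope 1/ratio
}
```

A soft-knee compressor would replace the corner at `threshold` with any continuous, monotonically increasing transition of the implementation's choosing over [threshold, threshold + knee].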
-    Converting a value \(v\) in linear gain unit to decibel means executing the following steps:
+	Converting a value \(v\) in linear gain unit to decibel means executing the following steps:
-    1. If \(v\) is equal to zero, return -1000.
+	1. If \(v\) is equal to zero, return -1000.
-    2. Else, return \( 20 \, \log_{10}{v} \).
+	2. Else, return \( 20 \, \log_{10}{v} \).
-    Converting a value \(v\) in decibels to linear gain unit means returning \(10^{v/20}\).
+	Converting a value \(v\) in decibels to linear gain unit means returning \(10^{v/20}\).
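The two conversions transcribe directly; the function names below are illustrative, not part of the API.

```javascript
// Linear gain -> decibels, with the spec's -1000 stand-in for -Infinity at zero.
function linearToDecibel(v) {
  if (v === 0) return -1000;
  return 20 * Math.log10(v);
}

// Decibels -> linear gain: 10^(v/20).
function decibelToLinear(v) {
  return Math.pow(10, v / 20);
}
```

The two functions are inverses of each other away from zero, and a 20 dB step corresponds to a factor of 10 in linear gain.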
@@ -8208,7 +7540,7 @@ of the same shape. ██████ ██ ██ ████ ██ ██ ██ ██ ███████ ████████ ████████ --> -

+

The {{GainNode}} Interface

Changing the gain of an audio signal is a fundamental operation in

@@ -8219,12 +7551,12 @@ single output:
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: No
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-mode: max
+	cc-interp: speakers
+	tail-time: No
 
Each sample of each channel of the input data of the

@@ -8234,49 +7566,49 @@ Each sample of each channel of the input data of the
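As a non-normative sketch, a {{GainNode}}'s render quantum processing is a per-sample multiply of every channel by the (possibly automated) gain value; the helper names below are illustrative only.

```javascript
// Non-normative: multiply each sample of each channel by the gain value
// for that sample index (an a-rate gain can vary per sample).
function applyGain(channels, gainForSample) {
  return channels.map(samples =>
    samples.map((sample, i) => sample * gainForSample(i)));
}
```

With a constant gain of 0.5 every sample is simply halved; with an automated gain, `gainForSample` would return the computed parameter value for each frame.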
 [Exposed=Window]
 interface GainNode : AudioNode {
-    constructor (BaseAudioContext context, optional GainOptions options = {});
-    readonly attribute AudioParam gain;
+	constructor (BaseAudioContext context, optional GainOptions options = {});
+	readonly attribute AudioParam gain;
 };
 

Constructors

-
- : GainNode(context, options)
- ::
+
+ : GainNode(context, options)
+ ::
-
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{GainNode}} will be associated with.
-            options: Optional initial parameter values for this {{GainNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{GainNode}} will be associated with.
+			options: Optional initial parameter values for this {{GainNode}}.
+		

Attributes

- : gain
- ::
-     Represents the amount of gain to apply.
-
-        path: audioparam.include
-        macros:
-            default: 1
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-        
+ : gain
+ ::
+	Represents the amount of gain to apply.
+
+		path: audioparam.include
+		macros:
+			default: 1
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+		
-

+

{{GainOptions}}

This specifies options to use in constructing a

@@ -8285,7 +7617,7 @@ specified, the normal defaults are used in constructing the node.
 dictionary GainOptions : AudioNodeOptions {
-    float gain = 1.0;
+	float gain = 1.0;
 };
 
@@ -8293,8 +7625,8 @@ dictionary GainOptions : AudioNodeOptions { Dictionary {{GainOptions}} Members
- : gain
- :: The initial gain value for the {{GainNode/gain}} AudioParam.
+ : gain
+ :: The initial gain value for the {{GainNode/gain}} AudioParam.
@@ -8308,7 +7640,7 @@ Dictionary {{GainOptions}} Members ████ ████ ██ ██ ██ ████ ████████ ██ ████████ ██ ██ --> -

+

The {{IIRFilterNode}} Interface

{{IIRFilterNode}} is an {{AudioNode}}

@@ -8331,13 +7663,13 @@ Once created, the coefficients of the IIR filter cannot be changed.
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: Yes
-    tail-time-notes: Continues to output non-silent audio with zero input. Since this is an IIR filter, the filter produces non-zero input forever, but in practice, this can be limited after some finite time where the output is sufficiently close to zero. The actual time depends on the filter coefficients.
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-mode: max
+	cc-interp: speakers
+	tail-time: Yes
+	tail-time-notes: Continues to output non-silent audio with zero input. Since this is an IIR filter, the filter produces non-zero input forever, but in practice, this can be limited after some finite time where the output is sufficiently close to zero. The actual time depends on the filter coefficients.
 
The number of channels of the output always equals the number of

@@ -8346,63 +7678,63 @@ channels of the input.
 [Exposed=Window]
 interface IIRFilterNode : AudioNode {
-    constructor (BaseAudioContext context, IIRFilterOptions options);
-    undefined getFrequencyResponse (Float32Array frequencyHz,
-                                    Float32Array magResponse,
-                                    Float32Array phaseResponse);
+	constructor (BaseAudioContext context, IIRFilterOptions options);
+	void getFrequencyResponse (Float32Array frequencyHz,
+	                           Float32Array magResponse,
+	                           Float32Array phaseResponse);
 };
 

Constructors

-
- : IIRFilterNode(context, options)
- ::
+
+ : IIRFilterNode(context, options)
+ ::
-
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{IIRFilterNode}} will be associated with.
-            options: Initial parameter value for this {{IIRFilterNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{IIRFilterNode}} will be associated with.
+			options: Initial parameter value for this {{IIRFilterNode}}.
+		

Methods

- : getFrequencyResponse(frequencyHz, magResponse, phaseResponse)
- ::
-     Given the current filter parameter settings, synchronously calculates the frequency response for the specified frequencies. The three parameters MUST be {{Float32Array}}s of the same length, or an {{InvalidAccessError}} MUST be thrown.
-
-            frequencyHz: This parameter specifies an array of frequencies, in Hz, at which the response values will be calculated.
-            magResponse: This parameter specifies an output array receiving the linear magnitude response values. If a value in the frequencyHz parameter is not within [0, sampleRate/2], where sampleRate is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of the magResponse array MUST be NaN.
-            phaseResponse: This parameter specifies an output array receiving the phase response values in radians. If a value in the frequencyHz parameter is not within [0; sampleRate/2], where sampleRate is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of the phaseResponse array MUST be NaN.
-        
- -
- Return type: {{undefined}}
-
+ : getFrequencyResponse(frequencyHz, magResponse, phaseResponse)
+ ::
+	Given the current filter parameter settings, synchronously calculates the frequency response for the specified frequencies. The three parameters MUST be {{Float32Array}}s of the same length, or an {{InvalidAccessError}} MUST be thrown.
+
+			frequencyHz: This parameter specifies an array of frequencies, in Hz, at which the response values will be calculated.
+			magResponse: This parameter specifies an output array receiving the linear magnitude response values. If a value in the frequencyHz parameter is not within [0, sampleRate/2], where sampleRate is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of the magResponse array MUST be NaN.
+			phaseResponse: This parameter specifies an output array receiving the phase response values in radians. If a value in the frequencyHz parameter is not within [0; sampleRate/2], where sampleRate is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of the phaseResponse array MUST be NaN.
+		
+ +
+ Return type: void +
-

-{{IIRFilterOptions}}

+

+IIRFilterOptions

The IIRFilterOptions dictionary is used to specify the filter coefficients of the {{IIRFilterNode}}.

dictionary IIRFilterOptions : AudioNodeOptions {
-    required sequence<double> feedforward;
-    required sequence<double> feedback;
+	required sequence<double> feedforward;
+	required sequence<double> feedback;
};

@@ -8410,15 +7742,15 @@ dictionary IIRFilterOptions : AudioNodeOptions {
- : feedforward
- ::
-     The feedforward coefficients for the {{IIRFilterNode}}. This member is required. See {{BaseAudioContext/createIIRFilter()/feedforward}} argument of {{BaseAudioContext/createIIRFilter()}} for other constraints.
-
- : feedback
- ::
-     The feedback coefficients for the {{IIRFilterNode}}. This member is required. See {{BaseAudioContext/createIIRFilter()/feedback}} argument of {{BaseAudioContext/createIIRFilter()}} for other constraints.
+ : feedforward
+ ::
+	The feedforward coefficients for the {{IIRFilterNode}}. This member is required. See {{BaseAudioContext/createIIRFilter()/feedforward}} argument of {{BaseAudioContext/createIIRFilter()}} for other constraints.
+
+ : feedback
+ ::
+	The feedback coefficients for the {{IIRFilterNode}}. This member is required. See {{BaseAudioContext/createIIRFilter()/feedback}} argument of {{BaseAudioContext/createIIRFilter()}} for other constraints.

@@ -8431,7 +7763,7 @@ transfer function of the general IIR filter is given by
 $$
-    H(z) = \frac{\sum_{m=0}^{M} b_m z^{-m}}{\sum_{n=0}^{N} a_n z^{-n}}
+	H(z) = \frac{\sum_{m=0}^{M} b_m z^{-m}}{\sum_{n=0}^{N} a_n z^{-n}}
 $$
 
@@ -8443,7 +7775,7 @@ Equivalently, the time-domain equation is:
 $$
-    \sum_{k=0}^{N} a_k y(n-k) = \sum_{k=0}^{M} b_k x(n-k)
+	\sum_{k=0}^{N} a_k y(n-k) = \sum_{k=0}^{M} b_k x(n-k)
 $$
 
@@ -8471,7 +7803,7 @@ Note: The UA may produce a warning to notify the user that NaN values have occur ██ ██ ███████ ████████ ████ ███████ ██████ ███████ ███████ ██ ██ ██████ ████████ --> -

+

The {{MediaElementAudioSourceNode}} Interface

This interface represents an audio source from an <{audio}>

@@ -8480,8 +7812,8 @@ or <{video}> element.
 path: audionode-noinput.include
 macros:
-    noo: 1
-    tail-time: No
+	noo: 1
+	tail-time: No
 
The number of channels of the output corresponds to the number of

@@ -8490,11 +7822,6 @@ channels of the media referenced by the src attribute can change the number of channels output by this node.

-If the sample rate of the {{HTMLMediaElement}} differs from the sample
-rate of the associated {{AudioContext}}, then the output from the
-{{HTMLMediaElement}} must be resampled to match the context's
-{{BaseAudioContext/sampleRate|sample rate}}.
-
 A {{MediaElementAudioSourceNode}} is created given an {{HTMLMediaElement}} using the {{AudioContext}} {{createMediaElementSource()}} method or the {{MediaElementAudioSourceOptions/mediaElement}} member of the {{MediaElementAudioSourceOptions}} dictionary for the {{MediaElementAudioSourceNode/MediaElementAudioSourceNode()|constructor}}.

@@ -8515,16 +7842,16 @@ attribute changes, and other aspects of the not used with a {{MediaElementAudioSourceNode}}.
-    const mediaElement = document.getElementById('mediaElementID');
-    const sourceNode = context.createMediaElementSource(mediaElement);
-    sourceNode.connect(filterNode);
+	const mediaElement = document.getElementById('mediaElementID');
+	const sourceNode = context.createMediaElementSource(mediaElement);
+	sourceNode.connect(filterNode);
 
 [Exposed=Window]
 interface MediaElementAudioSourceNode : AudioNode {
-    constructor (AudioContext context, MediaElementAudioSourceOptions options);
-    [SameObject] readonly attribute HTMLMediaElement mediaElement;
+	constructor (AudioContext context, MediaElementAudioSourceOptions options);
+	[SameObject] readonly attribute HTMLMediaElement mediaElement;
 };
 
@@ -8532,28 +7859,28 @@ interface MediaElementAudioSourceNode : AudioNode { Constructors
- : MediaElementAudioSourceNode(context, options)
- ::
-     1. Initialize the AudioNode this, with context and options as arguments.
-
-            context: The {{AudioContext}} this new {{MediaElementAudioSourceNode}} will be associated with.
-            options: Initial parameter value for this {{MediaElementAudioSourceNode}}.
-        
+ : MediaElementAudioSourceNode(context, options)
+ ::
+	1. Initialize the AudioNode this, with context and options as arguments.
+
+			context: The {{AudioContext}} this new {{MediaElementAudioSourceNode}} will be associated with.
+			options: Initial parameter value for this {{MediaElementAudioSourceNode}}.
+		

Attributes

- : mediaElement
- ::
-     The {{HTMLMediaElement}} used when constructing this {{MediaElementAudioSourceNode}}.
+ : mediaElement
+ ::
+	The {{HTMLMediaElement}} used when constructing this {{MediaElementAudioSourceNode}}.
-

+

{{MediaElementAudioSourceOptions}}

This specifies the options to use in constructing a

@@ -8561,7 +7888,7 @@
 dictionary MediaElementAudioSourceOptions {
-    required HTMLMediaElement mediaElement;
+	required HTMLMediaElement mediaElement;
 };
 
@@ -8569,8 +7896,8 @@ dictionary MediaElementAudioSourceOptions { Dictionary {{MediaElementAudioSourceOptions}} Members
- : mediaElement
- :: The media element that will be re-routed. This MUST be specified.
+ : mediaElement
+ :: The media element that will be re-routed. This MUST be specified.

@@ -8597,7 +7924,7 @@ algorithm [[!FETCH]] labeled the resource as -

+

The {{MediaStreamAudioDestinationNode}} Interface

This interface is an audio destination representing a

@@ -8613,12 +7940,12 @@ remote peer using the RTCPeerConnection (described in
 path: audionode.include
 macros:
-    noi: 1
-    noo: 0
-    cc: 2
-    cc-mode: explicit
-    cc-interp: speakers
-    tail-time: No
+	noi: 1
+	noo: 0
+	cc: 2
+	cc-mode: explicit
+	cc-interp: speakers
+	tail-time: No
 
The number of channels of the input is by default 2 (stereo).

@@ -8626,8 +7953,8 @@
 [Exposed=Window]
 interface MediaStreamAudioDestinationNode : AudioNode {
-    constructor (AudioContext context, optional AudioNodeOptions options = {});
-    readonly attribute MediaStream stream;
+	constructor (AudioContext context, optional AudioNodeOptions options = {});
+	readonly attribute MediaStream stream;
 };
 
@@ -8635,33 +7962,33 @@ interface MediaStreamAudioDestinationNode : AudioNode { Constructors
- : MediaStreamAudioDestinationNode(context, options)
- ::
-     1. Initialize the AudioNode this, with context and options as arguments.
-
-            context: The {{BaseAudioContext}} this new {{MediaStreamAudioDestinationNode}} will be associated with.
-            options: Optional initial parameter value for this {{MediaStreamAudioDestinationNode}}.
-        
+ : MediaStreamAudioDestinationNode(context, options)
+ ::
+	1. Initialize the AudioNode this, with context and options as arguments.
+
+			context: The {{BaseAudioContext}} this new {{MediaStreamAudioDestinationNode}} will be associated with.
+			options: Optional initial parameter value for this {{MediaStreamAudioDestinationNode}}.
+		

Attributes

- : stream
- ::
-     A {{MediaStream}} containing a single {{MediaStreamTrack}} with the same number of channels as the node itself, and whose kind attribute has the value "audio".
+ : stream
+ ::
+	A {{MediaStream}} containing a single {{MediaStreamTrack}} with the same number of channels as the node itself, and whose kind attribute has the value "audio".
-

+

The {{MediaStreamAudioSourceNode}} Interface

This interface represents an audio source from a

@@ -8670,8 +7997,8 @@
 path: audionode-noinput.include
 macros:
-    noo: 1
-    tail-time: No
+	noo: 1
+	tail-time: No
 
The number of channels of the output corresponds to the number of channels of

@@ -8679,16 +8006,11 @@ the {{MediaStreamTrack}}. When the {{MediaStreamTrack}} ends, this {{AudioNode}} outputs one channel of silence.

-If the sample rate of the {{MediaStreamTrack}} differs from the sample
-rate of the associated {{AudioContext}}, then the output of the
-{{MediaStreamTrack}} is resampled to match the context's
-{{BaseAudioContext/sampleRate|sample rate}}.
-
 [Exposed=Window]
 interface MediaStreamAudioSourceNode : AudioNode {
-    constructor (AudioContext context, MediaStreamAudioSourceOptions options);
-    [SameObject] readonly attribute MediaStream mediaStream;
+	constructor (AudioContext context, MediaStreamAudioSourceOptions options);
+	[SameObject] readonly attribute MediaStream mediaStream;
 };
 
@@ -8696,70 +8018,70 @@ interface MediaStreamAudioSourceNode : AudioNode { Constructors
- : MediaStreamAudioSourceNode(context, options)
- ::
-     1. If the {{MediaStreamAudioSourceOptions/mediaStream}} member of {{MediaStreamAudioSourceNode/constructor(context, options)/options!!argument}} does not reference a {{MediaStream}} that has at least one {{MediaStreamTrack}} whose kind attribute has the value "audio", throw an {{InvalidStateError}} and abort these steps. Else, let this stream be inputStream.
-     1. Let tracks be the list of all {{MediaStreamTrack}}s of inputStream that have a kind of "audio".
-     1. Sort the elements in tracks based on their id attribute using an ordering on sequences of [=code unit=] values.
-     1. Initialize the AudioNode this, with context and options as arguments.
-     1. Set an internal slot [[input track]] on this {{MediaStreamAudioSourceNode}} to be the first element of tracks. This is the track used as the input audio for this {{MediaStreamAudioSourceNode}}.
-
-     After construction, any change to the {{MediaStream}} that was passed to the constructor does not affect the underlying output of this {{AudioNode}}.
-
-     The slot {{[[input track]]}} is only used to keep a reference to the {{MediaStreamTrack}}.
-
-     Note: This means that when removing the track chosen by the constructor of the {{MediaStreamAudioSourceNode}} from the {{MediaStream}} passed into this constructor, the {{MediaStreamAudioSourceNode}} will still take its input from the same track.
-
-     Note: The behaviour for picking the track to output is arbitrary for legacy reasons. {{MediaStreamTrackAudioSourceNode}} can be used instead to be explicit about which track to use as input.
-
-            context: The {{AudioContext}} this new {{MediaStreamAudioSourceNode}} will be associated with.
-            options: Initial parameter value for this {{MediaStreamAudioSourceNode}}.
-        
-
+ : MediaStreamAudioSourceNode(context, options)
+ ::
+	1. If the {{MediaStreamAudioSourceOptions/mediaStream}} member of {{MediaStreamAudioSourceNode/MediaStreamAudioSourceNode()/options!!argument}} does not reference a {{MediaStream}} that has at least one {{MediaStreamTrack}} whose kind attribute has the value "audio", throw an {{InvalidStateError}} and abort these steps. Else, let this stream be inputStream.
+	1. Let tracks be the list of all {{MediaStreamTrack}}s of inputStream that have a kind of "audio".
+	1. Sort the elements in tracks based on their id attribute using an ordering on sequences of [=code unit=] values.
+	1. Initialize the AudioNode this, with context and options as arguments.
+	1. Set an internal slot [[input track]] on this {{MediaStreamAudioSourceNode}} to be the first element of tracks. This is the track used as the input audio for this {{MediaStreamAudioSourceNode}}.
+
+	After construction, any change to the {{MediaStream}} that was passed to the constructor does not affect the underlying output of this {{AudioNode}}.
+
+	The slot {{[[input track]]}} is only used to keep a reference to the {{MediaStreamTrack}}.
+
+	Note: This means that when removing the track chosen by the constructor of the {{MediaStreamAudioSourceNode}} from the {{MediaStream}} passed into this constructor, the {{MediaStreamAudioSourceNode}} will still take its input from the same track.
+
+	Note: The behaviour for picking the track to output is arbitrary for legacy reasons. {{MediaStreamTrackAudioSourceNode}} can be used instead to be explicit about which track to use as input.
+
+			context: The {{AudioContext}} this new {{MediaStreamAudioSourceNode}} will be associated with.
+			options: Initial parameter value for this {{MediaStreamAudioSourceNode}}.
+		
+

Attributes

- : mediaStream
- :: The {{MediaStream}} used when constructing this {{MediaStreamAudioSourceNode}}.
+ : mediaStream
+ :: The {{MediaStream}} used when constructing this {{MediaStreamAudioSourceNode}}.
-

+

{{MediaStreamAudioSourceOptions}}

This specifies the options for constructing a {{MediaStreamAudioSourceNode}}.
 dictionary MediaStreamAudioSourceOptions {
-    required MediaStream mediaStream;
+	required MediaStream mediaStream;
 };
 
@@ -8767,8 +8089,8 @@ dictionary MediaStreamAudioSourceOptions { Dictionary {{MediaStreamAudioSourceOptions}} Members
- : mediaStream
- :: The media stream that will act as a source. This MUST be specified.
+ : mediaStream
+ :: The media stream that will act as a source. This MUST be specified.
@@ -8776,7 +8098,7 @@ Dictionary {{MediaStreamAudioSourceOptions}} Members -

+

The {{MediaStreamTrackAudioSourceNode}} Interface

This interface represents an audio source from a {{MediaStreamTrack}}.

@@ -8784,22 +8106,17 @@
 path: audionode-noinput.include
 macros:
-    noo: 1
-    tail-time: No
+	noo: 1
+	tail-time: No
 
The number of channels of the output corresponds to the number of
-channels of the {{MediaStreamTrackAudioSourceOptions/mediaStreamTrack}}.
-
-If the sample rate of the {{MediaStreamTrack}} differs from the sample
-rate of the associated {{AudioContext}}, then the output of the
-{{MediaStreamTrackAudioSourceOptions/mediaStreamTrack}} is resampled
-to match the context's {{BaseAudioContext/sampleRate|sample rate}}.
+channels of the {{MediaStreamTrack}}.
 [Exposed=Window]
 interface MediaStreamTrackAudioSourceNode : AudioNode {
-    constructor (AudioContext context, MediaStreamTrackAudioSourceOptions options);
+	constructor (AudioContext context, MediaStreamTrackAudioSourceOptions options);
 };
 
@@ -8807,21 +8124,21 @@ interface MediaStreamTrackAudioSourceNode : AudioNode { Constructors
- : MediaStreamTrackAudioSourceNode(context, options)
- ::
-     1. If the {{MediaStreamTrackAudioSourceOptions/mediaStreamTrack}}'s kind attribute is not "audio", throw an {{InvalidStateError}} and abort these steps.
-     1. Initialize the AudioNode this, with context and options as arguments.
-
-            context: The {{AudioContext}} this new {{MediaStreamTrackAudioSourceNode}} will be associated with.
-            options: Initial parameter value for this {{MediaStreamTrackAudioSourceNode}}.
-        
+ : MediaStreamTrackAudioSourceNode(context, options)
+ ::
+	1. If the {{MediaStreamTrackAudioSourceOptions/mediaStreamTrack}}'s kind attribute is not "audio", throw an {{InvalidStateError}} and abort these steps.
+	1. Initialize the AudioNode this, with context and options as arguments.
+
+			context: The {{AudioContext}} this new {{MediaStreamTrackAudioSourceNode}} will be associated with.
+			options: Initial parameter value for this {{MediaStreamTrackAudioSourceNode}}.
+		
-

+

{{MediaStreamTrackAudioSourceOptions}}

This specifies the options for constructing a

@@ -8830,7 +8147,7 @@ required.
 dictionary MediaStreamTrackAudioSourceOptions {
-    required MediaStreamTrack mediaStreamTrack;
+	required MediaStreamTrack mediaStreamTrack;
 };
 
@@ -8838,18 +8155,18 @@ dictionary MediaStreamTrackAudioSourceOptions { Dictionary {{MediaStreamTrackAudioSourceOptions}} Members
- : mediaStreamTrack
- ::
-     The media stream track that will act as a source. If this {{MediaStreamTrack}} kind attribute is not "audio", an {{InvalidStateError}} MUST be thrown.
+ : mediaStreamTrack
+ ::
+	The media stream track that will act as a source. If this {{MediaStreamTrack}} kind attribute is not "audio", an {{InvalidStateError}} MUST be thrown.
-

+

The {{OscillatorNode}} Interface

{{OscillatorNode}} represents an audio source

@@ -8891,7 +8208,7 @@ parameters, and form a compound parameter. They are used together to determine a computedOscFrequency value:
-    computedOscFrequency(t) = frequency(t) * pow(2, detune(t) / 1200)
+	computedOscFrequency(t) = frequency(t) * pow(2, detune(t) / 1200)
 
The OscillatorNode's instantaneous phase at each time is the definite

@@ -8899,51 +8216,48 @@ time integral of computedOscFrequency, assuming a phase angle of zero at the node's exact start time. Its nominal range is [-Nyquist frequency, Nyquist frequency].

-The single output of this node consists of one channel (mono).
-
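The compound-parameter formula above transcribes directly; this is a non-normative sketch with an illustrative function name.

```javascript
// computedOscFrequency(t) = frequency(t) * pow(2, detune(t) / 1200):
// detune is in cents, so +1200 cents raises the frequency one octave.
function computedOscFrequency(frequency, detune) {
  return frequency * Math.pow(2, detune / 1200);
}
```

For example, a frequency of 440 Hz with a detune of +1200 cents yields 880 Hz; a detune of +100 cents is one semitone up.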
 path: audionode.include
 macros:
-    noi: 0
-    noo: 1
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: No
+	noi: 0
+	noo: 1
+	cc: 2
+	cc-mode: max
+	cc-interp: speakers
+	tail-time: No
 
 enum OscillatorType {
-    "sine",
-    "square",
-    "sawtooth",
-    "triangle",
-    "custom"
+	"sine",
+	"square",
+	"sawtooth",
+	"triangle",
+	"custom"
 };
 
-    {{OscillatorType}} enumeration description
-    Enum value | Description
-    "sine"     | A sine wave
-    "square"   | A square wave of duty period 0.5
-    "sawtooth" | A sawtooth wave
-    "triangle" | A triangle wave
-    "custom"   | A custom periodic wave
+	Enumeration description
+	"sine"     | A sine wave
+	"square"   | A square wave of duty period 0.5
+	"sawtooth" | A sawtooth wave
+	"triangle" | A triangle wave
+	"custom"   | A custom periodic wave
 [Exposed=Window]
 interface OscillatorNode : AudioScheduledSourceNode {
-    constructor (BaseAudioContext context, optional OscillatorOptions options = {});
-    attribute OscillatorType type;
-    readonly attribute AudioParam frequency;
-    readonly attribute AudioParam detune;
-    undefined setPeriodicWave (PeriodicWave periodicWave);
+	constructor (BaseAudioContext context, optional OscillatorOptions options = {});
+	attribute OscillatorType type;
+	readonly attribute AudioParam frequency;
+	readonly attribute AudioParam detune;
+	void setPeriodicWave (PeriodicWave periodicWave);
 };
 
@@ -8951,90 +8265,90 @@ interface OscillatorNode : AudioScheduledSourceNode { Constructors
- : OscillatorNode(context, options)
- ::
+ : OscillatorNode(context, options)
+ ::
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{OscillatorNode}} will be associated with.
-            options: Optional initial parameter value for this {{OscillatorNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{OscillatorNode}} will be associated with.
+			options: Optional initial parameter value for this {{OscillatorNode}}.
+		

Attributes

- : detune - :: - A detuning value (in cents) which will offset the - {{OscillatorNode/frequency}} by the given amount. Its default - value is 0. This parameter is a-rate. It - forms a compound parameter with {{OscillatorNode/frequency}} - to form the computedOscFrequency. The nominal - range listed below allows this parameter to detune the - {{OscillatorNode/frequency}} over the entire possible - range of frequencies. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: \(\approx -153600\)
-            min-notes:
-            max: \(\approx 153600\)
-            max-notes: This value is approximately \(1200\ \log_2 \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value.
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : frequency - :: - The frequency (in Hertz) of the periodic waveform. Its default - value is 440. This parameter is a-rate. It - forms a compound parameter with {{OscillatorNode/detune}} to - form the computedOscFrequency. Its nominal range - is [-Nyquist frequency, Nyquist frequency]. - -
-        path: audioparam.include
-        macros:
-            default: 440
-            min: -Nyquist frequency
-            max: Nyquist frequency
-            rate: "{{AutomationRate/a-rate}}"
-        
- - : type - :: The shape of the periodic waveform. It may directly be set to any - of the type constant values except for "{{OscillatorType/custom}}". Doing so MUST throw an - {{InvalidStateError}} exception. The - {{OscillatorNode/setPeriodicWave()}} method can be - used to set a custom waveform, which results in this attribute - being set to "{{OscillatorType/custom}}". The default value is "{{OscillatorType/sine}}". When this - attribute is set, the phase of the oscillator MUST be conserved. + : detune + :: + A detuning value (in cents) which will offset the + {{OscillatorNode/frequency}} by the given amount. Its default + value is 0. This parameter is a-rate. It + forms a compound parameter with {{OscillatorNode/frequency}} + to form the computedOscFrequency. The nominal + range listed below allows this parameter to detune the + {{OscillatorNode/frequency}} over the entire possible + range of frequencies. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: \(\approx -153600\)
+			min-notes:
+			max: \(\approx 153600\)
+			max-notes: This value is approximately \(1200\ \log_2 \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value.
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : frequency + :: + The frequency (in Hertz) of the periodic waveform. Its default + value is 440. This parameter is a-rate. It + forms a compound parameter with {{OscillatorNode/detune}} to + form the computedOscFrequency. Its nominal range + is [-Nyquist frequency, Nyquist frequency]. + +
+		path: audioparam.include
+		macros:
+			default: 440
+			min: -Nyquist frequency
+			max: Nyquist frequency
+			rate: "{{AutomationRate/a-rate}}"
+		
+ + : type + :: The shape of the periodic waveform. It may directly be set to any + of the type constant values except for "{{OscillatorType/custom}}". Doing so MUST throw an + {{InvalidStateError}} exception. The + {{OscillatorNode/setPeriodicWave()}} method can be + used to set a custom waveform, which results in this attribute + being set to "{{OscillatorType/custom}}". The default value is "{{OscillatorType/sine}}". When this + attribute is set, the phase of the oscillator MUST be conserved.

Methods

- : setPeriodicWave(periodicWave) - :: - Sets an arbitrary custom periodic waveform given a {{PeriodicWave}}. + : setPeriodicWave(periodicWave) + :: + Sets an arbitrary custom periodic waveform given a {{PeriodicWave}}. -
-            periodicWave: custom waveform to be used by the oscillator
-        
+
+			periodicWave: custom waveform to be used by the oscillator
+		
-
- Return type: {{undefined}} -
+
+ Return type: {{undefined}} +
-

+

{{OscillatorOptions}}

 This specifies the options to be used when constructing an
@@ -9044,10 +8358,10 @@ constructing the oscillator.
 dictionary OscillatorOptions : AudioNodeOptions {
-    OscillatorType type = "sine";
-    float frequency = 440;
-    float detune = 0;
-    PeriodicWave periodicWave;
+	OscillatorType type = "sine";
+	float frequency = 440;
+	float detune = 0;
+	PeriodicWave periodicWave;
 };
 
@@ -9055,27 +8369,27 @@ dictionary OscillatorOptions : AudioNodeOptions {
 Dictionary {{OscillatorOptions}} Members
- : detune - :: The initial detune value for the {{OscillatorNode}}. - - : frequency - :: The initial frequency for the {{OscillatorNode}}. - - : periodicWave - :: - The {{PeriodicWave}} for the - {{OscillatorNode}}. If this is specified, then - any valid value for {{OscillatorOptions/type}} is ignored; it is - treated as if "{{OscillatorType/custom}}" were specified. - - : type - :: - The type of oscillator to be constructed. If this is set to - "custom" without also specifying a {{OscillatorOptions/periodicWave}}, then an - {{InvalidStateError}} - exception MUST be thrown. If {{OscillatorOptions/periodicWave}} is specified, - then any valid value for {{OscillatorOptions/type}} is ignored; it is - treated as if it were set to "custom". + : detune + :: The initial detune value for the {{OscillatorNode}}. + + : frequency + :: The initial frequency for the {{OscillatorNode}}. + + : periodicWave + :: + The {{PeriodicWave}} for the + {{OscillatorNode}}. If this is specified, then + any valid value for {{OscillatorOptions/type}} is ignored; it is + treated as if "{{OscillatorType/custom}}" were specified. + + : type + :: + The type of oscillator to be constructed. If this is set to + "custom" without also specifying a {{OscillatorOptions/periodicWave}}, then an + {{InvalidStateError}} + exception MUST be thrown. If {{OscillatorOptions/periodicWave}} is specified, + then any valid value for {{OscillatorOptions/type}} is ignored; it is + treated as if it were set to "custom".
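The precedence rules above (a {{OscillatorOptions/periodicWave}} forces "custom"; "custom" without one is an error) can be sketched as a small pure function. `resolveOscillatorType` is a hypothetical helper mirroring the constructor's behavior, and it throws a plain `Error` where the real constructor throws an {{InvalidStateError}} DOMException:

```javascript
// Resolve the effective oscillator type from an OscillatorOptions-like object.
function resolveOscillatorType(options = {}) {
  const { type = "sine", periodicWave } = options;
  // Any periodicWave wins: type is ignored and treated as "custom".
  if (periodicWave !== undefined) return "custom";
  // "custom" with no periodicWave is invalid.
  if (type === "custom") {
    throw new Error('InvalidStateError: type "custom" requires a periodicWave');
  }
  return type;
}
```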

@@ -9094,64 +8408,64 @@ basic waveforms.
 : "{{OscillatorType/sine}}" :: - The waveform for sine oscillator is: + The waveform for sine oscillator is: -
-    $$
-        x(t) = \sin t
-    $$
-    
+
+	$$
+		x(t) = \sin t
+	$$
+	
: "{{OscillatorType/square}}" :: - The waveform for the square wave oscillator is: + The waveform for the square wave oscillator is: -
-    $$
-        x(t) = \begin{cases}
-                     1 & \mbox{for } 0≤ t < \pi \\
-                     -1 & \mbox{for } -\pi < t < 0.
-                     \end{cases}
-    $$
-    
+
+	$$
+		x(t) = \begin{cases}
+					 1 & \mbox{for } 0≤ t < \pi \\
+					 -1 & \mbox{for } -\pi < t < 0.
+					 \end{cases}
+	$$
+	
- This is extended to all \(t\) by using the fact that the - waveform is an odd function with period \(2\pi\). + This is extended to all \(t\) by using the fact that the + waveform is an odd function with period \(2\pi\). : "{{OscillatorType/sawtooth}}" :: - The waveform for the sawtooth oscillator is the ramp: + The waveform for the sawtooth oscillator is the ramp: -
-    $$
-        x(t) = \frac{t}{\pi} \mbox{ for } -\pi < t ≤ \pi;
-    $$
-    
+
+	$$
+		x(t) = \frac{t}{\pi} \mbox{ for } -\pi < t ≤ \pi;
+	$$
+	
- This is extended to all \(t\) by using the fact that the - waveform is an odd function with period \(2\pi\). + This is extended to all \(t\) by using the fact that the + waveform is an odd function with period \(2\pi\). : "{{OscillatorType/triangle}}" :: - The waveform for the triangle oscillator is: + The waveform for the triangle oscillator is: -
-    $$
-        x(t) = \begin{cases}
-                         \frac{2}{\pi} t & \mbox{for } 0 ≤ t ≤ \frac{\pi}{2} \\
-                         1-\frac{2}{\pi} \left(t-\frac{\pi}{2}\right) & \mbox{for }
-                         \frac{\pi}{2} < t ≤ \pi.
-                     \end{cases}
-    $$
-    
+
+	$$
+		x(t) = \begin{cases}
+						 \frac{2}{\pi} t & \mbox{for } 0 ≤ t ≤ \frac{\pi}{2} \\
+						 1-\frac{2}{\pi} \left(t-\frac{\pi}{2}\right) & \mbox{for }
+						 \frac{\pi}{2} < t ≤ \pi.
+					 \end{cases}
+	$$
+	
- This is extended to all \(t\) by using the fact that the - waveform is an odd function with period \(2\pi\). + This is extended to all \(t\) by using the fact that the + waveform is an odd function with period \(2\pi\). -

+
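The piecewise definitions above, extended to all \(t\) by odd symmetry and \(2\pi\)-periodicity, can be sketched in plain JavaScript. These are hypothetical reference functions for illustration only; a conforming implementation band-limits the square, sawtooth, and triangle waveforms to avoid aliasing rather than evaluating the ideal shapes directly:

```javascript
// Reduce t into (-π, π] using the 2π periodicity.
function wrapPhase(t) {
  const twoPi = 2 * Math.PI;
  let p = ((t % twoPi) + twoPi) % twoPi; // [0, 2π)
  if (p > Math.PI) p -= twoPi;           // (-π, π]
  return p;
}

// Square: 1 on [0, π), -1 on (-π, 0).
function square(t) {
  const p = wrapPhase(t);
  return p >= 0 && p < Math.PI ? 1 : -1;
}

// Sawtooth: the ramp t/π on (-π, π].
function sawtooth(t) {
  return wrapPhase(t) / Math.PI;
}

// Triangle: defined on [0, π], extended by odd symmetry x(-t) = -x(t).
function triangle(t) {
  const p = wrapPhase(t);
  const a = Math.abs(p);
  const v = a <= Math.PI / 2
    ? (2 / Math.PI) * a
    : 1 - (2 / Math.PI) * (a - Math.PI / 2);
  return p >= 0 ? v : -v;
}
```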

The {{PannerNode}} Interface

 This interface represents a processing node which
@@ -9163,15 +8477,15 @@ to the {{BaseAudioContext}}'s {{AudioListener}}
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-notes: Has channelCount constraints
-    cc-mode: clamped-max
-    cc-mode-notes: Has channelCountMode constraints
-    cc-interp: speakers
-    tail-time: Maybe
-    tail-time-notes:  If the {{PannerNode/panningModel}} is set to "{{PanningModelType/HRTF}}", the node will produce non-silent output for silent input due to the inherent processing for head responses. Otherwise the tail-time is zero.
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-notes: Has channelCount constraints
+	cc-mode: clamped-max
+	cc-mode-notes: Has channelCountMode constraints
+	cc-interp: speakers
+	tail-time: Maybe
+	tail-time-notes:  If the {{PannerNode/panningModel}} is set to "{{PanningModelType/HRTF}}", the node will produce non-silent output for silent input due to the inherent processing for head responses. Otherwise the tail-time is zero.
 
 The input of this node is either mono (1 channel) or stereo (2
@@ -9190,33 +8504,32 @@ space. The default is "{{PanningModelType/equalpower}}".
 enum PanningModelType {
-        "equalpower",
-        "HRTF"
+		"equalpower",
+		"HRTF"
 };
 
- - - - - + +
{{PanningModelType}} enumeration description
Enum valueDescription
"equalpower" - - A simple and efficient spatialization algorithm using equal-power - panning. - - Note: When this panning model is used, all the {{AudioParam}}s - used to compute the output of this node are a-rate. - -
"HRTF" - - A higher quality spatialization algorithm using a convolution - with measured impulse responses from human subjects. This panning - method renders stereo output. - - Note:When this panning model is used, all the {{AudioParam}}s - used to compute the output of this node are k-rate. +
Enumeration description +
"equalpower" + + A simple and efficient spatialization algorithm using equal-power + panning. + + Note: When this panning model is used, all the {{AudioParam}}s + used to compute the output of this node are a-rate. + +
"HRTF" + + A higher quality spatialization algorithm using a convolution + with measured impulse responses from human subjects. This panning + method renders stereo output. + + Note:When this panning model is used, all the {{AudioParam}}s + used to compute the output of this node are k-rate.
@@ -9241,92 +8554,91 @@ value of the {{PannerNode/rolloffFactor}} attribute.
 enum DistanceModelType {
-    "linear",
-    "inverse",
-    "exponential"
+	"linear",
+	"inverse",
+	"exponential"
 };
 
- - - - - - - - + + + + +
{{DistanceModelType}} enumeration description
Enum valueDescription
"linear" - - A linear distance model which calculates distanceGain - according to: - -
-                $$
-                    1 - f\ \frac{\max\left[\min\left(d, d'_{max}\right), d'_{ref}\right] - d'_{ref}}{d'_{max} - d'_{ref}}
-                $$
-                
- - where \(d'_{ref} = \min\left(d_{ref}, d_{max}\right)\) and \(d'_{max} = - \max\left(d_{ref}, d_{max}\right)\). In the case where \(d'_{ref} = - d'_{max}\), the value of the linear model is taken to be - \(1-f\). - - Note that \(d\) is clamped to the interval \(\left[d'_{ref},\, - d'_{max}\right]\). - -
"inverse" - - An inverse distance model which calculates - distanceGain according to: - -
-                $$
-                    \frac{d_{ref}}{d_{ref} + f\ \left[\max\left(d, d_{ref}\right) - d_{ref}\right]}
-                $$
-                
- - That is, \(d\) is clamped to the interval \(\left[d_{ref},\, - \infty\right)\). If \(d_{ref} = 0\), the value of the inverse model - is taken to be 0, independent of the value of \(d\) and \(f\). - -
"exponential" - - An exponential distance model which calculates - distanceGain according to: - -
-                $$
-                    \left[\frac{\max\left(d, d_{ref}\right)}{d_{ref}}\right]^{-f}
-                $$
-                
- - That is, \(d\) is clamped to the interval \(\left[d_{ref},\, - \infty\right)\). If \(d_{ref} = 0\), the value of the exponential - model is taken to be 0, independent of \(d\) and \(f\). +
Enumeration description +
"linear" + + A linear distance model which calculates distanceGain + according to: + +
+				$$
+					1 - f\ \frac{\max\left[\min\left(d, d'_{max}\right), d'_{ref}\right] - d'_{ref}}{d'_{max} - d'_{ref}}
+				$$
+				
+ + where \(d'_{ref} = \min\left(d_{ref}, d_{max}\right)\) and \(d'_{max} = + \max\left(d_{ref}, d_{max}\right)\). In the case where \(d'_{ref} = + d'_{max}\), the value of the linear model is taken to be + \(1-f\). + + Note that \(d\) is clamped to the interval \(\left[d'_{ref},\, + d'_{max}\right]\). + +
"inverse" + + An inverse distance model which calculates + distanceGain according to: + +
+				$$
+					\frac{d_{ref}}{d_{ref} + f\ \left[\max\left(d, d_{ref}\right) - d_{ref}\right]}
+				$$
+				
+ + That is, \(d\) is clamped to the interval \(\left[d_{ref},\, + \infty\right)\). If \(d_{ref} = 0\), the value of the inverse model + is taken to be 0, independent of the value of \(d\) and \(f\). + +
"exponential" + + An exponential distance model which calculates + distanceGain according to: + +
+				$$
+					\left[\frac{\max\left(d, d_{ref}\right)}{d_{ref}}\right]^{-f}
+				$$
+				
+ + That is, \(d\) is clamped to the interval \(\left[d_{ref},\, + \infty\right)\). If \(d_{ref} = 0\), the value of the exponential + model is taken to be 0, independent of \(d\) and \(f\).
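The three distanceGain formulas in the table above can be sketched as one pure function. `distanceGain` is a hypothetical helper for illustration, with `d` the source-listener distance, `dref` the {{PannerNode/refDistance}}, `dmax` the {{PannerNode/maxDistance}} (used only by the linear model), and `f` the {{PannerNode/rolloffFactor}}:

```javascript
function distanceGain(model, d, dref, dmax, f) {
  switch (model) {
    case "linear": {
      const dr = Math.min(dref, dmax);          // d'_ref
      const dm = Math.max(dref, dmax);          // d'_max
      if (dr === dm) return 1 - f;              // degenerate interval
      const dc = Math.max(Math.min(d, dm), dr); // clamp d to [d'_ref, d'_max]
      return 1 - f * (dc - dr) / (dm - dr);
    }
    case "inverse": {
      if (dref === 0) return 0;
      return dref / (dref + f * (Math.max(d, dref) - dref));
    }
    case "exponential": {
      if (dref === 0) return 0;
      return Math.pow(Math.max(d, dref) / dref, -f);
    }
  }
}
```

For instance, with the defaults (dref = 1, f = 1) both the inverse and exponential models give a gain of 0.5 at distance 2.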
 [Exposed=Window]
 interface PannerNode : AudioNode {
-    constructor (BaseAudioContext context, optional PannerOptions options = {});
-    attribute PanningModelType panningModel;
-    readonly attribute AudioParam positionX;
-    readonly attribute AudioParam positionY;
-    readonly attribute AudioParam positionZ;
-    readonly attribute AudioParam orientationX;
-    readonly attribute AudioParam orientationY;
-    readonly attribute AudioParam orientationZ;
-    attribute DistanceModelType distanceModel;
-    attribute double refDistance;
-    attribute double maxDistance;
-    attribute double rolloffFactor;
-    attribute double coneInnerAngle;
-    attribute double coneOuterAngle;
-    attribute double coneOuterGain;
-    undefined setPosition (float x, float y, float z);
-    undefined setOrientation (float x, float y, float z);
+	constructor (BaseAudioContext context, optional PannerOptions options = {});
+	attribute PanningModelType panningModel;
+	readonly attribute AudioParam positionX;
+	readonly attribute AudioParam positionY;
+	readonly attribute AudioParam positionZ;
+	readonly attribute AudioParam orientationX;
+	readonly attribute AudioParam orientationY;
+	readonly attribute AudioParam orientationZ;
+	attribute DistanceModelType distanceModel;
+	attribute double refDistance;
+	attribute double maxDistance;
+	attribute double rolloffFactor;
+	attribute double coneInnerAngle;
+	attribute double coneOuterAngle;
+	attribute double coneOuterGain;
+	undefined setPosition (float x, float y, float z);
+	undefined setOrientation (float x, float y, float z);
 };
 
@@ -9334,278 +8646,274 @@ interface PannerNode : AudioNode {
 Constructors
- : PannerNode(context, options) - :: + : PannerNode(context, options) + :: -
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{PannerNode}} will be associated with.
-            options: Optional initial parameter value for this {{PannerNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{PannerNode}} will be associated with.
+			options: Optional initial parameter value for this {{PannerNode}}.
+		

Attributes

- : coneInnerAngle - :: - A parameter for directional audio sources that is an angle, in - degrees, inside of which there will be no volume reduction. The - default value is 360. The behavior is undefined if the angle is - outside the interval [0, 360]. - - : coneOuterAngle - :: - A parameter for directional audio sources that is an angle, in - degrees, outside of which the volume will be reduced to a - constant value of {{PannerNode/coneOuterGain}}. The default - value is 360. The behavior is undefined if the angle is outside - the interval [0, 360]. - - : coneOuterGain - :: - A parameter for directional audio sources that is the gain - outside of the {{PannerNode/coneOuterAngle}}. The default - value is 0. It is a linear value (not dB) in the range [0, 1]. An - {{InvalidStateError}} MUST be thrown if the parameter is - outside this range. - - : distanceModel - :: - Specifies the distance model used by this - {{PannerNode}}. Defaults to - "{{inverse}}". - - : maxDistance - :: - The maximum distance between source and listener, after which the - volume will not be reduced any further. The default value is 10000. + : coneInnerAngle + :: + A parameter for directional audio sources that is an angle, in + degrees, inside of which there will be no volume reduction. The + default value is 360. The behavior is undefined if the angle is + outside the interval [0, 360]. + + : coneOuterAngle + :: + A parameter for directional audio sources that is an angle, in + degrees, outside of which the volume will be reduced to a + constant value of {{PannerNode/coneOuterGain}}. The default + value is 360. The behavior is undefined if the angle is outside + the interval [0, 360]. + + : coneOuterGain + :: + A parameter for directional audio sources that is the gain + outside of the {{PannerNode/coneOuterAngle}}. The default + value is 0. It is a linear value (not dB) in the range [0, 1]. An + {{InvalidStateError}} MUST be thrown if the parameter is + outside this range. 
+ + : distanceModel + :: + Specifies the distance model used by this + {{PannerNode}}. Defaults to + "{{inverse}}". + + : maxDistance + :: + The maximum distance between source and listener, after which the + volume will not be reduced any further. The default value is 10000. A {{RangeError}} exception MUST be thrown if this - is set to a non-positive value. - - : orientationX - :: - Describes the \(x\)-component of the vector of the direction the - audio source is pointing in 3D Cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 1
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : orientationY - :: - Describes the \(y\)-component of the vector of the direction the - audio source is pointing in 3D cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : orientationZ - :: - Describes the \(z\)-component of the vector of the direction the - audio source is pointing in 3D cartesian coordinate space. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : panningModel - :: - Specifies the panning model used by this - {{PannerNode}}. Defaults to - "{{PanningModelType/equalpower}}". - - : positionX - :: - Sets the \(x\)-coordinate position of the audio source in a 3D - Cartesian system. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : positionY - :: - Sets the \(y\)-coordinate position of the audio source in a 3D - Cartesian system. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : positionZ - :: - Sets the \(z\)-coordinate position of the audio source in a 3D - Cartesian system. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: most-negative-single-float
-            min-notes: Approximately -3.4028235e38
-            max: most-positive-single-float
-            max-notes: Approximately 3.4028235e38
-            rate: "{{AutomationRate/a-rate}}"
-            rate-notes: Has [=automation rate constraints=]
-        
- - : refDistance - :: - A reference distance for reducing volume as source moves further - from the listener. For distances less than this, the volume is not reduced. The default value is 1. A - {{RangeError}} exception MUST be thrown if this is set - to a negative value. - - : rolloffFactor - :: - Describes how quickly the volume is reduced as source moves - away from listener. The default value is 1. A - {{RangeError}} exception MUST be thrown if this is set to - a negative value. - - The nominal range for the {{PannerNode/rolloffFactor}} specifies - the minimum and maximum values the rolloffFactor - can have. Values outside the range are clamped to lie within - this range. The nominal range depends on the {{PannerNode/distanceModel}} as follows: - -
- : "{{linear}}" - :: The nominal range is \([0, 1]\). - - : "{{inverse}}" - :: The nominal range is \([0, \infty)\). - - : "{{exponential}}" - :: The nominal range is \([0, \infty)\). -
- - Note that the clamping happens as part of the - processing of the distance computation. The attribute - reflects the value that was set and is not modified. + is set to a non-positive value. + + : orientationX + :: + Describes the \(x\)-component of the vector of the direction the + audio source is pointing in 3D Cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 1
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : orientationY + :: + Describes the \(y\)-component of the vector of the direction the + audio source is pointing in 3D cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : orientationZ + :: + Describes the \(z\)-component of the vector of the direction the + audio source is pointing in 3D cartesian coordinate space. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : panningModel + :: + Specifies the panning model used by this + {{PannerNode}}. Defaults to + "{{PanningModelType/equalpower}}". + + : positionX + :: + Sets the \(x\)-coordinate position of the audio source in a 3D + Cartesian system. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : positionY + :: + Sets the \(y\)-coordinate position of the audio source in a 3D + Cartesian system. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : positionZ + :: + Sets the \(z\)-coordinate position of the audio source in a 3D + Cartesian system. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: most-negative-single-float
+			min-notes: Approximately -3.4028235e38
+			max: most-positive-single-float
+			max-notes: Approximately 3.4028235e38
+			rate: "{{AutomationRate/a-rate}}"
+			rate-notes: Has [=automation rate constraints=]
+		
+ + : refDistance + :: + A reference distance for reducing volume as source moves further + from the listener. For distances less than this, the volume is not reduced. The default value is 1. A + {{RangeError}} exception MUST be thrown if this is set + to a negative value. + + : rolloffFactor + :: + Describes how quickly the volume is reduced as source moves + away from listener. The default value is 1. A + {{RangeError}} exception MUST be thrown if this is set to + a negative value. + + The nominal range for the rolloffFactor specifies + the minimum and maximum values the rolloffFactor + can have. Values outside the range are clamped to lie within + this range. The nominal range depends on the {{PannerNode/distanceModel}} as follows: + +
+ : "{{linear}}" + :: The nominal range is \([0, 1]\). + + : "{{inverse}}" + :: The nominal range is \([0, \infty)\). + + : "{{exponential}}" + :: The nominal range is \([0, \infty)\). +
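The per-model clamping of {{PannerNode/rolloffFactor}} described above can be sketched as follows; `clampedRolloffFactor` is a hypothetical helper showing the value used during distance processing (the attribute itself keeps whatever value was set):

```javascript
// Clamp rolloffFactor to its nominal range for the given distance model:
// [0, 1] for "linear", [0, ∞) for "inverse" and "exponential".
function clampedRolloffFactor(distanceModel, rolloffFactor) {
  const max = distanceModel === "linear" ? 1 : Infinity;
  return Math.min(Math.max(rolloffFactor, 0), max);
}
```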

Methods

- : setOrientation(x, y, z) - :: - This method is DEPRECATED. It is equivalent to setting - {{PannerNode/orientationX}}.{{AudioParam/value}}, - {{PannerNode/orientationY}}.{{AudioParam/value}}, - and {{PannerNode/orientationZ}}.{{AudioParam/value}} - attribute directly, with the x, y and - z parameters, respectively. - - Consequently, if any of the {{PannerNode/orientationX}}, - {{PannerNode/orientationY}}, and {{PannerNode/orientationZ}} {{AudioParam}}s - have an automation curve set using {{AudioParam/setValueCurveAtTime()}} at the time - this method is called, a {{NotSupportedError}} MUST be - thrown. - - Describes which direction the audio source is pointing in the - 3D cartesian coordinate space. Depending on how directional the - sound is (controlled by the cone attributes), a sound - pointing away from the listener can be very quiet or completely - silent. - - The x, y, z parameters represent a direction - vector in 3D space. - - The default value is (1,0,0). - -
-            x:
-            y:
-            z:
-        
- -
- Return type: {{undefined}} -
- - : setPosition(x, y, z) - :: - This method is DEPRECATED. It is equivalent to setting - {{PannerNode/positionX}}.{{AudioParam/value}}, - {{PannerNode/positionY}}.{{AudioParam/value}}, and - {{PannerNode/positionZ}}.{{AudioParam/value}} - attribute directly with the x, y and - z parameters, respectively. - - Consequently, if any of the {{PannerNode/positionX}}, {{PannerNode/positionY}}, - and {{PannerNode/positionZ}} {{AudioParam}}s have an automation - curve set using {{AudioParam/setValueCurveAtTime()}} at the time - this method is called, a {{NotSupportedError}} MUST be - thrown. - - Sets the position of the audio source relative to the - {{BaseAudioContext/listener}} attribute. A 3D cartesian - coordinate system is used. - - The x, y, z parameters represent the coordinates - in 3D space. - - The default value is (0,0,0). - -
-            x:
-            y:
-            z:
-        
- -
- Return type: {{undefined}} -
+ : setOrientation(x, y, z) + :: + This method is DEPRECATED. It is equivalent to setting + {{PannerNode/orientationX}}.{{AudioParam/value}}, + {{PannerNode/orientationY}}.{{AudioParam/value}}, + and {{PannerNode/orientationZ}}.{{AudioParam/value}} + attribute directly, with the x, y and + z parameters, respectively. + + Consequently, if any of the {{PannerNode/orientationX}}, + {{PannerNode/orientationY}}, and {{PannerNode/orientationZ}} {{AudioParam}}s + have an automation curve set using {{AudioParam/setValueCurveAtTime()}} at the time + this method is called, a {{NotSupportedError}} MUST be + thrown. + + Describes which direction the audio source is pointing in the + 3D cartesian coordinate space. Depending on how directional the + sound is (controlled by the cone attributes), a sound + pointing away from the listener can be very quiet or completely + silent. + + The x, y, z parameters represent a direction + vector in 3D space. + + The default value is (1,0,0). + +
+			x:
+			y:
+			z:
+		
+ +
+ Return type: {{undefined}} +
+ + : setPosition(x, y, z) + :: + This method is DEPRECATED. It is equivalent to setting + {{PannerNode/positionX}}.{{AudioParam/value}}, + {{PannerNode/positionY}}.{{AudioParam/value}}, and + {{PannerNode/positionZ}}.{{AudioParam/value}} + attribute directly with the x, y and + z parameters, respectively. + + Consequently, if any of the {{PannerNode/positionX}}, {{PannerNode/positionY}}, + and {{PannerNode/positionZ}} {{AudioParam}}s have an automation + curve set using {{AudioParam/setValueCurveAtTime()}} at the time + this method is called, a {{NotSupportedError}} MUST be + thrown. + + Sets the position of the audio source relative to the + {{BaseAudioContext/listener}} attribute. A 3D cartesian + coordinate system is used. + + The x, y, z parameters represent the coordinates + in 3D space. + + The default value is (0,0,0). + +
+			x:
+			y:
+			z:
+		
+ +
+ Return type: {{undefined}} +
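Since both methods above are deprecated, the recommended replacement is to assign the three {{AudioParam}} {{AudioParam/value}} attributes directly. A minimal sketch, using a hypothetical `setVec3` helper (in a real page, `panner` would come from `ctx.createPanner()` or `new PannerNode(ctx)`):

```javascript
// Equivalent of the deprecated setPosition()/setOrientation():
// write x, y, z into the corresponding AudioParam .value attributes.
function setVec3(panner, prefix, x, y, z) {
  panner[prefix + "X"].value = x;
  panner[prefix + "Y"].value = y;
  panner[prefix + "Z"].value = z;
}

// Usage:
//   setVec3(panner, "position", 0, 0, 0);    // instead of panner.setPosition(0, 0, 0)
//   setVec3(panner, "orientation", 1, 0, 0); // instead of panner.setOrientation(1, 0, 0)
```

Note that, as with the deprecated methods, this snapshots the values immediately; sample-accurate movement should use the automation methods on each {{AudioParam}} instead.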
-

+

{{PannerOptions}}

 This specifies options for constructing a
@@ -9614,20 +8922,20 @@ specified, the normal default is used in constructing the node.
 dictionary PannerOptions : AudioNodeOptions {
-    PanningModelType panningModel = "equalpower";
-    DistanceModelType distanceModel = "inverse";
-    float positionX = 0;
-    float positionY = 0;
-    float positionZ = 0;
-    float orientationX = 1;
-    float orientationY = 0;
-    float orientationZ = 0;
-    double refDistance = 1;
-    double maxDistance = 10000;
-    double rolloffFactor = 1;
-    double coneInnerAngle = 360;
-    double coneOuterAngle = 360;
-    double coneOuterGain = 0;
+	PanningModelType panningModel = "equalpower";
+	DistanceModelType distanceModel = "inverse";
+	float positionX = 0;
+	float positionY = 0;
+	float positionZ = 0;
+	float orientationX = 1;
+	float orientationY = 0;
+	float orientationZ = 0;
+	double refDistance = 1;
+	double maxDistance = 10000;
+	double rolloffFactor = 1;
+	double coneInnerAngle = 360;
+	double coneOuterAngle = 360;
+	double coneOuterGain = 0;
 };
 
@@ -9635,47 +8943,47 @@ dictionary PannerOptions : AudioNodeOptions {
 Dictionary {{PannerOptions}} Members
- : coneInnerAngle - :: The initial value for the {{PannerNode/coneInnerAngle}} attribute of the node. + : coneInnerAngle + :: The initial value for the {{PannerNode/coneInnerAngle}} attribute of the node. - : coneOuterAngle - :: The initial value for the {{PannerNode/coneOuterAngle}} attribute of the node. + : coneOuterAngle + :: The initial value for the {{PannerNode/coneOuterAngle}} attribute of the node. - : coneOuterGain - :: The initial value for the {{PannerNode/coneOuterGain}} attribute of the node. + : coneOuterGain + :: The initial value for the {{PannerNode/coneOuterGain}} attribute of the node. - : distanceModel - :: The distance model to use for the node. + : distanceModel + :: The distance model to use for the node. - : maxDistance - :: The initial value for the {{PannerNode/maxDistance}} attribute of the node. + : maxDistance + :: The initial value for the {{PannerNode/maxDistance}} attribute of the node. - : orientationX - :: The initial \(x\)-component value for the {{PannerNode/orientationX}} AudioParam. + : orientationX + :: The initial \(x\)-component value for the {{PannerNode/orientationX}} AudioParam. - : orientationY - :: The initial \(y\)-component value for the {{PannerNode/orientationY}} AudioParam. + : orientationY + :: The initial \(y\)-component value for the {{PannerNode/orientationY}} AudioParam. - : orientationZ - :: The initial \(z\)-component value for the {{PannerNode/orientationZ}} AudioParam. + : orientationZ + :: The initial \(z\)-component value for the {{PannerNode/orientationZ}} AudioParam. - : panningModel - :: The panning model to use for the node. + : panningModel + :: The panning model to use for the node. - : positionX - :: The initial \(x\)-coordinate value for the {{PannerNode/positionX}} AudioParam. + : positionX + :: The initial \(x\)-coordinate value for the {{PannerNode/positionX}} AudioParam. - : positionY - :: The initial \(y\)-coordinate value for the {{PannerNode/positionY}} AudioParam. 
+ : positionY + :: The initial \(y\)-coordinate value for the {{PannerNode/positionY}} AudioParam. - : positionZ - :: The initial \(z\)-coordinate value for the {{PannerNode/positionZ}} AudioParam. + : positionZ + :: The initial \(z\)-coordinate value for the {{PannerNode/positionZ}} AudioParam. - : refDistance - :: The initial value for the {{PannerNode/refDistance}} attribute of the node. + : refDistance + :: The initial value for the {{PannerNode/refDistance}} attribute of the node. - : rolloffFactor - :: The initial value for the {{PannerNode/rolloffFactor}} attribute of the node. + : rolloffFactor + :: The initial value for the {{PannerNode/rolloffFactor}} attribute of the node.
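Note: The dictionary defaults above can be made concrete with a short informative sketch. The helper name `resolvePannerOptions` is hypothetical; the {{PannerNode}} constructor performs this default resolution internally.

```javascript
// Hypothetical sketch of PannerOptions default resolution, mirroring
// the dictionary defaults listed above. Members supplied by the
// caller override the defaults; everything else keeps its default.
function resolvePannerOptions(options = {}) {
  return Object.assign({
    panningModel: "equalpower",
    distanceModel: "inverse",
    positionX: 0, positionY: 0, positionZ: 0,
    orientationX: 1, orientationY: 0, orientationZ: 0,
    refDistance: 1,
    maxDistance: 10000,
    rolloffFactor: 1,
    coneInnerAngle: 360,
    coneOuterAngle: 360,
    coneOuterGain: 0,
  }, options);
}
```

For example, `new PannerNode(context, {panningModel: "HRTF"})` leaves every other member at the defaults shown.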

@@ -9687,7 +8995,7 @@ to {{PannerNode}}. -

+

The {{PeriodicWave}} Interface

{{PeriodicWave}} represents an arbitrary periodic waveform to be used @@ -9699,7 +9007,7 @@ up to at least 8192 elements.
 [Exposed=Window]
 interface PeriodicWave {
-    constructor (BaseAudioContext context, optional PeriodicWaveOptions options = {});
+	constructor (BaseAudioContext context, optional PeriodicWaveOptions options = {});
 };
 
@@ -9707,46 +9015,46 @@ interface PeriodicWave { Constructors
- : PeriodicWave(context, options) - :: -
- 1. Let p be a new {{PeriodicWave}} object. Let \[[real]] and \[[imag]] be two internal slots of type {{Float32Array}}, and let \[[normalize]] be an internal slot. - - 1. Process {{PeriodicWave/constructor(context, options)/options!!argument}} according to one of the following cases: - 1. If both {{PeriodicWaveOptions/real|options.real}} and {{PeriodicWaveOptions/imag|options.imag}} are present - 1. If the lengths of {{PeriodicWaveOptions/real|options.real}} and {{PeriodicWaveOptions/imag|options.imag}} are different or if either length is less than 2, throw an {{IndexSizeError}} and abort this algorithm. - 1. Set {{[[real]]}} and {{[[imag]]}} to new arrays with the same length as {{PeriodicWaveOptions/real|options.real}}. - 1. Copy all elements from {{PeriodicWaveOptions/real|options.real}} to {{[[real]]}} and {{PeriodicWaveOptions/imag|options.imag}} to {{[[imag]]}}. - 1. If only {{PeriodicWaveOptions/real|options.real}} is present - 1. If length of {{PeriodicWaveOptions/real|options.real}} is less than 2, throw an {{IndexSizeError}} and abort this algorithm. - 1. Set {{[[real]]}} and {{[[imag]]}} to arrays with the same length as {{PeriodicWaveOptions/real|options.real}}. - 1. Copy {{PeriodicWaveOptions/real|options.real}} to {{[[real]]}} and set {{[[imag]]}} to all zeros. - 1. If only {{PeriodicWaveOptions/imag|options.imag}} is present - 1. If length of {{PeriodicWaveOptions/imag|options.imag}} is less than 2, throw an {{IndexSizeError}} and abort this algorithm. - 1. Set {{[[real]]}} and {{[[imag]]}} to arrays with the same length as {{PeriodicWaveOptions/real|options.imag}}. - 1. Copy {{PeriodicWaveOptions/imag|options.imag}} to {{[[imag]]}} and set {{[[real]]}} to all zeros. - 1. Otherwise - 1. Set {{[[real]]}} and {{[[imag]]}} to zero-filled arrays of length 2. - 1. Set element at index 1 of {{[[imag]]}} to 1. - - Note: When setting this {{PeriodicWave}} on an {{OscillatorNode}}, this is equivalent to using the built-in type "{{OscillatorType/sine}}". 
- 1. Set element at index 0 of both {{[[real]]}} and {{[[imag]]}} to 0. (This sets the DC component to 0.) - - 5. Initialize {{[[normalize]]}} to the inverse of the - {{PeriodicWaveConstraints/disableNormalization}} attribute of the - {{PeriodicWaveConstraints}} on the - {{PeriodicWaveOptions}}. - - 6. Return p. -
- -
-            context: The {{BaseAudioContext}} this new {{PeriodicWave}} will be associated with. Unlike {{AudioBuffer}}, {{PeriodicWave}}s can't be shared accross {{AudioContext}}s or {{OfflineAudioContext}}s. It is associated with a particular {{BaseAudioContext}}.
-            options: Optional initial parameter value for this {{PeriodicWave}}.
-        
+ : PeriodicWave(context, options) + :: +
+ 1. Let p be a new {{PeriodicWave}} object. Let \[[real]] and \[[imag]] be two internal slots of type {{Float32Array}}, and let \[[normalize]] be an internal slot. + + 1. Process {{PeriodicWave/PeriodicWave(context, options)/options!!argument}} according to one of the following cases: + 1. If both {{PeriodicWaveOptions/real|options.real}} and {{PeriodicWaveOptions/imag|options.imag}} are present + 1. If the lengths of {{PeriodicWaveOptions/real|options.real}} and {{PeriodicWaveOptions/imag|options.imag}} are different or if either length is less than 2, throw an {{IndexSizeError}} and abort this algorithm. + 1. Set {{[[real]]}} and {{[[imag]]}} to new arrays with the same length as {{PeriodicWaveOptions/real|options.real}}. + 1. Copy all elements from {{PeriodicWaveOptions/real|options.real}} to {{[[real]]}} and {{PeriodicWaveOptions/imag|options.imag}} to {{[[imag]]}}. + 1. If only {{PeriodicWaveOptions/real|options.real}} is present + 1. If length of {{PeriodicWaveOptions/real|options.real}} is less than 2, throw an {{IndexSizeError}} and abort this algorithm. + 1. Set {{[[real]]}} and {{[[imag]]}} to arrays with the same length as {{PeriodicWaveOptions/real|options.real}}. + 1. Copy {{PeriodicWaveOptions/real|options.real}} to {{[[real]]}} and set {{[[imag]]}} to all zeros. + 1. If only {{PeriodicWaveOptions/imag|options.imag}} is present + 1. If length of {{PeriodicWaveOptions/imag|options.imag}} is less than 2, throw an {{IndexSizeError}} and abort this algorithm. + 1. Set {{[[real]]}} and {{[[imag]]}} to arrays with the same length as {{PeriodicWaveOptions/real|options.imag}}. + 1. Copy {{PeriodicWaveOptions/imag|options.imag}} to {{[[imag]]}} and set {{[[real]]}} to all zeros. + 1. Otherwise + 1. Set {{[[real]]}} and {{[[imag]]}} to zero-filled arrays of length 2. + 1. Set element at index 1 of {{[[imag]]}} to 1. + + Note: When setting this {{PeriodicWave}} on an {{OscillatorNode}}, this is equivalent to using the built-in type "{{OscillatorType/sine}}". 
+ 1. Set element at index 0 of both {{[[real]]}} and {{[[imag]]}} to 0. (This sets the DC component to 0.) + + 5. Initialize {{[[normalize]]}} to the inverse of the + {{PeriodicWaveConstraints/disableNormalization}} attribute of the + {{PeriodicWaveConstraints}} on the + {{PeriodicWaveOptions}}. + + 6. Return p. +
+ +
+			context: The {{BaseAudioContext}} this new {{PeriodicWave}} will be associated with. Unlike {{AudioBuffer}}, {{PeriodicWave}}s can't be shared across {{AudioContext}}s or {{OfflineAudioContext}}s. It is associated with a particular {{BaseAudioContext}}.
+			options: Optional initial parameter value for this {{PeriodicWave}}.
+		
-

+
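Note: The four construction cases in the algorithm above can be sketched informatively as follows. The function name and the plain-object return value are ours; the real constructor stores the arrays in the \[[real]] and \[[imag]] internal slots, and throws an {{IndexSizeError}} {{DOMException}} where the sketch uses a plain `RangeError` as a stand-in.

```javascript
// Informative sketch of the PeriodicWave(context, options) case
// analysis on options.real / options.imag.
function resolvePeriodicWave({ real, imag } = {}) {
  let r, i;
  if (real && imag) {
    if (real.length !== imag.length || real.length < 2)
      throw new RangeError("IndexSizeError: lengths differ or are < 2");
    r = Float32Array.from(real);
    i = Float32Array.from(imag);
  } else if (real) {
    if (real.length < 2) throw new RangeError("IndexSizeError: length < 2");
    r = Float32Array.from(real);
    i = new Float32Array(real.length); // imag is all zeros
  } else if (imag) {
    if (imag.length < 2) throw new RangeError("IndexSizeError: length < 2");
    i = Float32Array.from(imag);
    r = new Float32Array(imag.length); // real is all zeros
  } else {
    r = new Float32Array([0, 0]);
    i = new Float32Array([0, 1]); // equivalent to the built-in "sine" type
  }
  r[0] = 0; i[0] = 0; // the DC component is always forced to 0
  return { real: r, imag: i };
}
```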

{{PeriodicWaveConstraints}}

The {{PeriodicWaveConstraints}} dictionary is used to @@ -9754,7 +9062,7 @@ specify how the waveform is [[#waveform-normalization|normalized]].
 dictionary PeriodicWaveConstraints {
-    boolean disableNormalization = false;
+	boolean disableNormalization = false;
 };
 
@@ -9762,14 +9070,14 @@ dictionary PeriodicWaveConstraints { Dictionary {{PeriodicWaveConstraints}} Members
- : disableNormalization - :: - Controls whether the periodic wave is normalized or not. If - `true`, the waveform is not normalized; otherwise, - the waveform is normalized. + : disableNormalization + :: + Controls whether the periodic wave is normalized or not. If + `true`, the waveform is not normalized; otherwise, + the waveform is normalized.
-

+

{{PeriodicWaveOptions}}

The {{PeriodicWaveOptions}} dictionary is used to specify @@ -9787,8 +9095,8 @@ an error of type dictionary PeriodicWaveOptions : PeriodicWaveConstraints { - sequence<float> real; - sequence<float> imag; + sequence<float> real; + sequence<float> imag; }; @@ -9796,21 +9104,21 @@ dictionary PeriodicWaveOptions : PeriodicWaveConstraints { Dictionary {{PeriodicWaveOptions}} Members
- : imag - :: - The {{PeriodicWaveOptions/imag}} parameter represents an array of - sine terms. The first element (index 0) does not - exist in the Fourier series. The second element - (index 1) represents the fundamental frequency. The - third represents the first overtone and so on. - - : real - :: - The {{PeriodicWaveOptions/real}} parameter represents an array of - cosine terms. The first element (index 0) is the - DC-offset of the periodic waveform. The second element - (index 1) represents the fundmental frequency. The - third represents the first overtone and so on. + : imag + :: + The {{PeriodicWaveOptions/imag}} parameter represents an array of + sine terms. The first element (index 0) does not + exist in the Fourier series. The second element + (index 1) represents the fundamental frequency. The + third represents the first overtone and so on. + + : real + :: + The {{PeriodicWaveOptions/real}} parameter represents an array of + cosine terms. The first element (index 0) is the + DC-offset of the periodic waveform. The second element + (index 1) represents the fundamental frequency. The + third represents the first overtone and so on.

@@ -9824,7 +9132,7 @@ arrays of length \(L\), respectively. Then the basic time-domain waveform,
 $$
-    x(t) = \sum_{k=1}^{L-1} \left[a[k]\cos2\pi k t + b[k]\sin2\pi k t\right]
+	x(t) = \sum_{k=1}^{L-1} \left[a[k]\cos2\pi k t + b[k]\sin2\pi k t\right]
 $$
 
@@ -9843,7 +9151,7 @@ Let
 $$
-    \tilde{x}(n) = \sum_{k=1}^{L-1} \left(a[k]\cos\frac{2\pi k n}{N} + b[k]\sin\frac{2\pi k n}{N}\right)
+	\tilde{x}(n) = \sum_{k=1}^{L-1} \left(a[k]\cos\frac{2\pi k n}{N} + b[k]\sin\frac{2\pi k n}{N}\right)
 $$
 
@@ -9853,7 +9161,7 @@ normalization factor \(f\) is computed as follows.
 $$
-    f = \max_{n = 0, \ldots, N - 1} |\tilde{x}(n)|
+	f = \max_{n = 0, \ldots, N - 1} |\tilde{x}(n)|
 $$
 
@@ -9861,7 +9169,7 @@ Thus, the actual normalized waveform \(\hat{x}(n)\) is:
 $$
-    \hat{x}(n) = \frac{\tilde{x}(n)}{f}
+	\hat{x}(n) = \frac{\tilde{x}(n)}{f}
 $$
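Note: The three equations above — the sampled waveform \(\tilde{x}(n)\), the normalization factor \(f\), and the normalized waveform \(\hat{x}(n)\) — can be checked with the following informative transcription. Here \(N\) and the coefficient arrays are explicit inputs, whereas an implementation chooses \(N\) itself.

```javascript
// Direct transcription of the normalization math above.
// a and b are the cosine (real) and sine (imag) coefficient arrays.
function sampleWave(a, b, n, N) {
  let x = 0;
  for (let k = 1; k < a.length; k++) {
    x += a[k] * Math.cos(2 * Math.PI * k * n / N)
       + b[k] * Math.sin(2 * Math.PI * k * n / N);
  }
  return x; // this is x~(n)
}

function normalizedWave(a, b, N) {
  // f = max over n = 0..N-1 of |x~(n)|
  let f = 0;
  for (let n = 0; n < N; n++) f = Math.max(f, Math.abs(sampleWave(a, b, n, N)));
  // x^(n) = x~(n) / f
  return Array.from({ length: N }, (_, n) => sampleWave(a, b, n, N) / f);
}
```

A sine with amplitude 2 (`b = [0, 2]`) normalizes to peak amplitude 1, as the equations require.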
 
@@ -9884,47 +9192,47 @@ functions. Also, \(b[0] = 0\) in all cases. Hence, only \(b[n]\) for \(n \ge 1\) is specified below.
- : "{{sine}}" - :: -
-        $$
-            b[n] = \begin{cases}
-                             1 & \mbox{for } n = 1 \\
-                             0 & \mbox{otherwise}
-                         \end{cases}
-        $$
-        
- - : "{{square}}" - :: -
-        $$
-            b[n] = \frac{2}{n\pi}\left[1 - (-1)^n\right]
-        $$
-        
- - : "{{sawtooth}}" - :: -
-        $$
-            b[n] = (-1)^{n+1} \dfrac{2}{n\pi}
-        $$
-        
- - : "{{triangle}}" - :: -
-        $$
-            b[n] = \frac{8\sin\dfrac{n\pi}{2}}{(\pi n)^2}
-        $$
-        
+ : "{{sine}}" + :: +
+		$$
+			b[n] = \begin{cases}
+							 1 & \mbox{for } n = 1 \\
+							 0 & \mbox{otherwise}
+						 \end{cases}
+		$$
+		
+ + : "{{square}}" + :: +
+		$$
+			b[n] = \frac{2}{n\pi}\left[1 - (-1)^n\right]
+		$$
+		
+ + : "{{sawtooth}}" + :: +
+		$$
+			b[n] = (-1)^{n+1} \dfrac{2}{n\pi}
+		$$
+		
+ + : "{{triangle}}" + :: +
+		$$
+			b[n] = \frac{8\sin\dfrac{n\pi}{2}}{(\pi n)^2}
+		$$
+		
-

+
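Note: The \(b[n]\) formulas above for the built-in oscillator types can be transcribed informatively as follows (valid for \(n \ge 1\); \(b[0] = 0\) in all cases, and \(a[n] = 0\)).

```javascript
// Transcription of the Fourier coefficients b[n] for the built-in
// OscillatorType waveforms, as given in the formulas above.
function builtinCoefficient(type, n) {
  switch (type) {
    case "sine":
      return n === 1 ? 1 : 0;
    case "square":
      return (2 / (n * Math.PI)) * (1 - (-1) ** n);
    case "sawtooth":
      return (-1) ** (n + 1) * (2 / (n * Math.PI));
    case "triangle":
      return (8 * Math.sin(n * Math.PI / 2)) / (Math.PI * n) ** 2;
    default:
      throw new RangeError("unknown OscillatorType: " + type);
  }
}
```

For example, the square wave has only odd harmonics: \(b[1] = 4/\pi\), \(b[2] = 0\), \(b[3] = 4/(3\pi)\), and so on.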

The {{ScriptProcessorNode}} Interface - DEPRECATED

This interface is an {{AudioNode}} which can @@ -9936,21 +9244,21 @@ purposes until implementations remove this node type.
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: {{BaseAudioContext/createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels)/numberOfInputChannels}}
-    cc-notes: This is the number of channels specified when constructing this node. There are channelCount constraints.
-    cc-mode: explicit
-    cc-mode-notes:  Has channelCountMode constraints
-    cc-interp: speakers
-    tail-time: No
+	noi: 1
+	noo: 1
+	cc: {{BaseAudioContext/createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels)/numberOfInputChannels}}
+	cc-notes: This is the number of channels specified when constructing this node. There are channelCount constraints.
+	cc-mode: explicit
+	cc-mode-notes:  Has channelCountMode constraints
+	cc-interp: speakers
+	tail-time: No
 
The {{ScriptProcessorNode}} is constructed with a {{BaseAudioContext/createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels)/bufferSize}} which MUST be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. This value controls how -frequently the {{ScriptProcessorNode/audioprocess}} event is dispatched and how -many sample-frames need to be processed each call. {{ScriptProcessorNode/audioprocess}} events are only +frequently the {{ScriptProcessorNode/onaudioprocess}} event is dispatched and how +many sample-frames need to be processed each call. {{ScriptProcessorNode/onaudioprocess}} events are only dispatched if the {{ScriptProcessorNode}} has at least one input or one output connected. Lower numbers for {{ScriptProcessorNode/bufferSize}} will result in @@ -9967,8 +9275,8 @@ to be zero.
 [Exposed=Window]
 interface ScriptProcessorNode : AudioNode {
-    attribute EventHandler onaudioprocess;
-    readonly attribute long bufferSize;
+	attribute EventHandler onaudioprocess;
+	readonly attribute long bufferSize;
 };
 
@@ -9976,25 +9284,27 @@ interface ScriptProcessorNode : AudioNode { Attributes
- : bufferSize - :: - The size of the buffer (in sample-frames) which needs to be - processed each time {{ScriptProcessorNode/audioprocess}} is fired. - Legal values are (256, 512, 1024, 2048, 4096, 8192, 16384). - - : onaudioprocess - :: - A property used to set an [=event handler=] for the - audioprocess event type that is dispatched to - {{ScriptProcessorNode}} node types. The event dispatched to the event - handler uses the {{AudioProcessingEvent}} interface. + : bufferSize + :: + The size of the buffer (in sample-frames) which needs to be + processed each time {{ScriptProcessorNode/onaudioprocess}} is called. + Legal values are (256, 512, 1024, 2048, 4096, 8192, 16384). + + : onaudioprocess + :: + A property used to set the EventHandler (described + in + HTML[[!HTML]]) for the {{ScriptProcessorNode/onaudioprocess}} event that + is dispatched to {{ScriptProcessorNode}} node + types. An event of type {{AudioProcessingEvent}} + will be dispatched to the event handler.
-

+
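Note: {{ScriptProcessorNode}} is deprecated, so the following sketch is for illustration only (new code should use {{AudioWorkletNode}}). It shows the shape of an audioprocess handler that copies the first input channel to the first output channel, a simple bypass.

```javascript
// Illustration only: a minimal audioprocess handler for the
// deprecated ScriptProcessorNode. It copies channel 0 of the
// event's inputBuffer into channel 0 of its outputBuffer.
function bypassHandler(event) {
  const input = event.inputBuffer.getChannelData(0);
  const output = event.outputBuffer.getChannelData(0);
  output.set(input);
}

// In a real page, using the factory method from this section:
//   const node = context.createScriptProcessor(1024, 1, 1);
//   node.onaudioprocess = bypassHandler;
```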

The {{StereoPannerNode}} Interface

This interface represents a processing node which positions an @@ -10005,14 +9315,14 @@ components in a stereo stream.
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-notes: Has channelCount constraints
-    cc-mode: clamped-max
-    cc-mode-notes: Has channelCountMode constraints
-    cc-interp: speakers
-    tail-time: No
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-notes: Has channelCount constraints
+	cc-mode: clamped-max
+	cc-mode-notes: Has channelCountMode constraints
+	cc-interp: speakers
+	tail-time: No
 
The input of this node is stereo (2 channels) and cannot be @@ -10026,8 +9336,8 @@ cannot be configured.
 [Exposed=Window]
 interface StereoPannerNode : AudioNode {
-    constructor (BaseAudioContext context, optional StereoPannerOptions options = {});
-    readonly attribute AudioParam pan;
+	constructor (BaseAudioContext context, optional StereoPannerOptions options = {});
+	readonly attribute AudioParam pan;
 };
 
@@ -10035,39 +9345,39 @@ interface StereoPannerNode : AudioNode { Constructors
- : StereoPannerNode(context, options) - :: + : StereoPannerNode(context, options) + :: -
-            path: audionode-init.include
-        
+
+			path: audionode-init.include
+		
-
-            context: The {{BaseAudioContext}} this new {{StereoPannerNode}} will be associated with.
-            options: Optional initial parameter value for this {{StereoPannerNode}}.
-        
+
+			context: The {{BaseAudioContext}} this new {{StereoPannerNode}} will be associated with.
+			options: Optional initial parameter value for this {{StereoPannerNode}}.
+		

Attributes

- : pan - :: - The position of the input in the output's stereo image. -1 - represents full left, +1 represents full right. - -
-        path: audioparam.include
-        macros:
-            default: 0
-            min: -1
-            max: 1
-            rate: "{{AutomationRate/a-rate}}"
-        
+ : pan + :: + The position of the input in the output's stereo image. -1 + represents full left, +1 represents full right. + +
+		path: audioparam.include
+		macros:
+			default: 0
+			min: -1
+			max: 1
+			rate: "{{AutomationRate/a-rate}}"
+		
-

+
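Note: For a mono input, the equal-power mapping from {{StereoPannerNode/pan}} to left/right gains defined by the stereo panning algorithm elsewhere in this specification can be sketched informatively as follows (the stereo-input case differs; the function name is ours).

```javascript
// Informative sketch of the equal-power pan law for a mono input:
// pan in [-1, 1] is mapped to x in [0, 1], then
// gainL = cos(x * pi/2), gainR = sin(x * pi/2), so that
// gainL^2 + gainR^2 == 1 for every pan value.
function stereoPanGains(pan) {
  const p = Math.min(1, Math.max(-1, pan)); // clamp to the nominal range
  const x = (p + 1) / 2;
  return {
    left: Math.cos(x * Math.PI / 2),
    right: Math.sin(x * Math.PI / 2),
  };
}
```

At `pan = 0` both gains are \(\sqrt{2}/2\), so the center image is 3 dB down in each channel rather than doubled in power.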

{{StereoPannerOptions}}

This specifies the options to use in constructing a @@ -10076,7 +9386,7 @@ not specified, the normal default is used in constructing the node.
 dictionary StereoPannerOptions : AudioNodeOptions {
-    float pan = 0;
+	float pan = 0;
 };
 
@@ -10084,8 +9394,8 @@ dictionary StereoPannerOptions : AudioNodeOptions { Dictionary {{StereoPannerOptions}} Members
- : pan - :: The initial value for the {{StereoPannerNode/pan}} AudioParam. + : pan + :: The initial value for the {{StereoPannerNode/pan}} AudioParam.

@@ -10104,7 +9414,7 @@ approaches to panning and mixing. -

+

The {{WaveShaperNode}} Interface

{{WaveShaperNode}} is an @@ -10118,13 +9428,13 @@ non-linear shaping curves may be specified.
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: Maybe
-    tail-time-notes: There is a tail-time only if the {{WaveShaperNode/oversample}} attribute is set to "{{OverSampleType/2x}}" or "{{OverSampleType/4x}}". The actual duration of this tail-time depends on the implementation.
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-mode: max
+	cc-interp: speakers
+	tail-time: Maybe
+	tail-time-notes: There is a tail-time only if the {{WaveShaperNode/oversample}} attribute is set to "{{OverSampleType/2x}}" or "{{OverSampleType/4x}}". The actual duration of this tail-time depends on the implementation.
 
The number of channels of the output always equals the number of @@ -10132,35 +9442,34 @@ channels of the input.
 enum OverSampleType {
-    "none",
-    "2x",
-    "4x"
+	"none",
+	"2x",
+	"4x"
 };
 
	<table class="enum-table">
		<caption>{{OverSampleType}} enumeration description</caption>
		<tr><th>Enum value<th>Description
		<tr><td>"none"<td>Don't oversample
		<tr><td>"2x"<td>Oversample two times
		<tr><td>"4x"<td>Oversample four times
	</table>
 [Exposed=Window]
 interface WaveShaperNode : AudioNode {
-    constructor (BaseAudioContext context, optional WaveShaperOptions options = {});
-    attribute Float32Array? curve;
-    attribute OverSampleType oversample;
+	constructor (BaseAudioContext context, optional WaveShaperOptions options = {});
+	attribute Float32Array? curve;
+	attribute OverSampleType oversample;
 };
 
@@ -10168,157 +9477,150 @@ interface WaveShaperNode : AudioNode { Constructors
- : WaveShaperNode(context, options) - :: - -
-            path: audionode-init.include
-        
- - Also, let [[curve set]] be an internal - slot of this {{WaveShaperNode}}. Initialize this slot to false. If - {{WaveShaperNode/constructor(context, options)/options}} is given and specifies a - {{WaveShaperOptions/curve}}, set {{[[curve set]]}} to true. - -
-            context: The {{BaseAudioContext}} this new {{WaveShaperNode}} will be associated with.
-            options: Optional initial parameter value for this {{WaveShaperNode}}.
-        
+ : WaveShaperNode(context, options) + :: + +
+			path: audionode-init.include
+		
+ +
+			context: The {{BaseAudioContext}} this new {{WaveShaperNode}} will be associated with.
+			options: Optional initial parameter value for this {{WaveShaperNode}}.
+		

Attributes

- : curve - :: - The shaping curve used for the waveshaping effect. The input - signal is nominally within the range [-1, 1]. Each input sample - within this range will index into the shaping curve, with a - signal level of zero corresponding to the center value of the - curve array if there are an odd number of entries, or - interpolated between the two centermost values if there are an - even number of entries in the array. Any sample value less than - -1 will correspond to the first value in the curve array. Any - sample value greater than +1 will correspond to the last value - in the curve array. - - The implementation MUST perform linear interpolation between - adjacent points in the curve. Initially the curve attribute is - null, which means that the WaveShaperNode will pass its input - to its output without modification. - - Values of the curve are spread with equal spacing in the [-1; - 1] range. This means that a {{WaveShaperNode/curve}} with a - even number of value will not have a value for a signal at - zero, and a {{WaveShaperNode/curve}} with an odd number of - value will have a value for a signal at zero. The - output is determined by the following algorithm. - -
- 1. Let \(x\) be the input sample, \(y\) be the - corresponding output of the node, - \(c_k\) be the \(k\)'th element of the - {{WaveShaperNode/curve}}, and \(N\) be - the length of the - {{WaveShaperNode/curve}}. - - 1. Let -
-                $$
-                    \begin{align*}
-                    v &= \frac{N-1}{2}(x + 1) \\
-                    k &= \lfloor v \rfloor \\
-                    f &= v - k
-                    \end{align*}
-                $$
-                
- 1. Then -
-                $$
-                    \begin{align*}
-                    y &=
-                        \begin{cases}
-                        c_0 & v \lt 0 \\
-                        c_{N-1} & v \ge N - 1 \\
-                        (1-f)\,c_k + fc_{k+1} & \mathrm{otherwise}
-                        \end{cases}
-                    \end{align*}
-                $$
-                
-
- - A {{InvalidStateError}} MUST be thrown if this - attribute is set with a {{Float32Array}} that has a - length less than 2. - - When this attribute is set, an internal copy of the curve is - created by the {{WaveShaperNode}}. Subsequent - modifications of the contents of the array used to set the - attribute therefore have no effect. - -
- To set the {{WaveShaperNode/curve}} attribute, execute these steps: - - 1. Let new curve be a {{Float32Array}} to be assigned to {{WaveShaperNode/curve}} or null. - . - - 2. If new curve is not null and - {{WaveShaperNode/[[curve set]]}} is true, throw an - {{InvalidStateError}} and abort these steps. - - 3. If new curve is not null, set - {{WaveShaperNode/[[curve set]]}} to true. - - 4. Assign new curve to the {{WaveShaperNode/curve}} - attribute. -
- - Note: The use of a curve that produces a non-zero - output value for zero input value will cause this node - to produce a DC signal even if there are no inputs - connected to this node. This will persist until the - node is disconnected from downstream nodes. - - : oversample - :: - Specifies what type of oversampling (if any) should be used - when applying the shaping curve. The default value is "{{OverSampleType/none}}", - meaning the curve will be applied directly to the input - samples. A value of "{{2x}}" or "{{4x}}" can improve the quality of the - processing by avoiding some aliasing, with the "{{4x}}" value - yielding the highest quality. For some applications, it's - better to use no oversampling in order to get a very precise - shaping curve. - - -
- A value of "{{2x}}" or "{{4x}}" means that the following steps MUST be - performed: - - 1. Up-sample the input samples to 2x or 4x the sample-rate of - the {{AudioContext}}. Thus for each render - quantum, generate twice (for 2x) or four times (for 4x) samples. - - 2. Apply the shaping curve. - - 3. Down-sample the result back to the sample-rate of the - {{AudioContext}}. Thus taking the previously processed samples - processed samples, generating a single render quantum worth of - samples as the final result. -
- - The exact up-sampling and down-sampling filters are not - specified, and can be tuned for sound quality (low aliasing, - etc.), low latency, or performance. - - Note: Use of oversampling introduces some degree of audio processing - latency due to the up-sampling and down-sampling filters. The - amount of this latency can vary from one implementation to - another. + : curve + :: + The shaping curve used for the waveshaping effect. The input + signal is nominally within the range [-1, 1]. Each input sample + within this range will index into the shaping curve, with a + signal level of zero corresponding to the center value of the + curve array if there are an odd number of entries, or + interpolated between the two centermost values if there are an + even number of entries in the array. Any sample value less than + -1 will correspond to the first value in the curve array. Any + sample value greater than +1 will correspond to the last value + in the curve array. + + The implementation MUST perform linear interpolation between + adjacent points in the curve. Initially the curve attribute is + null, which means that the WaveShaperNode will pass its input + to its output without modification. + + Values of the curve are spread with equal spacing in the [-1; + 1] range. This means that a {{WaveShaperNode/curve}} with an + even number of values will not have a value for a signal at + zero, and a {{WaveShaperNode/curve}} with an odd number of + values will have a value for a signal at zero. The + output is determined by the following algorithm. + +
+ 1. Let \(x\) be the input sample, \(y\) be the + corresponding output of the node, + \(c_k\) be the \(k\)'th element of the + {{WaveShaperNode/curve}}, and \(N\) be + the length of the + {{WaveShaperNode/curve}}. + + 1. Let +
+				$$
+					\begin{align*}
+					v &= \frac{N-1}{2}(x + 1) \\
+					k &= \lfloor v \rfloor \\
+					f &= v - k
+					\end{align*}
+				$$
+				
+ 1. Then +
+				$$
+					\begin{align*}
+					y &=
+						\begin{cases}
+						c_0 & v \lt 0 \\
+						c_{N-1} & v \ge N - 1 \\
+						(1-f)\,c_k + fc_{k+1} & \mathrm{otherwise}
+						\end{cases}
+					\end{align*}
+				$$
+				
+
+ + + An {{InvalidStateError}} MUST be thrown if this + attribute is set with a {{Float32Array}} that has a + length less than 2. + + When this attribute is set, an internal copy of the curve is + created by the {{WaveShaperNode}}. Subsequent + modifications of the contents of the array used to set the + attribute therefore have no effect. +
+ To set the {{WaveShaperNode/curve}} attribute, execute these steps: + + 1. Let new curve be a {{Float32Array}} to be assigned to {{WaveShaperNode/curve}} or null. + + 2. If new curve is not null and + {{WaveShaperNode/[[curve set]]}} is true, throw an + {{InvalidStateError}} and abort these steps. + + 3. If new curve is not null, set + {{WaveShaperNode/[[curve set]]}} to true. + + 4. Assign new curve to the {{WaveShaperNode/curve}} + attribute. +
+ + Note: The use of a curve that produces a non-zero + output value for zero input value will cause this node + to produce a DC signal even if there are no inputs + connected to this node. This will persist until the + node is disconnected from downstream nodes. + + : oversample + :: + Specifies what type of oversampling (if any) should be used + when applying the shaping curve. The default value is "{{none}}", + meaning the curve will be applied directly to the input + samples. A value of "{{2x}}" or "{{4x}}" can improve the quality of the + processing by avoiding some aliasing, with the "{{4x}}" value + yielding the highest quality. For some applications, it's + better to use no oversampling in order to get a very precise + shaping curve. + +
+ A value of "{{2x}}" or "{{4x}}" means that the following steps MUST be + performed: + + 1. Up-sample the input samples to 2x or 4x the sample-rate of + the {{AudioContext}}. Thus for each render + quantum, generate 256 (for 2x) or 512 (for 4x) samples. + + 2. Apply the shaping curve. + + 3. Down-sample the result back to the sample-rate of the + {{AudioContext}}. Thus taking the 256 (or 512) + processed samples, generating 128 samples as the final result. +
+ + The exact up-sampling and down-sampling filters are not + specified, and can be tuned for sound quality (low aliasing, + etc.), low latency, or performance. + + Note: Use of oversampling introduces some degree of audio processing + latency due to the up-sampling and down-sampling filters. The + amount of this latency can vary from one implementation to + another.
-

+
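Note: The curve-indexing algorithm above can be checked with a direct, informative transcription: \(v = \frac{N-1}{2}(x+1)\), \(k = \lfloor v \rfloor\), \(f = v - k\), then clamp to the first or last curve value or linearly interpolate between adjacent points.

```javascript
// Direct transcription of the WaveShaperNode curve-indexing
// algorithm above. x is one input sample, curve is the shaping
// curve array of length N.
function shape(x, curve) {
  const N = curve.length;
  const v = ((N - 1) / 2) * (x + 1);
  if (v < 0) return curve[0];           // clamp below the range
  if (v >= N - 1) return curve[N - 1];  // clamp above the range
  const k = Math.floor(v);
  const f = v - k;
  return (1 - f) * curve[k] + f * curve[k + 1]; // linear interpolation
}
```

With the 3-point identity curve `[-1, 0, 1]`, a zero input lands exactly on the center value, as the prose above describes for odd-length curves.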

{{WaveShaperOptions}}

This specifies the options for constructing a @@ -10327,8 +9629,8 @@ not specified, the normal default is used in constructing the node. dictionary WaveShaperOptions : AudioNodeOptions { - sequence<float> curve; - OverSampleType oversample = "none"; + sequence<float> curve; + OverSampleType oversample = "none"; }; @@ -10336,42 +9638,25 @@ dictionary WaveShaperOptions : AudioNodeOptions { Dictionary {{WaveShaperOptions}} Members
- : curve - :: The shaping curve for the waveshaping effect. + : curve + :: The shaping curve for the waveshaping effect. - : oversample - :: The type of oversampling to use for the shaping curve. + : oversample + :: The type of oversampling to use for the shaping curve.
-

+
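Note: A {{WaveShaperOptions/curve}} is typically generated programmatically. The following informative sketch builds a soft-clipping curve; the function name `makeSoftClipCurve`, the `drive` parameter, and the choice of `tanh` as the shaping function are our own, not part of the API.

```javascript
// Hypothetical example of building a WaveShaperOptions curve: a
// soft-clipping tanh curve sampled at `length` equally spaced
// points over the nominal [-1, 1] input range.
function makeSoftClipCurve(drive, length) {
  const curve = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const x = (2 * i) / (length - 1) - 1; // spread over [-1, 1]
    curve[i] = Math.tanh(drive * x);
  }
  return curve;
}

// Usage sketch:
//   const shaper = new WaveShaperNode(context,
//       { curve: makeSoftClipCurve(3, 1025), oversample: "4x" });
```

An odd `length` puts a sample exactly at zero input (see the {{WaveShaperNode/curve}} description above), keeping zero input mapped to zero output and avoiding an unintended DC offset.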

The {{AudioWorklet}} Interface

 [Exposed=Window, SecureContext]
 interface AudioWorklet : Worklet {
-  readonly attribute MessagePort port;
 };
 
-

Attributes

- -
- : port - :: - A {{MessagePort}} connected to the port on the - {{AudioWorkletGlobalScope}}. - - Note: Authors that register an event listener on the - "message" event of this {{AudioWorklet/port}} should - call {{MessagePort/close}} on either end of the {{MessageChannel}} - (either in the {{AudioWorklet}} or the {{AudioWorkletGlobalScope}} - side) to allow for resources to be - [[html#ports-and-garbage-collection|collected]]. -
-

Concepts

@@ -10390,11 +9675,11 @@ objects, and the latter implements the internal audio processing within a special scope named {{AudioWorkletGlobalScope}}.
- AudioWorklet concept -
- {{AudioWorkletNode}} and - {{AudioWorkletProcessor}} -
+ AudioWorklet concept +
+ {{AudioWorkletNode}} and + {{AudioWorkletProcessor}} +
Each {{BaseAudioContext}} possesses exactly one @@ -10415,26 +9700,26 @@ instances created from the constructor. {{AudioWorklet}} has one internal slot: - node name to parameter descriptor map which is a map containing - an identical set of string keys from node name to processor - constructor map that are associated with the matching - parameterDescriptors values. This internal storage is - populated as a consequence of calling the {{registerProcessor()}} - method in the rendering thread. The population is guaranteed - to complete prior to the resolution of the promise returned by - {{addModule()}} on a context's {{BaseAudioContext/audioWorklet}}. + an identical set of string keys from node name to processor + constructor map that are associated with the matching + parameterDescriptors values. This internal storage is + populated as a consequence of calling the {{registerProcessor()}} + method in the rendering thread. The population is guaranteed + to complete prior to the resolution of the promise returned by + {{addModule()}} on a context's {{BaseAudioContext/audioWorklet}}.
 // bypass-processor.js script file, runs on AudioWorkletGlobalScope
 class BypassProcessor extends AudioWorkletProcessor {
-    process (inputs, outputs) {
-        // Single input, single channel.
-        const input = inputs[0];
-        const output = outputs[0];
-        output[0].set(input[0]);
-
-        // Process only while there are active inputs.
-        return false;
-    }
+	process (inputs, outputs) {
+		// Single input, single channel.
+		const input = inputs[0];
+		const output = outputs[0];
+		output[0].set(input[0]);
+
+		// Process only while there are active inputs.
+		return false;
+	}
 };
 
 registerProcessor('bypass-processor', BypassProcessor);
@@ -10444,7 +9729,7 @@ registerProcessor('bypass-processor', BypassProcessor);
 // The main global scope
 const context = new AudioContext();
 context.audioWorklet.addModule('bypass-processor.js').then(() => {
-    const bypassNode = new AudioWorkletNode(context, 'bypass-processor');
+	const bypassNode = new AudioWorkletNode(context, 'bypass-processor');
 });
 
@@ -10454,7 +9739,7 @@
 created in {{AudioWorkletGlobalScope}}. These two objects
 communicate via the asynchronous message passing described in
 [[#processing-model]].
-

+

The {{AudioWorkletGlobalScope}} Interface

This special execution context is designed to enable the
@@ -10468,31 +9753,28 @@
 with {{AudioWorkletNode}}s in the main scope.

 Exactly one {{AudioWorkletGlobalScope}} exists for each
 {{AudioContext}} that contains one or more {{AudioWorkletNode}}s.
 The running of imported scripts is
-performed by the UA as defined in [[!HTML]]. Overriding the default
-specified in [[!HTML]], {{AudioWorkletGlobalScope}}s must not be
-[=terminate a worklet global scope|terminated=] arbitrarily by the user
-agent.
+performed by the UA as defined in [[!worklets-1]].

 An {{AudioWorkletGlobalScope}} has the following internal slots:

 - node name to processor constructor map which is a map
-    that stores key-value pairs of
-    processor name → {{AudioWorkletProcessorConstructor}} instance.
-    Initially this map is empty and populated when the
-    {{registerProcessor()}} method is called.
+	that stores key-value pairs of
+	processor name → {{AudioWorkletProcessorConstructor}} instance.
+	Initially this map is empty and populated when the
+	{{registerProcessor()}} method is called.

 - pending processor construction data stores temporary data
-    generated by the {{AudioWorkletNode}} constructor for the
-    instantiation of the corresponding {{AudioWorkletProcessor}}. The
-    [=pending processor construction data=] contains the following items:
-    - node reference which is initially empty. This storage is for an
-        {{AudioWorkletNode}} reference that is transferred from the
-        {{AudioWorkletNode}} constructor.
-    - transferred port which is initially empty. This storage is for a deserialized
-        {{MessagePort}} that is transferred from the
-        {{AudioWorkletNode}} constructor.
+	generated by the {{AudioWorkletNode}} constructor for the
+	instantiation of the corresponding {{AudioWorkletProcessor}}. The
+	[=pending processor construction data=] contains the following items:
+	- node reference which is initially empty. This storage is for an
+		{{AudioWorkletNode}} reference that is transferred from the
+		{{AudioWorkletNode}} constructor.
+	- transferred port which is initially empty. This storage is for a deserialized
+		{{MessagePort}} that is transferred from the
+		{{AudioWorkletNode}} constructor.

 Note: The {{AudioWorkletGlobalScope}} may also contain any other data
 and code to be shared by these instances. As an example, multiple
@@ -10509,13 +9791,11 @@
 callback AudioWorkletProcessorConstructor = AudioWorkletProcessor (object options);

 [Global=(Worklet, AudioWorklet), Exposed=AudioWorklet]
 interface AudioWorkletGlobalScope : WorkletGlobalScope {
-    undefined registerProcessor (DOMString name,
-                                 AudioWorkletProcessorConstructor processorCtor);
-    readonly attribute unsigned long long currentFrame;
-    readonly attribute double currentTime;
-    readonly attribute float sampleRate;
-    readonly attribute unsigned long renderQuantumSize;
-    readonly attribute MessagePort port;
+	void registerProcessor (DOMString name,
+	                        AudioWorkletProcessorConstructor processorCtor);
+	readonly attribute unsigned long long currentFrame;
+	readonly attribute double currentTime;
+	readonly attribute float sampleRate;
 };

@@ -10523,156 +9803,140 @@ interface AudioWorkletGlobalScope : WorkletGlobalScope {

Attributes
-    : currentFrame
-    ::
-        The current frame of the block of audio being
-        processed. This must be equal to the value of the
-        {{[[current frame]]}} internal slot of the
-        {{BaseAudioContext}}.
-
-    : currentTime
-    ::
-        The context time of the block of audio being processed. By
-        definition this will be equal to the value of
-        {{BaseAudioContext}}'s {{BaseAudioContext/currentTime}} attribute that was most
-        recently observable in the control thread.
-
-    : sampleRate
-    ::
-        The sample rate of the associated {{BaseAudioContext}}.
-
-    : renderQuantumSize
-    ::
-        The value of the private slot [[render quantum size]] of the associated
-        {{BaseAudioContext}}.
-
-    : port
-    ::
-        A {{MessagePort}} connected to the port on the {{AudioWorklet}}.
-
-        Note: Authors that register an event listener on the "message"
-        event of this {{AudioWorkletGlobalScope/port}} should call
-        {{MessagePort/close}} on either end of the {{MessageChannel}} (either
-        in the {{AudioWorklet}} or the {{AudioWorkletGlobalScope}} side) to
-        allow for resources to be
-        [[html#ports-and-garbage-collection|collected]].
+	: currentFrame
+	::
+		The current frame of the block of audio being
+		processed. This must be equal to the value of the
+		{{[[current frame]]}} internal slot of the
+		{{BaseAudioContext}}.
+
+	: currentTime
+	::
+		The context time of the block of audio being processed. By
+		definition this will be equal to the value of
+		{{BaseAudioContext}}'s {{BaseAudioContext/currentTime}} attribute that was most
+		recently observable in the control thread.
+
+	: sampleRate
+	::
+		The sample rate of the associated {{BaseAudioContext}}.
Methods
-    : registerProcessor(name, processorCtor)
-    ::
-        Registers a class constructor derived from
-        {{AudioWorkletProcessor}}.
-
-        When the {{AudioWorkletGlobalScope/registerProcessor(name, processorCtor)}}
-        method is called, perform the following steps. If an
-        exception is thrown in any step, abort the remaining
-        steps.
-
-        1. If name is an empty string,
-            throw a {{NotSupportedError}}.
-
-        1. If name already exists as a key in the
-            node name to processor constructor map,
-            throw a {{NotSupportedError}}.
-
-        1. If the result of
-            IsConstructor(argument=processorCtor)
-            is false, throw a {{TypeError}}.
-
-        1. Let prototype be the result of
-            Get(O=processorCtor, P="prototype").
-
-        1. If the result of
-            Type(argument=prototype)
-            is not Object, throw a {{TypeError}}.
-
-        1. Let parameterDescriptorsValue be the result of
-            Get(O=processorCtor, P="parameterDescriptors").
-
-        1. If parameterDescriptorsValue is not {{undefined}},
-            execute the following steps:
-
-            1. Let parameterDescriptorSequence be the result of
-                the conversion from parameterDescriptorsValue
-                to an IDL value of type
-                sequence<AudioParamDescriptor>.
-
-            1. Let paramNames be an empty Array.
-
-            1. For each descriptor of parameterDescriptorSequence:
-
-                1. Let paramName be the value of
-                    the member {{AudioParamDescriptor/name}}
-                    in descriptor. Throw
-                    a {{NotSupportedError}} if
-                    paramNames already
-                    contains paramName value.
-
-                1. Append paramName to
-                    the paramNames array.
-
-                1. Let defaultValue be the value of
                    the member {{AudioParamDescriptor/defaultValue}}
-                    in descriptor.
-
-                1. Let minValue be the value of
-                    the member {{AudioParamDescriptor/minValue}}
-                    in descriptor.
-
-                1. Let maxValue be the value of
-                    the member {{AudioParamDescriptor/maxValue}}
-                    in descriptor.
-
-                1. If the expresstion
-                    minValue <= defaultValue <= maxValue
-                    is false, throw an {{InvalidStateError}}.
-
-        1. Append the key-value pair name →
-            processorCtor to
-            node name to processor constructor map
-            of the associated {{AudioWorkletGlobalScope}}.
-
-        1. queue a media element task to append the key-value pair |name| →
-            |parameterDescriptorSequence| to the node name to parameter descriptor map of the
-            associated {{BaseAudioContext}}.
-
-        Note: The class constructor should only be looked up once, thus it
-        does not have the opportunity to dynamically change after registration.
-
-            name: A string key that represents a class constructor to be registered. This key is used to look up the constructor of {{AudioWorkletProcessor}} during construction of an {{AudioWorkletNode}}.
-            processorCtor: A class constructor extended from {{AudioWorkletProcessor}}.
-
-        Return type: {{undefined}}
+	: registerProcessor(name, processorCtor)
+	::
+		Registers a class constructor derived from
+		{{AudioWorkletProcessor}}.
+
+		When the {{AudioWorkletGlobalScope/registerProcessor(name, processorCtor)}}
+		method is called, perform the following steps. If an
+		exception is thrown in any step, abort the remaining
+		steps.
+
+		1. If name is an empty string,
+			throw a {{NotSupportedError}}.
+
+		1. If name already exists as a key in the
+			node name to processor constructor map,
+			throw a {{NotSupportedError}}.
+
+		1. If the result of
+			IsConstructor(argument=processorCtor)
+			is false, throw a {{TypeError}}.
+
+		1. Let prototype be the result of
+			Get(O=processorCtor, P="prototype").
+
+		1. If the result of
+			Type(argument=prototype)
+			is not Object, throw a {{TypeError}}.
+
+		1. Let parameterDescriptorsValue be the result of
+			Get(O=processorCtor, P="parameterDescriptors").
+
+		1. If parameterDescriptorsValue is not undefined,
+			execute the following steps:
+
+			1. Let parameterDescriptorSequence be the result of
+				the conversion from parameterDescriptorsValue
+				to an IDL value of type
+				sequence<AudioParamDescriptor>.
+
+			1. Let paramNames be an empty Array.
+
+			1. For each descriptor of parameterDescriptorSequence:
+
+				1. Let paramName be the value of
+					the member {{AudioParamDescriptor/name}}
+					in descriptor. Throw
+					a {{NotSupportedError}} if
+					paramNames already
+					contains paramName value.
+
+				1. Append paramName to
+					the paramNames array.
+
+				1. Let defaultValue be the value of
+					the member {{AudioParamDescriptor/defaultValue}}
+					in descriptor.
+
+				1. Let minValue be the value of
+					the member {{AudioParamDescriptor/minValue}}
+					in descriptor.
+
+				1. Let maxValue be the value of
+					the member {{AudioParamDescriptor/maxValue}}
+					in descriptor.
+
+				1. If defaultValue is less than
+					minValue or greater than
+					maxValue, throw an
+					{{InvalidStateError}}.
+
+		1. Append the key-value pair name →
+			processorCtor to
+			node name to processor constructor map
+			of the associated {{AudioWorkletGlobalScope}}.
+
+		1. Queue a task to the control thread to
+			append the key-value pair name →
+			parameterDescriptorSequence to
+			the node name to parameter descriptor map
+			of the associated {{BaseAudioContext}}.
+
+		Note: The class constructor should only be looked up once, thus it
+		does not have the opportunity to dynamically change after registration.
+
+			name: A string key that represents a class constructor to be registered. This key is used to look up the constructor of {{AudioWorkletProcessor}} during construction of an {{AudioWorkletNode}}.
+			processorCtor: A class constructor extended from {{AudioWorkletProcessor}}.
+
+		Return type: void
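The validation performed by the steps above can be sketched as a plain function. This is an illustrative model only, not the spec algorithm itself: `registerProcessorSketch`, `nodeNameToProcessorCtorMap`, and the explicit `descriptors` argument are assumptions made so the logic runs outside an {{AudioWorkletGlobalScope}} (the real method reads `parameterDescriptors` off the constructor).

```javascript
// Hypothetical sketch of registerProcessor()'s validation steps, modeling
// the "node name to processor constructor map" as a plain Map.
const nodeNameToProcessorCtorMap = new Map();

function registerProcessorSketch(name, processorCtor, descriptors = []) {
  if (name === '') {
    throw new DOMException('empty name', 'NotSupportedError');
  }
  if (nodeNameToProcessorCtorMap.has(name)) {
    throw new DOMException('duplicate name', 'NotSupportedError');
  }
  if (typeof processorCtor !== 'function') {
    throw new TypeError('processorCtor is not a constructor');
  }
  const paramNames = new Set();
  for (const { name: paramName, defaultValue, minValue, maxValue } of descriptors) {
    if (paramNames.has(paramName)) {
      throw new DOMException('duplicate parameter name', 'NotSupportedError');
    }
    paramNames.add(paramName);
    // defaultValue must lie within [minValue, maxValue].
    if (!(minValue <= defaultValue && defaultValue <= maxValue)) {
      throw new DOMException('defaultValue out of range', 'InvalidStateError');
    }
  }
  // All checks passed: record the registration.
  nodeNameToProcessorCtorMap.set(name, processorCtor);
}
```

Note that the name is recorded only after every descriptor has been validated, mirroring the ordering of the steps above.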
@@ -10686,66 +9950,63 @@
 will be prepared for cross-thread transfer. It contains the
 following items:

 - name which is a {{DOMString}}
-    that is to be looked up in the
-    node name to processor constructor map.
+	that is to be looked up in the
+	node name to processor constructor map.

 - node which is a reference to
-    the {{AudioWorkletNode}} created.
+	the {{AudioWorkletNode}} created.

 - options which is a serialized
-    {{AudioWorkletNodeOptions}} given to the {{AudioWorkletNode}}'s
-    {{AudioWorkletNode()|constructor}}.
+	{{AudioWorkletNodeOptions}} given to the {{AudioWorkletNode}}'s
+	{{AudioWorkletNode()|constructor}}.

 - port which is a serialized
-    {{MessagePort}} paired with the {{AudioWorkletNode}}'s
-    {{AudioWorkletNode/port}}.
+	{{MessagePort}} paired with the {{AudioWorkletNode}}'s
+	{{AudioWorkletNode/port}}.

 Upon the arrival of the transferred data on the
 {{AudioWorkletGlobalScope}}, the rendering thread will invoke
 the algorithm below:
-	1. Let constructionData be the
-		[=processor construction data=] transferred from the
-		[=control thread=].
-
-	1. Let processorName, nodeReference and
-		serializedPort be constructionData's
-		[=processor construction data/name=],
-		[=processor construction data/node=], and
-		[=processor construction data/port=] respectively.
-
-	1. Let serializedOptions be constructionData's
-		[=processor construction data/options=].
-
-	1. Let deserializedPort be the result of
-		[$StructuredDeserialize$](serializedPort, the current Realm).
-
-	1. Let deserializedOptions be the result of
-		[$StructuredDeserialize$](serializedOptions, the current Realm).
-
-	1. Let processorCtor be the result of looking
-		up processorName on the {{AudioWorkletGlobalScope}}'s
-		node name to processor constructor map.
-
-	1. Store nodeReference and deserializedPort to
-		[=pending processor construction data/node reference=] and
-		[=pending processor construction data/transferred port=]
-		of this {{AudioWorkletGlobalScope}}'s
-		[=pending processor construction data=] respectively.
-
-	1. Construct a callback function from |processorCtor| with
-		the argument of |deserializedOptions|. If any exceptions are thrown in the callback,
-		queue a task to the control thread to [=fire an event=] named
-		{{AudioWorkletNode/processorerror}} at |nodeReference| using {{ErrorEvent}}.
-
-	1. Empty the [=pending processor construction data=] slot.
-
+	1. Let constructionData be the
+		[=processor construction data=] transferred from the
+		[=control thread=].
+
+	1. Let processorName, nodeReference and
+		serializedPort be constructionData's
+		[=processor construction data/name=],
+		[=processor construction data/node=], and
+		[=processor construction data/port=] respectively.
+
+	1. Let serializedOptions be constructionData's
+		[=processor construction data/options=].
+
+	1. Let deserializedPort be the result of
+		[$StructuredDeserialize$](serializedPort, the current Realm).
+
+	1. Let deserializedOptions be the result of
+		[$StructuredDeserialize$](serializedOptions, the current Realm).
+
+	1. Let processorCtor be the result of looking
+		up processorName on the {{AudioWorkletGlobalScope}}'s
+		node name to processor constructor map.
+
+	1. Store nodeReference and deserializedPort to
+		[=pending processor construction data/node reference=] and
+		[=pending processor construction data/transferred port=]
+		of this {{AudioWorkletGlobalScope}}'s
+		[=pending processor construction data=] respectively.
+
+	1. Construct a callback function
+		from processorCtor with the argument of
+		deserializedOptions.
+
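The lookup-and-construct portion of this rendering-thread algorithm can be sketched as a plain function. This is a simplified model for illustration: `constructProcessor`, the `registry` Map, and the shape of `constructionData` are assumptions, and the real algorithm additionally stores the node reference and transferred port in the [=pending processor construction data=] before construction.

```javascript
// Hypothetical sketch: look the processor name up in a registry map
// (standing in for the node name to processor constructor map) and
// construct the processor with the deserialized options.
function constructProcessor(registry, constructionData) {
  const { name, options } = constructionData;
  const ProcessorCtor = registry.get(name);
  if (ProcessorCtor === undefined) {
    throw new DOMException('unknown processor name', 'InvalidStateError');
  }
  // In the real algorithm, an exception thrown here leads to a
  // "processorerror" event on the node; in this sketch it propagates.
  return new ProcessorCtor(options);
}
```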

The {{AudioWorkletNode}} Interface

This interface represents a user-defined {{AudioNode}} which
@@ -10757,26 +10018,23 @@
 an audio graph.
 path: audionode.include
 macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: See notes
-    tail-time-notes:  Any tail-time is handled by the node itself
+	noi: 1
+	noo: 1
+	cc: 2
+	cc-mode: max
+	cc-interp: speakers
+	tail-time: See notes
+	tail-time-notes:  Any tail-time is handled by the node itself
 
Every {{AudioWorkletProcessor}} has an associated active source
flag, initially `true`. This flag causes the node to be retained in
memory and perform audio processing in the absence of any connected
inputs.

-All tasks posted from an {{AudioWorkletNode}} are posted to the task queue of
-its associated {{BaseAudioContext}}.
-
 [Exposed=Window]
 interface AudioParamMap {
-    readonly maplike<DOMString, AudioParam>;
+	readonly maplike<DOMString, AudioParam>;
 };

@@ -10787,11 +10045,11 @@
 This interface has "entries", "forEach", "get", "has", "keys",
 [Exposed=Window, SecureContext]
 interface AudioWorkletNode : AudioNode {
-    constructor (BaseAudioContext context, DOMString name,
+	constructor (BaseAudioContext context, DOMString name,
                optional AudioWorkletNodeOptions options = {});
-    readonly attribute AudioParamMap parameters;
-    readonly attribute MessagePort port;
-    attribute EventHandler onprocessorerror;
+	readonly attribute AudioParamMap parameters;
+	readonly attribute MessagePort port;
+	attribute EventHandler onprocessorerror;
 };
 
@@ -10799,167 +10057,169 @@ interface AudioWorkletNode : AudioNode {

Constructors

-    : AudioWorkletNode(context, name, options)
-    ::
-            context: The {{BaseAudioContext}} this new {{AudioWorkletNode}} will be associated with.
-            name: A string that is a key for the {{BaseAudioContext}}’s node name to parameter descriptor map.
-            options: Optional initial parameters value for this {{AudioWorkletNode}}.
-
-        When the constructor is called, the user agent MUST perform the
-        following steps on the control thread:
-
-        When the {{AudioWorkletNode()|AudioWorkletNode}} constructor
-        is invoked with context, nodeName, options:
-
-        1. If nodeName does not exist as a key in the
-            {{BaseAudioContext}}’s node name to parameter
-            descriptor map, throw a {{InvalidStateError}}
-            exception and abort these steps.
-
-        1. Let node be this value.
-
-        1. Initialize the AudioNode node with
-            context and options as arguments.
-
-        1. Configure input, output and output channels
-            of node with options.
-            Abort the remaining steps if any exception is thrown.
-
-        1. Let messageChannel be a new {{MessageChannel}}.
-
-        1. Let nodePort be the value of
-            messageChannel's {{MessageChannel/port1}} attribute.
-
-        1. Let processorPortOnThisSide be the value of
-            messageChannel's {{MessageChannel/port2}} attribute.
-
-        1. Let serializedProcessorPort be the result of
-            [$StructuredSerializeWithTransfer$](processorPortOnThisSide,
-            « processorPortOnThisSide »).
-
-        1. Convert options dictionary to optionsObject.
-
-        1. Let serializedOptions be the result of
-            [$StructuredSerialize$](optionsObject).
-
-        1. Set node's {{AudioWorkletNode/port}} to nodePort.
-
-        1. Let parameterDescriptors be the result of retrieval
-            of nodeName from node name to parameter descriptor map:
-
-            1. Let audioParamMap be a new {{AudioParamMap}} object.
-
-            1. For each descriptor of parameterDescriptors:
-
-                1. Let paramName be the value of
-                    {{AudioParamDescriptor/name}} member in
-                    descriptor.
-
-                1. Let audioParam be a new
-                    {{AudioParam}} instance with
-                    {{AudioParamDescriptor/automationRate}},
-                    {{AudioParamDescriptor/defaultValue}},
-                    {{AudioParamDescriptor/minValue}}, and
-                    {{AudioParamDescriptor/maxValue}}
-                    having values equal to the values of
-                    corresponding members on descriptor.
-
-                1. Append a key-value pair
-                    paramName → audioParam to
-                    audioParamMap's entries.
-
-            1. If {{AudioWorkletNodeOptions/parameterData}} is
-                present on options, perform the
-                following steps:
-
-                1. Let parameterData be the value of
-                    {{AudioWorkletNodeOptions/parameterData}}.
-
-                1. For each paramName →
-                    paramValue of parameterData:
-
-                    1. If there exists a map entry on
-                        audioParamMap with key paramName,
-                        let audioParamInMap be such entry.
-
-                    1. Set {{AudioParam/value}} property
-                        of audioParamInMap to paramValue.
-
-            1. Set node's {{AudioWorkletNode/parameters}} to audioParamMap.
-
-        1. Queue a control message to invoke
-            the {{AudioWorkletProcessor()|constructor}} of
-            the corresponding {{AudioWorkletProcessor}} with
-            the [=processor construction data=] that consists of:
-            nodeName, node,
-            serializedOptions, and
-            serializedProcessorPort.
+	: AudioWorkletNode(context, name, options)
+	::
+			context: The {{BaseAudioContext}} this new {{AudioWorkletNode}} will be associated with.
+			name: A string that is a key for the {{BaseAudioContext}}’s node name to parameter descriptor map.
+			options: Optional initial parameters value for this {{AudioWorkletNode}}.
+
+		When the constructor is called, the user agent MUST perform the
+		following steps on the control thread:
+
+		When the {{AudioWorkletNode()|AudioWorkletNode}} constructor
+		is invoked with context, nodeName, options:
+
+		1. If nodeName does not exist as a key in the
+			{{BaseAudioContext}}’s node name to parameter
+			descriptor map, throw a {{InvalidStateError}}
+			exception and abort these steps.
+
+		1. Let node be this value.
+
+		1. Initialize the AudioNode node with
+			context and options as arguments.
+
+		1. Configure input, output and output channels
+			of node with options.
+			Abort the remaining steps if any exception is thrown.
+
+		1. Let messageChannel be a new {{MessageChannel}}.
+
+		1. Let nodePort be the value of
+			messageChannel's {{MessageChannel/port1}} attribute.
+
+		1. Let processorPortOnThisSide be the value of
+			messageChannel's {{MessageChannel/port2}} attribute.
+
+		1. Let serializedProcessorPort be the result of
+			[$StructuredSerializeWithTransfer$](processorPortOnThisSide,
+			« processorPortOnThisSide »).
+
+		1. Convert options dictionary to optionsObject.
+
+		1. Let serializedOptions be the result of
+			[$StructuredSerialize$](optionsObject).
+
+		1. Set node's {{AudioWorkletNode/port}} to nodePort.
+
+		1. Let parameterDescriptors be the result of retrieval
+			of nodeName from node name to parameter descriptor map:
+
+			1. Let audioParamMap be a new {{AudioParamMap}} object.
+
+			1. For each descriptor of parameterDescriptors:
+
+				1. Let paramName be the value of
+					{{AudioParamDescriptor/name}} member in
+					descriptor.
+
+				1. Let audioParam be a new
+					{{AudioParam}} instance with
+					{{AudioParamDescriptor/automationRate}},
+					{{AudioParamDescriptor/defaultValue}},
+					{{AudioParamDescriptor/minValue}}, and
+					{{AudioParamDescriptor/maxValue}}
+					having values equal to the values of
+					corresponding members on descriptor.
+
+				1. Append a key-value pair
+					paramName → audioParam to
+					audioParamMap's entries.
+
+			1. If {{AudioWorkletNodeOptions/parameterData}} is
+				present on options, perform the
+				following steps:
+
+				1. Let parameterData be the value of
+					{{AudioWorkletNodeOptions/parameterData}}.
+
+				1. For each paramName →
+					paramValue of parameterData:
+
+					1. If there exists a map entry on
+						audioParamMap with key paramName,
+						let audioParamInMap be such entry.
+
+					1. Set {{AudioParam/value}} property
+						of audioParamInMap to paramValue.
+
+			1. Set node's {{AudioWorkletNode/parameters}} to audioParamMap.
+
+		1. Queue a control message to invoke
+			the {{AudioWorkletProcessor()|constructor}} of
+			the corresponding {{AudioWorkletProcessor}} with
+			the [=processor construction data=] that consists of:
+			nodeName, node,
+			serializedOptions, and
+			serializedProcessorPort.
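The parameter-map portion of the constructor steps can be sketched as a plain function. This is an illustrative model only: `buildParameterMap` is a hypothetical name, and plain objects stand in for real {{AudioParam}} instances.

```javascript
// Hypothetical sketch: build the parameters map from the registered
// descriptors, then apply AudioWorkletNodeOptions.parameterData overrides.
function buildParameterMap(parameterDescriptors, parameterData = {}) {
  const audioParamMap = new Map();
  for (const d of parameterDescriptors) {
    // Stand-in for an AudioParam, carrying the descriptor's values.
    audioParamMap.set(d.name, {
      value: d.defaultValue,
      defaultValue: d.defaultValue,
      minValue: d.minValue,
      maxValue: d.maxValue,
    });
  }
  // parameterData overrides the initial value of params with a matching
  // name; entries with no matching param are ignored.
  for (const [paramName, paramValue] of Object.entries(parameterData)) {
    const param = audioParamMap.get(paramName);
    if (param !== undefined) param.value = paramValue;
  }
  return audioParamMap;
}
```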
Attributes
-    : onprocessorerror
-    ::
-        When an unhandled exception is thrown from the processor's
-        constructor, process method,
-        or any user-defined class method, the processor will
-        [=queue a media element task=] to [=fire an event=] named processorerror
-        at the associated {{AudioWorkletNode}} using {{ErrorEvent}}.
-
-        The ErrorEvent is created and initialized
-        appropriately with its message,
-        filename, lineno, colno
-        attributes on the control thread.
-
-        Note that once a unhandled exception is thrown, the processor
-        will output silence throughout its lifetime.
-
-    : parameters
-    ::
-        The parameters attribute is a collection of
-        {{AudioParam}} objects with associated names. This maplike
-        object is populated from a list of {{AudioParamDescriptor}}s
-        in the {{AudioWorkletProcessor}} class constructor at the
-        instantiation.
-
-    : port
-    ::
-        Every {{AudioWorkletNode}} has an associated
-        port which is the {{MessagePort}}. It is connected to the port on the
-        corresponding {{AudioWorkletProcessor}} object allowing
-        bidirectional communication between the
-        {{AudioWorkletNode}} and its {{AudioWorkletProcessor}}.
-
-        Note: Authors that register an event listener on the "message"
-        event of this {{AudioWorkletNode/port}} should call {{MessagePort/close}} on
-        either end of the {{MessageChannel}} (either in the
-        {{AudioWorkletProcessor}} or the {{AudioWorkletNode}} side) to allow for
-        resources to be [[html#ports-and-garbage-collection|collected]].
+	: onprocessorerror
+	::
+		When an unhandled exception is thrown from the processor's
+		constructor, process method,
+		or any user-defined class method, the processor will
+		queue a task to fire an event named processorerror using
+		ErrorEvent at the associated {{AudioWorkletNode}}.
+
+		The ErrorEvent is created and initialized
+		appropriately with its message,
+		filename, lineno, colno
+		attributes on the control thread.
+
+		Note that once an unhandled exception is thrown, the processor
+		will output silence throughout its lifetime.
+
+	: parameters
+	::
+		The parameters attribute is a collection of
+		{{AudioParam}} objects with associated names. This maplike
+		object is populated from a list of {{AudioParamDescriptor}}s
+		in the {{AudioWorkletProcessor}} class constructor at the
+		instantiation.
+
+	: port
+	::
+		Every {{AudioWorkletNode}} has an associated
+		port which is the {{MessagePort}}. It is connected to the port on the
+		corresponding {{AudioWorkletProcessor}} object allowing
+		bidirectional communication between the
+		{{AudioWorkletNode}} and its {{AudioWorkletProcessor}}.
+
+		Note: Authors that register an event listener on the "message"
+		event of this {{AudioWorkletNode/port}} should call {{MessagePort/close}} on
+		either end of the {{MessageChannel}} (either in the
+		{{AudioWorkletProcessor}} or the {{AudioWorkletNode}} side) to allow for
+		resources to be [[html#ports-and-garbage-collection|collected]].
-
+
{{AudioWorkletNodeOptions}}
The {{AudioWorkletNodeOptions}} dictionary can be used
@@ -10967,11 +10227,11 @@
 to initialize attributes in the instance of an {{AudioWorkletNode}}.

 dictionary AudioWorkletNodeOptions : AudioNodeOptions {
-    unsigned long numberOfInputs = 1;
-    unsigned long numberOfOutputs = 1;
-    sequence<unsigned long> outputChannelCount;
-    record<DOMString, double> parameterData;
-    object processorOptions;
+	unsigned long numberOfInputs = 1;
+	unsigned long numberOfOutputs = 1;
+	sequence<unsigned long> outputChannelCount;
+	record<DOMString, double> parameterData;
+	object processorOptions;
 };

@@ -10979,32 +10239,32 @@

Dictionary {{AudioWorkletNodeOptions}} Members
-    : numberOfInputs
-    ::
-        This is used to initialize the value of the {{AudioNode}}
-        {{AudioNode/numberOfInputs}} attribute.
-
-    : numberOfOutputs
-    ::
-        This is used to initialize the value of the {{AudioNode}}
-        {{AudioNode/numberOfOutputs}} attribute.
-
-    : outputChannelCount
-    ::
-        This array is used to configure the number of channels in
-        each output.
-
-    : parameterData
-    ::
-        This is a list of user-defined key-value pairs that are used
-        to set the initial {{AudioParam/value}} of an {{AudioParam}}
-        with the matched name in the {{AudioWorkletNode}}.
-
-    : processorOptions
-    ::
-        This holds any user-defined data that may be used to initialize
-        custom properties in an {{AudioWorkletProcessor}} instance
-        that is associated with the {{AudioWorkletNode}}.
+	: numberOfInputs
+	::
+		This is used to initialize the value of the {{AudioNode}}
+		{{AudioNode/numberOfInputs}} attribute.
+
+	: numberOfOutputs
+	::
+		This is used to initialize the value of the {{AudioNode}}
+		{{AudioNode/numberOfOutputs}} attribute.
+
+	: outputChannelCount
+	::
+		This array is used to configure the number of channels in
+		each output.
+
+	: parameterData
+	::
+		This is a list of user-defined key-value pairs that are used
+		to set the initial {{AudioParam/value}} of an {{AudioParam}}
+		with the matched name in the {{AudioWorkletNode}}.
+
+	: processorOptions
+	::
+		This holds any user-defined data that may be used to initialize
+		custom properties in an {{AudioWorkletProcessor}} instance
+		that is associated with the {{AudioWorkletNode}}.

@@ -11014,57 +10274,57 @@
 The following algorithm describes how an {{AudioWorkletNodeOptions}}
 can be used to configure various channel configurations.
-    1. Let node be an {{AudioWorkletNode}} instance that is
-        given to this algorithm.
-
-    1. If both {{AudioWorkletNodeOptions/numberOfInputs}} and
-        {{AudioWorkletNodeOptions/numberOfOutputs}} are zero,
-        throw a {{NotSupportedError}} and abort the remaining steps.
-
-    1. If {{AudioWorkletNodeOptions/outputChannelCount}}
-        [=map/exists=],
-
-        1. If any value in
-            {{AudioWorkletNodeOptions/outputChannelCount}} is zero
-            or greater than the implementation’s maximum number
-            of channels, throw a {{NotSupportedError}} and abort
-            the remaining steps.
-
-        1. If the length of
-            {{AudioWorkletNodeOptions/outputChannelCount}} does not
-            equal {{AudioWorkletNodeOptions/numberOfOutputs}},
-            throw an {{IndexSizeError}} and abort the remaining steps.
-
-        1. If both {{AudioWorkletNodeOptions/numberOfInputs}} and
-            {{AudioWorkletNodeOptions/numberOfOutputs}} are 1,
-            set the channel count of the node output to
-            the one value in
-            {{AudioWorkletNodeOptions/outputChannelCount}}.
-
-        1. Otherwise set the channel count of the kth output
-            of the node to the kth element
-            of {{AudioWorkletNodeOptions/outputChannelCount}}
-            sequence and return.
-
-    1. If {{AudioWorkletNodeOptions/outputChannelCount}}
-        does not [=map/exists=],
-
-        1. If both {{AudioWorkletNodeOptions/numberOfInputs}} and
-            {{AudioWorkletNodeOptions/numberOfOutputs}} are 1,
-            set the initial channel count of the node
-            output to 1 and return.
-
-            NOTE: For this case, the output chanel count will
-            change to computedNumberOfChannels dynamically
-            based on the input and the
-            {{AudioNode/channelCountMode}} at runtime.
-
-        1. Otherwise set the channel count of each output of the
-            node to 1 and return.
+	1. Let node be an {{AudioWorkletNode}} instance that is
+		given to this algorithm.
+
+	1. If both {{AudioWorkletNodeOptions/numberOfInputs}} and
+		{{AudioWorkletNodeOptions/numberOfOutputs}} are zero,
+		throw a {{NotSupportedError}} and abort the remaining steps.
+
+	1. If {{AudioWorkletNodeOptions/outputChannelCount}} is
+		present,
+
+		1. If any value in
+			{{AudioWorkletNodeOptions/outputChannelCount}} is zero
+			or greater than the implementation’s maximum number
+			of channels, throw a {{NotSupportedError}} and abort
+			the remaining steps.
+
+		1. If the length of
+			{{AudioWorkletNodeOptions/outputChannelCount}} does not
+			equal {{AudioWorkletNodeOptions/numberOfOutputs}},
+			throw an {{IndexSizeError}} and abort the remaining steps.
+
+		1. If both {{AudioWorkletNodeOptions/numberOfInputs}} and
+			{{AudioWorkletNodeOptions/numberOfOutputs}} are 1,
+			set the channel count of the node output to
+			the one value in
+			{{AudioWorkletNodeOptions/outputChannelCount}}.
+
+		1. Otherwise set the channel count of the kth output
+			of the node to the kth element
+			of {{AudioWorkletNodeOptions/outputChannelCount}}
+			sequence and return.
+
+	1. If {{AudioWorkletNodeOptions/outputChannelCount}} is
+		not present,
+
+		1. If both {{AudioWorkletNodeOptions/numberOfInputs}} and
+			{{AudioWorkletNodeOptions/numberOfOutputs}} are 1,
+			set the initial channel count of the node
+			output to 1 and return.
+
+			NOTE: For this case, the output channel count will
+			change to computedNumberOfChannels dynamically
+			based on the input and the
+			{{AudioNode/channelCountMode}} at runtime.
+
+		1. Otherwise set the channel count of each output of the
+			node to 1 and return.
-

+
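The channel-configuration algorithm above can be sketched as a plain function. This is an illustrative model only: `configureOutputChannels`, the return value (an array of per-output channel counts), and the `MAX_CHANNELS` limit are assumptions; the real maximum number of channels is implementation-defined.

```javascript
// Hypothetical sketch of the outputChannelCount configuration steps,
// returning the resulting channel count of each output.
function configureOutputChannels(options) {
  const {
    numberOfInputs = 1,
    numberOfOutputs = 1,
    outputChannelCount,
  } = options;
  const MAX_CHANNELS = 32; // assumed implementation-defined maximum

  if (numberOfInputs === 0 && numberOfOutputs === 0) {
    throw new DOMException('no inputs or outputs', 'NotSupportedError');
  }
  if (outputChannelCount !== undefined) {
    if (outputChannelCount.some((c) => c === 0 || c > MAX_CHANNELS)) {
      throw new DOMException('bad channel count', 'NotSupportedError');
    }
    if (outputChannelCount.length !== numberOfOutputs) {
      throw new DOMException('length mismatch', 'IndexSizeError');
    }
    // One channel count per output, exactly as given.
    return [...outputChannelCount];
  }
  // No outputChannelCount: every output starts with one channel (in the
  // 1-input/1-output case the count may later adapt dynamically).
  return new Array(numberOfOutputs).fill(1);
}
```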

The {{AudioWorkletProcessor}} Interface

This interface represents an audio processing code that runs on the
@@ -11073,144 +10333,184 @@
 and the definition of the class manifests the actual audio
 processing. Note that an {{AudioWorkletProcessor}} construction can
 only happen as a result of an {{AudioWorkletNode}} construction.

-
+<pre class="idl">
 [Exposed=AudioWorklet]
 interface AudioWorkletProcessor {
-    constructor ();
-    readonly attribute MessagePort port;
+	constructor ();
+	readonly attribute MessagePort port;
 };
-
-callback AudioWorkletProcessCallback =
-    boolean (FrozenArray<FrozenArray<Float32Array>> inputs,
-             FrozenArray<FrozenArray<Float32Array>> outputs,
-             object parameters);
-

 {{AudioWorkletProcessor}} has two internal slots:
- : [[node reference]] - :: - A reference to the associated {{AudioWorkletNode}}. - - : [[callable process]] - :: - A boolean flag representing whether [=process()=] is - a valid function that can be invoked. + : [[node reference]] + :: + A reference to the associated {{AudioWorkletNode}}. + + : [[callable process]] + :: + A boolean flag representing whether {{AudioWorkletProcessor/process()}} is + a valid function that can be invoked.
Constructors
- : AudioWorkletProcessor() - :: - When the constructor for {{AudioWorkletProcessor}} is invoked, - the following steps are performed on the rendering thread. - -
- 1. Let nodeReference be the result of - looking up - [=pending processor construction data/node reference=] - on the - [=pending processor construction data=] of the - current {{AudioWorkletGlobalScope}}. - Throw a {{TypeError}} exception if the slot is - empty. - - 1. Let processor be the - this value. - - 1. Set processor's {{[[node reference]]}} to - nodeReference. - - 1. Set processor's {{[[callable process]]}} - to `true`. - - 1. Let deserializedPort be the result of - looking up - [=pending processor construction data/transferred port=] - from the - [=pending processor construction data=]. - - 1. Set processor’s - {{AudioWorkletProcessor/port}} - to deserializedPort. - - 1. Empty the [=pending processor construction data=] - slot. -
+ : AudioWorkletProcessor() + :: + When the constructor for {{AudioWorkletProcessor}} is invoked, + the following steps are performed on the rendering thread. + +
+ 1. Let nodeReference be the result of + looking up + [=pending processor construction data/node reference=] + on the + [=pending processor construction data=] of the + current {{AudioWorkletGlobalScope}}. + Throw a {{TypeError}} exception if the slot is + empty. + + 1. If any of the following steps throws an exception, + perform these substeps: + + 1. Empty the + [=pending processor construction data=] + slot. + + 2. Abort the rest of the constructor algorithm + and queue a task to the + control thread + to fire + an event named + processorerror using + ErrorEvent + at nodeReference. + + 1. Let processor be the + this value. + + 1. Set processor's {{[[node reference]]}} to + nodeReference. + + 1. Set processor's {{[[callable process]]}} + to `true`. + + 1. Let deserializedPort be the result of + looking up + [=pending processor construction data/transferred port=] + from the + [=pending processor construction data=]. + + 1. Set processor’s + {{AudioWorkletProcessor/port}} + to deserializedPort. + + 1. Empty the [=pending processor construction data=] + slot. +
Attributes
- : port - :: - Every {{AudioWorkletProcessor}} has an associated - port which is a {{MessagePort}}. It is connected to - the port on the corresponding {{AudioWorkletNode}} object - allowing bidirectional communication between an - {{AudioWorkletNode}} and its {{AudioWorkletProcessor}}. - - Note: Authors that register an event listener on the "message" - event of this {{AudioWorkletProcessor/port}} should call - {{MessagePort/close}} on either end of the {{MessageChannel}} (either in the - {{AudioWorkletProcessor}} or the {{AudioWorkletNode}} side) to allow for - resources to be [[html#ports-and-garbage-collection|collected]]. + : port + :: + Every {{AudioWorkletProcessor}} has an associated + port which is a {{MessagePort}}. It is connected to + the port on the corresponding {{AudioWorkletNode}} object + allowing bidirectional communication between an + {{AudioWorkletNode}} and its {{AudioWorkletProcessor}}. + + Note: Authors that register an event listener on the "message" + event of this {{AudioWorkletProcessor/port}} should call + {{MessagePort/close}} on either end of the {{MessageChannel}} (either in the + {{AudioWorkletProcessor}} or the {{AudioWorkletNode}} side) to allow for + resources to be [[html#ports-and-garbage-collection|collected]].
-
-Callback {{AudioWorkletProcessCallback}}
+
+Methods
Users can define a custom audio processor by extending -{{AudioWorkletProcessor}}. The subclass MUST define an {{AudioWorkletProcessCallback}} -named process() that implements the audio processing +{{AudioWorkletProcessor}}. The subclass MUST define a method +named {{process()}} that implements the audio processing algorithm and may have a static property named parameterDescriptors which is an iterable of {{AudioParamDescriptor}}s. -The [=process()=] callback function is handled as specified when rendering a graph. - -
- The return value of this callback controls the lifetime - of the {{AudioWorkletProcessor}}'s associated - {{AudioWorkletNode}}. - - This lifetime policy can support a variety of approaches - found in built-in nodes, including the following: - - * Nodes that transform their inputs, and are active only - while connected inputs and/or script references exist. Such - nodes SHOULD return false from - [=process()=] which allows the presence or absence of - connected inputs to determine whether the {{AudioWorkletNode}} is - [=actively processing=]. - - * Nodes that transform their inputs, but which remain active - for a tail-time after their inputs are disconnected. In - this case, [=process()=] SHOULD return - `true` for some period of time after - inputs is found to contain zero channels. The - current time may be obtained from the global scope's - {{AudioWorkletGlobalScope/currentTime}} to - measure the start and end of this tail-time interval, or the - interval could be calculated dynamically depending on the - processor's internal state. - - * Nodes that act as sources of output, typically with a - lifetime. Such nodes SHOULD return `true` from - [=process()=] until the point at which they are no - longer producing an output. - - Note that the preceding definition implies that when no - return value is provided from an implementation of - [=process()=], the effect is identical to returning - false (since the effective return value is the falsy - value {{undefined}}). This is a reasonable behavior for - any {{AudioWorkletProcessor}} that is active only when it has - active inputs. -
+
+ : process(inputs, outputs, parameters) + :: + Implements the audio processing algorithm for the + {{AudioWorkletProcessor}}. + + The {{process()}} method is called + synchronously by the audio rendering thread at + every render quantum, if the {{AudioWorkletNode}} is + [=actively processing=]. +
+ The return value of this method controls the lifetime + of the {{AudioWorkletProcessor}}'s associated + {{AudioWorkletNode}}. + + This lifetime policy can support a variety of approaches + found in built-in nodes, including the following: + + * Nodes that transform their inputs, and are active only + while connected inputs and/or script references exist. Such + nodes SHOULD return false from + {{process()}} which allows the presence or absence of + connected inputs to determine whether the {{AudioWorkletNode}} is + [=actively processing=]. + + * Nodes that transform their inputs, but which remain active + for a tail-time after their inputs are disconnected. In + this case, {{process()}} SHOULD return + `true` for some period of time after + inputs is found to contain zero channels. The + current time may be obtained from the global scope's + {{AudioWorkletGlobalScope/currentTime}} to + measure the start and end of this tail-time interval, or the + interval could be calculated dynamically depending on the + processor's internal state. + + * Nodes that act as sources of output, typically with a + lifetime. Such nodes SHOULD return `true` from + {{process()}} until the point at which they are no + longer producing an output. + + Note that the preceding definition implies that when no + return value is provided from an implementation of + {{process()}}, the effect is identical to returning + false (since the effective return value is the falsy + value undefined). This is a reasonable behavior for + any {{AudioWorkletProcessor}} that is active only when it has + active inputs. +
+ +
+			inputs:
+				The input audio buffer from the incoming connections provided by the user agent. It has type sequence<sequence<Float32Array>>. inputs[n][m] is a {{Float32Array}} of audio samples for the \(m\)th channel of the \(n\)th input. While the number of inputs is fixed at construction, the number of channels can be changed dynamically based on [=computedNumberOfChannels=].
+
+				If there are no [=actively processing=] {{AudioNode}}s connected to the \(n\)th input of the {{AudioWorkletNode}} for the current render quantum, then the content of inputs[n] is an empty array, indicating that zero channels of input are available. This is the only circumstance under which the number of elements of inputs[n] can be zero.
+
+			outputs:
+				The output audio buffer that is to be consumed by the user agent. It has type sequence<sequence<Float32Array>>. outputs[n][m] is a {{Float32Array}} object containing the audio samples for the \(m\)th channel of the \(n\)th output. Each of the {{Float32Array}}s is zero-filled. The number of channels in the output will match [=computedNumberOfChannels=] only when the node has a single output.
+
+			parameters:
+				An [=ordered map=] of name → parameterValues. parameters["name"] returns parameterValues, which is a {{Float32Array}} with the automation values of the name {{AudioParam}}.
+
+				For each array, the array contains the [=computedValue=] of the parameter for all frames in the [=render quantum=]. However, if no automation is scheduled during this render quantum, the array MAY have length 1 with the array element being the constant value of the {{AudioParam}} for the [=render quantum=].
+		
+ +
+ Return type: {{boolean}} +
+
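Because a parameter array MAY have length 1 when its value is constant over the render quantum, per-frame code should guard its indexing. A minimal non-normative sketch of the common pattern (the helper name `paramValueAt` is illustrative):

```javascript
// Returns the value of an AudioParam for frame |i| of the render quantum,
// handling both the per-frame case (one value per frame) and the constant
// case (an array of length 1).
function paramValueAt(parameterValues, i) {
  return parameterValues.length > 1 ? parameterValues[i] : parameterValues[0];
}
```

Inside `process()`, `paramValueAt(parameters["gain"], i)` then works regardless of whether any automation was scheduled for the quantum.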
The example below shows how {{AudioParam}} can be defined and used in an {{AudioWorkletProcessor}}. @@ -11245,52 +10545,7 @@ class MyProcessor extends AudioWorkletProcessor { } -
-Callback {{AudioWorkletProcessCallback}} Parameters -
-The following describes the parameters to the {{AudioWorkletProcessCallback}} function. - -In general, the {{AudioWorkletProcessCallback/inputs!!argument}} and -{{AudioWorkletProcessCallback/outputs!!argument}} arrays will be reused -between calls so that no memory allocation is done. However, if the -topology changes, because, say, the number of channels in the input or the -output changes, new arrays are reallocated. New arrays are also -reallocated if any part of the -{{AudioWorkletProcessCallback/inputs!!argument}} or -{{AudioWorkletProcessCallback/outputs!!argument}} arrays are -transferred. - -
- : {{AudioWorkletProcessCallback/inputs!!argument}}, of type {{FrozenArray}}<{{FrozenArray}}<{{Float32Array}}>> - :: The input audio buffer from the incoming connections provided by the user agent. inputs[n][m] is a {{Float32Array}} of audio samples for the \(m\)th channel of the \(n\)th input. While the number of inputs is fixed at construction, the number of channels can be changed dynamically based on [=computedNumberOfChannels=]. - - If there are no [=actively processing=] {{AudioNode}}s connected to the \(n\)th input of the {{AudioWorkletNode}} for the current render quantum, then the content of inputs[n] is an empty array, indicating that zero channels of input are available. This is the only circumstance under which the number of elements of inputs[n] can be zero. - - : {{AudioWorkletProcessCallback/outputs!!argument}}, of type {{FrozenArray}}<{{FrozenArray}}<{{Float32Array}}>> - :: The output audio buffer that is to be consumed by the user agent. outputs[n][m] is a {{Float32Array}} object containing the audio samples for \(m\)th channel of \(n\)th output. Each of the {{Float32Array}}s are zero-filled. The number of channels in the output will match [=computedNumberOfChannels=] only when the node has a single output. - - : {{AudioWorkletProcessCallback/parameters!!argument}}, of type {{object}} - :: An [=ordered map=] of name → parameterValues. parameters["name"] returns parameterValues, which is a {{FrozenArray}}<{{Float32Array}}> with the automation values of the name {{AudioParam}}. - - For each array, the array contains the [=computedValue=] of the parameter for all frames in the [=render quantum=]. However, if no automation is scheduled during this render quantum, the array MAY have length 1 with the array element being the constant value of the {{AudioParam}} for the [=render quantum=]. - - This object is frozen according the the following steps -
- 1. Let |parameter| be the [=ordered map=] of the name and parameter values. - 1. SetIntegrityLevel(|parameter|, frozen) -
- - This frozen [=ordered map=] computed in the algorithm is passed to the - {{AudioWorkletProcessCallback/parameters!!argument}} - argument. - - Note: This means the object cannot be modified and - hence the same object can be used for successive calls - unless length of an array changes. - -
- -
+
{{AudioParamDescriptor}}
The {{AudioParamDescriptor}} dictionary is used to @@ -11299,40 +10554,51 @@ that is used in an {{AudioWorkletNode}}.
 dictionary AudioParamDescriptor {
-    required DOMString name;
-    float defaultValue = 0;
-    float minValue = -3.4028235e38;
-    float maxValue = 3.4028235e38;
-    AutomationRate automationRate = "a-rate";
+	required DOMString name;
+	float defaultValue = 0;
+	float minValue = -3.4028235e38;
+	float maxValue = 3.4028235e38;
+	AutomationRate automationRate = "a-rate";
 };
 
Dictionary {{AudioParamDescriptor}} Members
-There are constraints on the values for these members. See the algorithm for handling an -AudioParamDescriptor for the constraints. -
- : automationRate - :: - Represents the default automation rate. : defaultValue - :: - Represents the default value of the parameter. - - : maxValue - :: - Represents the maximum value. - - : minValue - :: - Represents the minimum value. - - : name - :: - Represents the name of the parameter. + : automationRate + :: + Represents the default automation rate. + + : defaultValue + :: + Represents the default value of the parameter. If this value + is outside the range of the float data type or the range defined + by {{AudioParamDescriptor/minValue}} and {{AudioParamDescriptor/maxValue}}, a + {{NotSupportedError}} exception MUST be thrown. + + : maxValue + :: + Represents the maximum value. A + {{NotSupportedError}} exception MUST be thrown if + this value is outside the range of the float data type or if it is + smaller than {{AudioParamDescriptor/minValue}}. The default value of this member is the most + positive finite single precision floating-point number. + + : minValue + :: + Represents the minimum value. A + {{NotSupportedError}} exception MUST be thrown if + this value is outside the range of the float data type or if it is + greater than {{AudioParamDescriptor/maxValue}}. The default value of this member is the most + negative finite single precision floating-point number. + + : name + :: + Represents the name of the parameter. A + {{NotSupportedError}} exception MUST be thrown when + a duplicate name is found when registering the class + definition.

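The constraints above can be summarized in a small validation sketch (non-normative; the function name `validateParamDescriptor` and the error messages are illustrative):

```javascript
const FLT_MAX = 3.4028235e38; // most positive finite single-precision float

// Non-normative sketch of the AudioParamDescriptor constraints.
// |seenNames| tracks names already registered for the class definition.
function validateParamDescriptor(descriptor, seenNames) {
  const { name, defaultValue = 0, minValue = -FLT_MAX, maxValue = FLT_MAX } = descriptor;
  const reject = msg => { throw new DOMException(msg, 'NotSupportedError'); };
  if (seenNames.has(name)) reject(`duplicate parameter name: ${name}`);
  if (minValue < -FLT_MAX || minValue > maxValue) reject('invalid minValue');
  if (maxValue > FLT_MAX) reject('invalid maxValue');
  if (defaultValue < minValue || defaultValue > maxValue) reject('invalid defaultValue');
  seenNames.add(name);
}
```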
@@ -11342,10 +10608,10 @@ The following figure illustrates an idealized sequence of events occurring relative to an {{AudioWorklet}}:
- -
- {{AudioWorklet}} sequence -
+ +
+ {{AudioWorklet}} sequence +
The steps depicted in the diagram are one possible sequence of @@ -11355,31 +10621,31 @@ of an {{AudioWorkletNode}} and its associated {{AudioWorkletProcessor}}.
- 1. An {{AudioContext}} is created. + 1. An {{AudioContext}} is created. - 2. In the main scope, context.audioWorklet is requested to add a script module. + 2. In the main scope, context.audioWorklet is requested to add a script module. - 2. Since none exists yet, a new {{AudioWorkletGlobalScope}} is created in association with the context. This is the global scope in which {{AudioWorkletProcessor}} class definitions will be evaluated. (On subsequent calls, this previously created scope will be used.) + 2. Since none exists yet, a new {{AudioWorkletGlobalScope}} is created in association with the context. This is the global scope in which {{AudioWorkletProcessor}} class definitions will be evaluated. (On subsequent calls, this previously created scope will be used.) - 2. The imported script is run in the newly created global scope. + 2. The imported script is run in the newly created global scope. - 3. As part of running the imported script, an {{AudioWorkletProcessor}} is registered under - a key ("custom" in the above diagram) within the {{AudioWorkletGlobalScope}}. - This populates maps both in the global scope and in the {{AudioContext}}. + 3. As part of running the imported script, an {{AudioWorkletProcessor}} is registered under + a key ("custom" in the above diagram) within the {{AudioWorkletGlobalScope}}. + This populates maps both in the global scope and in the {{AudioContext}}. - 3. The promise for the {{addModule()}} call is resolved. + 3. The promise for the {{addModule()}} call is resolved. - 6. In the main scope, an {{AudioWorkletNode}} is created using - the user-specified key along with a - dictionary of options. + 6. In the main scope, an {{AudioWorkletNode}} is created using + the user-specified key along with a + dictionary of options. - 7. As part of the node's creation, this key is used to look up the - correct {{AudioWorkletProcessor}} subclass for instantiation. + 7. 
As part of the node's creation, this key is used to look up the + correct {{AudioWorkletProcessor}} subclass for instantiation. - 8. An instance of the {{AudioWorkletProcessor}} subclass is - instantiated with a structured clone of the same options - dictionary. This instance is paired with the previously created - {{AudioWorkletNode}}. + 8. An instance of the {{AudioWorkletProcessor}} subclass is + instantiated with a structured clone of the same options + dictionary. This instance is paired with the previously created + {{AudioWorkletNode}}.

@@ -11399,87 +10665,90 @@ a lower bit-depth), and by quantizing in time resolution const context = new AudioContext(); context.audioWorklet.addModule('bitcrusher.js').then(() => { - const osc = new OscillatorNode(context); - const amp = new GainNode(context); - - // Create a worklet node. 'BitCrusher' identifies the - // AudioWorkletProcessor previously registered when - // bitcrusher.js was imported. The options automatically - // initialize the correspondingly named AudioParams. - const bitcrusher = new AudioWorkletNode(context, 'bitcrusher', { - parameterData: {bitDepth: 8} - }); - - osc.connect(bitcrusher).connect(amp).connect(context.destination); - osc.start(); + const osc = new OscillatorNode(context); + const amp = new GainNode(context); + + // Create a worklet node. 'BitCrusher' identifies the + // AudioWorkletProcessor previously registered when + // bitcrusher.js was imported. The options automatically + // initialize the correspondingly named AudioParams. + const bitcrusher = new AudioWorkletNode(context, 'bitcrusher', { + parameterData: {bitDepth: 8} + }); + + osc.connect(bitcrusher).connect(amp).connect(context.destination); + osc.start(); }); class Bitcrusher extends AudioWorkletProcessor { - static get parameterDescriptors () { - return [{ - name: 'bitDepth', - defaultValue: 12, - minValue: 1, - maxValue: 16 - }, { - name: 'frequencyReduction', - defaultValue: 0.5, - minValue: 0, - maxValue: 1 - }]; - } - - constructor () { - super(); - this._phase = 0; - this._lastSampleValue = 0; - } - - process (inputs, outputs, parameters) { - const input = inputs[0]; - const output = outputs[0]; - const bitDepth = parameters.bitDepth; - const frequencyReduction = parameters.frequencyReduction; - - if (bitDepth.length > 1) { - for (let channel = 0; channel < output.length; ++channel) { - for (let i = 0; i < output[channel].length; ++i) { - let step = Math.pow(0.5, bitDepth[i]); - - // Use modulo for indexing to handle the case where - // the length of the 
frequencyReduction array is 1. - this._phase += frequencyReduction[i % frequencyReduction.length]; - if (this._phase >= 1.0) { - this._phase -= 1.0; - this._lastSampleValue = - step * Math.floor(input[channel][i] / step + 0.5); - } - output[channel][i] = this._lastSampleValue; - } - } - } else { - // Because we know bitDepth is constant for this call, - // we can lift the computation of step outside the loop, - // saving many operations. - const step = Math.pow(0.5, bitDepth[0]); - for (let channel = 0; channel < output.length; ++channel) { - for (let i = 0; i < output[channel].length; ++i) { - this._phase += frequencyReduction[i % frequencyReduction.length]; - if (this._phase >= 1.0) { - this._phase -= 1.0; - this._lastSampleValue = - step * Math.floor(input[channel][i] / step + 0.5); - } - output[channel][i] = this._lastSampleValue; - } - } - } - // No need to return a value; this node's lifetime is dependent only on its - // input connections. - } -}; + static get parameterDescriptors () { + return [{ + name: 'bitDepth', + defaultValue: 12, + minValue: 1, + maxValue: 16 + }, { + name: 'frequencyReduction', + defaultValue: 0.5, + minValue: 0, + maxValue: 1 + }]; + } + + constructor (options) { + // The initial parameter value can be set by passing |options| + // to the processor's constructor. + super(options); + this._phase = 0; + this._lastSampleValue = 0; + } + + process (inputs, outputs, parameters) { + const input = inputs[0]; + const output = outputs[0]; + const bitDepth = parameters.bitDepth; + const frequencyReduction = parameters.frequencyReduction; + + if (bitDepth.length > 1) { + // The bitDepth parameter array has 128 sample values. + for (let channel = 0; channel < output.length; ++channel) { + for (let i = 0; i < output[channel].length; ++i) { + let step = Math.pow(0.5, bitDepth[i]); + + // Use modulo for indexing to handle the case where + // the length of the frequencyReduction array is 1. 
+ this._phase += frequencyReduction[i % frequencyReduction.length]; + if (this._phase >= 1.0) { + this._phase -= 1.0; + this._lastSampleValue = + step * Math.floor(input[channel][i] / step + 0.5); + } + output[channel][i] = this._lastSampleValue; + } + } + } else { + // Because we know bitDepth is constant for this call, + // we can lift the computation of step outside the loop, + // saving many operations. + const step = Math.pow(0.5, bitDepth[0]); + for (let channel = 0; channel < output.length; ++channel) { + for (let i = 0; i < output[channel].length; ++i) { + this._phase += frequencyReduction[i % frequencyReduction.length]; + if (this._phase >= 1.0) { + this._phase -= 1.0; + this._lastSampleValue = + step * Math.floor(input[channel][i] / step + 0.5); + } + output[channel][i] = this._lastSampleValue; + } + } + } + // No need to return a value; this node's lifetime is dependent only on its + // input connections. + } +}; registerProcessor('bitcrusher', Bitcrusher); @@ -11500,47 +10769,47 @@ communication (asynchronous) between {{AudioWorkletNode}} and {{AudioWorkletProcessor}}. This node does not use any output. -
+
 /* vumeter-node.js: Main global scope */
 
 export default class VUMeterNode extends AudioWorkletNode {
-    constructor (context, updateIntervalInMS) {
-        super(context, 'vumeter', {
-            numberOfInputs: 1,
-            numberOfOutputs: 0,
-            channelCount: 1,
-            processorOptions: {
-                updateIntervalInMS: updateIntervalInMS || 16.67
-            }
-        });
-
-        // States in AudioWorkletNode
-        this._updateIntervalInMS = updateIntervalInMS;
-        this._volume = 0;
-
-        // Handles updated values from AudioWorkletProcessor
-        this.port.onmessage = event => {
-            if (event.data.volume)
-                this._volume = event.data.volume;
-        }
-        this.port.start();
-    }
-
-    get updateInterval() {
-        return this._updateIntervalInMS;
-    }
-
-    set updateInterval(updateIntervalInMS) {
-        this._updateIntervalInMS = updateIntervalInMS;
-        this.port.postMessage({updateIntervalInMS: updateIntervalInMS});
-    }
-
-    draw () {
-        // Draws the VU meter based on the volume value
-        // every |this._updateIntervalInMS| milliseconds.
-    }
+	constructor (context, updateIntervalInMS) {
+		super(context, 'vumeter', {
+			numberOfInputs: 1,
+			numberOfOutputs: 0,
+			channelCount: 1,
+			processorOptions: {
+				updateIntervalInMS: updateIntervalInMS || 16.67
+			}
+		});
+
+		// States in AudioWorkletNode
+		this._updateIntervalInMS = updateIntervalInMS;
+		this._volume = 0;
+
+		// Handles updated values from AudioWorkletProcessor
+		this.port.onmessage = event => {
+			if (event.data.volume)
+				this._volume = event.data.volume;
+		}
+		this.port.start();
+	}
+
+	get updateInterval() {
+		return this._updateIntervalInMS;
+	}
+
+	set updateInterval(updateIntervalInMS) {
+		this._updateIntervalInMS = updateIntervalInMS;
+		this.port.postMessage({updateIntervalInMS: updateIntervalInMS});
+	}
+
+	draw () {
+		// Draws the VU meter based on the volume value
+		// every |this._updateIntervalInMS| milliseconds.
+	}
 };
-</pre>
+
 
 
 /* vumeter-processor.js: AudioWorkletGlobalScope */
@@ -11549,52 +10818,52 @@ const SMOOTHING_FACTOR = 0.9;
 const MINIMUM_VALUE = 0.00001;
 
 registerProcessor('vumeter', class extends AudioWorkletProcessor {
-    constructor (options) {
-        super();
-        this._volume = 0;
-        this._updateIntervalInMS = options.processorOptions.updateIntervalInMS;
-        this._nextUpdateFrame = this._updateIntervalInMS;
-
-        this.port.onmessage = event => {
-            if (event.data.updateIntervalInMS)
-                this._updateIntervalInMS = event.data.updateIntervalInMS;
-        }
-    }
-
-    get intervalInFrames () {
-        return this._updateIntervalInMS / 1000 * sampleRate;
-    }
-
-    process (inputs, outputs, parameters) {
-        const input = inputs[0];
-        // Note that the input will be down-mixed to mono; however, if no inputs are
-        // connected then zero channels will be passed in.
-        if (input.length > 0) {
-            const samples = input[0];
-            let sum = 0;
-            let rms = 0;
-
-            // Calculated the squared-sum.
-            for (let i = 0; i < samples.length; ++i)
-                sum += samples[i] * samples[i];
-
-            // Calculate the RMS level and update the volume.
-            rms = Math.sqrt(sum / samples.length);
-            this._volume = Math.max(rms, this._volume * SMOOTHING_FACTOR);
-
-            // Update and sync the volume property with the main thread.
-            this._nextUpdateFrame -= samples.length;
-            if (this._nextUpdateFrame < 0) {
-                this._nextUpdateFrame += this.intervalInFrames;
-                this.port.postMessage({volume: this._volume});
-            }
-        }
-
-        // Keep on processing if the volume is above a threshold, so that
-        // disconnecting inputs does not immediately cause the meter to stop
-        // computing its smoothed value.
-        return this._volume >= MINIMUM_VALUE;
-    }
+	constructor (options) {
+		super();
+		this._volume = 0;
+		this._updateIntervalInMS = options.processorOptions.updateIntervalInMS;
+		this._nextUpdateFrame = this._updateIntervalInMS;
+
+		this.port.onmessage = event => {
+			if (event.data.updateIntervalInMS)
+				this._updateIntervalInMS = event.data.updateIntervalInMS;
+		}
+	}
+
+	get intervalInFrames () {
+		return this._updateIntervalInMS / 1000 * sampleRate;
+	}
+
+	process (inputs, outputs, parameters) {
+		const input = inputs[0];
+		// Note that the input will be down-mixed to mono; however, if no inputs are
+		// connected then zero channels will be passed in.
+		if (input.length > 0) {
+			const samples = input[0];
+			let sum = 0;
+			let rms = 0;
+
+		// Calculate the squared sum.
+			for (let i = 0; i < samples.length; ++i)
+				sum += samples[i] * samples[i];
+
+			// Calculate the RMS level and update the volume.
+			rms = Math.sqrt(sum / samples.length);
+			this._volume = Math.max(rms, this._volume * SMOOTHING_FACTOR);
+
+			// Update and sync the volume property with the main thread.
+			this._nextUpdateFrame -= samples.length;
+			if (this._nextUpdateFrame < 0) {
+				this._nextUpdateFrame += this.intervalInFrames;
+				this.port.postMessage({volume: this._volume});
+			}
+		}
+
+		// Keep on processing if the volume is above a threshold, so that
+		// disconnecting inputs does not immediately cause the meter to stop
+		// computing its smoothed value.
+		return this._volume >= MINIMUM_VALUE;
+	}
 
 });
 
@@ -11605,17 +10874,17 @@ import VUMeterNode from './vumeter-node.js';
 
 const context = new AudioContext();
 context.audioWorklet.addModule('vumeter-processor.js').then(() => {
-    const oscillator = new OscillatorNode(context);
-    const vuMeterNode = new VUMeterNode(context, 25);
-    oscillator.connect(vuMeterNode);
-    oscillator.start();
-
-    function drawMeter () {
-        vuMeterNode.draw();
-        requestAnimationFrame(drawMeter);
-    }
+	const oscillator = new OscillatorNode(context);
+	const vuMeterNode = new VUMeterNode(context, 25);
+	oscillator.connect(vuMeterNode);
+	oscillator.start();
 
-    drawMeter();
+	function drawMeter () {
+		vuMeterNode.draw();
+		requestAnimationFrame(drawMeter);
+	}
+
+	drawMeter();
 });
 
 
@@ -11669,6 +10938,11 @@ reaction to the calls from the control thread. It can be a
 real-time, callback-based audio thread, if computing audio for an
 {{AudioContext}}, or a normal thread if computing audio for an {{OfflineAudioContext}}.
 
+Each thread has an internal slot that indicates its current state.
+control thread state is the equivalent of {{BaseAudioContext/state}}
+and rendering thread state is the counterpart on the rendering
+thread. These slots have a value of {{AudioContextState}}.
+
 The control thread uses a traditional event loop, as described
 in [[HTML]].
 
@@ -11700,21 +10974,21 @@ by time of insertion. The oldest message is therefore the
 one at the front of the control message queue.
 
 
- Swapping a control message queue - QA with another control message queue - QB means executing the following steps: + Swapping a control message queue + QA with another control message queue + QB means executing the following steps: - 1. Let QC be a new, empty control message - queue. + 1. Let QC be a new, empty control message + queue. - 2. Move all the control messages QA to - QC. + 2. Move all the control messages QA to + QC. - 3. Move all the control messages QB to - QA. + 3. Move all the control messages QB to + QA. - 4. Move all the control messages QC to - QB. + 4. Move all the control messages QC to + QB.

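The four steps amount to exchanging the two queues' contents through a temporary queue. A non-normative sketch, with plain arrays standing in for control message queues:

```javascript
// Swaps the contents of control message queues qA and qB in place,
// using a temporary queue qC, mirroring the four steps above.
function swapQueues(qA, qB) {
  const qC = [];                 // 1. new, empty control message queue
  qC.push(...qA.splice(0));      // 2. move all messages from qA to qC
  qA.push(...qB.splice(0));      // 3. move all messages from qB to qA
  qB.push(...qC.splice(0));      // 4. move all messages from qC to qB
}
```

Moving through a temporary queue preserves the insertion order of each queue's messages, so the oldest message stays at the front after the swap.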
@@ -11753,11 +11027,9 @@ asynchronous sections can't be reordered.

Rendering an Audio Graph

-Audio graph rendering is done in blocks of sample-frames, with the size of each -block remaining constant for the lifetime of a {{BaseAudioContext}}. The number of -sample-frames in a block is called render quantum size, and the block -itself is called a render quantum. Its default value is 128, and it can -be configured by setting {{AudioContextOptions/renderSizeHint}}. +Rendering an audio graph is done in blocks of 128 sample-frames. A +block of 128 sample-frames is called a render quantum, and +the render quantum size is 128. Operations that happen atomically on a given thread can only be executed when no other [=Atomically|atomic=]
- The render quantum size for an audio graph is not necessarily a - divisor of the system-level audio callback buffer size. This causes - increased audio latencies and reduced possible maximum load without - audio buffer underrun. +
+ In practice, the {{AudioContext}} rendering thread is
+ often running off a system-level audio callback that executes in
+ an isochronous fashion.
+
+ An {{OfflineAudioContext}} is not required to have a
+ system-level audio callback, but behaves as if it did, with the
+ callback happening as soon as the previous callback is
+ finished.
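Because the render quantum size is not necessarily a divisor of the system-level audio callback buffer size, an implementation has to render enough whole quanta to cover each callback, buffering any surplus frames; this is one source of added latency. A non-normative sketch with illustrative numbers (`quantaPerCallback` is not a spec-defined operation):

```javascript
// Non-normative sketch, with illustrative numbers: how many whole render
// quanta are needed to cover one system-level audio callback buffer.
const renderQuantumSize = 128;

function quantaPerCallback(callbackBufferSize) {
  // Round up: a partial quantum still has to be rendered in full.
  return Math.ceil(callbackBufferSize / renderQuantumSize);
}

// e.g. a 480-frame callback needs 4 quanta (512 frames); the 32 surplus
// frames must be buffered for the next callback.
```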
-Note that the concepts of system-level audio callback and load
-value do not apply to {{OfflineAudioContext}}s.
-
-The audio callback is also queued as a task in the control message queue. The UA MUST perform
+The audio callback is also queued as a task in the
+control message queue. The UA MUST perform
the following algorithms to process render quanta to fulfill such a task by
-filling up the requested buffer size. Along with the control message
-queue, each {{AudioContext}} has a regular task
-queue, called its associated task queue
-for tasks that are posted to the rendering thread from the control thread. An
-additional microtask checkpoint is performed after processing a render quantum
-to run any microtasks that might have been queued during the execution of the
-`process` methods of {{AudioWorkletProcessor}}.
-
-All tasks posted from an {{AudioWorkletNode}} are posted to the [=associated
-task queue=] of its associated {{BaseAudioContext}}.
+filling up the requested buffer size. Along with the primary message queue, the
+rendering thread has a microtask queue for any microtask
+operation, such as the resolution of {{Promise}}s in the {{AudioWorkletGlobalScope}}.
- The following step MUST be performed once before the rendering loop starts. + The following step MUST be performed once before the rendering loop starts. - 1. Set the internal slot [[current frame]] of the - {{BaseAudioContext}} to 0. Also set {{BaseAudioContext/currentTime}} to 0. + 1. Set the internal slot [[current frame]] of the + {{BaseAudioContext}} to 0. Also set {{BaseAudioContext/currentTime}} to 0.
- The following steps MUST be performed when rendering a render quantum. - - 1. Let render result be false. - - 2. Process the [=control message queue=]. - - 1. Let Qrendering be an empty [=control message - queue=]. [=Atomically=] [=swap=] Qrendering - with the current [=control message queue=]. - - 2. While there are messages in Qrendering, execute the - following steps: - - 1. Execute the asynchronous section of the [=oldest message=] of - Qrendering. - - 2. Remove the [=oldest message=] of Qrendering. - - - 3. Process the {{BaseAudioContext}}'s [=associated task queue=]. - - - 1. Let task queue be the {{BaseAudioContext}}'s [=associated task queue=]. - 2. Let task count be the number of tasks in the in task queue - 3. While task count is not equal to 0, execute the following steps: - 1. Let oldest task be the first runnable task in task queue, and remove it from task queue. - 2. Set the rendering loop's currently running task to oldest task. - 3. Perform oldest task's steps. - 4. Set the rendering loop currently running task back to null. - 6. Decrement task count - 5. Perform a microtask checkpoint. - - 4. Process a render quantum. - - 1. If the {{[[rendering thread state]]}} of the {{BaseAudioContext}} is not - running, return false. - - 2. Order the {{AudioNode}}s of the {{BaseAudioContext}} to be processed. - - 1. Let ordered node list be an empty list of {{AudioNode}}s and - {{AudioListener}}. It will contain an ordered list of {{AudioNode}}s and - the {{AudioListener}} when this ordering algorithm terminates. - - 2. Let nodes be the set of all nodes created by this - {{BaseAudioContext}}, and still alive. - - 3. Add the {{AudioListener}} to nodes. - - 4. Let cycle breakers be an empty set of {{DelayNode}}s. It will - contain all the {{DelayNode}}s that are part of a cycle. - - 5. For each {{AudioNode}} node in nodes: - - 1. If node is a {{DelayNode}} that is part of a cycle, add it - to cycle breakers and remove it from nodes. - - 6. 
For each {{DelayNode}} delay in cycle breakers: + 1. Let render result be false. - 1. Let delayWriter and delayReader respectively be a - DelayWriter and a DelayReader, for delay. - Add delayWriter and delayReader to - nodes. Disconnect delay from all its input and - outputs. + 2. Process the [=control message queue=]. - Note: This breaks the cycle: if a DelayNode is in a - cycle, its two ends can be considered separately, because delay lines - cannot be smaller than one render quantum when in a cycle. + 1. Let Qrendering be an empty [=control message + queue=]. [=Atomically=] [=swap=] Qrendering + with the current [=control message queue=]. - 7. If nodes contains cycles, [=mute=] all the - {{AudioNode}}s that are part of this cycle, and remove them from - nodes. + 2. While there are messages in Qrendering, execute the + following steps: - 8. Consider all elements in nodes to be unmarked. While there are unmarked elements in nodes: + 1. Execute the asynchronous section of the [=oldest message=] of + Qrendering. - 1. Choose an element node in nodes. + 2. Remove the [=oldest message=] of Qrendering. - 2. [=Visit=] node. + 3. Process a render quantum. -
- Visiting a node means performing - the following steps: + 1. If the rendering thread state of the {{BaseAudioContext}} is not + running, return false. - 1. If node is marked, abort these steps. + 2. Order the {{AudioNode}}s of the {{BaseAudioContext}} to be processed. - 2. Mark node. + 1. Let ordered node list be an empty list of {{AudioNode}}s and + {{AudioListener}}. It will contain an ordered list of {{AudioNode}}s and + the {{AudioListener}} when this ordering algorithm terminates. - 3. If node is an {{AudioNode}}, [=Visit=] each - {{AudioNode}} connected to the input of node. + 2. Let nodes be the set of all nodes created by this + {{BaseAudioContext}}, and still alive. - 4. For each {{AudioParam}} param of node: - 1. For each {{AudioNode}} param input node connected to param: - 1. [=Visit=] param input node + 3. Add the {{AudioListener}} to nodes. - 5. Add node to the beginning of ordered node list. -
+ 4. Let cycle breakers be an empty set of {{DelayNode}}s. It will + contain all the {{DelayNode}}s that are part of a cycle. - 9. Reverse the order of ordered node list. + 5. For each {{AudioNode}} node in nodes: - 4. [[#computation-of-value|Compute the value(s)]] of the - {{AudioListener}}'s {{AudioParam}}s for this block. + 1. If node is a {{DelayNode}} that is part of a cycle, add it + to cycle breakers and remove it from nodes. - 5. For each {{AudioNode}}, in ordered node list: + 6. For each {{DelayNode}} delay in cycle breakers: - 1. For each {{AudioParam}} of this {{AudioNode}}, execute these steps: + 1. Let delayWriter and delayReader respectively be a + DelayWriter and a DelayReader, for delay. + Add delayWriter and delayReader to + nodes. Disconnect delay from all its input and + outputs. - 1. If this {{AudioParam}} has any {{AudioNode}} connected to it, - [[#channel-up-mixing-and-down-mixing|sum]] the buffers - [=Making a buffer available for reading|made available for reading=] by - all {{AudioNode}} connected to this {{AudioParam}}, - [[#down-mix|down mix]] the resulting buffer down to a mono - channel, and call this buffer the - input AudioParam buffer. + Note: This breaks the cycle: if a DelayNode is in a + cycle, its two ends can be considered separately, because delay lines + cannot be smaller than one render quantum when in a cycle. - 2. [[#computation-of-value|Compute the value(s)]] of this - {{AudioParam}} for this block. + 7. If nodes contains cycles, [=mute=] all the + {{AudioNode}}s that are part of this cycle, and remove them from + nodes. - 3. [=Queue a control message=] to set the {{[[current value]]}} slot - of this {{AudioParam}} according to [[#computation-of-value]]. + 8. Consider all elements in nodes to be unmarked. While there are unmarked elements in nodes: - 2. 
If this {{AudioNode}} has any {{AudioNode}}s connected to its input, - [[#channel-up-mixing-and-down-mixing|sum]] the buffers - [=Making a buffer available for reading|made available for reading=] by all - {{AudioNode}}s connected to this {{AudioNode}}. The resulting buffer is - called the input buffer. - [[#channel-up-mixing-and-down-mixing|Up or down-mix]] it to - match if number of input channels of this {{AudioNode}}. + 1. Choose an element node in nodes. - 3. If this {{AudioNode}} is a source node, - [=Computing a block of audio|compute a block of audio=], and - [=Making a buffer available for reading|make it available for reading=]. + 2. [=Visit=] node. - 4. If this {{AudioNode}} is an {{AudioWorkletNode}}, execute these substeps: +
+ Visiting a node means performing + the following steps: - 1. Let |processor| be the associated {{AudioWorkletProcessor}} - instance of {{AudioWorkletNode}}. + 1. If node is marked, abort these steps. - 1. Let |O| be the ECMAScript object corresponding to |processor|. + 2. Mark node. - 1. Let |processCallback| be an uninitialized variable. + 3. If node is an {{AudioNode}}, [=Visit=] each + {{AudioNode}} connected to the input of node. - 1. Let |completion| be an uninitialized variable. + 4. For each {{AudioParam}} param of node: + 1. For each {{AudioNode}} param input node connected to param: + 1. [=Visit=] param input node - 1. [=Prepare to run script=] with the [=current settings object=]. + 5. Add node to the beginning of ordered node list. +
- 1. [=Prepare to run a callback=] with the [=current settings object=]. + 9. Reverse the order of ordered node list. - 1. Let |getResult| be - - Get(|O|, "process"). + 4. [[#computation-of-value|Compute the value(s)]] of the + {{AudioListener}}'s {{AudioParam}}s for this block. - 1. If |getResult| is an - - abrupt completion, set |completion| to |getResult| and jump to the step - labeled return. + 5. For each {{AudioNode}}, in ordered node list: - 1. Set |processCallback| to |getResult|.\[[Value]]. + 1. For each {{AudioParam}} of this {{AudioNode}}, execute these steps: - 1. If ! - - IsCallable(|processCallback|) is `false`, then: + 1. If this {{AudioParam}} has any {{AudioNode}} connected to it, + [[#channel-up-mixing-and-down-mixing|sum]] the buffers + [=Making a buffer available for reading|made available for reading=] by + all {{AudioNode}} connected to this {{AudioParam}}, + [[#down-mix|down mix]] the resulting buffer down to a mono + channel, and call this buffer the + input AudioParam buffer. - 1. Set |completion| to new - Completion - {\[[Type]]: throw, \[[Value]]: a newly created - - TypeError object, \[[Target]]: empty}. + 2. [[#computation-of-value|Compute the value(s)]] of this + {{AudioParam}} for this block. - 1. Jump to the step labeled - return. + 3. [=Queue a control message=] to set the {{[[current value]]}} slot + of this {{AudioParam}} according to [[#computation-of-value]]. - 1. Set {{[[callable process]]}} to `true`. + 2. If this {{AudioNode}} has any {{AudioNode}}s connected to its input, + [[#channel-up-mixing-and-down-mixing|sum]] the buffers + [=Making a buffer available for reading|made available for reading=] by all + {{AudioNode}}s connected to this {{AudioNode}}. The resulting buffer is + called the input buffer. + [[#channel-up-mixing-and-down-mixing|Up or down-mix]] it to + match if number of input channels of this {{AudioNode}}. - 1. Perform the following substeps: + 3. 
If this {{AudioNode}} is a source node, + [=Computing a block of audio|compute a block of audio=], and + [=Making a buffer available for reading|make it available for reading=]. - 1. Let |args| be a - - Web IDL arguments list consisting of - {{AudioWorkletProcessCallback/inputs}}, - {{AudioWorkletProcessCallback/outputs}}, and - {{AudioWorkletProcessCallback/parameters}}. + 4. If this {{AudioNode}} is an {{AudioWorkletNode}}, execute these + substeps: - 1. Let |esArgs| be the result of - - converting |args| to an ECMAScript arguments list. + 1. Let processor be the associated {{AudioWorkletProcessor}} + instance of {{AudioWorkletNode}}. - 1. Let |callResult| be the - Call(|processCallback|, |O|, |esArgs|). This operation - [=Computing a block of audio|computes a block of audio=] with |esArgs|. - Upon a successful function call, a buffer containing copies of - the elements of the {{Float32Array}}s passed via the - {{AudioWorkletProcessCallback/outputs}} is - [=Making a buffer available for reading|made available for reading=]. - Any {{Promise}} resolved within this call will be queued into the - microtask queue in the {{AudioWorkletGlobalScope}}. + 1. If {{[[callable process]]}} of processor is `true`, + execute the following steps: - 1. If |callResult| is an - - abrupt completion, set |completion| to |callResult| and jump to the - step labeled return. + 1. Let processFunction be the result of + + Get(O=processor, P="process"). - 1. Set |processor|’s active source flag to - - ToBoolean(|callResult|.\[[Value]]). + 1. Set {{[[callable process]]}} to be the return value of + + IsCallable(argument=processFunction). - 1. Return: at this point |completion| - will be set to an ECMAScript - completion value. + 1. If {{[[callable process]]}} is `true`, + invoke processFunction to + [=Computing a block of audio|compute a block of audio=] with the + argument of [=input buffer=], output buffer and + [=input AudioParam buffer=]. 
A buffer containing copies of the + elements of the {{Float32Array}}s passed via the + + outputs parameter to processFunction is + made available for reading. - 1. [=Clean up after running a callback=] with the [=current settings object=]. + 1. At the conclusion of processFunction, + ToBoolean + is applied to the return value and the result is + assigned to the associated {{AudioWorkletProcessor}}'s + active source flag. This in turn affects whether + subsequent invocations of {{process()}} occur, and has + an impact on the lifetime of the node. - 1. [=Clean up after running script=] with the [=current settings object=]. + 1. Else if {{[[callable process]]}} is `false`, + queue a task to the control thread + fire an + ErrorEvent + named processorerror at the associated + {{AudioWorkletNode}}. - 1. If |completion| is an - - abrupt completion: + 1. If {{[[callable process]]}} of processor is `false`, + execute the following steps: - 1. Set {{[[callable process]]}} to `false`. + 1. [=Making a buffer available for reading|Make a silent output buffer available for reading=]. - 1. Set |processor|'s active source flag to `false`. + 1. Any {{Promise}} resolved within the execution of process method will + be queued into the microtask queue in the {{AudioWorkletGlobalScope}}. - 1. [=Making a buffer available for reading|Make a silent output buffer available for reading=]. + 5. If this {{AudioNode}} is a destination node, + [=Recording the input|record the input=] of this {{AudioNode}}. - 1. Queue a task to the control thread to [=fire an event=] - named {{AudioWorkletNode/processorerror}} at the associated - {{AudioWorkletNode}} using {{ErrorEvent}}. + 6. Else, process the input buffer, and + [=Making a buffer available for reading|make available for reading=] the + resulting buffer. - 5. If this {{AudioNode}} is a destination node, - [=Recording the input|record the input=] of this {{AudioNode}}. + 6. [=Atomically=] perform the following steps: - 6. 
Else, [=processing an input buffer|process=] the input buffer, and - [=Making a buffer available for reading|make available for reading=] the - resulting buffer. + 1. Increment {{[[current frame]]}} by the [=render quantum size=]. - 6. [=Atomically=] perform the following steps: + 2. Set {{BaseAudioContext/currentTime}} to {{[[current frame]]}} divided + by {{BaseAudioContext/sampleRate}}. - 1. Increment {{[[current frame]]}} by the [=render quantum size=]. + 7. Set render result to true. - 2. Set {{BaseAudioContext/currentTime}} to {{[[current frame]]}} divided - by {{BaseAudioContext/sampleRate}}. + 4. [=Perform a microtask checkpoint=]. - 7. Set render result to true. - - 5. [=Perform a microtask checkpoint=]. - - 6. Return render result. + 5. Return render result.
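The atomic steps at the end of the algorithm advance the context's clock by exactly one quantum. A non-normative sketch of that bookkeeping, with the {{[[current frame]]}} slot modeled as a plain variable and an illustrative sample rate:

```javascript
// Non-normative sketch of the clock bookkeeping performed atomically at the
// end of each render quantum. [[current frame]] is modeled as a plain
// variable; the sample rate is illustrative.
const renderQuantumSize = 128;
const sampleRate = 48000;
let currentFrame = 0;  // models the [[current frame]] internal slot
let currentTime = 0;   // models BaseAudioContext.currentTime

function advanceClock() {
  currentFrame += renderQuantumSize;        // increment [[current frame]]
  currentTime = currentFrame / sampleRate;  // currentTime = frames / sampleRate
}
```

At 48000 Hz, 375 quanta of 128 frames advance `currentTime` by exactly one second.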
Muting an {{AudioNode}} means that its @@ -12081,63 +11270,27 @@ means copying the input data of this {{AudioNode}} for future usage. Computing a block of audio means -running the algorithm for this {{AudioNode}} to produce -{{BaseAudioContext/[[render quantum size]]}} sample-frames. +running the algorithm for this {{AudioNode}} to produce 128 +sample-frames. -Processing an input buffer means +Processing an input buffer means running the algorithm for an {{AudioNode}}, using an input buffer and the value(s) of the {{AudioParam}}(s) of this {{AudioNode}} as the input for this algorithm. -

- Handling an error from System Audio Resources on the {{AudioContext}}

- -The {{AudioContext}} |audioContext| performs the following steps on rendering thread in the - event of an audio system resource error. - -1. If the |audioContext|'s {{[[rendering thread state]]}} is running: - - 1. Attempt to release system resources. - - 1. Set the |audioContext|'s {{[[rendering thread state]]}} to suspended. - - 1. [=Queue a media element task=] to execute the following steps: - - 1. [=Fire an event=] named {{AudioContext/error}} at |audioContext|. - - 1. Set the |audioContext|'s {{[[suspended by user]]}} to false. - - 1. Set the |audioContext|'s {{[[control thread state]]}} to suspended. - - 1. Set the |audioContext|'s {{BaseAudioContext/state}} attribute to - "{{AudioContextState/suspended}}". - - 1. [=Fire an event=] named {{BaseAudioContext/statechange}} at the |audioContext|. - - 1. Abort these steps. - -1. If the |audioContext|'s {{[[rendering thread state]]}} is suspended: - - 1. [=Queue a media element task=]to execute the following steps: - - 1. [=Fire an event=] named {{AudioContext/error}} at |audioContext|. - -Note: An example of system audio resource errors would be when an external or wireless audio device - becoming disconnected during the active rendering of the {{AudioContext}}. -

Unloading a document

- Additional unloading - document cleanup steps are defined for documents that use - {{BaseAudioContext}}: + Additional unloading + document cleanup steps are defined for documents that use + {{BaseAudioContext}}: 1. Reject all the promises of {{BaseAudioContext/[[pending promises]]}} with - InvalidStateError, for each {{AudioContext}} and - {{OfflineAudioContext}} whose relevant global object is the same as - the document's associated Window. + InvalidStateError, for each {{AudioContext}} and + {{OfflineAudioContext}} whose relevant global object is the same as + the document's associated Window. 2. Stop all {{decoding thread}}s. 3. Queue a control message to {{AudioContext/close()}} the - {{AudioContext}} or {{OfflineAudioContext}}. + {{AudioContext}} or {{OfflineAudioContext}}.

Dynamic Lifetime

@@ -12146,8 +11299,8 @@ Dynamic Lifetime

Background Note: The normative description of {{AudioContext}} and {{AudioNode}} lifetime characteristics is - described by the AudioContext lifetime and AudioNode lifetime. + described by the AudioContext lifetime and AudioNode lifetime. This section is non-normative. @@ -12190,11 +11343,11 @@ automatically with no extra handling required. Example
- dynamic allocation -
- A graph featuring a subgraph that will be released early. -
+ dynamic allocation +
+ A graph featuring a subgraph that will be released early. +
The low-pass filter, panner, and second gain nodes are directly @@ -12214,39 +11367,39 @@ let streamingAudioSource = 0; // Initial setup of the "long-lived" part of the routing graph function setupAudioContext() { - context = new AudioContext(); + context = new AudioContext(); - compressor = context.createDynamicsCompressor(); - gainNode1 = context.createGain(); + compressor = context.createDynamicsCompressor(); + gainNode1 = context.createGain(); - // Create a streaming audio source. - const audioElement = document.getElementById('audioTagID'); - streamingAudioSource = context.createMediaElementSource(audioElement); - streamingAudioSource.connect(gainNode1); + // Create a streaming audio source. + const audioElement = document.getElementById('audioTagID'); + streamingAudioSource = context.createMediaElementSource(audioElement); + streamingAudioSource.connect(gainNode1); - gainNode1.connect(compressor); - compressor.connect(context.destination); + gainNode1.connect(compressor); + compressor.connect(context.destination); } // Later in response to some user action (typically mouse or key event) // a one-shot sound can be played. function playSound() { - const oneShotSound = context.createBufferSource(); - oneShotSound.buffer = dogBarkingBuffer; - - // Create a filter, panner, and gain node. - const lowpass = context.createBiquadFilter(); - const panner = context.createPanner(); - const gainNode2 = context.createGain(); - - // Make connections - oneShotSound.connect(lowpass); - lowpass.connect(panner); - panner.connect(gainNode2); - gainNode2.connect(compressor); - - // Play 0.75 seconds from now (to play immediately pass in 0) - oneShotSound.start(context.currentTime + 0.75); + const oneShotSound = context.createBufferSource(); + oneShotSound.buffer = dogBarkingBuffer; + + // Create a filter, panner, and gain node. 
+ const lowpass = context.createBiquadFilter(); + const panner = context.createPanner(); + const gainNode2 = context.createGain(); + + // Make connections + oneShotSound.connect(lowpass); + lowpass.connect(panner); + panner.connect(gainNode2); + gainNode2.connect(compressor); + + // Play 0.75 seconds from now (to play immediately pass in 0) + oneShotSound.start(context.currentTime + 0.75); } @@ -12279,23 +11432,23 @@ internal value computedNumberOfChannels representing the actual number of channels of the input at any given time.
- For each input of an {{AudioNode}}, an implementation - MUST: + For each input of an {{AudioNode}}, an implementation + MUST: - 1. Compute computedNumberOfChannels. + 1. Compute computedNumberOfChannels. - 2. For each connection to the input: + 2. For each connection to the input: - 1. [=up-mix=] or [=down-mix=] the connection to - computedNumberOfChannels according to the - {{ChannelInterpretation}} - value given by the node's {{AudioNode/channelInterpretation}} attribute. + 1. [=up-mix=] or [=down-mix=] the connection to + computedNumberOfChannels according to the + {{ChannelInterpretation}} + value given by the node's {{AudioNode/channelInterpretation}} attribute. - 2. Mix it together with all of the other mixed streams (from other - connections). This is a straight-forward summing together of each of - the corresponding channels that have been - [=up-mix|up-mixed=] or [=down-mix|down-mixed=] - in step 1 for each connection.
+ 2. Mix it together with all of the other mixed streams (from other + connections). This is a straight-forward summing together of each of + the corresponding channels that have been + [=up-mix|up-mixed=] or [=down-mix|down-mixed=] + in step 1 for each connection.
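The two numbered steps above can be sketched non-normatively. Here channels are `Float32Array`s of one render quantum, and `upOrDownMix` is a hypothetical stand-in for the {{AudioNode/channelInterpretation}}-driven conversion of step 1:

```javascript
// Non-normative sketch of mixing one input. Channels are Float32Arrays of one
// render quantum; upOrDownMix stands in for the channelInterpretation rules.
function mixInput(connections, computedNumberOfChannels, upOrDownMix, frames = 128) {
  const mixed = Array.from({ length: computedNumberOfChannels },
                           () => new Float32Array(frames));
  for (const connection of connections) {
    // Step 1: up-mix or down-mix this connection to computedNumberOfChannels.
    const converted = upOrDownMix(connection, computedNumberOfChannels);
    // Step 2: straight summing of each corresponding channel.
    for (let c = 0; c < computedNumberOfChannels; c++) {
      for (let i = 0; i < frames; i++) {
        mixed[c][i] += converted[c][i];
      }
    }
  }
  return mixed;
}
```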

Speaker Channel Layouts

@@ -12317,33 +11470,33 @@ Implementations MUST present the channels provided in the order defined below, skipping over those channels not present. - - - - + + +
Order - Label - Mono - Stereo - Quad - 5.1 -
0 SPEAKER_FRONT_LEFT 0 0 0 0 -
1 SPEAKER_FRONT_RIGHT 1 1 1 -
2 SPEAKER_FRONT_CENTER 2 -
3 SPEAKER_LOW_FREQUENCY 3 -
4 SPEAKER_BACK_LEFT 2 4 -
5 SPEAKER_BACK_RIGHT 3 5 -
6 SPEAKER_FRONT_LEFT_OF_CENTER -
7 SPEAKER_FRONT_RIGHT_OF_CENTER -
8 SPEAKER_BACK_CENTER -
9 SPEAKER_SIDE_LEFT -
10 SPEAKER_SIDE_RIGHT -
11 SPEAKER_TOP_CENTER -
12 SPEAKER_TOP_FRONT_LEFT -
13 SPEAKER_TOP_FRONT_CENTER -
14 SPEAKER_TOP_FRONT_RIGHT -
15 SPEAKER_TOP_BACK_LEFT -
16 SPEAKER_TOP_BACK_CENTER -
17 SPEAKER_TOP_BACK_RIGHT +
Order + Label + Mono + Stereo + Quad + 5.1 +
0 SPEAKER_FRONT_LEFT 0 0 0 0 +
1 SPEAKER_FRONT_RIGHT 1 1 1 +
2 SPEAKER_FRONT_CENTER 2 +
3 SPEAKER_LOW_FREQUENCY 3 +
4 SPEAKER_BACK_LEFT 2 4 +
5 SPEAKER_BACK_RIGHT 3 5 +
6 SPEAKER_FRONT_LEFT_OF_CENTER +
7 SPEAKER_FRONT_RIGHT_OF_CENTER +
8 SPEAKER_BACK_CENTER +
9 SPEAKER_SIDE_LEFT +
10 SPEAKER_SIDE_RIGHT +
11 SPEAKER_TOP_CENTER +
12 SPEAKER_TOP_FRONT_LEFT +
13 SPEAKER_TOP_FRONT_CENTER +
14 SPEAKER_TOP_FRONT_RIGHT +
15 SPEAKER_TOP_BACK_LEFT +
16 SPEAKER_TOP_BACK_CENTER +
17 SPEAKER_TOP_BACK_RIGHT

@@ -12362,15 +11515,15 @@ When there is an increase in input channel count, the behavior depends on the {{AudioNode}} type: - For a {{DelayNode}} or a {{DynamicsCompressorNode}}, the number of output - channels MUST increase when the input that was received with greater channel - count begins to affect the output. + channels MUST increase when the input that was received with greater channel + count begins to affect the output. - For other {{AudioNode}}s that have a tail-time, the number of output - channels MUST increase immediately. + channels MUST increase immediately. - Note: For a {{ConvolverNode}}, this only applies to the case where the impulse - response is mono. Otherwise, the {{ConvolverNode}} always outputs a stereo - signal regardless of its input channel count. + Note: For a {{ConvolverNode}}, this only applies to the case where the impulse + response is mono. Otherwise, the {{ConvolverNode}} always outputs a stereo + signal regardless of its input channel count. Note: Intuitively, this allows not losing stereo information as part of processing: when multiple input render quanta of different channel count @@ -12383,49 +11536,49 @@ Up Mixing Speaker Layouts

 Mono up-mix:
 
-    1 -> 2 : up-mix from mono to stereo
-        output.L = input;
-        output.R = input;
+	1 -> 2 : up-mix from mono to stereo
+		output.L = input;
+		output.R = input;
 
-    1 -> 4 : up-mix from mono to quad
-        output.L = input;
-        output.R = input;
-        output.SL = 0;
-        output.SR = 0;
+	1 -> 4 : up-mix from mono to quad
+		output.L = input;
+		output.R = input;
+		output.SL = 0;
+		output.SR = 0;
 
-    1 -> 5.1 : up-mix from mono to 5.1
-        output.L = 0;
-        output.R = 0;
-        output.C = input; // put in center channel
-        output.LFE = 0;
-        output.SL = 0;
-        output.SR = 0;
+	1 -> 5.1 : up-mix from mono to 5.1
+		output.L = 0;
+		output.R = 0;
+		output.C = input; // put in center channel
+		output.LFE = 0;
+		output.SL = 0;
+		output.SR = 0;
 
 Stereo up-mix:
 
-    2 -> 4 : up-mix from stereo to quad
-        output.L = input.L;
-        output.R = input.R;
-        output.SL = 0;
-        output.SR = 0;
+	2 -> 4 : up-mix from stereo to quad
+		output.L = input.L;
+		output.R = input.R;
+		output.SL = 0;
+		output.SR = 0;
 
-    2 -> 5.1 : up-mix from stereo to 5.1
-        output.L = input.L;
-        output.R = input.R;
-        output.C = 0;
-        output.LFE = 0;
-        output.SL = 0;
-        output.SR = 0;
+	2 -> 5.1 : up-mix from stereo to 5.1
+		output.L = input.L;
+		output.R = input.R;
+		output.C = 0;
+		output.LFE = 0;
+		output.SL = 0;
+		output.SR = 0;
 
 Quad up-mix:
 
-    4 -> 5.1 : up-mix from quad to 5.1
-        output.L = input.L;
-        output.R = input.R;
-        output.C = 0;
-        output.LFE = 0;
-        output.SL = input.SL;
-        output.SR = input.SR;
+	4 -> 5.1 : up-mix from quad to 5.1
+		output.L = input.L;
+		output.R = input.R;
+		output.C = 0;
+		output.LFE = 0;
+		output.SL = input.SL;
+		output.SR = input.SR;
 

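Read as code, the mono up-mix rules above look like the following non-normative sketch, where a frame is an array of samples in the canonical channel order:

```javascript
// Non-normative sketch of the mono up-mix rules above. A frame is an array of
// samples in canonical order: [L, R], [L, R, SL, SR], or [L, R, C, LFE, SL, SR].
function upMixMono(sample, outputChannels) {
  switch (outputChannels) {
    case 2: return [sample, sample];         // 1 -> 2: copy to L and R
    case 4: return [sample, sample, 0, 0];   // 1 -> 4: surrounds are silent
    case 6: return [0, 0, sample, 0, 0, 0];  // 1 -> 5.1: center channel only
    default: return [sample];
  }
}
```

Note the asymmetry the tables specify: up-mixing mono to stereo or quad duplicates the signal into L and R, while up-mixing to 5.1 routes it to the center channel instead.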
@@ -12437,63 +11590,63 @@ material, but playing back stereo.
 Mono down-mix:
 
-    2 -> 1 : stereo to mono
-        output = 0.5 * (input.L + input.R);
+	2 -> 1 : stereo to mono
+		output = 0.5 * (input.L + input.R);
 
-    4 -> 1 : quad to mono
-        output = 0.25 * (input.L + input.R + input.SL + input.SR);
+	4 -> 1 : quad to mono
+		output = 0.25 * (input.L + input.R + input.SL + input.SR);
 
-    5.1 -> 1 : 5.1 to mono
-        output = sqrt(0.5) * (input.L + input.R) + input.C + 0.5 * (input.SL + input.SR)
+	5.1 -> 1 : 5.1 to mono
+		output = sqrt(0.5) * (input.L + input.R) + input.C + 0.5 * (input.SL + input.SR)
 
 Stereo down-mix:
 
-    4 -> 2 : quad to stereo
-        output.L = 0.5 * (input.L + input.SL);
-        output.R = 0.5 * (input.R + input.SR);
+	4 -> 2 : quad to stereo
+		output.L = 0.5 * (input.L + input.SL);
+		output.R = 0.5 * (input.R + input.SR);
 
-    5.1 -> 2 : 5.1 to stereo
-        output.L = L + sqrt(0.5) * (input.C + input.SL)
-        output.R = R + sqrt(0.5) * (input.C + input.SR)
+	5.1 -> 2 : 5.1 to stereo
+		5.1 -> 2 : 5.1 to stereo
+			output.L = input.L + sqrt(0.5) * (input.C + input.SL)
+			output.R = input.R + sqrt(0.5) * (input.C + input.SR)
 
 Quad down-mix:
 
-    5.1 -> 4 : 5.1 to quad
-        output.L = L + sqrt(0.5) * input.C
-        output.R = R + sqrt(0.5) * input.C
-        output.SL = input.SL
-        output.SR = input.SR
+	5.1 -> 4 : 5.1 to quad
+		5.1 -> 4 : 5.1 to quad
+			output.L = input.L + sqrt(0.5) * input.C
+			output.R = input.R + sqrt(0.5) * input.C
+		output.SL = input.SL
+		output.SR = input.SR
 

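The stereo down-mix rules above can be written as a non-normative runnable sketch; a frame is an array of samples in canonical order, and `downMixToStereo` is an illustrative helper, not a spec-defined operation:

```javascript
// Non-normative sketch of the stereo down-mix rules above. A frame is an
// array of samples in canonical order.
function downMixToStereo(frame) {
  if (frame.length === 4) {                  // 4 -> 2 : quad to stereo
    const [L, R, SL, SR] = frame;
    return [0.5 * (L + SL), 0.5 * (R + SR)];
  }
  if (frame.length === 6) {                  // 5.1 -> 2 : the LFE channel is dropped
    const [L, R, C, , SL, SR] = frame;
    return [L + Math.sqrt(0.5) * (C + SL),
            R + Math.sqrt(0.5) * (C + SR)];
  }
  return frame.slice();                      // already mono or stereo
}
```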
Channel Rules Examples

-    // Set gain node to explicit 2-channels (stereo).
-    gain.channelCount = 2;
-    gain.channelCountMode = "explicit";
-    gain.channelInterpretation = "speakers";
-
-    // Set "hardware output" to 4-channels for DJ-app with two stereo output busses.
-    context.destination.channelCount = 4;
-    context.destination.channelCountMode = "explicit";
-    context.destination.channelInterpretation = "discrete";
-
-    // Set "hardware output" to 8-channels for custom multi-channel speaker array
-    // with custom matrix mixing.
-    context.destination.channelCount = 8;
-    context.destination.channelCountMode = "explicit";
-    context.destination.channelInterpretation = "discrete";
-
-    // Set "hardware output" to 5.1 to play an HTMLAudioElement.
-    context.destination.channelCount = 6;
-    context.destination.channelCountMode = "explicit";
-    context.destination.channelInterpretation = "speakers";
-
-    // Explicitly down-mix to mono.
-    gain.channelCount = 1;
-    gain.channelCountMode = "explicit";
-    gain.channelInterpretation = "speakers";
+	// Set gain node to explicit 2-channels (stereo).
+	gain.channelCount = 2;
+	gain.channelCountMode = "explicit";
+	gain.channelInterpretation = "speakers";
+
+	// Set "hardware output" to 4-channels for DJ-app with two stereo output busses.
+	context.destination.channelCount = 4;
+	context.destination.channelCountMode = "explicit";
+	context.destination.channelInterpretation = "discrete";
+
+	// Set "hardware output" to 8-channels for custom multi-channel speaker array
+	// with custom matrix mixing.
+	context.destination.channelCount = 8;
+	context.destination.channelCountMode = "explicit";
+	context.destination.channelInterpretation = "discrete";
+
+	// Set "hardware output" to 5.1 to play an HTMLAudioElement.
+	context.destination.channelCount = 6;
+	context.destination.channelCountMode = "explicit";
+	context.destination.channelInterpretation = "speakers";
+
+	// Explicitly down-mix to mono.
+	gain.channelCount = 1;
+	gain.channelCountMode = "explicit";
+	gain.channelInterpretation = "speakers";
 

@@ -12555,11 +11708,11 @@ below, with the default values shown. The locations for the positions so we can see things better.
- panner-coord -
- Diagram of the coordinate system with AudioListener - and PannerNode attributes shown. -
+ panner-coord +
+ Diagram of the coordinate system with AudioListener + and PannerNode attributes shown. +
During rendering, the {{PannerNode}} calculates an @@ -12651,7 +11804,7 @@ connections to the input are mono. Otherwise PannerNode "equalpower" Panning

This is a simple and relatively inexpensive algorithm which
-provides basic, but reasonable results. It is used for the
+provides basic, but reasonable results. It is used for the
{{PannerNode}} when the {{PannerNode/panningModel}} attribute is set
to "{{PanningModelType/equalpower}}", in which case the elevation value is
ignored. This algorithm MUST be implemented using
-the appropriate rate as specified by the
"{{AutomationRate/a-rate}}", a-rate processing must be used.
- 1. For each sample to be computed by this {{AudioNode}}: - - 1. Let azimuth be the value computed in the azimuth and elevation section. - - 2. The azimuth value is first contained to be within - the range [-90, 90] according to: - -
-            // First, clamp azimuth to allowed range of [-180, 180].
-            azimuth = max(-180, azimuth);
-            azimuth = min(180, azimuth);
-
-            // Then wrap to range [-90, 90].
-            if (azimuth < -90)
-                azimuth = -180 - azimuth;
-            else if (azimuth > 90)
-                azimuth = 180 - azimuth;
-            
- - 3. A normalized value x is calculated from - azimuth for a mono input as: - -
-            x = (azimuth + 90) / 180;
-            
- - Or for a stereo input as: - -
-            if (azimuth <= 0) { // -90 -> 0
-                // Transform the azimuth value from [-90, 0] degrees into the range [-90, 90].
-                x = (azimuth + 90) / 90;
-            } else { // 0 -> 90
-                // Transform the azimuth value from [0, 90] degrees into the range [-90, 90].
-                x = azimuth / 90;
-            }
-            
- - 4. Left and right gain values are calculated as: - -
-            gainL = cos(x * Math.PI / 2);
-            gainR = sin(x * Math.PI / 2);
-            
- - 5. For mono input, the stereo output is calculated as: - -
-            outputL = input * gainL;
-            outputR = input * gainR;
-            
- - Else for stereo input, the output is calculated as: - -
-            if (azimuth <= 0) {
-                outputL = inputL + inputR * gainL;
-                outputR = inputR * gainR;
-            } else {
-                outputL = inputL * gainL;
-                outputR = inputR + inputL * gainR;
-            }
-            
- 6. Apply the distance gain and cone gain where the - computation of the distance is described in - [[#Spatialization-distance-effects|Distance - Effects]] and the cone gain is described in - [[#Spatialization-sound-cones|Sound Cones]]: - -
-            let distance = distance();
-            let distanceGain = distanceModel(distance);
-            let totalGain = coneGain() * distanceGain();
-            outputL = totalGain * outputL;
-            outputR = totalGain * outputR;
-            
+ 1. For each sample to be computed by this {{AudioNode}}: + + 1. Let azimuth be the value computed in the azimuth and elevation section. + + 2. The azimuth value is first contained to be within + the range [-90, 90] according to: + +
+			// First, clamp azimuth to allowed range of [-180, 180].
+			azimuth = max(-180, azimuth);
+			azimuth = min(180, azimuth);
+
+			// Then wrap to range [-90, 90].
+			if (azimuth < -90)
+				azimuth = -180 - azimuth;
+			else if (azimuth > 90)
+				azimuth = 180 - azimuth;
+			
+ + 3. A normalized value x is calculated from + azimuth for a mono input as: + +
+			x = (azimuth + 90) / 180;
+			
+ + Or for a stereo input as: + +
+			if (azimuth <= 0) { // -90 -> 0
+				// Transform the azimuth value from [-90, 0] degrees into the range [-90, 90].
+				x = (azimuth + 90) / 90;
+			} else { // 0 -> 90
+				// Transform the azimuth value from [0, 90] degrees into the range [-90, 90].
+				x = azimuth / 90;
+			}
+			
+ + 4. Left and right gain values are calculated as: + +
+			gainL = cos(x * Math.PI / 2);
+			gainR = sin(x * Math.PI / 2);
+			
+ + 5. For mono input, the stereo output is calculated as: + +
+			outputL = input * gainL;
+			outputR = input * gainR;
+			
+ + Else for stereo input, the output is calculated as: + +
+			if (azimuth <= 0) {
+				outputL = inputL + inputR * gainL;
+				outputR = inputR * gainR;
+			} else {
+				outputL = inputL * gainL;
+				outputR = inputR + inputL * gainR;
+			}
+			
+ 6. Apply the distance gain and cone gain where the + computation of the distance is described in + [[#Spatialization-distance-effects|Distance + Effects]] and the cone gain is described in + [[#Spatialization-sound-cones|Sound Cones]]: + +
+			let distance = distance();
+			let distanceGain = distanceModel(distance);
+			let totalGain = coneGain() * distanceGain;
+			outputL = totalGain * outputL;
+			outputR = totalGain * outputR;
+			
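The per-sample steps above reduce to a small pure function for the mono-input case. The following is a non-normative sketch (the helper name `equalPowerGains` is illustrative and not part of the API; distance and cone gains are omitted):

```javascript
// Non-normative sketch of steps 2-4 above for a mono input.
// Returns the left/right gains for a given azimuth in degrees.
function equalPowerGains(azimuth) {
	// Step 2: clamp azimuth to [-180, 180], then wrap to [-90, 90].
	azimuth = Math.max(-180, Math.min(180, azimuth));
	if (azimuth < -90)
		azimuth = -180 - azimuth;
	else if (azimuth > 90)
		azimuth = 180 - azimuth;

	// Step 3: normalize to x in [0, 1] (mono-input case).
	const x = (azimuth + 90) / 180;

	// Step 4: equal-power gain curves.
	return { gainL: Math.cos(x * Math.PI / 2), gainR: Math.sin(x * Math.PI / 2) };
}
```

For any azimuth, gainL² + gainR² = 1, which is what makes the pan "equal power": a centered source (azimuth 0) plays at ≈0.707 in each channel rather than doubling in level.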
@@ -12751,73 +11904,73 @@ than "equalpower", but provides more perceptually spatialized sound.
- -
- A diagram showing the process of panning a source using HRTF. -
+ +
+ A diagram showing the process of panning a source using HRTF. +

StereoPannerNode Panning

- For a {{StereoPannerNode}}, the following algorithm - MUST be implemented. - - 1. For each sample to be computed by this {{AudioNode}} - 1. Let pan be the computedValue of the - pan {{AudioParam}} of this - {{StereoPannerNode}}. - - 2. Clamp pan to [-1, 1]. - -
-            pan = max(-1, pan);
-            pan = min(1, pan);
-            
- - 3. Calculate x by normalizing pan value to - [0, 1]. For mono input: - -
-            x = (pan + 1) / 2;
-            
- - For stereo input: - -
-            if (pan <= 0)
-                x = pan + 1;
-            else
-                x = pan;
-            
- - 4. Left and right gain values are calculated as: - -
-            gainL = cos(x * Math.PI / 2);
-            gainR = sin(x * Math.PI / 2);
-            
- - 5. For mono input, the stereo output is calculated as: - -
-            outputL = input * gainL;
-            outputR = input * gainR;
-            
- - Else for stereo input, the output is calculated as: - -
-            if (pan <= 0) {
-                outputL = inputL + inputR * gainL;
-                outputR = inputR * gainR;
-            } else {
-                outputL = inputL * gainL;
-                outputR = inputR + inputL * gainR;
-            }
-            
+ For a {{StereoPannerNode}}, the following algorithm + MUST be implemented. + + 1. For each sample to be computed by this {{AudioNode}} + 1. Let pan be the computedValue of the + pan {{AudioParam}} of this + {{StereoPannerNode}}. + + 2. Clamp pan to [-1, 1]. + +
+			pan = max(-1, pan);
+			pan = min(1, pan);
+			
+ + 3. Calculate x by normalizing pan value to + [0, 1]. For mono input: + +
+			x = (pan + 1) / 2;
+			
+ + For stereo input: + +
+			if (pan <= 0)
+				x = pan + 1;
+			else
+				x = pan;
+			
+ + 4. Left and right gain values are calculated as: + +
+			gainL = cos(x * Math.PI / 2);
+			gainR = sin(x * Math.PI / 2);
+			
+ + 5. For mono input, the stereo output is calculated as: + +
+			outputL = input * gainL;
+			outputR = input * gainR;
+			
+ + Else for stereo input, the output is calculated as: + +
+			if (pan <= 0) {
+				outputL = inputL + inputR * gainL;
+				outputR = inputR * gainR;
+			} else {
+				outputL = inputL * gainL;
+				outputR = inputR + inputL * gainR;
+			}
+			
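As with the equal-power panner, the steps above reduce to a small pure function for the mono-input case; pan in [-1, 1] takes the place of an azimuth. A non-normative sketch (the helper name `stereoPanGains` is illustrative, not part of the API):

```javascript
// Non-normative sketch of the StereoPannerNode gain computation
// for a mono input; pan is in [-1, 1].
function stereoPanGains(pan) {
	// Step 2: clamp pan to [-1, 1].
	pan = Math.max(-1, Math.min(1, pan));
	// Step 3: normalize to x in [0, 1] (mono-input case).
	const x = (pan + 1) / 2;
	// Step 4: equal-power gain curves.
	return { gainL: Math.cos(x * Math.PI / 2), gainR: Math.sin(x * Math.PI / 2) };
}
```

A pan of -1 yields gains (1, 0), +1 yields (0, 1), and 0 yields ≈0.707 in each channel, so total power is constant across the pan range.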

@@ -12872,10 +12025,10 @@ is, the inner cone extends 25 deg on each side of the direction vector. Similarly, the outer cone is 60 deg on each side.
- cone-diagram -
- Cone angles for a source in relationship to the source orientation and the listeners position and orientation. -
+ cone-diagram +
+ Cone angles for a source in relationship to the source orientation and the listener's position and orientation. +
The following algorithm MUST be used to calculate the gain @@ -12938,10 +12091,10 @@ Performance Considerations

Latency
- latency -
- Use cases in which the latency can be important -
+ latency +
+ Use cases in which the latency can be important +
For web applications, the time delay between mouse and keyboard @@ -12975,31 +12128,31 @@ Additionally, some {{AudioNode}}s can add latency to some paths of the audio graph, notably: * The {{AudioWorkletNode}} can run a script that buffers - internally, adding delay to the signal path. + internally, adding delay to the signal path. * The {{DelayNode}}, whose role is to add controlled latency - time. + time. * The {{BiquadFilterNode}} and {{IIRFilterNode}} filter - design can delay incoming samples, as a natural consequence of the - causal filtering process. + design can delay incoming samples, as a natural consequence of the + causal filtering process. * The {{ConvolverNode}} depending on the impulse, can delay - incoming samples, as a natural result of the convolution operation. + incoming samples, as a natural result of the convolution operation. * The {{DynamicsCompressorNode}} has a look-ahead algorithm that - causes delay in the signal path. + causes delay in the signal path. * The {{MediaStreamAudioSourceNode}}, - {{MediaStreamTrackAudioSourceNode}} and - {{MediaStreamAudioDestinationNode}}, depending on the - implementation, can add buffers internally that add delays. + {{MediaStreamTrackAudioSourceNode}} and + {{MediaStreamAudioDestinationNode}}, depending on the + implementation, can add buffers internally that add delays. * The {{ScriptProcessorNode}} can have buffers between the - control thread and the rendering thread. + control thread and the rendering thread. * The {{WaveShaperNode}}, when oversampling, and depending on - the oversampling technique, add delays to the signal path. + the oversampling technique, adds delays to the signal path.

Audio Buffer Copying

@@ -13054,229 +12207,199 @@ more work than is possible in real-time given the CPU's speed.

Security and Privacy Considerations

-Per the [[security-privacy-questionnaire#questions]]: +The W3C TAG is developing a Self-Review +Questionnaire: Security and Privacy for editors of specifications +to informatively answer. + +Per the Questions +to Consider 1. Does this specification deal with personally-identifiable information? - It would be possible to perform a hearing test using Web Audio API, thus - revealing the range of frequencies audible to a person (this decreases - with age). It is difficult to see how this could be done without the - realization and consent of the user, as it requires active particpation. - + It would be possible to perform a hearing test using Web Audio API, thus + revealing the range of frequencies audible to a person (this decreases + with age). It is difficult to see how this could be done without the + realization and consent of the user, as it requires active participation. + 2. Does this specification deal with high-value data? - No. Credit card information and the like is not used in Web Audio. - It is possible to use Web Audio to process or analyze voice data, - which might be a privacy concern, but access to the user's - microphone is permission-based via {{getUserMedia()}}. + No. Credit card information and the like is not used in Web Audio. + It is possible to use Web Audio to process or analyze voice data, + which might be a privacy concern, but access to the user's + microphone is permission-based via {{getUserMedia()}}. 3. Does this specification introduce new state for an origin that - persists across browsing sessions? + persists across browsing sessions? - No. AudioWorklet does not persist across browsing sessions. + No. AudioWorklet does not persist across browsing sessions. 4. Does this specification expose persistent, cross-origin state to - the web? + the web? - Yes, the supported audio sample rate(s) and the output device channel count are exposed. See {{AudioContext}}. 
+ Yes, the supported audio sample rate(s) and the output device channel count are exposed. See {{AudioContext}}. 5. Does this specification expose any other data to an origin that it - doesn’t currently have access to? - - Yes. When giving various information on available - {{AudioNode}}s, the Web Audio API potentially - exposes information on characteristic features of the client (such - as audio hardware sample-rate) to any page that makes use of the - {{AudioNode}} interface. Additionally, timing - information can be collected through the - {{AnalyserNode}} or - {{ScriptProcessorNode}} interface. The information - could subsequently be used to create a fingerprint of the client. - - Research by Princeton CITP's - Web Transparency and Accountability Project - has shown that {{DynamicsCompressorNode}} and {{OscillatorNode}} can - be used to gather entropy from a client to fingerprint a device. - This is due to small, and normally inaudible, differences in DSP - architecture, resampling strategies and rounding trade-offs between - differing implementations. The precise compiler flags used and also the - CPU architecture (ARM vs. x86) contribute to this entropy. - - In practice however, this merely allows deduction of information - already readily available by easier means (User Agent string), - such as "this is browser X running on platform Y". However, to reduce the - possibility of additional fingerprinting, we mandate browsers take - action to mitigate fingerprinting issues that might be possible from the - output of any node. - - Fingerprinting via clock skew - has - been described by Steven J Murdoch and Sebastian Zander. It might be possible - to determine this from {{getOutputTimestamp}}. Skew-based fingerprinting has also - been demonstrated - - by Nakibly et. al. for HTML. The [[hr-time-3#sec-privacy]] section should be consulted for further - information on clock resolution and drift. + doesn’t currently have access to? + + Yes. 
When giving various information on available + {{AudioNode}}s, the Web Audio API potentially + exposes information on characteristic features of the client (such + as audio hardware sample-rate) to any page that makes use of the + {{AudioNode}} interface. Additionally, timing + information can be collected through the + {{AnalyserNode}} or + {{ScriptProcessorNode}} interface. The information + could subsequently be used to create a fingerprint of the client. + + Research by Princeton CITP's + Web Transparency and Accountability Project + has shown that {{DynamicsCompressorNode}} and {{OscillatorNode}} can + be used to gather entropy from a client to fingerprint a device. + This is due to small, and normally inaudible, differences in DSP + architecture, resampling strategies and rounding trade-offs between + differing implementations. The precise compiler flags used and also the + CPU architecture (ARM vs. x86) contribute to this entropy. + + In practice, however, this merely allows deduction of information + already readily available by easier means (User Agent string), + such as "this is Chrome running on an x86". + + Fingerprinting via clock skew + has + been described by Steven J Murdoch and Sebastian Zander. It might be possible + to determine this from {{getOutputTimestamp}}. Skew-based fingerprinting has also + been demonstrated + + by Nakibly et al. for HTML. The + Security appendix of High Resolution Time should be consulted for further + information on clock resolution and drift. Fingerprinting via latency is also possible; it might be possible to deduce this - from {{baseLatency}} and {{outputLatency}}. Mitigation - strategies include adding jitter (dithering) and quantization so that the exact - skew is incorrectly reported. 
Note however that most audio systems aim - for low latency, - to synchronise the audio generated by WebAudio to other - audio or video sources or to visual cues (for example in a game, or an audio - recording or music making environment). Excessive latency decreases usability and may be - an accessibility issue. - - Fingerprining via the sample rate of the {{AudioContext}} is also possible. We - recommend the following steps to be taken to minimize this: - - 1. 44.1 kHz and 48 kHz are allowed as default rates; the system will choose - between them for best applicability. (Obviously, if the audio device is - natively 44.1, 44.1 will be chosen, etc., but also the system may choose - the most "compatible" rate—e.g. if the system is natively 96kHz, - 48kHz would likely be chosen, not 44.1kHz. - 1. The system should resample to one of those two rates for devices that are - natively at different rates, despite the fact that this may cause extra - battery drain due to resampled audio. (Again, the system will choose the - most compatible rate—e.g. if the native system is 16kHz, it's - expected that 48kHz would be chosen.) - 1. It is expected (though not mandated) that browsers would offer a user - affordance to force use of the native rate—e.g. by setting a flag in - the browser on the device. This setting would not be exposed in the API. - 1. It is also expected behavior that a different rate could be explicitly - requested in the constructor for {{AudioContext}} (this is already in the - specification; it normally causes the audio rendering to be done at the - requested sampleRate, and then up- or down-sampled to the device output), - and if that rate is natively supported, the rendering could be passed - straight through. 
This would enable apps to render to higher rates without - user intervention (although it's not observable from Web Audio that the - audio output is not downsampled on output)—for example, if - {{MediaDevices}} capabilities were read (with user intervention) and indicated - a higher rate was supported. - - Fingerprinting via the number of output channels for the - {{AudioContext}} is possible as well. We recommend that - {{AudioDestinationNode/maxChannelCount}} be set to two - (stereo). Stereo is by far the most common number of - channels. + from {{baseLatency}} and {{outputLatency}}. Mitigation + strategies include adding jitter (dithering) and quantization so that the exact + skew is incorrectly reported. Note however that most audio systems aim + for low latency, + to synchronise the audio generated by WebAudio to other + audio or video sources or to visual cues (for example in a game, or an audio + recording or music making environment). Excessive latency decreases usability and may be + an accessibility issue. 6. Does this specification enable new script execution/loading - mechanisms? + mechanisms? - No. It does use the [[HTML]] script execution method, - defined in that specification. + No. It does use the [[worklets-1]] script execution method, + defined in that specification. 7. Does this specification allow an origin access to a user’s - location? + location? - No. + No. 8. Does this specification allow an origin access to sensors on a - user’s device? - - Not directly. Currently, audio input is not specified in this - document, but it will involve gaining access to the client - machine's audio input or microphone. This will require asking the - user for permission in an appropriate way, probably via the - {{getUserMedia()}} API. - - Additionally, the security and privacy considerations from the - Media Capture - and Streams specification should be noted. 
In particular, - analysis of ambient audio or playing unique audio may enable - identification of user location down to the level of a room or - even simultaneous occupation of a room by disparate users or - devices. Access to both audio output and audio input might also - enable communication between otherwise partitioned contexts in one - browser. + user’s device? + + Not directly. Currently, audio input is not specified in this + document, but it will involve gaining access to the client + machine's audio input or microphone. This will require asking the + user for permission in an appropriate way, probably via the + {{getUserMedia()}} API. + + Additionally, the security and privacy considerations from the + Media Capture + and Streams specification should be noted. In particular, + analysis of ambient audio or playing unique audio may enable + identification of user location down to the level of a room or + even simultaneous occupation of a room by disparate users or + devices. Access to both audio output and audio input might also + enable communication between otherwise partitioned contexts in one + browser. 9. Does this specification allow an origin access to aspects of a - user’s local computing environment? + user’s local computing environment? - Not directly; all requested sample rates are supported, with upsampling if needed. - It is possible to use Media Capture and Streams to - probe for supported audio sample rates with - MediaTrackSupportedConstraints. - This requires explicit user consent. - This does provide a small measure of fingerprinting. However, - in practice most consumer and prosumer devices use one of two - standardized sample rates: 44.1kHz (originally used by CD) and 48kHz - (originally used by DAT). Highly resource constrained devices may - support the speech-quality 11kHz sample rate, and higher-end devices often - support 88.2, 96, or even the audiophile 192kHz rate. 
- - Requiring all implementations to upsample to a single, commonly-supported - rate such as 48kHz would increase CPU cost for no particular benefit, and - requiring higher-end devices to use a lower rate would merely result in - Web Audio being labelled as unsuitable for professional use. + Not directly; all requested sample rates are supported, with upsampling if needed. + It is possible to use Media Capture and Streams to + probe for supported audio sample rates with + MediaTrackSupportedConstraints. + This requires explicit user consent. + This does provide a small measure of fingerprinting. However, + in practice most consumer and prosumer devices use one of two + standardized sample rates: 44.1kHz (originally used by CD) and 48kHz + (originally used by DAT). Highly resource constrained devices may + support the speech-quality 11kHz sample rate, and higher-end devices often + support 88.2, 96, or even the audiophile 192kHz rate. + + Requiring all implementations to upsample to a single, commonly-supported + rate such as 48kHz would increase CPU cost for no particular benefit, and + requiring higher-end devices to use a lower rate would merely result in + Web Audio being labelled as unsuitable for professional use. 10. Does this specification allow an origin access to other devices? - It typically does not allow access to other networked devices (an - exception in a high-end recording studio might be Dante networked - devices, although these typically use a separate, dedicated network). - It does of necessity allow access to the user's audio output device - or devices, which are sometimes separate units to the computer. - - For voice or sound-actuated devices, Web Audio API might be used - to control other devices. In addition, if the sound-operated device is sensitive - to near ultrasonic frequencies, such control might not be audible. - This possibility also exists with HTML, through either - the <audio> or <video> element. 
At common audio sampling rates, there is (by design) - insufficient headroom for much ultrasonic information: - - The limit of human hearing is usually stated as 20kHz. For a 44.1kHz - sampling rate, the Nyquist limit is 22.05kHz. Given that a true - brickwall filter cannot be physically realized, the space between - 20kHz and 22.05kHz is used for a rapid rolloff filter to strongly - attenuate all frequencies above Nyquist. - - At 48kHz sampling rate, there is still rapid attenuation in the - 20kHz to 24kHz band (but it is easier to avoid phase ripple - errors in the passband). + It typically does not allow access to other networked devices (an + exception in a high-end recording studio might be Dante networked + devices, although these typically use a separate, dedicated network). + It does of necessity allow access to the user's audio output device + or devices, which are sometimes separate units to the computer. + + For voice or sound-actuated devices, Web Audio API might be used + to control other devices. In addition, if the sound-operated device is sensitive + to near ultrasonic frequencies, such control might not be audible. + This possibility also exists with HTML, through either + the <audio> or <video> element. At common audio sampling rates, there is (by design) + insufficient headroom for much ultrasonic information: + + The limit of human hearing is usually stated as 20kHz. For a 44.1kHz + sampling rate, the Nyquist limit is 22.05kHz. Given that a true + brickwall filter cannot be physically realized, the space between + 20kHz and 22.05kHz is used for a rapid rolloff filter to strongly + attenuate all frequencies above Nyquist. + + At 48kHz sampling rate, there is still rapid attenuation in the + 20kHz to 24kHz band (but it is easier to avoid phase ripple + errors in the passband). 11. Does this specification allow an origin some measure of control - over a user agent’s native UI? + over a user agent’s native UI? 
- If the UI has audio components, such as a voice assistant or screenreader, - Web Audio API might be used to emulate aspects of the native UI to make - an attack seem more like a local system event. - This possibility also exists with HTML, through - the <audio> element. + If the UI has audio components, such as a voice assistant or screenreader, + Web Audio API might be used to emulate aspects of the native UI to make + an attack seem more like a local system event. + This possibility also exists with HTML, through + the <audio> element. 12. Does this specification expose temporary identifiers to the web? - No. + No. 13. Does this specification distinguish between behavior in first-party - and third-party contexts? + and third-party contexts? - No. + No. 14. How should this specification work in the context of a user agent’s - "incognito" mode? + "incognito" mode? - Not differently. + Not differently. 15. Does this specification persist data to a user’s local device? - No. + No. 16. Does this specification have a "Security Considerations" and - "Privacy Considerations" section? + "Privacy Considerations" section? - Yes (you are reading it). + Yes (you are reading it). 17. Does this specification allow downgrading default security - characteristics? + characteristics? - No. + No.

Requirements and Use Cases

@@ -13293,53 +12416,224 @@ JavaScript code used within this specification.
 // Three dimensional vector class.
 class Vec3 {
-    // Construct from 3 coordinates.
-    constructor(x, y, z) {
-        this.x = x;
-        this.y = y;
-        this.z = z;
-    }
-
-    // Dot product with another vector.
-    dot(v) {
-        return (this.x * v.x) + (this.y * v.y) + (this.z * v.z);
-    }
-
-    // Cross product with another vector.
-    cross(v) {
-        return new Vec3((this.y * v.z) - (this.z * v.y),
-            (this.z * v.x) - (this.x * v.z),
-            (this.x * v.y) - (this.y * v.x));
-    }
-
-    // Difference with another vector.
-    diff(v) {
-        return new Vec3(this.x - v.x, this.y - v.y, this.z - v.z);
-    }
-
-    // Get the magnitude of this vector.
-    get magnitude() {
-        return Math.sqrt(dot(this));
-    }
-
-    // Get a copy of this vector multiplied by a scalar.
-    scale(s) {
-        return new Vec3(this.x * s, this.y * s, this.z * s);
-    }
-
-    // Get a normalized copy of this vector.
-    normalize() {
-        const m = magnitude;
-        if (m == 0) {
-            return new Vec3(0, 0, 0);
-        }
-        return scale(1 / m);
-    }
+	// Construct from 3 coordinates.
+	constructor(x, y, z) {
+		this.x = x;
+		this.y = y;
+		this.z = z;
+	}
+
+	// Dot product with another vector.
+	dot(v) {
+		return (this.x * v.x) + (this.y * v.y) + (this.z * v.z);
+	}
+
+	// Cross product with another vector.
+	cross(v) {
+		return new Vec3((this.y * v.z) - (this.z * v.y),
+			(this.z * v.x) - (this.x * v.z),
+			(this.x * v.y) - (this.y * v.x));
+	}
+
+	// Difference with another vector.
+	diff(v) {
+		return new Vec3(this.x - v.x, this.y - v.y, this.z - v.z);
+	}
+
+	// Get the magnitude of this vector.
+	get magnitude() {
+		return Math.sqrt(this.dot(this));
+	}
+
+	// Get a copy of this vector multiplied by a scalar.
+	scale(s) {
+		return new Vec3(this.x * s, this.y * s, this.z * s);
+	}
+
+	// Get a normalized copy of this vector.
+	normalize() {
+		const m = this.magnitude;
+		if (m == 0) {
+			return new Vec3(0, 0, 0);
+		}
+		return this.scale(1 / m);
+	}
 }
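The class above can be sanity-checked with a couple of vector identities. This non-normative sketch restates it compactly with explicit `this.` receivers (JavaScript class methods are not in scope as bare names inside other methods) and exercises the cross product and normalization:

```javascript
// Compact, self-contained restatement of the Vec3 helper above,
// with methods invoked via `this.` so the class runs as real JavaScript.
class Vec3 {
	constructor(x, y, z) { this.x = x; this.y = y; this.z = z; }
	dot(v) { return this.x * v.x + this.y * v.y + this.z * v.z; }
	cross(v) {
		return new Vec3(this.y * v.z - this.z * v.y,
			this.z * v.x - this.x * v.z,
			this.x * v.y - this.y * v.x);
	}
	get magnitude() { return Math.sqrt(this.dot(this)); }
	scale(s) { return new Vec3(this.x * s, this.y * s, this.z * s); }
	normalize() {
		const m = this.magnitude;
		return m == 0 ? new Vec3(0, 0, 0) : this.scale(1 / m);
	}
}

// x-axis cross y-axis gives the z-axis, and a (3, 4, 0) vector
// normalizes to (0.6, 0.8, 0) with unit magnitude.
const z = new Vec3(1, 0, 0).cross(new Vec3(0, 1, 0));
const n = new Vec3(3, 4, 0).normalize();
```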
 

Change Log

+

+ Since Candidate Recommendation of 18 September 2018 +

+ +* Issue 2193: Incorrect azimuth comparison in spatialization algorithm +* Issue 2192: Waveshaper curve interpolation algorithm incorrect +* Issue 2171: Allow not having get parameterDescriptors in an AudioWorkletProcessor +* Issue 2184: PannerNode refDistance description unclear +* Issue 2165: AudioScheduledSourceNode start algorithm incomplete +* Issue 2155: Restore changes accidentally reverted in bikeshed conversion +* Issue 2154: Exception for changing channelCountMode on ScriptProcessorNode does not match browsers +* Issue 2153: Exception for changing channelCount on ScriptProcessorNode does not match browsers +* Issue 2152: close() steps don't make sense +* Issue 2150: AudioBufferOptions requires throwing NotFoundError in cases that can't happen +* Issue 2149: MediaStreamAudioSourceNode constructor has weird check for AudioContext +* Issue 2148: IIRFilterOptions description makes impossible demands +* Issue 2147: PeriodicWave constructor examines lengths of things that might not be there +* Issue 2113: BiquadFilter gain lower bound can be lower. +* Issue 2096: Lifetime of pending processor construction data and exceptions in instantiation of AudioWorkletProcessor +* Issue 2087: Minor issues with BiquadFilter AudioParams +* Issue 2083: Missing text in WaveShaperNode? +* Issue 2082: WaveShaperNode curve interpolation incomplete +* Issue 2074: Should the AudioWorkletNode constructor invoke the algorithm for initializing an object that inherits from AudioNode? +* Issue 2073: Inconsistencies in constructor descriptions and factory method initialization +* Issue 2072: Clarification on `AudioBufferSourceNode` looping, and loop points +* Issue 2071: cancelScheduledValues with setValueCurveAtTime +* Issue 2060: Would it be helpful to restrict use of `AudioWorkletProcessor.port().postMessage()` in order to facilitate garbage collection? 
+* Issue 2051: Update to constructor operations +* Issue 2050: Restore ConvolverNode channel mixing configurability (up to 2 channels) +* Issue 2045: Should the check on `process()` be removed from `AudioWorkletGlobalScope.registerProcessor()`? +* Issue 2044: Remove `options` parameter from `AudioWorkletProcessor` constructor WebIDL +* Issue 2036: Remove `options` parameter of `AudioWorkletProcessor` constructor +* Issue 2035: De-duplicate initial value setting on AudioWorkletNode AudioParams +* Issue 2027: Revise "processor construction data" algorithm +* Issue 2021: AudioWorkletProcessor constructor leads to infinite recursion +* Issue 2018: There are still issues with the setup of an AudioWorkletNode's parameters +* Issue 2016: Clarify `parameters` in AudioWorkletProcessor.process() +* Issue 2011: AudioWorkletNodeOptions.processorOptions should not default to null. +* Issue 1989: Please update to Web IDL changes to optional dictionary defaulting +* Issue 1984: Handling of exceptions in audio worklet is not very clear +* Issue 1976: AudioWorkletProcessor's [[node reference]] seems to be write-only +* Issue 1972: parameterDescriptors handling during AudioWorkletNode initialization is probably wrong +* Issue 1971: AudioWorkletNode options serialization is underdefined +* Issue 1970: "active source" flag handling is a weird monkeypatch +* Issue 1969: It would be clearer if the various validation of AudioWorkletNodeOptions were an explicit step or set of steps +* Issue 1966: parameterDescriptors is not looked up by the AudioWorkletProcessor constructor +* Issue 1963: NewTarget check for AudioWorkletProcessor isn't actually possible with a Web IDL constructor +* Issue 1947: Spec is inconsistent about whether parameterDescriptors is an array or an iterable +* Issue 1946: Population of "node name to parameter descriptor map" needs to be defined +* Issue 1945: registerProcessor is doing odd things with threads and JS values +* Issue 1943: Describe how WaveShaperNode 
shapes the input with the curve +* Issue 1935: length of AudioWorkletProcessor.process() parameter sequences with inactive inputs +* Issue 1932: Make AudioWorkletNode output buffer available for reading +* Issue 1925: front vs forward +* Issue 1902: Mixer Gain Structure section not needed +* Issue 1906: Steps in rendering algorithm +* Issue 1905: Rendering callbacks are observable +* Issue 1904: Strange Note in algorithm for swapping a control message queue +* Issue 1903: Funny sentence about priority and latency +* Issue 1901: AudioWorkletNode state property? +* Issue 1900: AudioWorkletProcessor NewTarget undefined +* Issue 1899: Missing synchronous markers +* Issue 1897: WaveShaper curve value setter allows multiple sets +* Issue 1896: WaveShaperNode constructor says curve set is initialized to false +* Issue #1471: AudioNode Lifetime section seems to attempt to make garbage collection observable +* Issue #1893: Active processing for Panner/Convolver/ChannelMerger +* Issue #1894: Funny text in PannerNode.orientationX +* Issue #1866: References to garbage collection +* Issue #1851: Parameter values used for BiquadFilterNode::getFrequencyResponse +* Issue #1879: ABSN playback algorithm offset +* Issue #1882: Biquad lowpass/highpass Q +* Issue #1303: MediaElementAudioSourceNode information in a funny place +* Issue #1880: setOrientation description has confusing paragraph +* Issue #1855: createScriptProcessor parameter requirements +* Issue #1857: Fix typos and bad phrasing +* Issue #1788: Unclear what value is returned by AudioParam.value +* Issue #1852: Fix error condition of AudioNode.disconnect(destinationNode, output, input) +* Issue #1841: Recovering from unstable biquad filters? 
+* Issue #1777: Picture of the coordinate system for panner node +* Issue #1802: Clarify interaction between user-invoked suspend and autoplay policy +* Issue #1822: OfflineAudioContext.suspend can suspend before the given time +* Issue #1772: Sorting tracks alphabetically is underspecified +* Issue #1797: Specification is incomplete for AudioNode.connect() +* Issue #1805: Exception ordering on error +* Issue #1790: Automation example chart has an error (reversed function arguments +* Fix rendering algorithm iteration and cycle breaking +* Issue #1719: channel count changes in filter nodes with tail time +* Issue #1563: Make decodeAudioData more precise +* Issue #1481: Tighten spec on ABSN output channels? +* Issue #1762: Setting convolver buffer more than once? +* Issue #1758: Explicitly include time-domain processing code for BiquadFilterNode +* Issue #1770: Link to correct algorithm for StereoPannerNode, mention algorithm is equal-power +* Issue #1753: Have a single `AudioWorkletGlobalScope` per `BaseAudioContext` +* Issue #1746: AnalyserNode: Clarify how much time domain data we're supposed to keep around +* Issue #1741: Sample rate of AudioBuffer +* Issue #1745: Clarify unit of fftSize +* Issue #1743: Missing normative reference to Fetch +* Use "get a reference to the bytes" algorithm as needed. +* Specify rules for determining output chanel count. +* Clarified rendering algorithm for AudioListener. +
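The StereoPannerNode item above notes the panning algorithm is equal-power. As an illustrative sketch (the function name is ours, not part of the API), the gains for a mono input can be derived by mapping pan in [-1, 1] to an angle in [0, π/2], so the total power stays constant:

```javascript
// Illustrative sketch of equal-power panning gains for a mono input:
// pan in [-1, 1] maps to an angle in [0, pi/2], so that
// left^2 + right^2 === 1 across the whole pan range.
function equalPowerGains(pan) {
  const x = (pan + 1) / 2; // normalize pan to [0, 1]
  return {
    left: Math.cos(x * Math.PI / 2),
    right: Math.sin(x * Math.PI / 2),
  };
}
```

At pan = -1 all signal goes to the left channel, at pan = 1 to the right, and at pan = 0 both channels receive gain 1/√2.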

+Since Working Draft of 19 June 2018
+

+* Minor editorial clarifications.
+* Update implementation-report.html.
+* Widen the valid range of detune values so that any value that doesn't cause 2^(d/1200) to overflow is valid.
+* PannerNode constructor throws errors.
+* Rephrase algorithm for setting buffer and curve.
+* Refine startRendering algorithm.
+* Make "queue a task" link to the HTML spec.
+* Specify more precisely the behavior of events overlapping with SetValueCurveAtTime.
+* Add implementation report to gh-pages.
+* Honor the given value in `outputChannelCount`.
+* Initialize bufferDuration outside of process() in ABSN algorithm.
+* Rework definition of ABSN output behavior to account for playbackRate’s interaction with the start(…duration) argument.
+* Add mention of video element in ultrasonic attack surface.
+
+
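The widened detune range above can be illustrated with a small sketch (the helper name is ours, not part of the API): a detune of d cents scales frequency by 2^(d/1200), and any value is valid as long as that power remains finite.

```javascript
// Hypothetical helper (not part of the Web Audio API): the frequency
// multiplier produced by a detune value of `cents` is 2^(cents / 1200).
function detuneMultiplier(cents) {
  return Math.pow(2, cents / 1200);
}

// 1200 cents is one octave, so the multiplier is exactly 2; extremely
// large values make 2^(d/1200) overflow to Infinity and are invalid.
```

For example, a detune of 1e6 cents still yields a finite (if enormous) multiplier, while 2e6 cents overflows.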

+Since Working Draft of 08 December 2015

+* Add AudioWorklet and related interfaces to support custom nodes. This replaces ScriptProcessorNode, which is now deprecated.
+* Explicitly say what the channel count, mode, and interpretation values are for all source nodes.
+* Specify the behavior of Web Audio when a document is unloaded.
+* Merge the proposed SpatialListener interface into AudioListener.
+* Rework and clean up algorithms for panning and spatialization and define "magic functions".
+* Clarify that AudioBufferSourceNode looping is limited by duration argument to start().
+* Add constructors with options dictionaries for all node types.
+* Clarify parameter automation method behavior and equations. Handle cases where automation methods may interact with each other.
+* Support latency hints and arbitrary sample rates in AudioContext constructor.
+* Clear up ambiguities in definitions of start() and stop() for scheduled sources.
+* Remove automatic dezippering from AudioParam value setters, which now equate to setValueAtTime().
+* Specify normative behavior of DynamicsCompressorNode.
+* Specify that AudioParam.value returns the most recent computed value.
+* Permit AudioBufferSourceNode to specify sub-sample start, duration, loopStart and loopEnd. Respecify algorithms to say exactly how looping works in all scenarios, including dynamic and negative playback rates.
+* Harmonize behavior of IIRFilterNode with BiquadFilterNode.
+* Add diagram describing mono-input-to-matrixed-stereo case.
+* Prevent connecting an AudioNode to an AudioParam of a different AudioContext.
+* Add AudioParam.cancelAndHoldAtTime().
+* Clarify behavior of AudioParam.cancelScheduledValues().
+* Add playing reference to MediaElementAudioSourceNodes and MediaStreamAudioSourceNodes.
+* Refactor BaseAudioContext interface out of AudioContext, OfflineAudioContext.
+* OfflineAudioContext inherits from BaseAudioContext, not AudioContext.
+* Replace "StereoPanner" with the correct "StereoPannerNode".
+* Support chaining on AudioNode.connect() and AudioParam automation methods.
+* Specify behavior of events following SetTarget events.
+* Reinstate channelCount declaration for AnalyserNode.
+* Specify exponential ramp behavior when previous value is 0.
+* Specify behavior of setValueCurveAtTime parameters.
+* Add spatialListener attribute to AudioContext.
+* Remove section titled "Doppler Shift".
+* Add a list of nodes, and the reasons why they can add latency, in an informative section.
+* Specify nominal ranges, the Nyquist frequency, and behavior outside the range.
+* Specify the processing model for the Web Audio API.
+* Merge the SpatialPannerNode into the PannerNode, undeprecating the PannerNode.
+* Merge the SpatialListener into the AudioListener, undeprecating the AudioListener.
+* Add latencyHint(s).
+* Move the constructor from BaseAudioContext to AudioContext where it belongs; BaseAudioContext is not constructible.
+* Specify the behavior of automations and nominal ranges.
+* Widen playbackRate to +/- infinity.
+* Modify setValueCurveAtTime so that an implicit call to setValueAtTime is made at the end of the curve duration.
+* Make setting the `value` attribute of an `AudioParam` strictly equivalent to calling setValueAtTime with AudioContext.currentTime.
+* Add new sections for AudioContextOptions and AudioTimestamp.
+* Add constructors for all nodes.
+* Define ConstantSourceNode.
+* Make the WaveShaperNode have a tail time, depending on the oversampling level.
+* Allow collecting MediaStreamAudioSourceNode or MediaElementAudioSourceNode when they won't play ever again.
+* Add a concept of 'allowed to start' and use it when creating an AudioContext and resuming it from resume() (closes #836).
+* Add AudioScheduledSourceNode base class for source nodes.
+* Mark all AudioParams as being k-rate.
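The exponential-ramp item above can be sketched as follows (an illustrative helper, not spec text): between an event at (t0, v0) and a ramp end at (t1, v1), the value follows v(t) = v0 · (v1/v0)^((t − t0)/(t1 − t0)), and since an exponential curve can neither start at nor cross zero, the value holds at v0 when v0 is 0 or when v0 and v1 have opposite signs.

```javascript
// Sketch of the value during exponentialRampToValueAtTime between
// (t0, v0) and (t1, v1): v(t) = v0 * (v1 / v0)^((t - t0) / (t1 - t0)).
// The ramp is undefined when v0 is 0 or when v0 and v1 have opposite
// signs; in those cases the value simply holds at v0.
function exponentialRampValue(t, t0, v0, t1, v1) {
  if (v0 === 0 || Math.sign(v0) !== Math.sign(v1)) {
    return v0; // exponential curve cannot start at or pass through zero
  }
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}
```

For example, ramping from 1 to 4 over one second passes through 2 at the midpoint, while ramping "from 0" never leaves 0.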

Acknowledgements