diff --git a/index.bs b/index.bs
index 4375c7e6b..2702ec085 100644
--- a/index.bs
+++ b/index.bs
@@ -1,46 +1,36 @@
 Title: Web Audio API 1.1
-Shortname: webaudio
-Level: 1.1
+Shortname: webaudio11
+Level: none
 Group: audiowg
-Status: ED
+Status: FPWD
+Prepare for TR: yes
+Date: 2024-10-17
 ED: https://webaudio.github.io/web-audio-api/
-TR: https://www.w3.org/TR/webaudio-11/
+TR: https://www.w3.org/TR/webaudio/
 Favicon: favicon.png
-Previous Version: https://www.w3.org/TR/2021/REC-webaudio-20210617/
-Previous Version: https://www.w3.org/TR/2021/CR-webaudio-20210114/
-Previous Version: https://www.w3.org/TR/2020/CR-webaudio-20200611/
-Previous Version: https://www.w3.org/TR/2018/CR-webaudio-20180918/
-Previous Version: https://www.w3.org/TR/2018/WD-webaudio-20180619/
-Previous Version: https://www.w3.org/TR/2015/WD-webaudio-20151208/
-Previous Version: https://www.w3.org/TR/2013/WD-webaudio-20131010/
-Previous Version: https://www.w3.org/TR/2012/WD-webaudio-20121213/
-Previous Version: https://www.w3.org/TR/2012/WD-webaudio-20120802/
-Previous Version: https://www.w3.org/TR/2012/WD-webaudio-20120315/
-Previous Version: https://www.w3.org/TR/2011/WD-webaudio-20111215/
 Editor: Paul Adenot, Mozilla (https://www.mozilla.org/), padenot@mozilla.com, w3cid 62410
 Editor: Hongchan Choi, Google (https://www.google.com/), hongchan@google.com, w3cid 74103
 Former Editor: Raymond Toy (until Oct 2018)
 Former Editor: Chris Wilson (Until Jan 2016)
 Former Editor: Chris Rogers (Until Aug 2013)
-Implementation Report: implementation-report.html
 Test Suite: https://github.com/web-platform-tests/wpt/tree/master/webaudio
 Repository: WebAudio/web-audio-api
 Abstract: This specification describes a high-level Web API
-    for processing and synthesizing audio in web applications.
-    The primary paradigm is of an audio routing graph,
-    where a number of {{AudioNode}} objects are connected together to define the overall audio rendering.
-    The actual processing will primarily take place in the underlying implementation
-    (typically optimized Assembly / C / C++ code),
-    but [[#AudioWorklet|direct script processing and synthesis]] is also supported.
-
-    The [[#introductory]] section covers the motivation behind this specification.
-
-    This API is designed to be used in conjunction with other APIs and elements on the web platform, notably:
-    XMLHttpRequest [[XHR]] (using the
@@ -48,207 +38,157 @@ Markup Shorthands: markdown on, dfn on, css off
 spec: ECMAScript; url: https://tc39.github.io/ecma262/#sec-data-blocks; type: dfn; text: data block;
 url: https://www.w3.org/TR/mediacapture-streams/#dom-mediadevices-getusermedia; type: method; for: MediaDevices; text: getUserMedia()
-    `responseType` and `response` attributes).
-    For games and interactive applications,
-    it is anticipated to be used with the `canvas` 2D [[2dcontext]]
-    and WebGL [[WEBGL]] 3D graphics APIs.
+    for processing and synthesizing audio in web applications.
+    The primary paradigm is of an audio routing graph,
+    where a number of {{AudioNode}} objects are connected together to define the overall audio rendering.
+    The actual processing will primarily take place in the underlying implementation
+    (typically optimized Assembly / C / C++ code),
+    but [[#audioworklet|direct script processing and synthesis]] is also supported.
+
+    The [[#introductory]] section covers the motivation behind this specification.
+
+    This API is designed to be used in conjunction with other APIs and elements on the web platform, notably:
+    XMLHttpRequest [[XHR]] (using the `responseType` and `response` attributes).
+    For games and interactive applications,
+    it is anticipated to be used with the `canvas` 2D [[2dcontext]]
+    and WebGL [[WEBGL]] 3D graphics APIs.
 Markup Shorthands: markdown on, dfn on, css off
-spec:webidl; type:interface; text:object
-spec:webidl; type:interface; text:Promise
@@ -289,68 +229,68 @@ Features
 The API supports these primary features:

 * [[#ModularRouting|Modular routing]] for simple or complex
-    mixing/effect architectures.
+    mixing/effect architectures.
 * High dynamic range, using 32-bit floats for internal processing.
 * [[#AudioParam|Sample-accurate scheduled sound playback]]
-    with low [[#latency|latency]] for musical applications
-    requiring a very high degree of rhythmic precision such as drum
-    machines and sequencers. This also includes the possibility of
-    [[#DynamicLifetime|dynamic creation]] of effects.
+    with low [[#latency|latency]] for musical applications
+    requiring a very high degree of rhythmic precision such as drum
+    machines and sequencers. This also includes the possibility of
+    [[#DynamicLifetime|dynamic creation]] of effects.
 * Automation of audio parameters for envelopes, fade-ins /
-    fade-outs, granular effects, filter sweeps, LFOs etc.
+    fade-outs, granular effects, filter sweeps, LFOs etc.
 * Flexible handling of channels in an audio stream, allowing them
-    to be split and merged.
+    to be split and merged.
 * Processing of audio sources from an <{audio}> or <{video}>
-    {{MediaElementAudioSourceNode|media element}}.
+    {{MediaElementAudioSourceNode|media element}}.
 * Processing live audio input using a {{MediaStreamTrackAudioSourceNode|MediaStream}} from
-    {{getUserMedia()}}.
+    {{getUserMedia()}}.
 * Integration with WebRTC
-    * Processing audio received from a remote peer using a
-      {{MediaStreamTrackAudioSourceNode}} and
-      [[!webrtc]].
+    * Processing audio received from a remote peer using a
+      {{MediaStreamTrackAudioSourceNode}} and
+      [[!webrtc]].
-    * Sending a generated or processed audio stream to a remote
-      peer using a {{MediaStreamAudioDestinationNode}}
-      and [[!webrtc]].
+    * Sending a generated or processed audio stream to a remote
+      peer using a {{MediaStreamAudioDestinationNode}}
+      and [[!webrtc]].
-* Audio stream synthesis and processing [[#AudioWorklet|directly using scripts]].
+* Audio stream synthesis and processing [[#audioworklet|directly using scripts]].
 * [[#Spatialization|Spatialized audio]] supporting a wide
-    range of 3D games and immersive environments:
+    range of 3D games and immersive environments:

-    * Panning models: equalpower, HRTF, pass-through
-    * Distance Attenuation
-    * Sound Cones
-    * Obstruction / Occlusion
-    * Source / Listener based
+    * Panning models: equalpower, HRTF, pass-through
+    * Distance Attenuation
+    * Sound Cones
+    * Obstruction / Occlusion
+    * Source / Listener based
 * A convolution engine for a wide
-    range of linear effects, especially very high-quality room effects.
-    Here are some examples of possible effects:
-
-    * Small / large room
-    * Cathedral
-    * Concert hall
-    * Cave
-    * Tunnel
-    * Hallway
-    * Forest
-    * Amphitheater
-    * Sound of a distant room through a doorway
-    * Extreme filters
-    * Strange backwards effects
-    * Extreme comb filter effects
+    range of linear effects, especially very high-quality room effects.
+    Here are some examples of possible effects:
+
+    * Small / large room
+    * Cathedral
+    * Concert hall
+    * Cave
+    * Tunnel
+    * Hallway
+    * Forest
+    * Amphitheater
+    * Sound of a distant room through a doorway
+    * Extreme filters
+    * Strange backwards effects
+    * Extreme comb filter effects
 * Dynamics compression for overall control and sweetening of the mix
-* Efficient [[#AnalyserNode|real-time time-domain and frequency-domain analysis / music visualizer support]].
+* Efficient [[#analysernode|real-time time-domain and frequency-domain analysis / music visualizer support]].
 * Efficient biquad filters for lowpass, highpass, and other common filters.
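Sample-accurate scheduling, listed above, works by computing event times in the context's time coordinate system and handing them to methods such as `start()`. A minimal sketch of the time arithmetic a drum machine might use (the `nextBeatTimes` helper and its parameters are illustrative, not part of the API):

```javascript
// Start times, in seconds of context time, for the next `count` beats
// at `bpm`, beginning at or after `now` (a value read from
// AudioContext.currentTime). Aligning to the beat grid keeps playback
// rhythmically exact regardless of when this code happens to run.
function nextBeatTimes(now, bpm, count) {
  const secondsPerBeat = 60 / bpm;
  // Snap forward to the next beat boundary.
  const firstBeat = Math.ceil(now / secondsPerBeat) * secondsPerBeat;
  const times = [];
  for (let i = 0; i < count; i++) {
    times.push(firstBeat + i * secondsPerBeat);
  }
  return times;
}
```

Each returned time could then be passed to `source.start(t)`, so playback lands exactly on the grid independently of main-thread timing jitter.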
@@ -379,10 +319,10 @@ All routing occurs within an {{AudioContext}} containing a single {{AudioDestinationNode}}: Illustrating this simple routing, here's a simple example playing a single sound: @@ -391,10 +331,10 @@ Illustrating this simple routing, here's a simple example playing a single sound const context = new AudioContext(); function playSound() { - const source = context.createBufferSource(); - source.buffer = dogBarkingBuffer; - source.connect(context.destination); - source.start(0); + const source = context.createBufferSource(); + source.buffer = dogBarkingBuffer; + source.connect(context.destination); + source.start(0); } @@ -402,10 +342,10 @@ Here's a more complex example with three sources and a convolution reverb send with a dynamics compressor at the final output stage:
@@ -426,69 +366,69 @@ let mainDry; let mainWet; function setupRoutingGraph () { - context = new AudioContext(); - - // Create the effects nodes. - lowpassFilter = context.createBiquadFilter(); - waveShaper = context.createWaveShaper(); - panner = context.createPanner(); - compressor = context.createDynamicsCompressor(); - reverb = context.createConvolver(); - - // Create main wet and dry. - mainDry = context.createGain(); - mainWet = context.createGain(); - - // Connect final compressor to final destination. - compressor.connect(context.destination); - - // Connect main dry and wet to compressor. - mainDry.connect(compressor); - mainWet.connect(compressor); - - // Connect reverb to main wet. - reverb.connect(mainWet); - - // Create a few sources. - source1 = context.createBufferSource(); - source2 = context.createBufferSource(); - source3 = context.createOscillator(); - - source1.buffer = manTalkingBuffer; - source2.buffer = footstepsBuffer; - source3.frequency.value = 440; - - // Connect source1 - dry1 = context.createGain(); - wet1 = context.createGain(); - source1.connect(lowpassFilter); - lowpassFilter.connect(dry1); - lowpassFilter.connect(wet1); - dry1.connect(mainDry); - wet1.connect(reverb); - - // Connect source2 - dry2 = context.createGain(); - wet2 = context.createGain(); - source2.connect(waveShaper); - waveShaper.connect(dry2); - waveShaper.connect(wet2); - dry2.connect(mainDry); - wet2.connect(reverb); - - // Connect source3 - dry3 = context.createGain(); - wet3 = context.createGain(); - source3.connect(panner); - panner.connect(dry3); - panner.connect(wet3); - dry3.connect(mainDry); - wet3.connect(reverb); - - // Start the sources now. - source1.start(0); - source2.start(0); - source3.start(0); + context = new AudioContext(); + + // Create the effects nodes. 
+ lowpassFilter = context.createBiquadFilter(); + waveShaper = context.createWaveShaper(); + panner = context.createPanner(); + compressor = context.createDynamicsCompressor(); + reverb = context.createConvolver(); + + // Create main wet and dry. + mainDry = context.createGain(); + mainWet = context.createGain(); + + // Connect final compressor to final destination. + compressor.connect(context.destination); + + // Connect main dry and wet to compressor. + mainDry.connect(compressor); + mainWet.connect(compressor); + + // Connect reverb to main wet. + reverb.connect(mainWet); + + // Create a few sources. + source1 = context.createBufferSource(); + source2 = context.createBufferSource(); + source3 = context.createOscillator(); + + source1.buffer = manTalkingBuffer; + source2.buffer = footstepsBuffer; + source3.frequency.value = 440; + + // Connect source1 + dry1 = context.createGain(); + wet1 = context.createGain(); + source1.connect(lowpassFilter); + lowpassFilter.connect(dry1); + lowpassFilter.connect(wet1); + dry1.connect(mainDry); + wet1.connect(reverb); + + // Connect source2 + dry2 = context.createGain(); + wet2 = context.createGain(); + source2.connect(waveShaper); + waveShaper.connect(dry2); + waveShaper.connect(wet2); + dry2.connect(mainDry); + wet2.connect(reverb); + + // Connect source3 + dry3 = context.createGain(); + wet3 = context.createGain(); + source3.connect(panner); + panner.connect(dry3); + panner.connect(wet3); + dry3.connect(mainDry); + wet3.connect(reverb); + + // Start the sources now. + source1.start(0); + source2.start(0); + source3.start(0); }@@ -500,36 +440,36 @@ output of a node can act as a modulation signal rather than an input signal.
function setupRoutingGraph() { - const context = new AudioContext(); - - // Create the low frequency oscillator that supplies the modulation signal - const lfo = context.createOscillator(); - lfo.frequency.value = 1.0; - - // Create the high frequency oscillator to be modulated - const hfo = context.createOscillator(); - hfo.frequency.value = 440.0; - - // Create a gain node whose gain determines the amplitude of the modulation signal - const modulationGain = context.createGain(); - modulationGain.gain.value = 50; - - // Configure the graph and start the oscillators - lfo.connect(modulationGain); - modulationGain.connect(hfo.detune); - hfo.connect(context.destination); - hfo.start(0); - lfo.start(0); + const context = new AudioContext(); + + // Create the low frequency oscillator that supplies the modulation signal + const lfo = context.createOscillator(); + lfo.frequency.value = 1.0; + + // Create the high frequency oscillator to be modulated + const hfo = context.createOscillator(); + hfo.frequency.value = 440.0; + + // Create a gain node whose gain determines the amplitude of the modulation signal + const modulationGain = context.createGain(); + modulationGain.gain.value = 50; + + // Configure the graph and start the oscillators + lfo.connect(modulationGain); + modulationGain.connect(hfo.detune); + hfo.connect(context.destination); + hfo.start(0); + lfo.start(0); }@@ -539,139 +479,139 @@ API Overview The interfaces defined are: * An AudioContext - interface, which contains an audio signal graph representing - connections between {{AudioNode}}s. + interface, which contains an audio signal graph representing + connections between {{AudioNode}}s. * An {{AudioNode}} interface, which represents - audio sources, audio outputs, and intermediate processing modules. - {{AudioNode}}s can be dynamically connected together - in a [[#ModularRouting|modular fashion]]. - {{AudioNode}}s exist in the context of an - {{AudioContext}}. 
+ audio sources, audio outputs, and intermediate processing modules. + {{AudioNode}}s can be dynamically connected together + in a [[#ModularRouting|modular fashion]]. + {{AudioNode}}s exist in the context of an + {{AudioContext}}. * An {{AnalyserNode}} interface, an - {{AudioNode}} for use with music visualizers, or - other visualization applications. + {{AudioNode}} for use with music visualizers, or + other visualization applications. * An {{AudioBuffer}} interface, for working with - memory-resident audio assets. These can represent one-shot sounds, or - longer audio clips. + memory-resident audio assets. These can represent one-shot sounds, or + longer audio clips. * An {{AudioBufferSourceNode}} interface, an - {{AudioNode}} which generates audio from an - AudioBuffer. + {{AudioNode}} which generates audio from an + AudioBuffer. * An {{AudioDestinationNode}} interface, an - {{AudioNode}} subclass representing the final - destination for all rendered audio. + {{AudioNode}} subclass representing the final + destination for all rendered audio. * An {{AudioParam}} interface, for controlling an - individual aspect of an {{AudioNode}}'s functioning, - such as volume. + individual aspect of an {{AudioNode}}'s functioning, + such as volume. * An {{AudioListener}} interface, which works with - a {{PannerNode}} for spatialization. + a {{PannerNode}} for spatialization. * An {{AudioWorklet}} interface representing a - factory for creating custom nodes that can process audio directly - using scripts. + factory for creating custom nodes that can process audio directly + using scripts. * An {{AudioWorkletGlobalScope}} interface, the - context in which AudioWorkletProcessor processing scripts run. + context in which AudioWorkletProcessor processing scripts run. * An {{AudioWorkletNode}} interface, an - {{AudioNode}} representing a node processed in an - AudioWorkletProcessor. + {{AudioNode}} representing a node processed in an + AudioWorkletProcessor. 
* An {{AudioWorkletProcessor}} interface, - representing a single node instance inside an audio worker. + representing a single node instance inside an audio worker. * A {{BiquadFilterNode}} interface, an - {{AudioNode}} for common low-order filters such as: + {{AudioNode}} for common low-order filters such as: - * Low Pass - * High Pass - * Band Pass - * Low Shelf - * High Shelf - * Peaking - * Notch - * Allpass + * Low Pass + * High Pass + * Band Pass + * Low Shelf + * High Shelf + * Peaking + * Notch + * Allpass * A {{ChannelMergerNode}} interface, an - {{AudioNode}} for combining channels from multiple - audio streams into a single audio stream. + {{AudioNode}} for combining channels from multiple + audio streams into a single audio stream. * A {{ChannelSplitterNode}} interface, an {{AudioNode}} for accessing the individual channels of an - audio stream in the routing graph. + audio stream in the routing graph. * A {{ConstantSourceNode}} interface, an - {{AudioNode}} for generating a nominally constant output value - with an {{AudioParam}} to allow automation of the value. + {{AudioNode}} for generating a nominally constant output value + with an {{AudioParam}} to allow automation of the value. * A {{ConvolverNode}} interface, an - {{AudioNode}} for applying a - real-time linear effect (such as the sound of - a concert hall). + {{AudioNode}} for applying a + real-time linear effect (such as the sound of + a concert hall). * A {{DelayNode}} interface, an - {{AudioNode}} which applies a dynamically adjustable - variable delay. + {{AudioNode}} which applies a dynamically adjustable + variable delay. * A {{DynamicsCompressorNode}} interface, an - {{AudioNode}} for dynamics compression. + {{AudioNode}} for dynamics compression. * A {{GainNode}} interface, an - {{AudioNode}} for explicit gain control. + {{AudioNode}} for explicit gain control. * An {{IIRFilterNode}} interface, an - {{AudioNode}} for a general IIR filter. + {{AudioNode}} for a general IIR filter. 
* A {{MediaElementAudioSourceNode}} interface, an - {{AudioNode}} which is the audio source from an - <{audio}>, <{video}>, or other media element. + {{AudioNode}} which is the audio source from an + <{audio}>, <{video}>, or other media element. * A {{MediaStreamAudioSourceNode}} interface, an - {{AudioNode}} which is the audio source from a - {{MediaStream}} such as live audio input, or from a remote peer. + {{AudioNode}} which is the audio source from a + {{MediaStream}} such as live audio input, or from a remote peer. * A {{MediaStreamTrackAudioSourceNode}} interface, - an {{AudioNode}} which is the audio source from a - {{MediaStreamTrack}}. + an {{AudioNode}} which is the audio source from a + {{MediaStreamTrack}}. * A {{MediaStreamAudioDestinationNode}} interface, - an {{AudioNode}} which is the audio destination to a - {{MediaStream}} sent to a remote peer. + an {{AudioNode}} which is the audio destination to a + {{MediaStream}} sent to a remote peer. * A {{PannerNode}} interface, an - {{AudioNode}} for spatializing / positioning audio in - 3D space. + {{AudioNode}} for spatializing / positioning audio in + 3D space. * A {{PeriodicWave}} interface for specifying - custom periodic waveforms for use by the - {{OscillatorNode}}. + custom periodic waveforms for use by the + {{OscillatorNode}}. * An {{OscillatorNode}} interface, an - {{AudioNode}} for generating a periodic waveform. + {{AudioNode}} for generating a periodic waveform. * A {{StereoPannerNode}} interface, an - {{AudioNode}} for equal-power positioning of audio - input in a stereo stream. + {{AudioNode}} for equal-power positioning of audio + input in a stereo stream. * A {{WaveShaperNode}} interface, an - {{AudioNode}} which applies a non-linear waveshaping - effect for distortion and other more subtle warming effects. + {{AudioNode}} which applies a non-linear waveshaping + effect for distortion and other more subtle warming effects. 
There are also several features that have been deprecated from the Web Audio API but not yet removed, pending implementation experience of their replacements: * A {{ScriptProcessorNode}} interface, an {{AudioNode}} for generating or processing audio directly - using scripts. + using scripts. * An {{AudioProcessingEvent}} interface, which is - an event type used with {{ScriptProcessorNode}} - objects. + an event type used with {{ScriptProcessorNode}} + objects.
"suspended"
-, and a private slot
-[[render quantum size]] that is an unsigned integer.

 enum AudioContextState {
-    "suspended",
-    "running",
-    "closed"
+    "suspended",
+    "running",
+    "closed"
 };
-Enum value | Description
----|---
-"suspended" | This context is currently suspended (context time is not proceeding, audio hardware may be powered down/released).
-"running" | Audio is being processed.
-"closed" | This context has been released, and can no longer be used to process audio. All system audio resources have been released.

-enum AudioContextRenderSizeCategory {
-    "default",
-    "hardware"
-};

-Enumeration description
----|---
-"default" | The AudioContext's render quantum size is the default value of 128 frames.
-"hardware" | The User-Agent picks a render quantum size that is best for the current configuration.
-    Note: This exposes information about the host and can be used for fingerprinting.

+Enumeration description
+---|---
+"suspended" | This context is currently suspended (context time is not proceeding, audio hardware may be powered down/released).
+"running" | Audio is being processed.
+"closed" | This context has been released, and can no longer be used to process audio. All system audio resources have been released.
Worklet
object that can import
- a script containing {{AudioWorkletProcessor}}
- class definitions via the algorithms defined by [[!HTML]]
- and {{AudioWorklet}}.
-
- : currentTime
- ::
- This is the time in seconds of the sample frame immediately
- following the last sample-frame in the block of audio most
- recently processed by the context's rendering graph. If the
- context's rendering graph has not yet processed a block of
- audio, then {{BaseAudioContext/currentTime}} has a value of
- zero.
-
- In the time coordinate system of {{BaseAudioContext/currentTime}}, the value of
- zero corresponds to the first sample-frame in the first block
- processed by the graph. Elapsed time in this system corresponds
- to elapsed time in the audio stream generated by the
- {{BaseAudioContext}}, which may not be
- synchronized with other clocks in the system. (For an
- {{OfflineAudioContext}}, since the stream is
- not being actively played by any device, there is not even an
- approximation to real time.)
-
- All scheduled times in the Web Audio API are relative to the
- value of {{BaseAudioContext/currentTime}}.
-
- When the {{BaseAudioContext}} is in the
- "{{AudioContextState/running}}" state, the
- value of this attribute is monotonically increasing and is
- updated by the rendering thread in uniform increments,
- corresponding to one render quantum. Thus, for a running
- context, currentTime
increases steadily as the
- system processes audio blocks, and always represents the time
- of the start of the next audio block to be processed. It is
- also the earliest possible time when any change scheduled in
- the current state might take effect.
-
- currentTime
MUST be read atomically on the control thread before being
- returned.
-
- : destination
- ::
- An {{AudioDestinationNode}}
- with a single input representing the final destination for all
- audio. Usually this will represent the actual audio hardware.
- All {{AudioNode}}s actively rendering audio
- will directly or indirectly connect to {{BaseAudioContext/destination}}.
-
- : listener
- ::
- An {{AudioListener}}
- which is used for 3D spatialization.
-
- : onstatechange
- ::
- A property used to set an [=event handler=] for an
- event that is dispatched to
- {{BaseAudioContext}} when the state of the
- AudioContext has changed (i.e. when the corresponding promise
- would have resolved). The event type of this event handler is
- statechange. An event that uses the
- {{Event}} interface will be dispatched to the event
- handler, which can query the AudioContext's state directly. A
- newly-created AudioContext will always begin in the
- suspended
state, and a state change event will be
- fired whenever the state changes to a different state. This
- event is fired before the {{complete}} event
- is fired.
-
- : sampleRate
- ::
- The sample rate (in sample-frames per second) at which the
- {{BaseAudioContext}} handles audio. It is assumed that all
- {{AudioNode}}s in the context run at this rate. In making this
- assumption, sample-rate converters or "varispeed" processors are
- not supported in real-time processing.
- The Nyquist frequency is half this sample-rate value.
-
- : state
- ::
- Describes the current state of the {{BaseAudioContext}}. Getting this
- attribute returns the contents of the {{[[control thread state]]}} slot.
-
- : renderQuantumSize
- ::
- Getting this attribute returns the value of {{BaseAudioContext/[[render
- quantum size]]}} slot.
+ : audioWorklet
+ ::
+ Allows access to the Worklet
object that can import
+ a script containing {{AudioWorkletProcessor}}
+ class definitions via the algorithms defined by [[!worklets-1]]
+ and {{AudioWorklet}}.
+
+ : currentTime
+ ::
+ This is the time in seconds of the sample frame immediately
+ following the last sample-frame in the block of audio most
+ recently processed by the context's rendering graph. If the
+ context's rendering graph has not yet processed a block of
+ audio, then {{BaseAudioContext/currentTime}} has a value of
+ zero.
+
+ In the time coordinate system of {{BaseAudioContext/currentTime}}, the value of
+ zero corresponds to the first sample-frame in the first block
+ processed by the graph. Elapsed time in this system corresponds
+ to elapsed time in the audio stream generated by the
+ {{BaseAudioContext}}, which may not be
+ synchronized with other clocks in the system. (For an
+ {{OfflineAudioContext}}, since the stream is
+ not being actively played by any device, there is not even an
+ approximation to real time.)
+
+ All scheduled times in the Web Audio API are relative to the
+ value of {{BaseAudioContext/currentTime}}.
+
+ When the {{BaseAudioContext}} is in the
+ "{{AudioContextState/running}}" state, the
+ value of this attribute is monotonically increasing and is
+ updated by the rendering thread in uniform increments,
+ corresponding to one render quantum. Thus, for a running
+ context, currentTime
increases steadily as the
+ system processes audio blocks, and always represents the time
+ of the start of the next audio block to be processed. It is
+ also the earliest possible time when any change scheduled in
+ the current state might take effect.
+
+ currentTime
MUST be read atomically on the control thread before being
+ returned.
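The relationship described above, with currentTime advancing in uniform increments of one render quantum, is plain arithmetic. A sketch (the function name is illustrative; 128 frames is the default render quantum size stated elsewhere in this specification):

```javascript
// Value of currentTime after `quanta` render quanta have been
// processed by a context running at `sampleRate`, assuming a render
// quantum of `renderQuantumSize` frames (128 by default).
function currentTimeAfter(quanta, sampleRate, renderQuantumSize = 128) {
  return (quanta * renderQuantumSize) / sampleRate;
}
```

For example, a 48000 Hz context advances currentTime by exactly one second every 375 render quanta (375 × 128 = 48000 sample-frames).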
+
+ : destination
+ ::
+ An {{AudioDestinationNode}}
+ with a single input representing the final destination for all
+ audio. Usually this will represent the actual audio hardware.
+ All {{AudioNode}}s actively rendering audio
+ will directly or indirectly connect to {{BaseAudioContext/destination}}.
+
+ : listener
+ ::
+ An {{AudioListener}}
+ which is used for 3D spatialization.
+
+ : onstatechange
+ ::
+ A property used to set the EventHandler
for an
+ event that is dispatched to
+ {{BaseAudioContext}} when the state of the
+ AudioContext has changed (i.e. when the corresponding promise
+ would have resolved). An event of type
+ {{Event}} will be dispatched to the event
+ handler, which can query the AudioContext's state directly. A
+ newly-created AudioContext will always begin in the
+ suspended
state, and a state change event will be
+ fired whenever the state changes to a different state. This
+ event is fired before the {{complete}} event
+ is fired.
+
+ : sampleRate
+ ::
+ The sample rate (in sample-frames per second) at which the
+ {{BaseAudioContext}} handles audio. It is assumed that all
+ {{AudioNode}}s in the context run at this rate. In making this
+ assumption, sample-rate converters or "varispeed" processors are
+ not supported in real-time processing.
+ The Nyquist frequency is half this sample-rate value.
+
+ : state
+ ::
+ Describes the current state of the {{BaseAudioContext}}. Its value is identical
+ to the control thread state.
- numberOfChannels: Determines how many channels the buffer will have. An implementation MUST support at least 32 channels. - length: Determines the size of the buffer in sample-frames. This MUST be at least 1. - sampleRate: Describes the sample-rate of the [=linear PCM=] audio data in the buffer in sample-frames per second. An implementation MUST support sample rates in at least the range 8000 to 96000. --
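The constraints above translate directly into a precondition check. A hypothetical validator for the guaranteed minimum ranges (`checkAudioBufferOptions` is not part of the API; real implementations may accept wider ranges than the minimums checked here):

```javascript
// Validate AudioBufferOptions against the minimum ranges an
// implementation MUST support: 1-32 channels, length >= 1, and a
// sample rate of 8000-96000 Hz. Implementations may support more.
function checkAudioBufferOptions({ numberOfChannels, length, sampleRate }) {
  if (!(numberOfChannels >= 1 && numberOfChannels <= 32)) {
    throw new RangeError("numberOfChannels outside guaranteed 1-32 range");
  }
  if (!(length >= 1)) {
    throw new RangeError("length must be at least 1 sample-frame");
  }
  if (!(sampleRate >= 8000 && sampleRate <= 96000)) {
    throw new RangeError("sampleRate outside guaranteed 8000-96000 range");
  }
}
```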
- numberOfInputs: Determines the number of inputs. Values of up to 32 MUST be supported. If not specified, then `6` will be used. -- -
- numberOfOutputs: The number of outputs. Values of up to 32 MUST be supported. If not specified, then `6` will be used. -- -
- maxDelayTime: Specifies the maximum delay time in seconds allowed for the delay line. If specified, this value MUST be greater than zero and less than three minutes or a {{NotSupportedError}} exception MUST be thrown. If not specified, then `1` will be used.
-
-
- - feedforward: An array of the feedforward (numerator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20. If all of the values are zero, an {{InvalidStateError}} MUST be thrown. A {{NotSupportedError}} MUST be thrown if the array length is 0 or greater than 20. - feedback: An array of the feedback (denominator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20. If the first element of the array is 0, an {{InvalidStateError}} MUST be thrown. A {{NotSupportedError}} MUST be thrown if the array length is 0 or greater than 20. -- -
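The feedforward (numerator) and feedback (denominator) coefficients above define the standard IIR difference equation a[0]·y[n] = Σ b[k]·x[n−k] − Σ<sub>k≥1</sub> a[k]·y[n−k]. A pure-JavaScript sketch of that equation (direct form I; illustrative, not the normative processing model):

```javascript
// Apply an IIR filter with feedforward coefficients `b` and feedback
// coefficients `a` to `input`, sample by sample:
//   a[0] y[n] = sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k]
function iirFilter(input, b, a) {
  const out = new Float32Array(input.length);
  for (let n = 0; n < input.length; n++) {
    let acc = 0;
    for (let k = 0; k < b.length; k++) {
      if (n - k >= 0) acc += b[k] * input[n - k];
    }
    for (let k = 1; k < a.length; k++) {
      if (n - k >= 0) acc -= a[k] * out[n - k];
    }
    out[n] = acc / a[0]; // normalize by the leading feedback term
  }
  return out;
}
```

This also makes the stated error conditions intuitive: a zero first feedback element would divide by zero, and all-zero feedforward coefficients would produce a filter with no output.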
constraints
attribute passed to the factory
- method.
-
- 5. Construct a new {{PeriodicWave}}
- p, passing the {{BaseAudioContext}} this factory
- method has been called on as a first argument, and
- o.
- 6. Return p.
-
- real: A sequence of cosine parameters. See its {{PeriodicWaveOptions/real}} constructor argument for a more detailed description.
- imag: A sequence of sine parameters. See its {{PeriodicWaveOptions/imag}} constructor argument for a more detailed description.
- constraints: If not given, the waveform is normalized. Otherwise, the waveform is normalized according the value given by constraints
.
-
-
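The real (cosine) and imag (sine) sequences above define a periodic waveform as a Fourier series. A sketch of the un-normalized synthesis (illustrative only; the node performs this internally, and index 0 of each array, the DC term, is ignored):

```javascript
// One sample of the waveform defined by Fourier coefficients `real`
// (cosine terms) and `imag` (sine terms), at phase t in [0, 1) over
// one period. Index 0 is skipped, matching PeriodicWave.
function periodicWaveSample(real, imag, t) {
  let x = 0;
  for (let k = 1; k < real.length; k++) {
    x += real[k] * Math.cos(2 * Math.PI * k * t) +
         imag[k] * Math.sin(2 * Math.PI * k * t);
  }
  return x;
}
```

With `real = [0, 0]` and `imag = [0, 1]` this reduces to a plain sine wave, the same waveform an {{OscillatorNode}} produces by default.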
- - bufferSize: The {{ScriptProcessorNode/bufferSize}} parameter determines the buffer size in units of sample-frames. If it's not passed in, or if the value is 0, then the implementation will choose the best buffer size for the given environment, which will be constant power of 2 throughout the lifetime of the node. Otherwise if the author explicitly specifies the bufferSize, it MUST be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. This value controls how frequently the {{ScriptProcessorNode/audioprocess}} event is dispatched and how many sample-frames need to be processed each call. Lower values for {{ScriptProcessorNode/bufferSize}} will result in a lower (better) latency. Higher values will be necessary to avoid audio breakup and glitches. It is recommended for authors to not specify this buffer size and allow the implementation to pick a good buffer size to balance between latency and audio quality. If the value of this parameter is not one of the allowed power-of-2 values listed above, an {{IndexSizeError}} MUST be thrown. - numberOfInputChannels: This parameter determines the number of channels for this node's input. The default value is 2. Values of up to 32 must be supported. A {{NotSupportedError}} must be thrown if the number of channels is not supported. - numberOfOutputChannels: This parameter determines the number of channels for this node's output. The default value is 2. Values of up to 32 must be supported. A {{NotSupportedError}} must be thrown if the number of channels is not supported. -- -
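The bufferSize rule above (zero to let the implementation choose, otherwise a power of two from 256 to 16384) can be expressed compactly; the helper name is hypothetical:

```javascript
// True if `bufferSize` is acceptable for createScriptProcessor():
// 0 (implementation chooses) or a power of two in [256, 16384].
// Anything else must raise an IndexSizeError.
function isValidScriptProcessorBufferSize(bufferSize) {
  if (bufferSize === 0) return true;
  const isPowerOfTwo = Number.isInteger(bufferSize) &&
      bufferSize > 0 && (bufferSize & (bufferSize - 1)) === 0;
  return isPowerOfTwo && bufferSize >= 256 && bufferSize <= 16384;
}
```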
XMLHttpRequest
's
- response
attribute after setting the
- responseType
to "arraybuffer"
. Audio
- file data can be in any of the formats supported by the
- <{audio}> element. The buffer passed to
- {{BaseAudioContext/decodeAudioData()}} has its
- content-type determined by sniffing, as described in
- [[!mimesniff]].
-
- Although the primary method of interfacing with this function
- is via its promise return value, the callback parameters are
- provided for legacy reasons.
-
- Encourage implementation to warn authors in case of a corrupted file. It
- isn't possible to throw because this would be a breaking change.
-
- decodeAudioData
is
- called, the following steps MUST be performed on the control
- thread:
-
- 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}.
-
- 2. Let promise be a new Promise.
-
- 3. If {{BaseAudioContext/decodeAudioData(audioData, successCallback, errorCallback)/audioData!!argument}}
- is [=BufferSource/detached=], execute the following steps:
-
- 1. Append promise to {{BaseAudioContext/[[pending promises]]}}.
-
- 2. [=ArrayBuffer/Detach=]
- the {{BaseAudioContext/decodeAudioData(audioData, successCallback, errorCallback)/audioData!!argument}} {{ArrayBuffer}}.
- If this operations throws, jump to the step 3.
-
- 3. Queue a decoding operation to be performed on another thread.
-
- 4. Else, execute the following error steps:
-
- 1. Let error be a {{DataCloneError}}.
- 2. Reject promise with error, and remove it from
- {{BaseAudioContext/[[pending promises]]}}.
-
- 3.
- Queue a media element task to invoke
- {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} with |error|.
-
- 5. Return promise.
- decodeAudioData
.
-
- 1. Let can decode be a boolean flag, initially set to true.
-
- 2. Attempt to determine the MIME type of
- {{BaseAudioContext/decodeAudioData(audioData, successCallback,
- errorCallback)/audioData!!argument}}, using
- [[mimesniff#matching-an-audio-or-video-type-pattern]]. If the audio or
- video type pattern matching algorithm returns {{undefined}},
- set can decode to false.
-
- 3. If can decode is true, attempt to decode the encoded
- {{BaseAudioContext/decodeAudioData(audioData, successCallback,
- errorCallback)/audioData!!argument}} into [=linear PCM=]. In case of
- failure, set can decode to false.
-
- If the media byte-stream contains multiple audio tracks, only decode the
- first track to [=linear pcm=].
-
- Note: Authors who need more control over the decoding process can use
- [[WEBCODECS]].
-
- 4. If |can decode| is `false`,
-
- queue a media element task to execute the following steps:
-
- 1. Let error be a DOMException
- whose name is {{EncodingError}}.
-
- 2. Reject promise with error, and remove it from
- {{BaseAudioContext/[[pending promises]]}}.
-
- 3. If {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} is
- not missing, invoke
- {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} with
- error.
-
- 5. Otherwise:
- 1. Take the result, representing the decoded [=linear PCM=]
- audio data, and resample it to the sample-rate of the
- {{BaseAudioContext}} if it is different from
- the sample-rate of {{BaseAudioContext/decodeAudioData(audioData,
- successCallback, errorCallback)/audioData!!argument}}.
-
- 2.
- queue a media element task to execute the following steps:
+ : createAnalyser()
+ ::
+ Factory method for an {{AnalyserNode}}.
+ + numberOfChannels: Determines how many channels the buffer will have. An implementation MUST support at least 32 channels. + length: Determines the size of the buffer in sample-frames. This MUST be at + least 1. + sampleRate: Describes the sample-rate of the [=linear PCM=] audio data in the buffer in sample-frames per second. An implementation MUST support sample rates in at least the range 8000 to 96000. ++
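The argument constraints above can be illustrated with a small script. This is a hypothetical validator, not part of the API; the function name and the exact exception types are assumptions for illustration only (a real implementation throws {{NotSupportedError}} DOMExceptions).

```javascript
// Hypothetical sketch of the createBuffer() argument checks described
// above; not the normative algorithm.
function validateBufferOptions({ numberOfChannels, length, sampleRate }) {
  // An implementation MUST support at least 32 channels.
  if (!Number.isInteger(numberOfChannels) ||
      numberOfChannels < 1 || numberOfChannels > 32) {
    throw new RangeError("NotSupportedError: unsupported channel count");
  }
  // The buffer MUST hold at least one sample-frame.
  if (!Number.isInteger(length) || length < 1) {
    throw new RangeError("NotSupportedError: length must be at least 1");
  }
  // Sample rates in at least the range 8000..96000 MUST be supported.
  if (!(sampleRate >= 8000 && sampleRate <= 96000)) {
    throw new RangeError("NotSupportedError: sample rate out of range");
  }
  return { numberOfChannels, length, sampleRate };
}
```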
+ numberOfInputs: Determines the number of inputs. Values of up to 32 MUST be supported. If not specified, then `6` will be used. ++ +
+ numberOfOutputs: The number of outputs. Values of up to 32 MUST be supported. If not specified, then `6` will be used. ++ +
+ maxDelayTime: Specifies the maximum delay time in seconds allowed for the delay line. If specified, this value MUST be greater than zero and less than three minutes or a {{NotSupportedError}} exception MUST be thrown. If not specified, then `1` will be used.
+
+
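The maxDelayTime constraint above (greater than zero and less than three minutes, with a default of 1) can be sketched as follows; the helper name is hypothetical and the exception type is a stand-in for a {{NotSupportedError}} DOMException.

```javascript
// Hypothetical check mirroring the createDelay() maxDelayTime rule:
// the value must be > 0 and < 180 seconds; 1 is used when omitted.
function checkMaxDelayTime(maxDelayTime = 1) {
  if (!(maxDelayTime > 0 && maxDelayTime < 180)) {
    throw new RangeError("NotSupportedError: maxDelayTime out of range");
  }
  return maxDelayTime;
}
```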
+ + feedforward: An array of the feedforward (numerator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20. If all of the values are zero, an {{InvalidStateError}} MUST be thrown. A {{NotSupportedError}} MUST be thrown if the array length is 0 or greater than 20. + feedback: An array of the feedback (denominator) coefficients for the transfer function of the IIR filter. The maximum length of this array is 20. If the first element of the array is 0, an {{InvalidStateError}} MUST be thrown. A {{NotSupportedError}} MUST be thrown if the array length is 0 or greater than 20. ++ +
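The coefficient-array constraints above can be sketched as a plain function. This is an illustrative check only; the function name is an assumption, and plain `Error`/`RangeError` objects stand in for the {{InvalidStateError}} and {{NotSupportedError}} DOMExceptions named above.

```javascript
// Hypothetical validation of createIIRFilter() coefficient arrays:
// each array has 1..20 entries, feedforward must not be all zeros,
// and the first feedback coefficient must not be zero.
function checkIIRCoefficients(feedforward, feedback) {
  for (const coefficients of [feedforward, feedback]) {
    if (coefficients.length === 0 || coefficients.length > 20) {
      throw new RangeError("NotSupportedError: array length must be 1..20");
    }
  }
  if (feedforward.every((c) => c === 0)) {
    throw new Error("InvalidStateError: all feedforward coefficients are zero");
  }
  if (feedback[0] === 0) {
    throw new Error("InvalidStateError: first feedback coefficient is zero");
  }
}
```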
constraints
attribute passed to the factory
+ method.
+
+ 5. Construct a new {{PeriodicWave}}
+ p, passing the {{BaseAudioContext}} this factory
+ method has been called on as a first argument, and
+ o.
+ 6. Return p.
+
+ real: A sequence of cosine parameters. See its {{PeriodicWaveOptions/real}} constructor argument for a more detailed description.
+ imag: A sequence of sine parameters. See its {{PeriodicWaveOptions/imag}} constructor argument for a more detailed description.
+ constraints: If not given, the waveform is normalized. Otherwise, the waveform is normalized according to the value given by constraints
.
+
+
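How the cosine (real) and sine (imag) coefficients combine into a waveform, and what peak normalization does when constraints do not disable it, can be sketched as below. This is not the normative waveform-generation algorithm, only an illustrative sum of partials; the function name is hypothetical.

```javascript
// Sketch: evaluate one period of the waveform described by the real
// (cosine) and imag (sine) coefficient sequences, then peak-normalize
// unless normalization is disabled.
function periodicWaveSamples(real, imag, length = 64, normalize = true) {
  const out = new Float32Array(length);
  for (let n = 0; n < length; n++) {
    const t = (2 * Math.PI * n) / length;
    for (let k = 0; k < real.length; k++) {
      out[n] += real[k] * Math.cos(k * t) + (imag[k] ?? 0) * Math.sin(k * t);
    }
  }
  if (normalize) {
    const peak = out.reduce((m, v) => Math.max(m, Math.abs(v)), 0);
    if (peak > 0) for (let n = 0; n < length; n++) out[n] /= peak;
  }
  return out;
}
```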
+ + bufferSize: The {{ScriptProcessorNode/bufferSize}} parameter determines the buffer size in units of sample-frames. If it's not passed in, or if the value is 0, then the implementation will choose the best buffer size for the given environment, which will be a constant power of 2 throughout the lifetime of the node. Otherwise, if the author explicitly specifies the bufferSize, it MUST be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. This value controls how frequently the {{ScriptProcessorNode/onaudioprocess}} event is dispatched and how many sample-frames need to be processed each call. Lower values for {{ScriptProcessorNode/bufferSize}} will result in a lower (better) latency. Higher values will be necessary to avoid audio breakup and glitches. It is recommended that authors not specify this buffer size and allow the implementation to pick a good buffer size to balance between latency and audio quality. If the value of this parameter is not one of the allowed power-of-2 values listed above, an {{IndexSizeError}} MUST be thrown. + numberOfInputChannels: This parameter determines the number of channels for this node's input. The default value is 2. Values of up to 32 must be supported. A {{NotSupportedError}} must be thrown if the number of channels is not supported. + numberOfOutputChannels: This parameter determines the number of channels for this node's output. The default value is 2. Values of up to 32 must be supported. A {{NotSupportedError}} must be thrown if the number of channels is not supported. ++ +
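The bufferSize rules above can be sketched as a helper: 0 or omitted lets the implementation choose, otherwise only the listed powers of two are accepted. The function name and the implementation-chosen default shown here are assumptions; a real implementation throws an {{IndexSizeError}} DOMException.

```javascript
// Hypothetical resolution of the ScriptProcessorNode bufferSize
// argument, following the constraints described above.
const ALLOWED_BUFFER_SIZES = [256, 512, 1024, 2048, 4096, 8192, 16384];

function resolveBufferSize(bufferSize = 0, implementationDefault = 1024) {
  if (bufferSize === 0) return implementationDefault; // implementation picks
  if (!ALLOWED_BUFFER_SIZES.includes(bufferSize)) {
    throw new RangeError("IndexSizeError: invalid bufferSize " + bufferSize);
  }
  return bufferSize;
}
```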
XMLHttpRequest
's
+ response
attribute after setting the
+ responseType
to "arraybuffer"
. Audio
+ file data can be in any of the formats supported by the
+ <{audio}> element. The buffer passed to
+ {{BaseAudioContext/decodeAudioData()}} has its
+ content-type determined by sniffing, as described in
+ [[mimesniff]].
+
+ Although the primary method of interfacing with this function
+ is via its promise return value, the callback parameters are
+ provided for legacy reasons.
+
+ decodeAudioData
is
+ called, the following steps MUST be performed on the control
+ thread:
+
+ 1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=fully active=] then return [=a promise rejected with=] "{{InvalidStateError}}" {{DOMException}}.
+
+ 2. Let promise be a new Promise.
+
+ 3. If the operation IsDetachedBuffer
+ (described in [[!ECMASCRIPT]]) on {{BaseAudioContext/decodeAudioData(audioData, successCallback, errorCallback)/audioData!!argument}} is
+ false
, execute the following steps:
+
+ 1. Append promise to {{BaseAudioContext/[[pending promises]]}}.
+
+ 2.
+ Detach the {{BaseAudioContext/decodeAudioData(audioData,
+ successCallback, errorCallback)/audioData!!argument}} {{ArrayBuffer}}.
+ This operation is described in [[!ECMASCRIPT]]. If this operation
+ throws, jump to step 3.
+
+ 3. Queue a decoding operation to be performed on another thread.
+
+ 4. Else, execute the following error steps:
+
+ 1. Let error be a {{DataCloneError}}.
+ 2. Reject promise with error, and remove it from
+ {{BaseAudioContext/[[pending promises]]}}.
+ 3. Queue a task to invoke {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} with error.
+
+ 5. Return promise.
+ decodeAudioData
.
+
+ 1. Let can decode be a boolean flag, initially set to true.
+
+ 2. Attempt to determine the MIME type of
+ {{BaseAudioContext/decodeAudioData(audioData, successCallback,
+ errorCallback)/audioData!!argument}}, using
+ [[mimesniff#matching-an-audio-or-video-type-pattern]]. If the audio or
+ video type pattern matching algorithm returns undefined
,
+ set can decode to false.
+
+ 3. If can decode is true, attempt to decode the encoded
+ {{BaseAudioContext/decodeAudioData(audioData, successCallback,
+ errorCallback)/audioData!!argument}} into [=linear PCM=]. In case of
+ failure, set can decode to false.
+
+ 4. If can decode is false, queue a task to
+ execute the following steps, on the control thread's
+ event loop:
+
+ 1. Let error be a DOMException
+ whose name is {{EncodingError}}.
+
+ 2. Reject promise with error, and remove it from
+ {{BaseAudioContext/[[pending promises]]}}.
+
+ 3. If {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} is
+ not missing, invoke
+ {{BaseAudioContext/decodeAudioData()/errorCallback!!argument}} with
+ error.
+
+ 5. Otherwise:
+ 1. Take the result, representing the decoded [=linear PCM=]
+ audio data, and resample it to the sample-rate of the
+ {{AudioContext}} if it is different from
+ the sample-rate of {{BaseAudioContext/decodeAudioData(audioData,
+ successCallback, errorCallback)/audioData!!argument}}.
+
+ 2. Queue a task on the control thread's event loop
+ to execute the following steps:
+
+ 1. Let buffer be an
+ {{AudioBuffer}} containing the final result
+ (after possibly performing sample-rate conversion).
+
+ 2. Resolve promise with buffer.
+
+ 3. If {{BaseAudioContext/decodeAudioData()/successCallback!!argument}}
+ is not missing, invoke
+ {{BaseAudioContext/decodeAudioData()/successCallback!!argument}}
+ with buffer.
+ + audioData: An ArrayBuffer containing compressed audio data. + successCallback: A callback function which will be invoked when the decoding is finished. The single argument to this callback is an AudioBuffer representing the decoded PCM audio data. + errorCallback: A callback function which will be invoked if there is an error decoding the audio file. +- 2. Resolve promise with buffer. - - 3. If {{BaseAudioContext/decodeAudioData()/successCallback!!argument}} - is not missing, invoke - {{BaseAudioContext/decodeAudioData()/successCallback!!argument}} - with buffer. -
- audioData: An ArrayBuffer containing compressed audio data. - successCallback: A callback function which will be invoked when the decoding is finished. The single argument to this callback is an AudioBuffer representing the decoded PCM audio data. - errorCallback: A callback function which will be invoked if there is an error decoding the audio file. -- -
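The resampling step of the decoding operation above (converting decoded linear PCM to the context's sample rate) can be illustrated with a linear-interpolation resampler. Real implementations use higher-quality resamplers; this function is only an assumed sketch of the data transformation, not the normative algorithm.

```javascript
// Illustrative linear-interpolation resampler: converts a channel of
// linear PCM samples from one sample rate to another.
function resampleLinear(samples, fromRate, toRate) {
  if (fromRate === toRate) return Float32Array.from(samples);
  const outLength = Math.round((samples.length * toRate) / fromRate);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = (i * fromRate) / toRate;       // position in the source
    const i0 = Math.floor(pos);
    const i1 = Math.min(i0 + 1, samples.length - 1);
    out[i] = samples[i0] + (samples[i1] - samples[i0]) * (pos - i0);
  }
  return out;
}
```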
enum AudioContextLatencyCategory { - "balanced", - "interactive", - "playback" + "balanced", + "interactive", + "playback" };
Enum value | Description | -||
---|---|---|---|
- "balanced" - | - Balance audio output latency and power consumption. - | ||
- "interactive" - | - Provide the lowest audio output latency possible without - glitching. This is the default. - | ||
- "playback" - |
- Prioritize sustained playback without interruption over audio
- output latency. Lowest power consumption.
+
+
+ Enumeration description
+ | | |
+ "balanced" + | + Balance audio output latency and power consumption. + | ||
+ "interactive" + | + Provide the lowest audio output latency possible without + glitching. This is the default. + | ||
+ "playback" + | + Prioritize sustained playback without interruption over audio + output latency. Lowest power consumption. |
-enum AudioSinkType { - "none" -}; -- -
Enum Value | Description | -
---|---|
- "none" - | - The audio graph will be processed without being played - through an audio output device. - |
false
.
-
- : [[sink ID]]
- ::
- A {{DOMString}} or an {{AudioSinkInfo}} representing the identifier
- or the information of the current audio output device respectively. The
- initial value is ""
, which means the default audio output
- device.
-
- : [[pending resume promises]]
- ::
- An ordered list to store pending {{Promise}}s created by
- {{AudioContext/resume()}}. It is initially empty.
+ : [[suspended by user]]
+ ::
+ A boolean flag representing whether the context is suspended by user code.
+ The initial value is false
.
- If the [=current settings object=]'s [=relevant global object=]'s - [=associated Document=] is NOT [=fully active=], throw an - "{{InvalidStateError}}" and abort these steps. -
- When creating an {{AudioContext}}, - execute these steps: - - 1. Let |context| be a new {{AudioContext}} object. - - 1. Set a {{[[control thread state]]}} tosuspended
on
- |context|.
-
- 1. Set a {{[[rendering thread state]]}} to suspended
on
- |context|.
-
- 1. Let |messageChannel| be a new {{MessageChannel}}.
-
- 1. Let |controlSidePort| be the value of
- |messageChannel|'s {{MessageChannel/port1}} attribute.
-
- 1. Let |renderingSidePort| be the value of
- |messageChannel|'s {{MessageChannel/port2}} attribute.
-
- 1. Let |serializedRenderingSidePort| be the result of
- [$StructuredSerializeWithTransfer$](|renderingSidePort|,
- « |renderingSidePort| »).
-
- 1. Set this {{BaseAudioContext/audioWorklet}}'s {{AudioWorklet/port}}
- to |controlSidePort|.
-
- 1. Queue a control message to set the
- MessagePort on the AudioContextGlobalScope, with
- |serializedRenderingSidePort|.
-
- 1. If contextOptions
is given, perform the following
- substeps:
-
- 1. If {{AudioContextOptions/sinkId}} is specified, let |sinkId| be
- the value of
- contextOptions.{{AudioContextOptions/sinkId}}
and
- run the following substeps:
-
- 1. If both |sinkId| and {{AudioContext/[[sink ID]]}} are a type of
- {{DOMString}}, and they are equal to each other, abort these
- substeps.
-
- 1. If |sinkId| is a type of {{AudioSinkOptions}} and
- {{AudioContext/[[sink ID]]}} is a type of {{AudioSinkInfo}}, and
- {{AudioSinkOptions/type}} in |sinkId| and {{AudioSinkInfo/type}}
- in {{AudioContext/[[sink ID]]}} are equal, abort these substeps.
-
- 1. Let |validationResult| be the return value of
- sink identifier validation
- of |sinkId|.
-
- 1. If |validationResult| is a type of {{DOMException}}, throw an
- exception with |validationResult| and abort these substeps.
-
- 1. If |sinkId| is a type of {{DOMString}}, set
- {{AudioContext/[[sink ID]]}} to |sinkId| and abort these
- substeps.
-
- 1. If |sinkId| is a type of {{AudioSinkOptions}}, set
- {{AudioContext/[[sink ID]]}} to a new instance of
- {{AudioSinkInfo}} created with the value of
- {{AudioSinkOptions/type}} of |sinkId|.
-
- 1. Set the internal latency of |context| according to
- contextOptions.{{AudioContextOptions/latencyHint}}
,
- as described in {{AudioContextOptions/latencyHint}}.
-
- 1. If contextOptions.{{AudioContextOptions/sampleRate}}
-
is specified, set the {{BaseAudioContext/sampleRate}} of
- |context| to this value. Otherwise, follow these substeps:
-
- 1. If |sinkId| is the empty string or a type of
- {{AudioSinkOptions}}, use the sample rate of the default output
- device. Abort these substeps.
-
- 1. If |sinkId| is a {{DOMString}}, use the sample rate of the
- output device identified by |sinkId|. Abort these substeps.
-
- If contextOptions.{{AudioContextOptions/sampleRate}}
- differs from the sample rate of the output device, the user agent
- MUST resample the audio output to match the sample rate of the
- output device.
-
- Note: If resampling is required, the latency of |context| may be
- affected, possibly by a large amount.
-
- 1. If |context| is allowed to start, send a
- control message to start processing.
-
- 1. Return |context|.
- "speaker-selection"
, abort these substeps.
-
- 1. [=Queue a media element task=] to [=fire an event=] named
- {{AudioContext/error}} at the {{AudioContext}}, and abort the following
- steps.
-
- 1. Set [=this=] {{[[rendering thread state]]}} to running
on the
- {{AudioContext}}.
-
- 1. [=Queue a media element task=] to execute the following steps:
-
- 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}}
- to "{{AudioContextState/running}}".
-
- 1. [=fire an event=] named {{BaseAudioContext/statechange}} at the
- {{AudioContext}}.
- -contextOptions: User-specified options controlling how the {{AudioContext}} should be constructed. -- -
closed
reject the promise
- with {{InvalidStateError}}, abort these steps,
- returning promise.
-
- 1. Set the {{[[control thread state]]}} flag on the {{AudioContext}} to
- closed
.
-
- 1. Queue a control message to close the {{AudioContext}}.
-
- 1. Return promise.
- suspended
.
- HTMLMediaElement.captureStream()
.
-
- Note: When an {{AudioContext}} has been closed, implementation can
- choose to aggressively release more resources than when
- suspending.
-
- - mediaElement: The media element that will be re-routed. -- -
- mediaStream: The media stream that will act as source. -- -
- mediaStreamTrack: The {{MediaStreamTrack}} that will act as source. The value of its kind
attribute must be equal to "audio"
, or an {{InvalidStateError}} exception MUST be thrown.
-
-
- contextTime
value was
- rendered by the audio output device, in the same units and
- origin as performance.now()
(described in
- [[!hr-time-3]]).
-
- If the context's rendering graph has not yet processed a block
- of audio, then {{getOutputTimestamp}} call
- returns an {{AudioTimestamp}} instance with both
- members containing zero.
-
- After the context's rendering graph has started processing of
- blocks of audio, its {{BaseAudioContext/currentTime}} attribute value
- always exceeds the {{AudioTimestamp/contextTime}} value obtained
- from {{AudioContext/getOutputTimestamp}} method call.
-
- - function outputPerformanceTime(contextTime) { - const timestamp = context.getOutputTimestamp(); - const elapsedTime = contextTime - timestamp.contextTime; - return timestamp.performanceTime + elapsedTime * 1000; - } -- - In the above example the accuracy of the estimation depends on - how close the argument value is to the current output audio - stream position: the closer the given
contextTime
- is to timestamp.contextTime
, the better the
- accuracy of the obtained estimation.
- closed
reject the
- promise with {{InvalidStateError}}, abort these steps,
- returning promise.
-
- 3. Set {{[[suspended by user]]}} to false
.
-
- 4. If the context is not allowed to start, append
- promise to {{BaseAudioContext/[[pending promises]]}} and
- {{AudioContext/[[pending resume promises]]}} and abort these steps, returning
- promise.
-
- 5. Set the {{[[control thread state]]}} on the
- {{AudioContext}} to running
.
-
- 6. Queue a control message to resume the {{AudioContext}}.
-
- 7. Return promise.
- running
.
+
+ If the current
+ settings object's associated Document
+ is NOT
+ fully active, throw an InvalidStateError
and
+ abort these steps.
+
suspended
on the {{AudioContext}}.
- 4. In case of failure,
-
- queue a media element task to execute the following steps:
+ 2. Set a rendering thread state to suspended
on the {{AudioContext}}.
- 1. Reject all promises from {{AudioContext/[[pending resume promises]]}}
- in order, then clear {{AudioContext/[[pending resume promises]]}}.
+ 3. Let [[pending resume promises]] be a
+ slot on this {{AudioContext}}, that is an initially empty ordered list of
+ promises.
- 2. Additionally, remove those promises from {{BaseAudioContext/[[pending
- promises]]}}.
+ 4. If contextOptions
is given, apply the options:
- 5.
- queue a media element task to execute the following steps:
+ 1. Set the internal latency of this {{AudioContext}}
+ according to contextOptions.{{AudioContextOptions/latencyHint}}
, as described
+ in {{AudioContextOptions/latencyHint}}.
- 1. Resolve all promises from {{AudioContext/[[pending resume promises]]}} in order.
- 1. Clear {{AudioContext/[[pending resume promises]]}}. Additionally, remove those
- promises from {{BaseAudioContext/[[pending promises]]}}.
+ 2. If contextOptions.{{AudioContextOptions/sampleRate}}
is specified,
+ set the {{BaseAudioContext/sampleRate}}
+ of this {{AudioContext}} to this value. Otherwise, use
+ the sample rate of the default output device. If the
+ selected sample rate differs from the sample rate of the
+ output device, this {{AudioContext}} MUST resample the
+ audio output to match the sample rate of the output device.
- 2. Resolve promise.
+ Note: If resampling is required, the latency of the
+ AudioContext may be affected, possibly by a large
+ amount.
- 3. If the {{BaseAudioContext/state}} attribute of the {{AudioContext}} is not already "{{AudioContextState/running}}":
+ 5. If the context is allowed to start, send a
+ control message to start processing.
- 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/running}}".
+ 6. Return this {{AudioContext}} object.
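The sample-rate selection in step 4 above can be sketched as a pure function: use contextOptions.sampleRate when given, otherwise the output device's rate, and record whether output resampling will be needed. The function name and return shape are assumptions for illustration.

```javascript
// Sketch of the constructor's sample-rate choice: honor the
// requested rate if present, else fall back to the device rate.
function selectSampleRate(contextOptions, deviceSampleRate) {
  const sampleRate =
    contextOptions && contextOptions.sampleRate !== undefined
      ? contextOptions.sampleRate
      : deviceSampleRate;
  // When the rates differ, the output must be resampled to the device.
  return { sampleRate, needsResampling: sampleRate !== deviceSampleRate };
}
```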
+ running
on the {{AudioContext}}.
- : suspend()
- ::
- Suspends the progression of {{AudioContext}}'s
- {{BaseAudioContext/currentTime}}, allows any
- current context processing blocks that are already processed to
- be played to the destination, and then allows the system to
- release its claim on audio hardware. This is generally useful
- when the application knows it will not need the
- {{AudioContext}} for some time, and wishes to temporarily
- release system resource associated with the
- {{AudioContext}}. The promise resolves when the frame buffer
- is empty (has been handed off to the hardware), or immediately
- (with no other effect) if the context is already
- suspended
. The promise is rejected if the context
- has been closed.
+ 4. Queue a task on the control thread event loop, to execute these steps:
- statechange
at the {{AudioContext}}.
+ closed
reject the promise
- with {{InvalidStateError}}, abort these steps,
- returning promise.
+ + contextOptions: User-specified options controlling how the {{AudioContext}} should be constructed. ++ - 3. Append promise to {{BaseAudioContext/[[pending promises]]}}. +
true
.
+suspended
.
+closed
reject the promise
+ with {{InvalidStateError}}, abort these steps,
+ returning promise.
+
+ 1. Set the control thread state flag on the {{AudioContext}} to closed
.
+
+ 1. Queue a control message to close the {{AudioContext}}.
+
+ 1. Return promise.
+ suspended
.
+ statechange
at the {{AudioContext}}.
+ HTMLMediaElement.captureStream()
.
+
+ Note: When an {{AudioContext}} has been closed, an implementation can
+ choose to aggressively release more resources than when
+ suspending.
+
+ + mediaElement: The media element that will be re-routed. ++ +
+ mediaStream: The media stream that will act as source. ++ +
+ mediaStreamTrack: The {{MediaStreamTrack}} that will act as source. The value of its kind
attribute must be equal to "audio"
, or an {{InvalidStateError}} exception MUST be thrown.
+
+
+ contextTime
value was
+ rendered by the audio output device, in the same units and
+ origin as performance.now()
(described in
+ [[!hr-time-2]]).
+
+ If the context's rendering graph has not yet processed a block
+ of audio, then a {{getOutputTimestamp}} call
+ returns an {{AudioTimestamp}} instance with both
+ members containing zero.
+
+ After the context's rendering graph has started processing
+ blocks of audio, its {{BaseAudioContext/currentTime}} attribute value
+ always exceeds the {{AudioTimestamp/contextTime}} value obtained
+ from the {{AudioContext/getOutputTimestamp}} method call.
+
+ + function outputPerformanceTime(contextTime) { + const timestamp = context.getOutputTimestamp(); + const elapsedTime = contextTime - timestamp.contextTime; + return timestamp.performanceTime + elapsedTime * 1000; + } ++ + In the above example the accuracy of the estimation depends on + how close the argument value is to the current output audio + stream position: the closer the given
contextTime
+ is to timestamp.contextTime
, the better the
+ accuracy of the obtained estimation.
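The same timestamp pair supports the opposite estimation as well. This hypothetical helper (not in the spec) converts a `performance.now()` time back to an estimated context time, with the same accuracy caveat as the example above.

```javascript
// Hypothetical inverse of the example above: estimate the context
// time corresponding to a performance.now() value, given a timestamp
// pair obtained from getOutputTimestamp().
function estimateContextTime(performanceTime, timestamp) {
  const elapsedMs = performanceTime - timestamp.performanceTime;
  return timestamp.contextTime + elapsedMs / 1000; // ms to seconds
}
```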
+ closed
reject the
+ promise with {{InvalidStateError}}, abort these steps,
+ returning promise.
+
+ 3. Set {{[[suspended by user]]}} to false
.
+
+ 4. If the context is not allowed to start, append
+ promise to {{BaseAudioContext/[[pending promises]]}} and
+ {{AudioContext/[[pending resume promises]]}} and abort these steps, returning
+ promise.
- 7. Return promise.
- running
.
- suspended
.
+ running
.
- 2. If the {{BaseAudioContext/state}}
- attribute of the {{AudioContext}} is not already "{{AudioContextState/suspended}}":
+ 3. Start rendering the audio graph.
- 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/suspended}}".
+ 4. In case of failure, queue a task on the control thread to execute the following,
+ and abort these steps:
- 1. [=Queue a media element task=] to [=fire an event=] named
- {{BaseAudioContext/statechange}} at the {{AudioContext}}.
- resume()
/suspend()
does not cause
- silence to appear in the {{AnalyserNode}}'s stream of data.
- In particular, calling {{AnalyserNode}} functions repeatedly
- when a {{AudioContext}} is suspended MUST return the same
- data.
+ 2. Additionally, remove those promises from {{BaseAudioContext/[[pending
+ promises]]}}.
- null
, return a promise
- rejected with |validationResult|. Abort these steps.
+ 3. If the {{BaseAudioContext/state}} attribute of the {{AudioContext}} is not already "{{AudioContextState/running}}":
- 1. Let |p| be a new promise.
+ 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/running}}".
- 1. Send a control message with |p| and |sinkId| to start
- processing.
+ 2. Queue a task to fire a simple event named statechange
at the {{AudioContext}}.
+ suspended
. The promise is rejected if the context
+ has been closed.
- 1. Let |sinkId| be the sink identifier passed into this algorithm.
+ closed
reject the promise
+ with {{InvalidStateError}}, abort these steps,
+ returning promise.
- 1. Set |wasRunning| to false if the {{[[rendering thread state]]}} on
- the {{AudioContext}} is "suspended"
.
+ 3. Append promise to {{BaseAudioContext/[[pending promises]]}}.
- 1. Pause the renderer after processing the current render quantum.
+ 4. Set {{[[suspended by user]]}} to true
.
- 1. Attempt to release system resources.
+ 5. Set the control thread state on the {{AudioContext}} to suspended
.
- 1. If |wasRunning| is true:
+ 6. Queue a control message to suspend the {{AudioContext}}.
- 1. Set the {{[[rendering thread state]]}} on the {{AudioContext}} to
- "suspended"
.
-
- 1.
- Queue a media element task to execute the following steps:
+ 7. Return promise.
+ suspended
.
- 1. Resolve |p|.
-
- 1. [=Fire an event=] named {{AudioContext/sinkchange}} at the
- associated {{AudioContext}}.
+ 3. Queue a task on the control thread's event loop, to execute these steps:
- 1. If |wasRunning| is true:
+ 1. Resolve promise.
- 1. Set the {{[[rendering thread state]]}} on the {{AudioContext}} to
- "running"
.
+ 2. If the {{BaseAudioContext/state}}
+ attribute of the {{AudioContext}} is not already "{{AudioContextState/suspended}}":
- 1.
- Queue a media element task to execute the following steps:
+ 1. Set the {{BaseAudioContext/state}} attribute of the {{AudioContext}} to "{{AudioContextState/suspended}}".
+ 2. Queue a task to fire a simple event named statechange
at the {{AudioContext}}.
+ resume()
/suspend()
does not cause
+ silence to appear in the {{AnalyserNode}}'s stream of data.
+ In particular, calling {{AnalyserNode}} functions repeatedly
+ when an {{AudioContext}} is suspended MUST return the same
+ data.
- 1. Set the {{BaseAudioContext/state}} attribute of the
- {{AudioContext}} to "{{AudioContextState/running}}".
-
- 1. [=Fire an event=] named {{BaseAudioContext/statechange}} at the
- associated {{AudioContext}}.
- "speaker-selection"
, return a new {{DOMException}} whose name
- is "{{NotAllowedError}}".
-
- 1. If |sinkIdArg| is a type of {{DOMString}} but it is not equal to the empty
- string or it does not match any audio output device identified by the
- result that would be provided by {{MediaDevices/enumerateDevices()}},
- return a new {{DOMException}} whose name is "{{NotFoundError}}".
-
- 1. Return null
.
-- dictionary AudioContextOptions { - (AudioContextLatencyCategory or double) latencyHint = "interactive"; - float sampleRate; - (DOMString or AudioSinkOptions) sinkId; - (AudioContextRenderSizeCategory or unsigned long) renderSizeHint = "default"; - }; +dictionary AudioContextOptions { + (AudioContextLatencyCategory or double) latencyHint = "interactive"; + float sampleRate; +};
latencyHint
is a
- value from {{AudioContextLatencyCategory}}. However, a
- double can also be specified for the number of seconds of
- latency for finer control to balance latency and power
- consumption. It is at the browser's discretion to interpret
- the number appropriately. The actual latency used is given by
- AudioContext's {{AudioContext/baseLatency}} attribute.
-
- : sampleRate
- ::
- Set the {{BaseAudioContext/sampleRate}} to this value
- for the {{AudioContext}} that will be created. The
- supported values are the same as the sample rates for an
- {{AudioBuffer}}. A
- {{NotSupportedError}} exception MUST be thrown if
- the specified sample rate is not supported.
-
- If {{AudioContextOptions/sampleRate}} is not
- specified, the preferred sample rate of the output device for
- this {{AudioContext}} is used.
-
- : sinkId
- ::
- The identifier or associated information of the audio output device.
- See {{AudioContext/sinkId}} for more details.
-
- : renderSizeHint
- ::
- This allows users to ask for a particular render quantum size when an
- integer is passed, to use the default of 128 frames if nothing or
- "default"
is passed, or to ask the User-Agent to pick a good
- render quantum size if "hardware"
is specified.
-
- It is a hint that might not be honored.
--dictionary AudioSinkOptions { - required AudioSinkType type; -}; -- -
-[Exposed=Window] -interface AudioSinkInfo { - readonly attribute AudioSinkType type; -}; -- -
latencyHint
is a
+ value from {{AudioContextLatencyCategory}}. However, a
+ double can also be specified for the number of seconds of
+ latency for finer control to balance latency and power
+ consumption. It is at the browser's discretion to interpret
+ the number appropriately. The actual latency used is given by
+ AudioContext's {{AudioContext/baseLatency}} attribute.
+
+ : sampleRate
+ ::
+ Set the {{BaseAudioContext/sampleRate}} to this value
+ for the {{AudioContext}} that will be created. The
+ supported values are the same as the sample rates for an
+ {{AudioBuffer}}. A
+ {{NotSupportedError}} exception MUST be thrown if
+ the specified sample rate is not supported.
+
+ If {{AudioContextOptions/sampleRate}} is not
+ specified, the preferred sample rate of the output device for
+ this {{AudioContext}} is used.
dictionary AudioTimestamp { - double contextTime; - DOMHighResTimeStamp performanceTime; + double contextTime; + DOMHighResTimeStamp performanceTime; };@@ -2381,142 +1884,16 @@ dictionary AudioTimestamp { Dictionary {{AudioTimestamp}} Members
Performance
interface implementation (described in
- [[!hr-time-3]]).
--[Exposed=Window] -interface AudioRenderCapacity : EventTarget { - undefined start(optional AudioRenderCapacityOptions options = {}); - undefined stop(); - attribute EventHandler onupdate; -}; -- -This interface provides rendering performance metrics of an -{{AudioContext}}. In order to calculate them, the renderer collects a -load value per system-level audio callback. - -
-dictionary AudioRenderCapacityOptions { - double updateInterval = 1; -}; -- -
-[Exposed=Window] -interface AudioRenderCapacityEvent : Event { - constructor (DOMString type, optional AudioRenderCapacityEventInit eventInitDict = {}); - readonly attribute double timestamp; - readonly attribute double averageLoad; - readonly attribute double peakLoad; - readonly attribute double underrunRatio; -}; - -dictionary AudioRenderCapacityEventInit : EventInit { - double timestamp = 0; - double averageLoad = 0; - double peakLoad = 0; - double underrunRatio = 0; -}; -- -
Performance
interface implementation (described in
+ [[!hr-time-2]]).
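The two clocks carried by an {{AudioTimestamp}} can be correlated as sketched below; the helper name is illustrative, and in a browser the timestamp pair typically comes from {{AudioContext}}'s getOutputTimestamp():

```javascript
// Estimate the current context time from a previously captured
// AudioTimestamp: contextTime is in seconds on the audio clock,
// performanceTime is in milliseconds on the Performance clock.
function estimateContextTime(timestamp, performanceNowMs) {
  const elapsedSeconds = (performanceNowMs - timestamp.performanceTime) / 1000;
  return timestamp.contextTime + elapsedSeconds;
}

// 250 ms of Performance-clock time after the snapshot, the audio clock
// has advanced by about 0.25 s:
const stamp = { contextTime: 1.5, performanceTime: 2000 };
const nowEstimate = estimateContextTime(stamp, 2250); // ≈ 1.75
```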
- If the [=current settings object=]'s [=relevant global object=]'s - [=associated Document=] is NOT [=fully active=], - throw an {{InvalidStateError}} and abort these steps. -
- Let |c| be a new {{OfflineAudioContext}} object. - Initialize |c| as follows: - - 1. Set the {{[[control thread state]]}} for |c| - to"suspended"
.
-
- 1. Set the {{[[rendering thread state]]}} for
- |c| to "suspended"
.
-
- 1. Determine the {{[[render quantum size]]}} for this {{OfflineAudioContext}},
- based on the value of the {{OfflineAudioContextOptions/renderSizeHint}}:
-
- 1. If it has the default value of "default"
or
- "hardware
", set the {{[[render quantum size]]}} private
- slot to 128.
-
- 1. Else, if an integer has been passed, the User-Agent can decide to
- honour this value by setting it to the {{[[render quantum size]]}}
- private slot.
-
- 1. Construct an {{AudioDestinationNode}} with its
- {{AudioNode/channelCount}} set to
- contextOptions.numberOfChannels
.
-
- 1. Let |messageChannel| be a new {{MessageChannel}}.
-
- 1. Let |controlSidePort| be the value of
- |messageChannel|'s {{MessageChannel/port1}} attribute.
-
- 1. Let |renderingSidePort| be the value of
- |messageChannel|'s {{MessageChannel/port2}} attribute.
-
- 1. Let |serializedRenderingSidePort| be the result of
- [$StructuredSerializeWithTransfer$](|renderingSidePort|,
- « |renderingSidePort| »).
-
- 1. Set this {{BaseAudioContext/audioWorklet}}'s {{AudioWorklet/port}} to
- |controlSidePort|.
-
- 1. Queue a control message to set the
- MessagePort on the AudioContextGlobalScope, with
- |serializedRenderingSidePort|.
- - contextOptions: The initial parameters needed to construct this context. -- - : OfflineAudioContext(numberOfChannels, length, sampleRate) - :: - The {{OfflineAudioContext}} can be constructed with the same arguments - as AudioContext.createBuffer. A - {{NotSupportedError}} exception MUST be thrown if any - of the arguments is negative, zero, or outside its nominal - range. - - The OfflineAudioContext is constructed as if - -
- new OfflineAudioContext({ - numberOfChannels: numberOfChannels, - length: length, - sampleRate: sampleRate - }) -- - were called instead. - -
- numberOfChannels: Determines how many channels the buffer will have. See {{BaseAudioContext/createBuffer()}} for the supported number of channels. - length: Determines the size of the buffer in sample-frames. - sampleRate: Describes the sample-rate of the [=linear PCM=] audio data in the buffer in sample-frames per second. See {{BaseAudioContext/createBuffer()}} for valid sample rates. -+ : OfflineAudioContext(contextOptions) + :: +
+ If the current
+ settings object's associated
+ Document is NOT
+ fully active, throw an InvalidStateError
and
+ abort these steps.
+
"suspended"
.
+
+ 2. Set the rendering thread state for
+ c to "suspended"
.
+
+ 3. Construct an {{AudioDestinationNode}} with its
+ {{AudioNode/channelCount}} set to
+ contextOptions.numberOfChannels
.
+ + contextOptions: The initial parameters needed to construct this context. ++ + : OfflineAudioContext(numberOfChannels, length, sampleRate) + :: + The {{OfflineAudioContext}} can be constructed with the same arguments + as AudioContext.createBuffer. A + {{NotSupportedError}} exception MUST be thrown if any + of the arguments is negative, zero, or outside its nominal + range. + + The OfflineAudioContext is constructed as if + +
+ new OfflineAudioContext({ + numberOfChannels: numberOfChannels, + length: length, + sampleRate: sampleRate + }) ++ + were called instead. + +
+ numberOfChannels: Determines how many channels the buffer will have. See {{BaseAudioContext/createBuffer()}} for the supported number of channels. + length: Determines the size of the buffer in sample-frames. + sampleRate: Describes the sample-rate of the [=linear PCM=] audio data in the buffer in sample-frames per second. See {{BaseAudioContext/createBuffer()}} for valid sample rates. +
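The equivalence between the two constructor forms can be sketched as follows (the browser-only constructor calls are commented out; the values are illustrative):

```javascript
// The positional form is defined as shorthand for the dictionary form.
const numberOfChannels = 2;
const length = 88200;     // in sample-frames
const sampleRate = 44100; // Hz

// In a browser, these two calls construct equivalent contexts:
// const a = new OfflineAudioContext(numberOfChannels, length, sampleRate);
// const b = new OfflineAudioContext({ numberOfChannels, length, sampleRate });

// Because length counts sample-frames, the rendered duration is:
const durationSeconds = length / sampleRate; // 2 seconds of stereo audio
```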
length
parameter for the constructor.
-
- : oncomplete
- ::
- The event type of this event handler is complete. The event
- dispatched to the event handler will use the {{OfflineAudioCompletionEvent}}
- interface. It is the last event fired on an {{OfflineAudioContext}}.
+ : length
+ ::
+ The size of the buffer in sample-frames. This is the same as the
+ value of the length
parameter for the constructor.
+
+ : oncomplete
+ ::
+ An EventHandler whose event uses the {{OfflineAudioCompletionEvent}} interface.
+ It is the last event fired on an {{OfflineAudioContext}}.
complete
for legacy reasons.
-
- startRendering
is
- called, the following steps MUST be performed on the control
- thread:
-
- complete
for legacy reasons.
+
+ startRendering
is
+ called, the following steps MUST be performed on the control
+ thread:
+
+ numberOfChannels
, length
and
- sampleRate
values passed to this instance's
- constructor in the contextOptions
parameter.
- Assign this buffer to an internal slot
- [[rendered buffer]] in the {{OfflineAudioContext}}.
+ numberOfChannels
, length
and
+ sampleRate
values passed to this instance's
+ constructor in the contextOptions
parameter.
+ Assign this buffer to an internal slot
+ [[rendered buffer]] in the {{OfflineAudioContext}}.
- length
sample-frames of audio into
- {{[[rendered buffer]]}}
+ length
sample-frames of audio into
+ {{[[rendered buffer]]}}
- complete
at this instance, using an instance
+ of {{OfflineAudioCompletionEvent}} whose
+ renderedBuffer
property is set to
+ {{[[rendered buffer]]}}.
- closed
.
+ - The {{[[rendering started]]}} slot on the {{OfflineAudioContext}}
+ is false.
- 1. Abort these steps and reject promise with
- {{InvalidStateError}} when any of following conditions is true:
- - The {{[[control thread state]]}} on the {{OfflineAudioContext}}
- is closed
.
- - The {{[[rendering started]]}} slot on the {{OfflineAudioContext}}
- is false.
+ 1. Set the control thread state flag on the
+ {{OfflineAudioContext}} to running
.
- 1. Set the {{[[control thread state]]}} flag on the
- {{OfflineAudioContext}} to running
.
+ 1. Queue a control message to resume the {{OfflineAudioContext}}.
- 1. Queue a control message to resume the {{OfflineAudioContext}}.
+ 1. Return promise.
+ running
.
- 1. Set the {{[[rendering thread state]]}} on the {{OfflineAudioContext}} to running
.
+ 2. Start rendering the audio graph.
+
+ 3. In case of failure, queue a task on the control thread to
+ reject promise and abort these steps.
- 2. Start rendering the audio graph.
+ 4. Queue a task on the control thread's event loop, to
+ execute these steps:
- 3. In case of failure,
-
- queue a media element task to reject |promise| and abort the remaining steps.
+ 1. Resolve promise.
- 4.
- queue a media element task to execute the following steps:
+ 2. If the {{BaseAudioContext/state}} attribute of the
+ {{OfflineAudioContext}} is not already "{{AudioContextState/running}}":
- 1. Resolve promise.
+ 1. Set the {{BaseAudioContext/state}} attribute of the
+ {{OfflineAudioContext}} to "{{AudioContextState/running}}".
- 2. If the {{BaseAudioContext/state}} attribute of the
- {{OfflineAudioContext}} is not already "{{AudioContextState/running}}":
+ 2. Queue a task to fire a simple event named statechange
+ at the {{OfflineAudioContext}}.
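A browser-only sketch of the rendering flow above: startRendering() resolves with {{[[rendered buffer]]}}, and the legacy complete event delivers the same buffer through {{OfflineAudioCompletionEvent}} (the function and node names here are illustrative):

```javascript
// Render one second of a 440 Hz tone entirely offline.
async function renderOneSecondTone() {
  const ctx = new OfflineAudioContext({
    numberOfChannels: 1, length: 44100, sampleRate: 44100
  });
  const osc = new OscillatorNode(ctx, { frequency: 440 });
  osc.connect(ctx.destination);
  osc.start();
  ctx.oncomplete = (e) => {
    // Last event fired on the context; e.renderedBuffer is the same
    // AudioBuffer the promise below resolves with.
  };
  const buffer = await ctx.startRendering();
  return buffer; // the [[rendered buffer]]
}
```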
+ + suspendTime: Schedules a suspension of the rendering at the specified time, which is quantized and rounded up to the render quantum size. If the quantized frame number- : suspend(suspendTime) - :: - Schedules a suspension of the time progression in the audio - context at the specified time and returns a promise. This is - generally useful when manipulating the audio graph - synchronously on {{OfflineAudioContext}}. - - Note that the maximum precision of suspension is the size of - the render quantum and the specified suspension time - will be rounded up to the nearest render quantum - boundary. For this reason, it is not allowed to schedule - multiple suspends at the same quantized frame. Also, scheduling - should be done while the context is not running to ensure - precise suspension. - -then the promise is rejected with {{InvalidStateError}}. +
- is negative or
- is less than or equal to the current time or
- is greater than or equal to the total render duration or
- is scheduled by another suspend for the same time,
- suspendTime: Schedules a suspension of the rendering at the specified time, which is quantized and rounded up to the render quantum size. If the quantized frame number- -then the promise is rejected with {{InvalidStateError}}. -
- is negative or
- is less than or equal to the current time or
- is greater than or equal to the total render duration or
- is scheduled by another suspend for the same time,
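The rounding described for suspendTime can be sketched as a pure computation; 128 frames is the default render quantum size, and the helper name is illustrative:

```javascript
// Round a suspend time up to the next render quantum boundary,
// expressed as a frame number.
function quantizedSuspendFrame(suspendTime, sampleRate, quantumSize = 128) {
  const frame = suspendTime * sampleRate;
  return Math.ceil(frame / quantumSize) * quantumSize;
}

// At 44100 Hz, a suspension requested at 1 ms lands on frame 44.1,
// which rounds up to the next 128-frame boundary:
const suspendFrame = quantizedSuspendFrame(0.001, 44100); // 128
```

This is also why two suspends that quantize to the same frame are rejected: after rounding they would name the same boundary.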
dictionary OfflineAudioContextOptions { - unsigned long numberOfChannels = 1; - required unsigned long length; - required float sampleRate; - (AudioContextRenderSizeCategory or unsigned long) renderSizeHint = "default"; + unsigned long numberOfChannels = 1; + required unsigned long length; + required float sampleRate; };@@ -2864,25 +2210,20 @@ dictionary OfflineAudioContextOptions { Dictionary {{OfflineAudioContextOptions}} Members
[Exposed=Window] interface OfflineAudioCompletionEvent : Event { - constructor (DOMString type, OfflineAudioCompletionEventInit eventInitDict); - readonly attribute AudioBuffer renderedBuffer; + constructor (DOMString type, OfflineAudioCompletionEventInit eventInitDict); + readonly attribute AudioBuffer renderedBuffer; };@@ -2900,17 +2241,17 @@ interface OfflineAudioCompletionEvent : Event { Attributes
dictionary OfflineAudioCompletionEventInit : EventInit { - required AudioBuffer renderedBuffer; + required AudioBuffer renderedBuffer; };@@ -2918,9 +2259,9 @@ dictionary OfflineAudioCompletionEventInit : EventInit { Dictionary {{OfflineAudioCompletionEventInit}} Members
[Exposed=Window] interface AudioBuffer { - constructor (AudioBufferOptions options); - readonly attribute float sampleRate; - readonly attribute unsigned long length; - readonly attribute double duration; - readonly attribute unsigned long numberOfChannels; - Float32Array getChannelData (unsigned long channel); - undefined copyFromChannel (Float32Array destination, - unsigned long channelNumber, - optional unsigned long bufferOffset = 0); - undefined copyToChannel (Float32Array source, - unsigned long channelNumber, - optional unsigned long bufferOffset = 0); + constructor (AudioBufferOptions options); + readonly attribute float sampleRate; + readonly attribute unsigned long length; + readonly attribute double duration; + readonly attribute unsigned long numberOfChannels; + Float32Array getChannelData (unsigned long channel); + void copyFromChannel (Float32Array destination, + unsigned long channelNumber, + optional unsigned long bufferOffset = 0); + void copyToChannel (Float32Array source, + unsigned long channelNumber, + optional unsigned long bufferOffset = 0); };@@ -2993,136 +2333,136 @@ interface AudioBuffer { Constructors
- CreateByteDataBlock
({{[[length]]}} * {{[[number of channels]]}}).
-
- Note: This initializes the underlying storage to zero.
-
- 1. Return b.
- - options: An {{AudioBufferOptions}} that determine the properties for this {{AudioBuffer}}. -+ : AudioBuffer(options) + :: +
+ CreateByteDataBlock
({{[[length]]}} * {{[[number of channels]]}}).
+
+ Note: This initializes the underlying storage to zero.
+
+ 1. Return b.
+ + options: An {{AudioBufferOptions}} that determine the properties for this {{AudioBuffer}}. +
destination
array.
-
- Let buffer
be the {{AudioBuffer}} with
- \(N_b\) frames, let \(N_f\) be the number of elements in the
- {{AudioBuffer/copyFromChannel()/destination}} array, and \(k\) be the value of
- {{AudioBuffer/copyFromChannel()/bufferOffset}}. Then the number of frames copied
- from buffer
to {{AudioBuffer/copyFromChannel()/destination}} is
- \(\max(0, \min(N_b - k, N_f))\). If this is less than \(N_f\), then the
- remaining elements of {{AudioBuffer/copyFromChannel()/destination}} are not
- modified.
-
-
- destination: The array the channel data will be copied to.
- channelNumber: The index of the channel to copy the data from. If channelNumber
is greater or equal than the number of channels of the {{AudioBuffer}}, an {{IndexSizeError}} MUST be thrown.
- bufferOffset: An optional offset, defaulting to 0. Data from the {{AudioBuffer}} starting at this offset is copied to the {{AudioBuffer/copyFromChannel()/destination}}.
-
-
- source
array.
-
- A {{UnknownError}} may be thrown if
- {{AudioBuffer/copyToChannel()/source}} cannot be
- copied to the buffer.
-
- Let buffer
be the {{AudioBuffer}} with
- \(N_b\) frames, let \(N_f\) be the number of elements in the
- {{AudioBuffer/copyToChannel()/source}} array, and \(k\) be the value of
- {{AudioBuffer/copyToChannel()/bufferOffset}}. Then the number of frames copied
- from {{AudioBuffer/copyToChannel()/source}} to the buffer
is
- \(\max(0, \min(N_b - k, N_f))\). If this is less than \(N_f\), then the
- remaining elements of buffer
are not
- modified.
-
-
- source: The array the channel data will be copied from.
- channelNumber: The index of the channel to copy the data to. If channelNumber
is greater or equal than the number of channels of the {{AudioBuffer}}, an {{IndexSizeError}} MUST be thrown.
- bufferOffset: An optional offset, defaulting to 0. Data from the {{AudioBuffer/copyToChannel()/source}} is copied to the {{AudioBuffer}} starting at this offset.
-
-
-
- channel: This parameter is an index representing the particular channel to get data for. An index value of 0 represents the first channel. This index value MUST be less than {{[[number of channels]]}} or an {{IndexSizeError}} exception MUST be thrown.
-
-
- destination
array.
+
+ Let buffer
be the {{AudioBuffer}} with
+ \(N_b\) frames, let \(N_f\) be the number of elements in the
+ {{AudioBuffer/copyFromChannel()/destination}} array, and \(k\) be the value of
+ {{AudioBuffer/copyFromChannel()/bufferOffset}}. Then the number of frames copied
+ from buffer
to {{AudioBuffer/copyFromChannel()/destination}} is
+ \(\max(0, \min(N_b - k, N_f))\). If this is less than \(N_f\), then the
+ remaining elements of {{AudioBuffer/copyFromChannel()/destination}} are not
+ modified.
+
+
+ destination: The array the channel data will be copied to.
+ channelNumber: The index of the channel to copy the data from. If channelNumber
is greater than or equal to the number of channels of the {{AudioBuffer}}, an {{IndexSizeError}} MUST be thrown.
+ bufferOffset: An optional offset, defaulting to 0. Data from the {{AudioBuffer}} starting at this offset is copied to the {{AudioBuffer/copyFromChannel()/destination}}.
+
+
+ void
+ source
array.
+
+ An {{UnknownError}} may be thrown if
+ {{AudioBuffer/copyToChannel()/source}} cannot be
+ copied to the buffer.
+
+ Let buffer
be the {{AudioBuffer}} with
+ \(N_b\) frames, let \(N_f\) be the number of elements in the
+ {{AudioBuffer/copyToChannel()/source}} array, and \(k\) be the value of
+ {{AudioBuffer/copyToChannel()/bufferOffset}}. Then the number of frames copied
+ from {{AudioBuffer/copyToChannel()/source}} to the buffer
is
+ \(\max(0, \min(N_b - k, N_f))\). If this is less than \(N_f\), then the
+ remaining elements of buffer
are not
+ modified.
+
+
+ source: The array the channel data will be copied from.
+ channelNumber: The index of the channel to copy the data to. If channelNumber
is greater than or equal to the number of channels of the {{AudioBuffer}}, an {{IndexSizeError}} MUST be thrown.
+ bufferOffset: An optional offset, defaulting to 0. Data from the {{AudioBuffer/copyToChannel()/source}} is copied to the {{AudioBuffer}} starting at this offset.
+
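The frame-count rule shared by copyFromChannel() and copyToChannel() is the expression \(\max(0, \min(N_b - k, N_f))\) above; as a runnable sketch (the helper name is illustrative):

```javascript
// Frames actually copied by copyFromChannel()/copyToChannel():
// Nb = frames in the AudioBuffer, Nf = elements in the source or
// destination array, k = bufferOffset. Elements outside the copied
// range are left unmodified.
function framesCopied(Nb, Nf, k) {
  return Math.max(0, Math.min(Nb - k, Nf));
}

framesCopied(1024, 512, 0);    // 512: the whole array is covered
framesCopied(1024, 512, 900);  // 124: only the tail of the buffer remains
framesCopied(1024, 512, 2000); // 0: an offset past the end copies nothing
```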
+
+ void
+
+ channel: This parameter is an index representing the particular channel to get data for. An index value of 0 represents the first channel. This index value MUST be less than {{[[number of channels]]}} or an {{IndexSizeError}} exception MUST be thrown.
+
+
+ IsDetachedBuffer
+ on any of the {{AudioBuffer}}'s {{ArrayBuffer}}s return
+ `true`, abort these steps, and return a zero-length
+ channel data buffer to the invoker.
- 2. [=ArrayBuffer/Detach=] all {{ArrayBuffer}}s for arrays previously returned
- by {{AudioBuffer/getChannelData()}} on this {{AudioBuffer}}.
+ 2. Detach
+ all {{ArrayBuffer}}s for arrays previously returned by
+ {{AudioBuffer/getChannelData()}} on this {{AudioBuffer}}.
- Note: Because {{AudioBuffer}} can only be created via
- {{BaseAudioContext/createBuffer()}} or via the {{AudioBuffer}} constructor, this
- cannot throw.
+ Note: Because {{AudioBuffer}} can only be created via
+ {{BaseAudioContext/createBuffer()}} or via the {{AudioBuffer}} constructor, this
+ cannot throw.
- 3. Retain the underlying {{[[internal data]]}} from those
- {{ArrayBuffer}}s and return references to them to the
- invoker.
+ 3. Retain the underlying {{[[internal data]]}} from those
+ {{ArrayBuffer}}s and return references to them to the
+ invoker.
- 4. Attach {{ArrayBuffer}}s containing copies of the data to
- the {{AudioBuffer}}, to be returned by the next call to
- {{AudioBuffer/getChannelData()}}.
+ 4. Attach {{ArrayBuffer}}s containing copies of the data to
+ the {{AudioBuffer}}, to be returned by the next call to
+ {{AudioBuffer/getChannelData()}}.
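A minimal simulation (not the real API) of the copy-on-acquire behavior in the steps above: an array returned by getChannelData() writes through to the shared storage until the contents are acquired, after which later calls observe an equal but distinct copy:

```javascript
// FakeBuffer stands in for an AudioBuffer's channel storage; the real
// detach/acquire machinery operates on ArrayBuffers.
class FakeBuffer {
  constructor(length) { this.data = new Float32Array(length); }
  getChannelData() { return this.data; }
  acquireContents() {
    const acquired = this.data;   // retain the underlying data
    this.data = acquired.slice(); // future calls get a copy (step 4)
    return acquired;
  }
}

const buf = new FakeBuffer(4);
const before = buf.getChannelData();
before[0] = 1;            // visible through the shared storage
buf.acquireContents();    // e.g. when rendering starts using the buffer
const after = buf.getChannelData(); // distinct array, equal contents
```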
dictionary AudioBufferOptions { - unsigned long numberOfChannels = 1; - required unsigned long length; - required float sampleRate; + unsigned long numberOfChannels = 1; + required unsigned long length; + required float sampleRate; };@@ -3213,17 +2554,17 @@ Dictionary {{AudioBufferOptions}} Members The allowed values for the members of this dictionary are constrained. See {{BaseAudioContext/createBuffer()}}.
[Exposed=Window] interface AudioNode : EventTarget { - AudioNode connect (AudioNode destinationNode, - optional unsigned long output = 0, - optional unsigned long input = 0); - undefined connect (AudioParam destinationParam, optional unsigned long output = 0); - undefined disconnect (); - undefined disconnect (unsigned long output); - undefined disconnect (AudioNode destinationNode); - undefined disconnect (AudioNode destinationNode, unsigned long output); - undefined disconnect (AudioNode destinationNode, - unsigned long output, - unsigned long input); - undefined disconnect (AudioParam destinationParam); - undefined disconnect (AudioParam destinationParam, unsigned long output); - readonly attribute BaseAudioContext context; - readonly attribute unsigned long numberOfInputs; - readonly attribute unsigned long numberOfOutputs; - attribute unsigned long channelCount; - attribute ChannelCountMode channelCountMode; - attribute ChannelInterpretation channelInterpretation; + AudioNode connect (AudioNode destinationNode, + optional unsigned long output = 0, + optional unsigned long input = 0); + void connect (AudioParam destinationParam, optional unsigned long output = 0); + void disconnect (); + void disconnect (unsigned long output); + void disconnect (AudioNode destinationNode); + void disconnect (AudioNode destinationNode, unsigned long output); + void disconnect (AudioNode destinationNode, + unsigned long output, + unsigned long input); + void disconnect (AudioParam destinationParam); + void disconnect (AudioParam destinationParam, unsigned long output); + readonly attribute BaseAudioContext context; + readonly attribute unsigned long numberOfInputs; + readonly attribute unsigned long numberOfOutputs; + attribute unsigned long channelCount; + attribute ChannelCountMode channelCountMode; + attribute ChannelInterpretation channelInterpretation; };@@ -3321,53 +2662,53 @@ method, the associated
BaseAudioContext
of the
is called on.
enum ChannelCountMode { - "max", - "clamped-max", - "explicit" + "max", + "clamped-max", + "explicit" };@@ -3397,57 +2738,57 @@ mixing is to be done.
Enum value | Description | -||
---|---|---|---|
"max" - | - computedNumberOfChannels is the maximum of the number of - channels of all connections to an input. In this mode - {{AudioNode/channelCount}} is ignored. - | ||
"clamped-max" - | - computedNumberOfChannels is determined as for "{{ChannelCountMode/max}}" - and then clamped to a maximum value of the given - {{AudioNode/channelCount}}. - | ||
"explicit" - |
- computedNumberOfChannels is the exact value as specified
- by the {{AudioNode/channelCount}}.
+
+
+ Enumeration description
+ | | |
"max" + | + computedNumberOfChannels is the maximum of the number of + channels of all connections to an input. In this mode + {{AudioNode/channelCount}} is ignored. + | ||
"clamped-max" + | + computedNumberOfChannels is determined as for "{{ChannelCountMode/max}}" + and then clamped to a maximum value of the given + {{AudioNode/channelCount}}. + | ||
"explicit" + | + computedNumberOfChannels is the exact value as specified + by the {{AudioNode/channelCount}}. |
enum ChannelInterpretation { - "speakers", - "discrete" + "speakers", + "discrete" };
Enum value | Description | -||
---|---|---|---|
"speakers" - | - use up-mix equations or down-mix equations. In cases where the number of - channels do not match any of these basic speaker layouts, revert - to "{{ChannelInterpretation/discrete}}". - | ||
"discrete" - |
- Up-mix by filling channels until they run out then zero out
- remaining channels. Down-mix by filling as many channels as
- possible, then dropping remaining channels.
+
+
+ Enumeration description
+ | | |
"speakers" + | + use up-mix equations or down-mix equations. In cases where the number of + channels do not match any of these basic speaker layouts, revert + to "{{ChannelInterpretation/discrete}}". + | ||
"discrete" + | + Up-mix by filling channels until they run out then zero out + remaining channels. Down-mix by filling as many channels as + possible, then dropping remaining channels. |
readyState
attribute equal to "live"
, a
- muted
attribute equal to false
and an
- enabled
attribute equal to true
.
+ [=actively processing=] when the associated
+ {{MediaStreamTrack}} object has a
+ readyState
attribute equal to "live"
, a
+ muted
attribute equal to false
and an
+ enabled
attribute equal to true
.
- A {{DelayNode}} in a cycle is [=actively processing=] only when the absolute value
- of any output sample for the current [=render quantum=] is greater than or equal
- to \( 2^{-126} \).
+ of any output sample for the current [=render quantum=] is greater than or equal
+ to \( 2^{-126} \).
- A {{ScriptProcessorNode}} is [=actively processing=] when its input or output is
- connected.
+ connected.
- An {{AudioWorkletNode}} is [=actively processing=] when its
- {{AudioWorkletProcessor}}'s {{[[callable process]]}} returns true
- and either its [=active source=] flag is true
or any
- {{AudioNode}} connected to one of its inputs is [=actively processing=].
+ {{AudioWorkletProcessor}}'s {{[[callable process]]}} returns true
+ and either its [=active source=] flag is true
or any
+ {{AudioNode}} connected to one of its inputs is [=actively processing=].
- All other {{AudioNode}}s start [=actively processing=] when any
- {{AudioNode}} connected to one of its inputs is [=actively processing=], and
- stops [=actively processing=] when the input that was received from other
- [=actively processing=] {{AudioNode}} no longer affects the output.
+ {{AudioNode}} connected to one of its inputs is [=actively processing=], and
+ stops [=actively processing=] when the input that was received from other
+ [=actively processing=] {{AudioNode}} no longer affects the output.
Note: This takes into account {{AudioNode}}s that have a [=tail-time=].
@@ -3501,400 +2842,400 @@ silence.
Attributes- nodeA.connect(nodeB); - nodeA.connect(nodeB); -- - will have the same effect as - -
- nodeA.connect(nodeB); --
destination
- {{AudioNode}} object.
-
- - destinationNode: The- -destination
parameter is the {{AudioNode}} to connect to. If thedestination
parameter is an {{AudioNode}} that has been created using another {{AudioContext}}, an {{InvalidAccessError}} MUST be thrown. That is, {{AudioNode}}s cannot be shared between {{AudioContext}}s. Multiple {{AudioNode}}s can be connected to the same {{AudioNode}}, this is described in [[#channel-up-mixing-and-down-mixing|Channel Upmixing and down mixing]] section. - output: Theoutput
parameter is an index describing which output of the {{AudioNode}} from which to connect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. It is possible to connect an {{AudioNode}} output to more than one input with multiple calls to connect(). Thus, "fan-out" is supported. - input: Theinput
parameter is an index describing which input of the destination {{AudioNode}} to connect to. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. It is possible to connect an {{AudioNode}} to another {{AudioNode}} which creates a cycle: an {{AudioNode}} may connect to another {{AudioNode}}, which in turn connects back to the input or {{AudioParam}} of the first {{AudioNode}}. -
value
the
- {{AudioParam}} would normally have without any
- audio connections), including any timeline changes scheduled
- for the parameter.
-
- The down-mixing to mono is equivalent to the down-mixing for an
- {{AudioNode}} with {{AudioNode/channelCount}} = 1,
- {{AudioNode/channelCountMode}} = "{{ChannelCountMode/explicit}}", and
- {{AudioNode/channelInterpretation}} = "{{ChannelInterpretation/speakers}}".
-
- There can only be one connection between a given output of one
- specific node and a specific {{AudioParam}}.
- Multiple connections with the same termini are ignored.
-
- - nodeA.connect(param); - nodeA.connect(param); -- - will have the same effect as - -
- nodeA.connect(param); --
- destinationParam: The- -destination
parameter is the {{AudioParam}} to connect to. This method does not return thedestination
{{AudioParam}} object. If {{AudioNode/connect(destinationParam, output)/destinationParam}} belongs to an {{AudioNode}} that belongs to a {{BaseAudioContext}} that is different from the {{BaseAudioContext}} that has created the {{AudioNode}} on which this method was called, an {{InvalidAccessError}} MUST be thrown. - output: Theoutput
parameter is an index describing which output of the {{AudioNode}} from which to connect. If theparameter
is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. -
- output: This parameter is an index describing which output of the {{AudioNode}} to disconnect. It disconnects all outgoing connections from the given output. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
-
-
- - destinationNode: The-destinationNode
parameter is the {{AudioNode}} to disconnect. It disconnects all outgoing connections to the givendestinationNode
. If there is no connection to thedestinationNode
, an {{InvalidAccessError}} exception MUST be thrown. -
- destinationNode: The- -destinationNode
parameter is the {{AudioNode}} to disconnect. If there is no connection to thedestinationNode
from the given output, an {{InvalidAccessError}} exception MUST be thrown. - output: Theoutput
parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. -
- destinationNode: The- -destinationNode
parameter is the {{AudioNode}} to disconnect. If there is no connection to thedestinationNode
from the given output to the given input, an {{InvalidAccessError}} exception MUST be thrown. - output: Theoutput
parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. - input: Theinput
parameter is an index describing which input of the destination {{AudioNode}} to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. -
- destinationParam: The-destinationParam
parameter is the {{AudioParam}} to disconnect. If there is no connection to thedestinationParam
, an {{InvalidAccessError}} exception MUST be thrown. -
- destinationParam: The-destinationParam
parameter is the {{AudioParam}} to disconnect. If there is no connection to thedestinationParam
, an {{InvalidAccessError}} exception MUST be thrown. - output: Theoutput
parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If theparameter
is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. -
+ nodeA.connect(nodeB); + nodeA.connect(nodeB); ++ + will have the same effect as + +
+ nodeA.connect(nodeB); ++
destination
+ {{AudioNode}} object.
+
+ + destinationNode: The+ +destination
parameter is the {{AudioNode}} to connect to. If thedestination
parameter is an {{AudioNode}} that has been created using another {{AudioContext}}, an {{InvalidAccessError}} MUST be thrown. That is, {{AudioNode}}s cannot be shared between {{AudioContext}}s. Multiple {{AudioNode}}s can be connected to the same {{AudioNode}}, this is described in [[#channel-up-mixing-and-down-mixing|Channel Upmixing and down mixing]] section. + output: Theoutput
parameter is an index describing which output of the {{AudioNode}} from which to connect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. It is possible to connect an {{AudioNode}} output to more than one input with multiple calls to connect(). Thus, "fan-out" is supported. + input: Theinput
parameter is an index describing which input of the destination {{AudioNode}} to connect to. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. It is possible to connect an {{AudioNode}} to another {{AudioNode}} which creates a cycle: an {{AudioNode}} may connect to another {{AudioNode}}, which in turn connects back to the input or {{AudioParam}} of the first {{AudioNode}}. +
value
the
+ {{AudioParam}} would normally have without any
+ audio connections), including any timeline changes scheduled
+ for the parameter.
+
+ The down-mixing to mono is equivalent to the down-mixing for an
+ {{AudioNode}} with {{AudioNode/channelCount}} = 1,
+ {{AudioNode/channelCountMode}} = "{{ChannelCountMode/explicit}}", and
+ {{AudioNode/channelInterpretation}} = "{{ChannelInterpretation/speakers}}".
+
+ There can only be one connection between a given output of one
+ specific node and a specific {{AudioParam}}.
+ Multiple connections with the same termini are ignored.
+
+ + nodeA.connect(param); + nodeA.connect(param); ++ + will have the same effect as + +
+ nodeA.connect(param); ++
+ destinationParam: The+ +destination
parameter is the {{AudioParam}} to connect to. This method does not return thedestination
{{AudioParam}} object. If {{AudioNode/connect(destinationParam, output)/destinationParam}} belongs to an {{AudioNode}} that belongs to a {{BaseAudioContext}} that is different from the {{BaseAudioContext}} that has created the {{AudioNode}} on which this method was called, an {{InvalidAccessError}} MUST be thrown. + output: Theoutput
parameter is an index describing which output of the {{AudioNode}} from which to connect. If theparameter
is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. +
void
+ void
+
+ output: This parameter is an index describing which output of the {{AudioNode}} to disconnect. It disconnects all outgoing connections from the given output. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown.
+
+
+ void
+ destinationNode: The destinationNode
parameter is the {{AudioNode}} to disconnect. It disconnects all outgoing connections to the given destinationNode
. If there is no connection to the destinationNode
, an {{InvalidAccessError}} exception MUST be thrown. +
void
+ destinationNode: The destinationNode
parameter is the {{AudioNode}} to disconnect. If there is no connection to the destinationNode
from the given output, an {{InvalidAccessError}} exception MUST be thrown. + output: The output
parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. +
void
+ destinationNode: The destinationNode
parameter is the {{AudioNode}} to disconnect. If there is no connection to the destinationNode
from the given input to the given output, an {{InvalidAccessError}} exception MUST be thrown. + output: The output
parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. + input: The input
parameter is an index describing which input of the destination {{AudioNode}} to disconnect. If this parameter is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. +
void
+ destinationParam: The destinationParam
parameter is the {{AudioParam}} to disconnect. If there is no connection to the destinationParam
, an {{InvalidAccessError}} exception MUST be thrown. +
void
+ destinationParam: The destinationParam
parameter is the {{AudioParam}} to disconnect. If there is no connection to the destinationParam
, an {{InvalidAccessError}} exception MUST be thrown. + output: The output
parameter is an index describing which output of the {{AudioNode}} from which to disconnect. If the parameter
is out-of-bounds, an {{IndexSizeError}} exception MUST be thrown. +
void
+ dictionary AudioNodeOptions { - unsigned long channelCount; - ChannelCountMode channelCountMode; - ChannelInterpretation channelInterpretation; + unsigned long channelCount; + ChannelCountMode channelCountMode; + ChannelInterpretation channelInterpretation; };@@ -3913,14 +3254,14 @@ dictionary AudioNodeOptions { Dictionary {{AudioNodeOptions}} Members
enum AutomationRate { - "a-rate", - "k-rate" + "a-rate", + "k-rate" };
Enum value | Description | - -
---|---|
- "a-rate" - | -- This {{AudioParam}} is set for [=a-rate=] processing. - | -
- "k-rate" - | -- This {{AudioParam}} is set for [=k-rate=] processing. - | -
+ Enumeration description + | +|
+ "a-rate" + | ++ This {{AudioParam}} is set for [=a-rate=] processing. + | +
+ "k-rate" + | ++ This {{AudioParam}} is set for [=k-rate=] processing. + | +
[Exposed=Window] interface AudioParam { - attribute float value; - attribute AutomationRate automationRate; - readonly attribute float defaultValue; - readonly attribute float minValue; - readonly attribute float maxValue; - AudioParam setValueAtTime (float value, double startTime); - AudioParam linearRampToValueAtTime (float value, double endTime); - AudioParam exponentialRampToValueAtTime (float value, double endTime); - AudioParam setTargetAtTime (float target, double startTime, float timeConstant); - AudioParam setValueCurveAtTime (sequence<float> values, - double startTime, - double duration); - AudioParam cancelScheduledValues (double cancelTime); - AudioParam cancelAndHoldAtTime (double cancelTime); + attribute float value; + attribute AutomationRate automationRate; + readonly attribute float defaultValue; + readonly attribute float minValue; + readonly attribute float maxValue; + AudioParam setValueAtTime (float value, double startTime); + AudioParam linearRampToValueAtTime (float value, double endTime); + AudioParam exponentialRampToValueAtTime (float value, double endTime); + AudioParam setTargetAtTime (float target, double startTime, float timeConstant); + AudioParam setValueCurveAtTime (sequence<float> values, + double startTime, + double duration); + AudioParam cancelScheduledValues (double cancelTime); + AudioParam cancelAndHoldAtTime (double cancelTime); };@@ -4105,428 +3446,428 @@ interface AudioParam { Attributes
value
attribute.
-
- : maxValue
- ::
- The nominal maximum value that the parameter can take. Together
- with minValue
, this forms the nominal range
- for this parameter.
-
- : minValue
- ::
- The nominal minimum value that the parameter can take. Together
- with maxValue
, this forms the nominal range
- for this parameter.
-
- : value
- ::
- The parameter's floating-point value. This attribute is
- initialized to the defaultValue
.
-
- Getting this attribute returns the contents of the
- {{[[current value]]}} slot. See
- [[#computation-of-value]] for the algorithm for the
- value that is returned.
-
- Setting this attribute has the effect of assigning the
- requested value to the {{[[current value]]}} slot, and
- calling the setValueAtTime()
- method with the current {{AudioContext}}'s
- currentTime
and {{[[current value]]}}. Any
- exceptions that would be thrown by
- setValueAtTime()
will also be thrown by setting
- this attribute.
+ : automationRate
+ ::
+ The automation rate for the {{AudioParam}}. The
+ default value depends on the actual {{AudioParam}};
+ see the description of each individual {{AudioParam}} for the
+ default value.
+
+ Some nodes have additional automation rate constraints as follows:
+
+ : {{AudioBufferSourceNode}}
+ ::
+ The {{AudioParam}}s
+ {{AudioBufferSourceNode/playbackRate}} and
+ {{AudioBufferSourceNode/detune}} MUST be
+ "{{AutomationRate/k-rate}}". An {{InvalidStateError}}
+ must be thrown if the rate is changed to
+ "{{AutomationRate/a-rate}}".
+
+ : {{DynamicsCompressorNode}}
+ ::
+ The {{AudioParam}}s
+ {{DynamicsCompressorNode/threshold}},
+ {{DynamicsCompressorNode/knee}},
+ {{DynamicsCompressorNode/ratio}},
+ {{DynamicsCompressorNode/attack}}, and
+ {{DynamicsCompressorNode/release}}
+ MUST be "{{AutomationRate/k-rate}}". An {{InvalidStateError}}
+ must be thrown if the rate is changed to
+ "{{AutomationRate/a-rate}}".
+
+ : {{PannerNode}}
+ ::
+ If the {{PannerNode/panningModel}} is
+ "{{PanningModelType/HRTF}}", the setting of
+ the {{AudioParam/automationRate}} for any
+ {{AudioParam}} of the {{PannerNode}} is ignored.
+ Likewise, the setting of the
+ {{AudioParam/automationRate}} for any {{AudioParam}}
+ of the {{AudioListener}} is ignored. In this
+ case, the {{AudioParam}} behaves as if the
+ {{AudioParam/automationRate}} were set to
+ "{{AutomationRate/k-rate}}".
+
+ : defaultValue
+ ::
+ Initial value for the value
attribute.
+
+ : maxValue
+ ::
+ The nominal maximum value that the parameter can take. Together
+ with minValue
, this forms the nominal range
+ for this parameter.
+
+ : minValue
+ ::
+ The nominal minimum value that the parameter can take. Together
+ with maxValue
, this forms the nominal range
+ for this parameter.
+
+ : value
+ ::
+ The parameter's floating-point value. This attribute is
+ initialized to the defaultValue
.
+
+ Getting this attribute returns the contents of the
+ {{[[current value]]}} slot. See
+ [[#computation-of-value]] for the algorithm for the
+ value that is returned.
+
+ Setting this attribute has the effect of assigning the
+ requested value to the {{[[current value]]}} slot, and
+ calling the setValueAtTime()
+ method with the current {{AudioContext}}'s
+ currentTime
and {{[[current value]]}}. Any
+ exceptions that would be thrown by
+ setValueAtTime()
will also be thrown by setting
+ this attribute.
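Informative sketch of the setter semantics above, using a hypothetical stand-in class (not the real {{AudioParam}} interface): assigning the value attribute writes the [[current value]] slot and schedules an equivalent setValueAtTime event at the context's currentTime, with any exception from setValueAtTime() propagating to the assignment:

```javascript
// Minimal simulation (illustrative, not the real interface) of the
// value attribute setter described above.
class ParamSketch {
  constructor(currentTimeFn) {
    this.currentValue = 0;   // stands in for the [[current value]] slot
    this.events = [];
    this.now = currentTimeFn;
  }
  setValueAtTime(value, startTime) {
    if (startTime < 0 || !Number.isFinite(startTime)) {
      throw new RangeError("startTime must be a non-negative finite number");
    }
    this.events.push({ type: "setValue", value, time: startTime });
    return this;
  }
  set value(v) {
    this.currentValue = v;               // assign [[current value]]
    this.setValueAtTime(v, this.now());  // same exceptions propagate
  }
  get value() {
    return this.currentValue;            // read [[current value]]
  }
}
```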
setTarget
event,
- 1. Implicitly insert a setValueAtTime
- event at time \(t_c\) with the value that the
- setTarget
would have at time
- \(t_c\).
-
-
- 2. Go to step 5.
-
- 2. If \(E_1\) is a setValueCurve
with a start
- time of \(t_3\) and a duration of \(d\)
-
- 1. If \(t_c \gt t_3 + d\), go to step 5.
-
- 2. Otherwise,
- 1. Effectively replace this event with a
- setValueCurve
event with a start time
- of \(t_3\) and a new duration of \(t_c-t_3\).
- However, this is not a true replacement; this
- automation MUST take care to produce the same
- output as the original, and not one computed using
- a different duration. (That would cause sampling of
- the value curve in a slightly different way,
- producing different results.)
-
-
- 2. Go to step 5.
-
- 5. Remove all events with time greater than \(t_c\).
-
- If no events are added, then the automation value after
- {{AudioParam/cancelAndHoldAtTime()}} is the constant value that
- the original timeline would have had at time \(t_c\).
-
- cancelTime: The time after which any previously scheduled parameter changes will be cancelled. It is a time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. A {{RangeError}} exception MUST be thrown if cancelTime
is negative. If {{AudioParam/cancelAndHoldAtTime()/cancelTime}} is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-
-
- cancelTime: The time after which any previously scheduled parameter changes will be cancelled. It is a time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. A {{RangeError}} exception MUST be thrown if cancelTime
is negative. If cancelTime
is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}. -
- $$ - v(t) = V_0 \left(\frac{V_1}{V_0}\right)^\frac{t - T_0}{T_1 - T_0} - $$ -- - where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is - the {{AudioParam/exponentialRampToValueAtTime()/value!!argument}} parameter passed into this method. If - \(V_0\) and \(V_1\) have opposite signs or if \(V_0\) is zero, - then \(v(t) = V_0\) for \(T_0 \le t \lt T_1\). - - This also implies an exponential ramp to 0 is not possible. A - good approximation can be achieved using {{AudioParam/setTargetAtTime()}} with an appropriately chosen - time constant. - - If there are no more events after this ExponentialRampToValue - event then for \(t \geq T_1\), \(v(t) = V_1\). - - If there is no event preceding this event, the exponential ramp - behaves as if {{AudioParam/setValueAtTime()|setValueAtTime(value, currentTime)}} - were called where
value
is the current value of
- the attribute and currentTime
is the context
- {{BaseAudioContext/currentTime}} at the time
- {{AudioParam/exponentialRampToValueAtTime()}} is called.
-
- If the preceding event is a SetTarget
event, \(T_0\)
- and \(V_0\) are chosen from the current time and value of
- SetTarget
automation. That is, if the
- SetTarget
event has not started, \(T_0\) is the start
- time of the event, and \(V_0\) is the value just before the
- SetTarget
event starts. In this case, the
- ExponentialRampToValue
event effectively replaces the
- SetTarget
event. If the SetTarget
event has
- already started, \(T_0\) is the current context time, and
- \(V_0\) is the current SetTarget
automation value at
- time \(T_0\). In both cases, the automation curve is
- continuous.
-
-
- value: The value the parameter will exponentially ramp to at the given time. A {{RangeError}} exception MUST be thrown if this value is equal to 0.
- endTime: The time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute where the exponential ramp ends. A {{RangeError}} exception MUST be thrown if endTime
is negative or is not a finite number. If endTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-
-
- - $$ - v(t) = V_0 + (V_1 - V_0) \frac{t - T_0}{T_1 - T_0} - $$ -- - where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is - the {{AudioParam/linearRampToValueAtTime()/value!!argument}} parameter passed into this method. - - If there are no more events after this LinearRampToValue event - then for \(t \geq T_1\), \(v(t) = V_1\). - - If there is no event preceding this event, the linear ramp - behaves as if {{AudioParam/setValueAtTime()|setValueAtTime(value, currentTime)}} - were called where
value
is the current value of
- the attribute and currentTime
is the context
- {{BaseAudioContext/currentTime}} at the time
- {{AudioParam/linearRampToValueAtTime()}} is called.
-
- If the preceding event is a SetTarget
event, \(T_0\)
- and \(V_0\) are chosen from the current time and value of
- SetTarget
automation. That is, if the
- SetTarget
event has not started, \(T_0\) is the start
- time of the event, and \(V_0\) is the value just before the
- SetTarget
event starts. In this case, the
- LinearRampToValue
event effectively replaces the
- SetTarget
event. If the SetTarget
event has
- already started, \(T_0\) is the current context time, and
- \(V_0\) is the current SetTarget
automation value at
- time \(T_0\). In both cases, the automation curve is
- continuous.
-
-
- value: The value the parameter will linearly ramp to at the given time.
- endTime: The time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the automation ends. A {{RangeError}} exception MUST be thrown if endTime
is negative or is not a finite number. If endTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-
-
- - $$ - v(t) = V_1 + (V_0 - V_1)\, e^{-\left(\frac{t - T_0}{\tau}\right)} - $$ -- - where \(V_0\) is the initial value (the {{[[current value]]}} - attribute) at \(T_0\) (the {{AudioParam/setTargetAtTime()/startTime!!argument}} parameter), - \(V_1\) is equal to the {{AudioParam/setTargetAtTime()/target!!argument}} parameter, and - \(\tau\) is the {{AudioParam/setTargetAtTime()/timeConstant!!argument}} parameter. - - If a
LinearRampToValue
or
- ExponentialRampToValue
event follows this event, the
- behavior is described in {{AudioParam/linearRampToValueAtTime()}} or
- {{AudioParam/exponentialRampToValueAtTime()}},
- respectively. For all other events, the SetTarget
- event ends at the time of the next event.
-
- target: The value the parameter will start changing to at the given time. - startTime: The time at which the exponential approach will begin, in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. A {{RangeError}} exception MUST be thrown if start
is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}. - timeConstant: The time-constant value of first-order filter (exponential) approach to the target value. The larger this value is, the slower the transition will be. The value MUST be non-negative or a {{RangeError}} exception MUST be thrown. If timeConstant
is zero, the output value jumps immediately to the final value. More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value \(1 - 1/e\) (around 63.2%) given a step input response (transition from 0 to 1 value). -
SetValue
event,
- then for \(t \geq T_0\), \(v(t) = V\), where \(T_0\) is the
- {{AudioParam/setValueAtTime()/startTime!!argument}} parameter and \(V\) is the
- {{AudioParam/setValueAtTime()/value!!argument}} parameter. In other words, the value will
- remain constant.
-
- If the next event (having time \(T_1\)) after this
- SetValue
event is not of type
- LinearRampToValue
or ExponentialRampToValue
,
- then, for \(T_0 \leq t < T_1\):
-
- - $$ - v(t) = V - $$ -- - In other words, the value will remain constant during this time - interval, allowing the creation of "step" functions. - - If the next event after this
SetValue
event is of type
- LinearRampToValue
or ExponentialRampToValue
- then please see {{AudioParam/linearRampToValueAtTime()}} or
- {{AudioParam/exponentialRampToValueAtTime()}},
- respectively.
-
-
- value: The value the parameter will change to at the given time.
- startTime: The time in the same time coordinate system as the {{BaseAudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the parameter changes to the given value. A {{RangeError}} exception MUST be thrown if startTime
is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
-
-
- - $$ - \begin{align*} k &= \left\lfloor \frac{N - 1}{T_D}(t-T_0) \right\rfloor \\ - \end{align*} - $$ -- - Then \(v(t)\) is computed by linearly interpolating between - \(V[k]\) and \(V[k+1]\), - - After the end of the curve time interval (\(t \ge T_0 + T_D\)), - the value will remain constant at the final curve value, until - there is another automation event (if any). - - An implicit call to {{AudioParam/setValueAtTime()}} is made at time \(T_0 + - T_D\) with value \(V\[N-1]\) so that following automations will - start from the end of the {{AudioParam/setValueCurveAtTime()}} event. - -
- values: A sequence of float values representing a parameter value curve. These values will apply starting at the given time and lasting for the given duration. When this method is called, an internal copy of the curve is created for automation purposes. Subsequent modifications of the contents of the passed-in array therefore have no effect on the {{AudioParam}}. An {{InvalidStateError}} MUST be thrown if this attribute is a sequence<float>
object that has a length less than 2. - startTime: The start time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the value curve will be applied. A {{RangeError}} exception MUST be thrown if startTime
is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}. - duration: The amount of time in seconds (after the startTime
parameter) where values will be calculated according to the values
parameter. A {{RangeError}} exception MUST be thrown if duration
is not strictly positive or is not a finite number. -
setTarget
event,
+ 1. Implicitly insert a setValueAtTime
+ event at time \(t_c\) with the value that the
+ setTarget
would have at time
+ \(t_c\).
+
+
+ 2. Go to step 5.
+
+ 2. If \(E_1\) is a setValueCurve
with a start
+ time of \(t_3\) and a duration of \(d\)
+
+ 1. If \(t_c \gt t_3 + d\), go to step 5.
+
+ 2. Otherwise,
+ 1. Effectively replace this event with a
+ setValueCurve
event with a start time
+ of \(t_3\) and a new duration of \(t_c-t_3\).
+ However, this is not a true replacement; this
+ automation MUST take care to produce the same
+ output as the original, and not one computed using
+ a different duration. (That would cause sampling of
+ the value curve in a slightly different way,
+ producing different results.)
+
+
+ 2. Go to step 5.
+
+ 5. Remove all events with time greater than \(t_c\).
+
+ If no events are added, then the automation value after
+ {{AudioParam/cancelAndHoldAtTime()}} is the constant value that
+ the original timeline would have had at time \(t_c\).
+
+ cancelTime: The time after which any previously scheduled parameter changes will be cancelled. It is a time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. A {{RangeError}} exception MUST be thrown if cancelTime
is negative or is not a finite number. If {{AudioParam/cancelAndHoldAtTime()/cancelTime}} is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+
+
+ cancelTime: The time after which any previously scheduled parameter changes will be cancelled. It is a time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. A {{RangeError}} exception MUST be thrown if cancelTime
is negative or is not a finite number. If cancelTime
is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}. +
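For example (informative), cancelling inside a pending linear ramp holds the value that the shortened ramp reaches at \(t_c\); the hypothetical helper below just evaluates the ramp at the cancel time:

```javascript
// Illustrative sketch of the "hold" rule for a pending linear ramp:
// cancelling at tc inside [t0, t1] replaces the ramp with a shorter
// one ending at tc, so the held value is the ramp evaluated at tc.
function heldValueForLinearRamp(v0, v1, t0, t1, tc) {
  if (tc <= t0) return v0;  // ramp has not started yet
  if (tc >= t1) return v1;  // ramp already completed
  return v0 + (v1 - v0) * (tc - t0) / (t1 - t0);
}
```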
+ $$ + v(t) = V_0 \left(\frac{V_1}{V_0}\right)^\frac{t - T_0}{T_1 - T_0} + $$ ++ + where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is + the {{AudioParam/exponentialRampToValueAtTime()/value!!argument}} parameter passed into this method. If + \(V_0\) and \(V_1\) have opposite signs or if \(V_0\) is zero, + then \(v(t) = V_0\) for \(T_0 \le t \lt T_1\). + + This also implies an exponential ramp to 0 is not possible. A + good approximation can be achieved using {{AudioParam/setTargetAtTime()}} with an appropriately chosen + time constant. + + If there are no more events after this ExponentialRampToValue + event then for \(t \geq T_1\), \(v(t) = V_1\). + + If there is no event preceding this event, the exponential ramp + behaves as if {{AudioParam/setValueAtTime()|setValueAtTime(value, currentTime)}} + were called where
value
is the current value of
+ the attribute and currentTime
is the context
+ {{BaseAudioContext/currentTime}} at the time
+ {{AudioParam/exponentialRampToValueAtTime()}} is called.
+
+ If the preceding event is a SetTarget
event, \(T_0\)
+ and \(V_0\) are chosen from the current time and value of
+ SetTarget
automation. That is, if the
+ SetTarget
event has not started, \(T_0\) is the start
+ time of the event, and \(V_0\) is the value just before the
+ SetTarget
event starts. In this case, the
+ ExponentialRampToValue
event effectively replaces the
+ SetTarget
event. If the SetTarget
event has
+ already started, \(T_0\) is the current context time, and
+ \(V_0\) is the current SetTarget
automation value at
+ time \(T_0\). In both cases, the automation curve is
+ continuous.
+
+
+ value: The value the parameter will exponentially ramp to at the given time. A {{RangeError}} exception MUST be thrown if this value is equal to 0.
+ endTime: The time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute where the exponential ramp ends. A {{RangeError}} exception MUST be thrown if endTime
is negative or is not a finite number. If endTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+
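Informative transcription of the exponential-ramp formula above into script, including the guard that yields \(v(t) = V_0\) when \(V_0\) is zero or the endpoints have opposite signs:

```javascript
// v(t) = V0 * (V1 / V0) ^ ((t - T0) / (T1 - T0)), except when V0 is
// zero or V0 and V1 have opposite signs, where v(t) = V0.
function exponentialRampValue(v0, v1, t0, t1, t) {
  if (v0 === 0 || Math.sign(v0) !== Math.sign(v1)) return v0;
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}
```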
+
+ + $$ + v(t) = V_0 + (V_1 - V_0) \frac{t - T_0}{T_1 - T_0} + $$ ++ + where \(V_0\) is the value at the time \(T_0\) and \(V_1\) is + the {{AudioParam/linearRampToValueAtTime()/value!!argument}} parameter passed into this method. + + If there are no more events after this LinearRampToValue event + then for \(t \geq T_1\), \(v(t) = V_1\). + + If there is no event preceding this event, the linear ramp + behaves as if {{AudioParam/setValueAtTime()|setValueAtTime(value, currentTime)}} + were called where
value
is the current value of
+ the attribute and currentTime
is the context
+ {{BaseAudioContext/currentTime}} at the time
+ {{AudioParam/linearRampToValueAtTime()}} is called.
+
+ If the preceding event is a SetTarget
event, \(T_0\)
+ and \(V_0\) are chosen from the current time and value of
+ SetTarget
automation. That is, if the
+ SetTarget
event has not started, \(T_0\) is the start
+ time of the event, and \(V_0\) is the value just before the
+ SetTarget
event starts. In this case, the
+ LinearRampToValue
event effectively replaces the
+ SetTarget
event. If the SetTarget
event has
+ already started, \(T_0\) is the current context time, and
+ \(V_0\) is the current SetTarget
automation value at
+ time \(T_0\). In both cases, the automation curve is
+ continuous.
+
+
+ value: The value the parameter will linearly ramp to at the given time.
+ endTime: The time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the automation ends. A {{RangeError}} exception MUST be thrown if endTime
is negative or is not a finite number. If endTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+
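Informative transcription of the linear-ramp interpolation formula above:

```javascript
// v(t) = V0 + (V1 - V0) * (t - T0) / (T1 - T0)
function linearRampValue(v0, v1, t0, t1, t) {
  return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
}
```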
+
+ + $$ + v(t) = V_1 + (V_0 - V_1)\, e^{-\left(\frac{t - T_0}{\tau}\right)} + $$ ++ + where \(V_0\) is the initial value (the {{[[current value]]}} + attribute) at \(T_0\) (the {{AudioParam/setTargetAtTime()/startTime!!argument}} parameter), + \(V_1\) is equal to the {{AudioParam/setTargetAtTime()/target!!argument}} parameter, and + \(\tau\) is the {{AudioParam/setTargetAtTime()/timeConstant!!argument}} parameter. + + If a
LinearRampToValue
or
+ ExponentialRampToValue
event follows this event, the
+ behavior is described in {{AudioParam/linearRampToValueAtTime()}} or
+ {{AudioParam/exponentialRampToValueAtTime()}},
+ respectively. For all other events, the SetTarget
+ event ends at the time of the next event.
+
+ target: The value the parameter will start changing to at the given time. + startTime: The time at which the exponential approach will begin, in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. A {{RangeError}} exception MUST be thrown if start
is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}. + timeConstant: The time-constant value of first-order filter (exponential) approach to the target value. The larger this value is, the slower the transition will be. The value MUST be non-negative or a {{RangeError}} exception MUST be thrown. If timeConstant
is zero, the output value jumps immediately to the final value. More precisely, timeConstant is the time it takes a first-order linear continuous time-invariant system to reach the value \(1 - 1/e\) (around 63.2%) given a step input response (transition from 0 to 1 value). +
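Informative transcription of the setTargetAtTime formula, with the zero-timeConstant jump noted in the parameter description handled explicitly:

```javascript
// v(t) = V1 + (V0 - V1) * exp(-(t - T0) / timeConstant); with a
// timeConstant of zero the output jumps immediately to the target.
function setTargetValue(v0, v1, t0, timeConstant, t) {
  if (timeConstant === 0) return v1;
  return v1 + (v0 - v1) * Math.exp(-(t - t0) / timeConstant);
}
```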
SetValue
event,
+ then for \(t \geq T_0\), \(v(t) = V\), where \(T_0\) is the
+ {{AudioParam/setValueAtTime()/startTime!!argument}} parameter and \(V\) is the
+ {{AudioParam/setValueAtTime()/value!!argument}} parameter. In other words, the value will
+ remain constant.
+
+ If the next event (having time \(T_1\)) after this
+ SetValue
event is not of type
+ LinearRampToValue
or ExponentialRampToValue
,
+ then, for \(T_0 \leq t < T_1\):
+
+ + $$ + v(t) = V + $$ ++ + In other words, the value will remain constant during this time + interval, allowing the creation of "step" functions. + + If the next event after this
SetValue
event is of type
+ LinearRampToValue
or ExponentialRampToValue
+ then please see {{AudioParam/linearRampToValueAtTime()}} or
+ {{AudioParam/exponentialRampToValueAtTime()}},
+ respectively.
+
+
+ value: The value the parameter will change to at the given time.
+ startTime: The time in the same time coordinate system as the {{BaseAudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the parameter changes to the given value. A {{RangeError}} exception MUST be thrown if startTime
is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}.
+
+
+ + $$ + \begin{align*} k &= \left\lfloor \frac{N - 1}{T_D}(t-T_0) \right\rfloor \\ + \end{align*} + $$ ++ + Then \(v(t)\) is computed by linearly interpolating between + \(V[k]\) and \(V[k+1]\), + + After the end of the curve time interval (\(t \ge T_0 + T_D\)), + the value will remain constant at the final curve value, until + there is another automation event (if any). + + An implicit call to {{AudioParam/setValueAtTime()}} is made at time \(T_0 + + T_D\) with value \(V\[N-1]\) so that following automations will + start from the end of the {{AudioParam/setValueCurveAtTime()}} event. + +
+ values: A sequence of float values representing a parameter value curve. These values will apply starting at the given time and lasting for the given duration. When this method is called, an internal copy of the curve is created for automation purposes. Subsequent modifications of the contents of the passed-in array therefore have no effect on the {{AudioParam}}. An {{InvalidStateError}} MUST be thrown if this attribute is a sequence<float>
object that has a length less than 2. + startTime: The start time in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute at which the value curve will be applied. A {{RangeError}} exception MUST be thrown if startTime
is negative or is not a finite number. If startTime is less than {{BaseAudioContext/currentTime}}, it is clamped to {{BaseAudioContext/currentTime}}. + duration: The amount of time in seconds (after the startTime
parameter) where values will be calculated according to the values
parameter. A {{RangeError}} exception MUST be thrown if duration
is not strictly positive or is not a finite number. +
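Informative sketch of the curve sampling rule above: compute the index \(k\), linearly interpolate between \(V[k]\) and \(V[k+1]\), and hold \(V[N-1]\) once the curve interval has ended. The helper name is illustrative only:

```javascript
// k = floor((N - 1) * (t - T0) / TD), then linear interpolation
// between V[k] and V[k + 1]; after T0 + TD the value holds at V[N-1].
function valueCurveAt(values, t0, duration, t) {
  const n = values.length;
  if (t <= t0) return values[0];
  if (t >= t0 + duration) return values[n - 1];
  const pos = ((n - 1) * (t - t0)) / duration;
  const k = Math.floor(pos);
  const frac = pos - k;
  return values[k] + (values[k + 1] - values[k]) * frac;
}
```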
NaN
, replace the sum with the {{AudioParam/defaultValue}}.
-
- 3. If this {{AudioParam}} is a compound parameter,
- compute its final value with other {{AudioParam}}s.
-
- 4. Set computedValue to paramComputedValue.
+ 1. paramIntrinsicValue will be calculated at
+ each time, which is either the value set directly to
+ the {{AudioParam/value}} attribute, or, if there are
+ any automation
+ events with times before or at this time, the
+ value as calculated from these events. If automation
+ events are removed from a given time range, then the
+ paramIntrinsicValue value will remain
+ unchanged and stay at its previous value until either
+ the {{AudioParam/value}} attribute is directly set, or
+ automation events are added for the time range.
+
+ 1. Set {{[[current value]]}} to the value of
+ paramIntrinsicValue at the beginning of
+ this render quantum.
+
+ 2. paramComputedValue is the sum of the paramIntrinsicValue
+ value and the value of the input
+ AudioParam buffer. If the sum is NaN
, replace the sum with the {{AudioParam/defaultValue}}.
+
+ 3. If this {{AudioParam}} is a compound parameter,
+ compute its final value with other {{AudioParam}}s.
+
+ 4. Set computedValue to paramComputedValue.
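Step 2 above can be sketched (informative) as a per-sample computation; the helper name is illustrative only:

```javascript
// paramComputedValue = paramIntrinsicValue + the (mono) input
// AudioParam buffer sample; a NaN sum falls back to defaultValue.
function computeParamSample(intrinsicValue, inputSample, defaultValue) {
  const sum = intrinsicValue + inputSample;
  return Number.isNaN(sum) ? defaultValue : sum;
}
```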
- N.p.setValueAtTime(0, 0); - N.p.linearRampToValueAtTime(4, 1); - N.p.linearRampToValueAtTime(0, 2); -- - The initial slope of the curve is 4, until it reaches the maximum - value of 1, at which time, the output is held constant. Finally, - near time 2, the slope of the curve is -4. This is illustrated in - the graph below where the dashed line indicates what would have - happened without clipping, and the solid line indicates the actual - expected behavior of the audioparam due to clipping to the nominal - range. - - + +
- const curveLength = 44100; - const curve = new Float32Array(curveLength); - for (const i = 0; i < curveLength; ++i) - curve[i] = Math.sin(Math.PI * i / curveLength); - - const t0 = 0; - const t1 = 0.1; - const t2 = 0.2; - const t3 = 0.3; - const t4 = 0.325; - const t5 = 0.5; - const t6 = 0.6; - const t7 = 0.7; - const t8 = 1.0; - const timeConstant = 0.1; - - param.setValueAtTime(0.2, t0); - param.setValueAtTime(0.3, t1); - param.setValueAtTime(0.4, t2); - param.linearRampToValueAtTime(1, t3); - param.linearRampToValueAtTime(0.8, t4); - param.setTargetAtTime(.5, t4, timeConstant); - // Compute where the setTargetAtTime will be at time t5 so we can make - // the following exponential start at the right point so there's no - // jump discontinuity. From the spec, we have - // v(t) = 0.5 + (0.8 - 0.5)*exp(-(t-t4)/timeConstant) - // Thus v(t5) = 0.5 + (0.8 - 0.5)*exp(-(t5-t4)/timeConstant) - param.setValueAtTime(0.5 + (0.8 - 0.5)*Math.exp(-(t5 - t4)/timeConstant), t5); - param.exponentialRampToValueAtTime(0.75, t6); - param.exponentialRampToValueAtTime(0.05, t7); - param.setValueCurveAtTime(curve, t7, t8 - t7); + const curveLength = 44100; + const curve = new Float32Array(curveLength); + for (let i = 0; i < curveLength; ++i) + curve[i] = Math.sin(Math.PI * i / curveLength); + + const t0 = 0; + const t1 = 0.1; + const t2 = 0.2; + const t3 = 0.3; + const t4 = 0.325; + const t5 = 0.5; + const t6 = 0.6; + const t7 = 0.7; + const t8 = 1.0; + const timeConstant = 0.1; + + param.setValueAtTime(0.2, t0); + param.setValueAtTime(0.3, t1); + param.setValueAtTime(0.4, t2); + param.linearRampToValueAtTime(1, t3); + param.linearRampToValueAtTime(0.8, t4); + param.setTargetAtTime(.5, t4, timeConstant); + // Compute where the setTargetAtTime will be at time t5 so we can make + // the following exponential start at the right point so there's no + // jump discontinuity. 
From the spec, we have + // v(t) = 0.5 + (0.8 - 0.5)*exp(-(t-t4)/timeConstant) + // Thus v(t5) = 0.5 + (0.8 - 0.5)*exp(-(t5-t4)/timeConstant) + param.setValueAtTime(0.5 + (0.8 - 0.5)*Math.exp(-(t5 - t4)/timeConstant), t5); + param.exponentialRampToValueAtTime(0.75, t6); + param.exponentialRampToValueAtTime(0.05, t7); + param.setValueCurveAtTime(curve, t7, t8 - t7);@@ -4679,7 +4020,7 @@ http://googlechrome.github.io/web-audio-samples/samples/audio/timeline.html --> ██ ██ ██████ ██████ ██ ██ ███████ ████████ ████████ --> -
[Exposed=Window] interface AudioScheduledSourceNode : AudioNode { - attribute EventHandler onended; - undefined start(optional double when = 0); - undefined stop(optional double when = 0); + attribute EventHandler onended; + undefined start(optional double when = 0); + undefined stop(optional double when = 0); };
EventHandler
(described
+ in
+ HTML[[!HTML]]) for the ended event that is
+ dispatched for {{AudioScheduledSourceNode}} node
+ types. When the source node has stopped playing (as determined
+ by the concrete node), an event of type {{Event}}
+ (described in
+ HTML [[!HTML]]) will be dispatched to the event
+ handler.
+
+ For all {{AudioScheduledSourceNode}}s, the
+ onended
event is dispatched when the stop time
+ determined by {{AudioScheduledSourceNode/stop()}} is reached.
+ For an {{AudioBufferSourceNode}}, the event is
+ also dispatched because the {{AudioBufferSourceNode/start(when, offset, duration)/duration}} has been
+ reached or if the entire {{AudioBufferSourceNode/buffer}} has been
+ played.
true
.
-
- 4. Queue a control message to start the
- {{AudioScheduledSourceNode}}, including the parameter
- values in the message.
-
- 5. Send a control message to the associated {{AudioContext}} to
- start running its rendering thread only when
- all the following conditions are met:
- 1. The context's {{[[control thread state]]}} is
- "{{AudioContextState/suspended}}".
- 1. The context is allowed to start.
- 1. {{[[suspended by user]]}} flag is false
.
-
- NOTE: This can allow {{AudioScheduledSourceNode/start()}} to start
- an {{AudioContext}} that is currently allowed to start,
- but has previously been prevented from starting.
- - when: The {{AudioScheduledSourceNode/start(when)/when}} parameter describes at what time (in seconds) the sound should start playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. When the signal emitted by the {{AudioScheduledSourceNode}} depends on the sound's start time, the exact value of- -when
is always used without rounding to the nearest sample frame. If 0 is passed in for this value or if the value is less than {{BaseAudioContext/currentTime}}, then the sound will start playing immediately. A {{RangeError}} exception MUST be thrown ifwhen
is negative. -
stop
is called again after already having been
- called, the last invocation will be the only one applied; stop
- times set by previous calls will not be applied, unless the
- buffer has already stopped prior to any subsequent calls. If
- the buffer has already stopped, further calls to
- stop
will have no effect. If a stop time is
- reached prior to the scheduled start time, the sound will not
- play.
-
- true
,
- an {{InvalidStateError}} exception MUST be thrown.
-
- 2. Check for any errors that must be thrown due to parameter
- constraints described below.
-
- 3. Queue a control message to stop the
- {{AudioScheduledSourceNode}}, including the parameter
- values in the message.
- handleStop()
function in the playback algorithm.
-
- when: The {{AudioScheduledSourceNode/stop(when)/when}} parameter describes at what time (in seconds) the source should stop playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. If 0 is passed in for this value or if the value is less than {{BaseAudioContext/currentTime}}, then the sound will stop playing immediately. A {{RangeError}} exception MUST be thrown if when
is negative.
-
-
- true
.
+
+ 4. Queue a control message to start the
+ {{AudioScheduledSourceNode}}, including the parameter
+ values in the message.
+
+ 5. Send a control message to the associated {{AudioContext}} to
+ start running its rendering thread only when
+ all the following conditions are met:
+ 1. The context's {{[[control thread state]]}} is
+ "{{AudioContextState/suspended}}"
.
+ 2. The context is allowed to start.
+ 3. {{[[suspended by user]]}} flag is false
.
+ + when: The {{AudioScheduledSourceNode/start(when)/when}} parameter describes at what time (in seconds) the sound should start playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. When the signal emitted by the {{AudioScheduledSourceNode}} depends on the sound's start time, the exact value of+ +when
is always used without rounding to the nearest sample frame. If 0 is passed in for this value or if the value is less than {{BaseAudioContext/currentTime}}, then the sound will start playing immediately. A {{RangeError}} exception MUST be thrown ifwhen
is negative. +
stop
is called again after already having been
+ called, the last invocation will be the only one applied; stop
+ times set by previous calls will not be applied, unless the
+ buffer has already stopped prior to any subsequent calls. If
+ the buffer has already stopped, further calls to
+ stop
will have no effect. If a stop time is
+ reached prior to the scheduled start time, the sound will not
+ play.
+
+ true
,
+ an {{InvalidStateError}} exception MUST be thrown.
+
+ 2. Check for any errors that must be thrown due to parameter
+ constraints described below.
+
+ 3. Queue a control message to stop the
+ {{AudioScheduledSourceNode}}, including the parameter
+ values in the message.
+ handleStop()
function in the playback algorithm.
+
+ when: The {{AudioScheduledSourceNode/stop(when)/when}} parameter describes at what time (in seconds) the source should stop playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. If 0 is passed in for this value or if the value is less than {{BaseAudioContext/currentTime}}, then the sound will stop playing immediately. A {{RangeError}} exception MUST be thrown if when
is negative.
+
+
+ path: audionode.include macros: - noi: 1 - noo: 1 - noo-notes: This output may be left unconnected. - cc: 2 - cc-mode: max - cc-interp: speakers - tail-time: No + noi: 1 + noo: 1 + noo-notes: This output may be left unconnected. + cc: 2 + cc-mode: max + cc-interp: speakers + tail-time: No
[Exposed=Window] interface AnalyserNode : AudioNode { - constructor (BaseAudioContext context, optional AnalyserOptions options = {}); - undefined getFloatFrequencyData (Float32Array array); - undefined getByteFrequencyData (Uint8Array array); - undefined getFloatTimeDomainData (Float32Array array); - undefined getByteTimeDomainData (Uint8Array array); - attribute unsigned long fftSize; - readonly attribute unsigned long frequencyBinCount; - attribute double minDecibels; - attribute double maxDecibels; - attribute double smoothingTimeConstant; + constructor (BaseAudioContext context, optional AnalyserOptions options = {}); + undefined getFloatFrequencyData (Float32Array array); + undefined getByteFrequencyData (Uint8Array array); + undefined getFloatTimeDomainData (Float32Array array); + undefined getByteTimeDomainData (Uint8Array array); + attribute unsigned long fftSize; + readonly attribute unsigned long frequencyBinCount; + attribute double minDecibels; + attribute double maxDecibels; + attribute double smoothingTimeConstant; };
- path: audionode-init.include -+
+ path: audionode-init.include +-
- context: The {{BaseAudioContext}} this new {{AnalyserNode}} will be associated with. - options: Optional initial parameter value for this {{AnalyserNode}}. -+
+ context: The {{BaseAudioContext}} this new {{AnalyserNode}} will be associated with. + options: Optional initial parameter value for this {{AnalyserNode}}. +
- $$ - b[k] = \left\lfloor - \frac{255}{\mbox{dB}_{max} - \mbox{dB}_{min}} - \left(Y[k] - \mbox{dB}_{min}\right) - \right\rfloor - $$ -- - where \(\mbox{dB}_{min}\) is {{AnalyserNode/minDecibels}} - and \(\mbox{dB}_{max}\) is
{{AnalyserNode/maxDecibels}}
. If
- \(b[k]\) lies outside the range of 0 to 255, \(b[k]\) is
- clipped to lie in that range.
-
- - array: This parameter is where the frequency-domain analysis data will be copied. -- -
- $$ - b[k] = \left\lfloor 128(1 + x[k]) \right\rfloor. - $$ -- - If \(b[k]\) lies outside the range 0 to 255, \(b[k]\) is - clipped to lie in that range. - -
- array: This parameter is where the time-domain sample data will be copied. -- -
- array: This parameter is where the frequency-domain analysis data will be copied. -- -
- array: This parameter is where the time-domain sample data will be copied. -- -
+ $$ + b[k] = \left\lfloor + \frac{255}{\mbox{dB}_{max} - \mbox{dB}_{min}} + \left(Y[k] - \mbox{dB}_{min}\right) + \right\rfloor + $$ ++ + where \(\mbox{dB}_{min}\) is {{AnalyserNode/minDecibels}} + and \(\mbox{dB}_{max}\) is
{{AnalyserNode/maxDecibels}}
. If
+ \(b[k]\) lies outside the range of 0 to 255, \(b[k]\) is
+ clipped to lie in that range.
+
+ + array: This parameter is where the frequency-domain analysis data will be copied. ++ +
undefined
+ + $$ + b[k] = \left\lfloor 128(1 + x[k]) \right\rfloor. + $$ ++ + If \(b[k]\) lies outside the range 0 to 255, \(b[k]\) is + clipped to lie in that range. + +
+ array: This parameter is where the time-domain sample data will be copied. ++ +
undefined
+ + array: This parameter is where the frequency-domain analysis data will be copied. ++ +
undefined
+ + array: This parameter is where the time-domain sample data will be copied. ++ +
undefined
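As an informative sketch of the two byte conversions above (the helper names are illustrative, not part of the API; the formulas are the ones given for {{AnalyserNode/getByteFrequencyData()}} and {{AnalyserNode/getByteTimeDomainData()}}):

```javascript
// getByteFrequencyData(): map Y[k] (in dB) into an unsigned byte using
// b[k] = floor(255 / (dBmax - dBmin) * (Y[k] - dBmin)), clipped to [0, 255],
// where dBmin is minDecibels and dBmax is maxDecibels.
function dbToByte(y, dBmin, dBmax) {
  const b = Math.floor((255 / (dBmax - dBmin)) * (y - dBmin));
  return Math.min(255, Math.max(0, b));
}

// getByteTimeDomainData(): map a time-domain sample x[k] in [-1, 1] using
// b[k] = floor(128 * (1 + x[k])), clipped to [0, 255].
function sampleToByte(x) {
  const b = Math.floor(128 * (1 + x));
  return Math.min(255, Math.max(0, b));
}
```

With the default minDecibels of -100 and maxDecibels of -30, a value at or below -100 dB maps to 0 and a value at or above -30 dB maps to 255; a silent time-domain sample maps to 128.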
+ dictionary AnalyserOptions : AudioNodeOptions { - unsigned long fftSize = 2048; - double maxDecibels = -30; - double minDecibels = -100; - double smoothingTimeConstant = 0.8; + unsigned long fftSize = 2048; + double maxDecibels = -30; + double minDecibels = -100; + double smoothingTimeConstant = 0.8; };@@ -5084,30 +4438,30 @@ dictionary AnalyserOptions : AudioNodeOptions { Dictionary {{AnalyserOptions}} Members
- $$ - \begin{align*} - \alpha &= \mbox{0.16} \\ a_0 &= \frac{1-\alpha}{2} \\ - a_1 &= \frac{1}{2} \\ - a_2 &= \frac{\alpha}{2} \\ - w[n] &= a_0 - a_1 \cos\frac{2\pi n}{N} + a_2 \cos\frac{4\pi n}{N}, \mbox{ for } n = 0, \ldots, N - 1 - \end{align*} - $$ -- - The windowed signal \(\hat{x}[n]\) is - -
- $$ - \hat{x}[n] = x[n] w[n], \mbox{ for } n = 0, \ldots, N - 1 - $$ -+ Applying a Blackman window consists + in the following operation on the input time domain data. Let + \(x[n]\) for \(n = 0, \ldots, N - 1\) be the time domain data. The + Blackman window is defined by + +
+ $$ + \begin{align*} + \alpha &= \mbox{0.16} \\ a_0 &= \frac{1-\alpha}{2} \\ + a_1 &= \frac{1}{2} \\ + a_2 &= \frac{\alpha}{2} \\ + w[n] &= a_0 - a_1 \cos\frac{2\pi n}{N} + a_2 \cos\frac{4\pi n}{N}, \mbox{ for } n = 0, \ldots, N - 1 + \end{align*} + $$ ++ + The windowed signal \(\hat{x}[n]\) is + +
+ $$ + \hat{x}[n] = x[n] w[n], \mbox{ for } n = 0, \ldots, N - 1 + $$ +
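The window definition above can be sketched directly (non-normative; \(N\) is the {{AnalyserNode/fftSize}}):

```javascript
// Blackman window with alpha = 0.16, as defined above.
function blackmanWindow(N) {
  const alpha = 0.16;
  const a0 = (1 - alpha) / 2; // 0.42
  const a1 = 1 / 2;           // 0.5
  const a2 = alpha / 2;       // 0.08
  const w = new Float64Array(N);
  for (let n = 0; n < N; ++n) {
    w[n] = a0 - a1 * Math.cos((2 * Math.PI * n) / N)
              + a2 * Math.cos((4 * Math.PI * n) / N);
  }
  return w;
}

// The windowed signal is then xhat[n] = x[n] * w[n].
function applyWindow(x, w) {
  return x.map((v, n) => v * w[n]);
}
```

The window tapers to 0 at \(n = 0\) and peaks at 1 at \(n = N/2\), which suppresses spectral leakage in the following Fourier transform.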
- $$ - X[k] = \frac{1}{N} \sum_{n = 0}^{N - 1} \hat{x}[n]\, W^{-kn}_{N} - $$ -- - for \(k = 0, \dots, N/2-1\) where \(W_N = e^{2\pi i/N}\). + Applying a Fourier transform + consists of computing the Fourier transform in the following way. + Let \(X[k]\) be the complex frequency domain data and + \(\hat{x}[n]\) be the windowed time domain data computed above. + Then + +
+ $$ + X[k] = \frac{1}{N} \sum_{n = 0}^{N - 1} \hat{x}[n]\, W^{-kn}_{N} + $$ ++ + for \(k = 0, \dots, N/2-1\) where \(W_N = e^{2\pi i/N}\).
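A naive reference implementation of this transform follows (non-normative; a real implementation would use an FFT). It returns the first \(N/2\) complex bins, normalized by \(1/N\) as in the definition:

```javascript
// Direct DFT of the windowed time-domain data xhat, computing
// X[k] = (1/N) * sum_n xhat[n] * exp(-2*pi*i*k*n/N) for k = 0 .. N/2 - 1.
function dft(xhat) {
  const N = xhat.length;
  const re = new Float64Array(N / 2);
  const im = new Float64Array(N / 2);
  for (let k = 0; k < N / 2; ++k) {
    for (let n = 0; n < N; ++n) {
      const phi = (-2 * Math.PI * k * n) / N;
      re[k] += xhat[n] * Math.cos(phi);
      im[k] += xhat[n] * Math.sin(phi);
    }
    re[k] /= N; // 1/N normalization from the definition above
    im[k] /= N;
  }
  return { re, im };
}
```

For a constant (DC) input, all energy lands in bin 0, and the \(1/N\) normalization makes that bin equal the input level.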
- $$ - \hat{X}[k] = \tau\, \hat{X}_{-1}[k] + (1 - \tau)\, \left|X[k]\right| - $$ -+ Then the smoothed value, \(\hat{X}[k]\), is computed by - * If \(\hat{X}[k]\) is
NaN
, positive infinity or negative infinity, set \(\hat{X}[k]\) = 0.
-+ $$ + \hat{X}[k] = \tau\, \hat{X}_{-1}[k] + (1 - \tau)\, \left|X[k]\right| + $$ +- for \(k = 0, \ldots, N - 1\). + for \(k = 0, \ldots, N - 1\).
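The smoothing step can be sketched as follows (non-normative; `prev` holds the previous block's smoothed values \(\hat{X}_{-1}[k]\), `mag` holds \(\left|X[k]\right|\) for the current block, and `tau` is {{AnalyserNode/smoothingTimeConstant}}):

```javascript
// Exponential smoothing across analysis blocks, with non-finite results
// replaced by 0 as specified above.
function smoothOverTime(prev, mag, tau) {
  const out = new Float64Array(mag.length);
  for (let k = 0; k < mag.length; ++k) {
    let v = tau * prev[k] + (1 - tau) * mag[k];
    if (!Number.isFinite(v)) v = 0; // NaN or +/-Infinity -> 0
    out[k] = v;
  }
  return out;
}
```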
- $$ - Y[k] = 20\log_{10}\hat{X}[k] - $$ -- - for \(k = 0, \ldots, N-1\). - - This array, \(Y[k]\), is copied to the output array for - {{AnalyserNode/getFloatFrequencyData()}}. For - {{AnalyserNode/getByteFrequencyData()}}, the \(Y[k]\) is clipped to lie - between {{AnalyserNode/minDecibels}} and -
{{AnalyserNode/maxDecibels}}
and then scaled to fit in an
- unsigned byte such that {{AnalyserNode/minDecibels}} is
- represented by the value 0 and {{AnalyserNode/maxDecibels}}
is
- represented by the value 255.
+ Conversion to dB consists of the
+ following operation, where \(\hat{X}[k]\) is computed in smoothing over time:
+
+ + $$ + Y[k] = 20\log_{10}\hat{X}[k] + $$ ++ + for \(k = 0, \ldots, N-1\). + + This array, \(Y[k]\), is copied to the output array for + {{AnalyserNode/getFloatFrequencyData()}}. For + {{AnalyserNode/getByteFrequencyData()}}, the \(Y[k]\) is clipped to lie + between {{AnalyserNode/minDecibels}} and +
{{AnalyserNode/maxDecibels}}
and then scaled to fit in an
+ unsigned byte such that {{AnalyserNode/minDecibels}} is
+ represented by the value 0 and {{AnalyserNode/maxDecibels}}
is
+ represented by the value 255.
path: audionode.include macros: - noi: 0 - noo: 1 - cc: 2 - cc-mode: max - cc-interp: speakers - tail-time: No + noi: 0 + noo: 1 + cc: 2 + cc-mode: max + cc-interp: speakers + tail-time: NoThe number of channels of the output equals the number of channels of the @@ -5277,9 +4628,9 @@ In addition, if the buffer has more than one channel, then the {{AudioBufferSourceNode}} output must change to a single channel of silence at the beginning of a render quantum after the time at which any one of the following conditions holds: - * the end of the {{AudioBufferSourceNode/buffer}} has been reached; - * the {{AudioBufferSourceNode/start(when, offset, duration)/duration}} has been reached; - * the {{AudioScheduledSourceNode/stop(when)/when|stop}} time has been reached. + * the end of the {{AudioBufferSourceNode/buffer}} has been reached; + * the {{AudioBufferSourceNode/start(when, offset, duration)/duration}} has been reached; + * the {{AudioScheduledSourceNode/stop(when)/when|stop}} time has been reached. A playhead position for an {{AudioBufferSourceNode}} is defined as any quantity representing a time offset in seconds, @@ -5308,192 +4659,180 @@ slot [[buffer set]], initially
[Exposed=Window] interface AudioBufferSourceNode : AudioScheduledSourceNode { - constructor (BaseAudioContext context, - optional AudioBufferSourceOptions options = {}); - attribute AudioBuffer? buffer; - readonly attribute AudioParam playbackRate; - readonly attribute AudioParam detune; - attribute boolean loop; - attribute double loopStart; - attribute double loopEnd; - undefined start (optional double when = 0, - optional double offset, - optional double duration); + constructor (BaseAudioContext context, + optional AudioBufferSourceOptions options = {}); + attribute AudioBuffer? buffer; + readonly attribute AudioParam playbackRate; + readonly attribute AudioParam detune; + attribute boolean loop; + attribute double loopStart; + attribute double loopEnd; + undefined start (optional double when = 0, + optional double offset, + optional double duration); };
- path: audionode-init.include -+
+ path: audionode-init.include +-
- context: The {{BaseAudioContext}} this new {{AudioBufferSourceNode}} will be associated with. - options: Optional initial parameter value for this {{AudioBufferSourceNode}}. -+
+ context: The {{BaseAudioContext}} this new {{AudioBufferSourceNode}} will be associated with. + options: Optional initial parameter value for this {{AudioBufferSourceNode}}. +
null
value to be assigned to {{AudioBufferSourceNode/buffer}}.
-
- 2. If new buffer is not null
and
- {{AudioBufferSourceNode/[[buffer set]]}} is true, throw an
- {{InvalidStateError}} and abort these steps.
-
- 3. If new buffer is not null
, set
- {{AudioBufferSourceNode/[[buffer set]]}} to true.
-
- 4. Assign new buffer to the {{AudioBufferSourceNode/buffer}}
- attribute.
-
- 5. If start()
has previously been called on this
- node, perform the operation acquire the content on
- {{AudioBufferSourceNode/buffer}}.
- - path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/k-rate}}" - rate-notes: Has [=automation rate constraints=] -- - : loop - :: - Indicates if the region of audio data designated by - {{AudioBufferSourceNode/loopStart}} and {{AudioBufferSourceNode/loopEnd}} should be played continuously - in a loop. The default value is
false
.
-
- : loopEnd
- ::
- An optional playhead position where looping should end if
- the {{AudioBufferSourceNode/loop}} attribute is true. Its value is exclusive of the
- content of the loop. Its default value
is 0, and it
- may usefully be set to any value between 0 and the duration of
- the buffer. If {{AudioBufferSourceNode/loopEnd}} is less than or equal to 0, or if
- {{AudioBufferSourceNode/loopEnd}} is greater than the duration of the buffer,
- looping will end at the end of the buffer.
-
- : loopStart
- ::
- An optional playhead position where looping should begin
- if the {{AudioBufferSourceNode/loop}} attribute is true. Its default
- value
is 0, and it may usefully be set to any value
- between 0 and the duration of the buffer. If {{AudioBufferSourceNode/loopStart}} is
- less than 0, looping will begin at 0. If {{AudioBufferSourceNode/loopStart}} is
- greater than the duration of the buffer, looping will begin at
- the end of the buffer.
-
- : playbackRate
- ::
- The speed at which to render the audio stream. This is a
- compound parameter with {{AudioBufferSourceNode/detune}} to form a
- computedPlaybackRate.
-
- - path: audioparam.include - macros: - default: 1 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/k-rate}}" - rate-notes: Has [=automation rate constraints=] -+ : buffer + :: + Represents the audio asset to be played. + +
null
value to be assigned to {{AudioBufferSourceNode/buffer}}.
+
+ 2. If new buffer is not null
and
+ {{AudioBufferSourceNode/[[buffer set]]}} is true, throw an
+ {{InvalidStateError}} and abort these steps.
+
+ 3. If new buffer is not null
, set
+ {{AudioBufferSourceNode/[[buffer set]]}} to true.
+
+ 4. Assign new buffer to the {{AudioBufferSourceNode/buffer}}
+ attribute.
+
+ 5. If start()
has previously been called on this
+ node, perform the operation acquire the content on
+ {{AudioBufferSourceNode/buffer}}.
+ + path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/k-rate}}" + rate-notes: Has [=automation rate constraints=] ++ + : loop + :: + Indicates if the region of audio data designated by + {{AudioBufferSourceNode/loopStart}} and {{AudioBufferSourceNode/loopEnd}} should be played continuously + in a loop. The default value is
false
.
+
+ : loopEnd
+ ::
+ An optional playhead position where looping should end if
+ the {{AudioBufferSourceNode/loop}} attribute is true. Its value is exclusive of the
+ content of the loop. Its default value
is 0, and it
+ may usefully be set to any value between 0 and the duration of
+ the buffer. If {{AudioBufferSourceNode/loopEnd}} is less than or equal to 0, or if
+ {{AudioBufferSourceNode/loopEnd}} is greater than the duration of the buffer,
+ looping will end at the end of the buffer.
+
+ : loopStart
+ ::
+ An optional playhead position where looping should begin
+ if the {{AudioBufferSourceNode/loop}} attribute is true. Its default
+ value
is 0, and it may usefully be set to any value
+ between 0 and the duration of the buffer. If {{AudioBufferSourceNode/loopStart}} is
+ less than 0, looping will begin at 0. If {{AudioBufferSourceNode/loopStart}} is
+ greater than the duration of the buffer, looping will begin at
+ the end of the buffer.
+
+ : playbackRate
+ ::
+ The speed at which to render the audio stream. This is a
+ compound parameter with {{AudioBufferSourceNode/detune}} to form a
+ computedPlaybackRate.
+
+ + path: audioparam.include + macros: + default: 1 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/k-rate}}" + rate-notes: Has [=automation rate constraints=] +
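How {{AudioBufferSourceNode/playbackRate}} and {{AudioBufferSourceNode/detune}} combine into the compound computedPlaybackRate can be sketched as follows (this is the same formula that appears in the playback algorithm later in this section):

```javascript
// computedPlaybackRate = playbackRate * 2^(detune / 1200), where detune is
// expressed in cents (1200 cents per octave).
function computedPlaybackRate(playbackRate, detune) {
  return playbackRate * Math.pow(2, detune / 1200);
}

// A detune of +1200 cents doubles the playback rate; -1200 halves it.
```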
true
.
-
- 1. Queue a control message to start the
- {{AudioBufferSourceNode}}, including the parameter values
- in the message.
-
- 1. Acquire the contents of the
- {{AudioBufferSourceNode/buffer}} if the
- {{AudioBufferSourceNode/buffer}} has been set.
- 1. Send a control message to the associated {{AudioContext}} to
- start running its rendering thread only when
- all the following conditions are met:
- 1. The context's {{[[control thread state]]}} is
- {{AudioContextState/suspended}}.
- 1. The context is allowed to start.
- 1. {{[[suspended by user]]}} flag is false
.
-
- NOTE: This can allow {{AudioBufferSourceNode/start()}} to start
- an {{AudioContext}} that is currently allowed to start,
- but has previously been prevented from starting.
- handleStart()
function in the
- [[#playback-AudioBufferSourceNode|playback algorithm]] which follows.
- - when: The- -when
parameter describes at what time (in seconds) the sound should start playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. If 0 is passed in for this value or if the value is less than currentTime, then the sound will start playing immediately. A {{RangeError}} exception MUST be thrown ifwhen
is negative. - offset: Theoffset
parameter supplies a playhead position where playback will begin. If 0 is passed in for this value, then playback will start from the beginning of the buffer. A {{RangeError}} exception MUST be thrown ifoffset
is negative. Ifoffset
is greater than {{AudioBufferSourceNode/loopEnd}}, {{AudioBufferSourceNode/playbackRate}} is positive or zero, and {{AudioBufferSourceNode/loop}} istrue
, playback will begin at {{AudioBufferSourceNode/loopEnd}}. Ifoffset
is greater than {{AudioBufferSourceNode/loopStart}}, {{AudioBufferSourceNode/playbackRate}} is negative, and {{AudioBufferSourceNode/loop}} istrue
, playback will begin at {{AudioBufferSourceNode/loopStart}}.offset
is silently clamped to [0,duration
], whenstartTime
is reached, whereduration
is the value of theduration
attribute of the {{AudioBuffer}} set to the {{AudioBufferSourceNode/buffer}} attribute of thisAudioBufferSourceNode
. - duration: The {{AudioBufferSourceNode/start(when, offset, duration)/duration}} parameter describes the duration of sound to be played, expressed as seconds of total buffer content to be output, including any whole or partial loop iterations. The units of {{AudioBufferSourceNode/start(when, offset, duration)/duration}} are independent of the effects of {{AudioBufferSourceNode/playbackRate}}. For example, a {{AudioBufferSourceNode/start(when, offset, duration)/duration}} of 5 seconds with a playback rate of 0.5 will output 5 seconds of buffer content at half speed, producing 10 seconds of audible output. A {{RangeError}} exception MUST be thrown ifduration
is negative. -
stop
has been called on this node, or if an
+ earlier call to start
has already occurred, an
+ {{InvalidStateError}} exception MUST be thrown.
+
+ 2. Check for any errors that must be thrown due to parameter
+ constraints described below.
+
+ 3. Queue a control message to start the
+ {{AudioBufferSourceNode}}, including the parameter values
+ in the message.
+
+ 4. Send a control message to the associated {{AudioContext}} to
+ start running its rendering thread only when
+ all the following conditions are met:
+ 1. The context's control thread state is
+ suspended
.
+ 2. The context is allowed to start.
+ 3. {{[[suspended by user]]}} flag is false
.
+ handleStart()
function in the
+ [[#playback-AudioBufferSourceNode|playback algorithm]] which follows.
+ + when: The+ +when
parameter describes at what time (in seconds) the sound should start playing. It is in the same time coordinate system as the {{AudioContext}}'s {{BaseAudioContext/currentTime}} attribute. If 0 is passed in for this value or if the value is less than currentTime, then the sound will start playing immediately. A {{RangeError}} exception MUST be thrown ifwhen
is negative. + offset: Theoffset
parameter supplies a playhead position where playback will begin. If 0 is passed in for this value, then playback will start from the beginning of the buffer. A {{RangeError}} exception MUST be thrown ifoffset
is negative. Ifoffset
is greater than {{AudioBufferSourceNode/loopEnd}}, {{AudioBufferSourceNode/playbackRate}} is positive or zero, and {{AudioBufferSourceNode/loop}} istrue
, playback will begin at {{AudioBufferSourceNode/loopEnd}}. Ifoffset
is greater than {{AudioBufferSourceNode/loopStart}}, {{AudioBufferSourceNode/playbackRate}} is negative, and {{AudioBufferSourceNode/loop}} istrue
, playback will begin at {{AudioBufferSourceNode/loopStart}}.offset
is silently clamped to [0,duration
], whenstartTime
is reached, whereduration
is the value of theduration
attribute of the {{AudioBuffer}} set to the {{AudioBufferSourceNode/buffer}} attribute of thisAudioBufferSourceNode
. + duration: The {{AudioBufferSourceNode/start(when, offset, duration)/duration}} parameter describes the duration of sound to be played, expressed as seconds of total buffer content to be output, including any whole or partial loop iterations. The units of {{AudioBufferSourceNode/start(when, offset, duration)/duration}} are independent of the effects of {{AudioBufferSourceNode/playbackRate}}. For example, a {{AudioBufferSourceNode/start(when, offset, duration)/duration}} of 5 seconds with a playback rate of 0.5 will output 5 seconds of buffer content at half speed, producing 10 seconds of audible output. A {{RangeError}} exception MUST be thrown ifduration
is negative. +
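The offset handling described above can be sketched as follows (non-normative; the helper name is illustrative, and the loop-point adjustments mirror the playback algorithm later in this section):

```javascript
// Resolve the initial playhead position for start(when, offset, duration):
// offset is silently clamped to [0, buffer.duration]; with loop enabled,
// forward playback (rate >= 0) begins no later than loopEnd and reverse
// playback (rate < 0) begins no earlier than loopStart.
function initialPlayhead(offset, bufferDuration, loop, loopStart, loopEnd, rate) {
  let pos = Math.min(Math.max(offset, 0), bufferDuration);
  if (loop && rate >= 0 && pos >= loopEnd) pos = loopEnd;
  if (loop && rate < 0 && pos < loopStart) pos = loopStart;
  return pos;
}
```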
dictionary AudioBufferSourceOptions { - AudioBuffer? buffer; - float detune = 0; - boolean loop = false; - double loopEnd = 0; - double loopStart = 0; - float playbackRate = 1; + AudioBuffer? buffer; + float detune = 0; + boolean loop = false; + double loopEnd = 0; + double loopStart = 0; + float playbackRate = 1; };@@ -5516,31 +4855,31 @@ dictionary AudioBufferSourceOptions { Dictionary {{AudioBufferSourceOptions}} Members
duration
has been exceeded, if
- {{AudioBufferSourceNode/start()}}
- was called with a duration
value.
+ {{AudioBufferSourceNode/start()}}
+ was called with a duration
value.
The body of the loop is considered to occupy a region from
{{AudioBufferSourceNode/loopStart}} up to, but
@@ -5615,25 +4954,25 @@ the following factors working in combination:
* A starting offset, which can be expressed with sub-sample precision.
* Loop points, which can be expressed with sub-sample precision and can
- vary dynamically during playback.
+ vary dynamically during playback.
* Playback rate and detuning parameters, which combine to yield a
- single computedPlaybackRate that can assume finite values
- which may be positive or negative.
+ single computedPlaybackRate that can assume finite values
+ which may be positive or negative.
The algorithm to be followed internally to generate output from an
{{AudioBufferSourceNode}} conforms to the following principles:
* Resampling of the buffer may be performed arbitrarily by the UA
- at any desired point to increase the efficiency or quality of the
- output.
+ at any desired point to increase the efficiency or quality of the
+ output.
* Sub-sample start offsets or loop points may require additional
- interpolation between sample frames.
+ interpolation between sample frames.
* The playback of a looped buffer should behave identically to an
- unlooped buffer containing consecutive occurrences of the looped
- audio content, excluding any effects from interpolation.
+ unlooped buffer containing consecutive occurrences of the looped
+ audio content, excluding any effects from interpolation.
The description of the algorithm is as follows:
@@ -5661,141 +5000,141 @@ let dt = 1 / context.sampleRate;
// Handle invocation of start method call
function handleStart(when, pos, dur) {
- if (arguments.length >= 1) {
- start = when;
- }
- offset = pos;
- if (arguments.length >= 3) {
- duration = dur;
- }
+ if (arguments.length >= 1) {
+ start = when;
+ }
+ offset = pos;
+ if (arguments.length >= 3) {
+ duration = dur;
+ }
}
// Handle invocation of stop method call
function handleStop(when) {
- if (arguments.length >= 1) {
- stop = when;
- } else {
- stop = context.currentTime;
- }
+ if (arguments.length >= 1) {
+ stop = when;
+ } else {
+ stop = context.currentTime;
+ }
}
// Interpolate a multi-channel signal value for some sample frame.
// Returns an array of signal values.
function playbackSignal(position) {
- /*
- This function provides the playback signal function for buffer, which is a
- function that maps from a playhead position to a set of output signal
- values, one for each output channel. If |position| corresponds to the
- location of an exact sample frame in the buffer, this function returns
- that frame. Otherwise, its return value is determined by a UA-supplied
- algorithm that interpolates sample frames in the neighborhood of
- |position|.
-
- If |position| is greater than or equal to |loopEnd| and there is no subsequent
- sample frame in buffer, then interpolation should be based on the sequence
- of subsequent frames beginning at |loopStart|.
- */
- ...
+ /*
+ This function provides the playback signal function for buffer, which is a
+ function that maps from a playhead position to a set of output signal
+ values, one for each output channel. If |position| corresponds to the
+ location of an exact sample frame in the buffer, this function returns
+ that frame. Otherwise, its return value is determined by a UA-supplied
+ algorithm that interpolates between sample frames in the neighborhood of
+ |position|.
+
+ If |position| is greater than or equal to |loopEnd| and there is no subsequent
+ sample frame in buffer, then interpolation should be based on the sequence
+ of subsequent frames beginning at |loopStart|.
+ */
+ ...
}
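One possible interpolation strategy for the UA-supplied algorithm above is linear interpolation between adjacent sample frames (non-normative; shown for a single channel, with a hypothetical helper name):

```javascript
// Linear-interpolation version of playbackSignal for one channel:
// position (in seconds) maps to a fractional frame index, and the value
// is blended from the two neighboring frames.
function linearPlaybackSignal(samples, sampleRate, position) {
  const exact = position * sampleRate; // fractional frame index
  const i0 = Math.floor(exact);
  const i1 = Math.min(i0 + 1, samples.length - 1);
  const frac = exact - i0;
  return (1 - frac) * samples[i0] + frac * samples[i1];
}
```

Higher-quality implementations may use cubic or windowed-sinc interpolation instead; the spec leaves the choice to the UA.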
// Generate a single render quantum of audio to be placed
// in the channel arrays defined by output. Returns an array
// of |numberOfFrames| sample frames to be output.
function process(numberOfFrames) {
- let currentTime = context.currentTime; // context time of next rendered frame
- const output = []; // accumulates rendered sample frames
-
- // Combine the two k-rate parameters affecting playback rate
- const computedPlaybackRate = playbackRate * Math.pow(2, detune / 1200);
-
- // Determine loop endpoints as applicable
- let actualLoopStart, actualLoopEnd;
- if (loop && buffer != null) {
- if (loopStart >= 0 && loopEnd > 0 && loopStart < loopEnd) {
- actualLoopStart = loopStart;
- actualLoopEnd = Math.min(loopEnd, buffer.duration);
- } else {
- actualLoopStart = 0;
- actualLoopEnd = buffer.duration;
- }
- } else {
- // If the loop flag is false, remove any record of the loop having been entered
- enteredLoop = false;
- }
-
- // Handle null buffer case
- if (buffer == null) {
- stop = currentTime; // force zero output for all time
- }
-
- // Render each sample frame in the quantum
- for (let index = 0; index < numberOfFrames; index++) {
- // Check that currentTime and bufferTimeElapsed are
- // within allowable range for playback
- if (currentTime < start || currentTime >= stop || bufferTimeElapsed >= duration) {
- output.push(0); // this sample frame is silent
- currentTime += dt;
- continue;
- }
-
- if (!started) {
- // Take note that buffer has started playing and get initial
- // playhead position.
- if (loop && computedPlaybackRate >= 0 && offset >= actualLoopEnd) {
- offset = actualLoopEnd;
- }
- if (computedPlaybackRate < 0 && loop && offset < actualLoopStart) {
- offset = actualLoopStart;
- }
- bufferTime = offset;
- started = true;
- }
-
- // Handle loop-related calculations
- if (loop) {
- // Determine if looped portion has been entered for the first time
- if (!enteredLoop) {
- if (offset < actualLoopEnd && bufferTime >= actualLoopStart) {
- // playback began before or within loop, and playhead is
- // now past loop start
- enteredLoop = true;
- }
- if (offset >= actualLoopEnd && bufferTime < actualLoopEnd) {
- // playback began after loop, and playhead is now prior
- // to the loop end
- enteredLoop = true;
- }
- }
-
- // Wrap loop iterations as needed. Note that enteredLoop
- // may become true inside the preceding conditional.
- if (enteredLoop) {
- while (bufferTime >= actualLoopEnd) {
- bufferTime -= actualLoopEnd - actualLoopStart;
- }
- while (bufferTime < actualLoopStart) {
- bufferTime += actualLoopEnd - actualLoopStart;
- }
- }
- }
-
- if (bufferTime >= 0 && bufferTime < buffer.duration) {
- output.push(playbackSignal(bufferTime));
- } else {
- output.push(0); // past end of buffer, so output silent frame
- }
-
- bufferTime += dt * computedPlaybackRate;
- bufferTimeElapsed += dt * computedPlaybackRate;
- currentTime += dt;
- } // End of render quantum loop
-
- if (currentTime >= stop) {
- // End playback state of this node. No further invocations of process()
- // will occur. Schedule a change to set the number of output channels to 1.
- }
-
- return output;
+ let currentTime = context.currentTime; // context time of next rendered frame
+ const output = []; // accumulates rendered sample frames
+
+ // Combine the two k-rate parameters affecting playback rate
+ const computedPlaybackRate = playbackRate * Math.pow(2, detune / 1200);
+
+ // Determine loop endpoints as applicable
+ let actualLoopStart, actualLoopEnd;
+ if (loop && buffer != null) {
+ if (loopStart >= 0 && loopEnd > 0 && loopStart < loopEnd) {
+ actualLoopStart = loopStart;
+ actualLoopEnd = Math.min(loopEnd, buffer.duration);
+ } else {
+ actualLoopStart = 0;
+ actualLoopEnd = buffer.duration;
+ }
+ } else {
+ // If the loop flag is false, remove any record of the loop having been entered
+ enteredLoop = false;
+ }
+
+ // Handle null buffer case
+ if (buffer == null) {
+ stop = currentTime; // force zero output for all time
+ }
+
+ // Render each sample frame in the quantum
+ for (let index = 0; index < numberOfFrames; index++) {
+ // Check that currentTime and bufferTimeElapsed are
+ // within allowable range for playback
+ if (currentTime < start || currentTime >= stop || bufferTimeElapsed >= duration) {
+ output.push(0); // this sample frame is silent
+ currentTime += dt;
+ continue;
+ }
+
+ if (!started) {
+ // Take note that buffer has started playing and get initial
+ // playhead position.
+ if (loop && computedPlaybackRate >= 0 && offset >= actualLoopEnd) {
+ offset = actualLoopEnd;
+ }
+ if (computedPlaybackRate < 0 && loop && offset < actualLoopStart) {
+ offset = actualLoopStart;
+ }
+ bufferTime = offset;
+ started = true;
+ }
+
+ // Handle loop-related calculations
+ if (loop) {
+ // Determine if looped portion has been entered for the first time
+ if (!enteredLoop) {
+ if (offset < actualLoopEnd && bufferTime >= actualLoopStart) {
+ // playback began before or within loop, and playhead is
+ // now past loop start
+ enteredLoop = true;
+ }
+ if (offset >= actualLoopEnd && bufferTime < actualLoopEnd) {
+ // playback began after loop, and playhead is now prior
+ // to the loop end
+ enteredLoop = true;
+ }
+ }
+
+ // Wrap loop iterations as needed. Note that enteredLoop
+ // may become true inside the preceding conditional.
+ if (enteredLoop) {
+ while (bufferTime >= actualLoopEnd) {
+ bufferTime -= actualLoopEnd - actualLoopStart;
+ }
+ while (bufferTime < actualLoopStart) {
+ bufferTime += actualLoopEnd - actualLoopStart;
+ }
+ }
+ }
+
+ if (bufferTime >= 0 && bufferTime < buffer.duration) {
+ output.push(playbackSignal(bufferTime));
+ } else {
+ output.push(0); // past end of buffer, so output silent frame
+ }
+
+ bufferTime += dt * computedPlaybackRate;
+ bufferTimeElapsed += dt * computedPlaybackRate;
+ currentTime += dt;
+ } // End of render quantum loop
+
+ if (currentTime >= stop) {
+ // End playback state of this node. No further invocations of process()
+ // will occur. Schedule a change to set the number of output channels to 1.
+ }
+
+ return output;
}
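The UA-supplied interpolation inside playbackSignal is not mandated by this specification. As a non-normative sketch, linear interpolation (the strategy depicted in the figures below) over a single-channel buffer might look like this; the helper name, the `samples` array, and the non-looping end behavior are illustrative assumptions, not part of the API:

```javascript
// Sketch (not normative): playbackSignal for a single-channel buffer
// using linear interpolation, one interpolation strategy a UA may use.
// `samples` is the channel's Float32Array and `sampleRate` is the
// buffer's sample rate; both names are assumptions for illustration.
function playbackSignalLinear(samples, sampleRate, position) {
  const frame = position * sampleRate;  // fractional sample-frame index
  const i = Math.floor(frame);
  const frac = frame - i;
  if (frac === 0) return samples[i];    // position is an exact sample frame
  const s0 = samples[i];
  // Past the last frame: a looping source would continue at loopStart;
  // this non-looping sketch simply interpolates toward silence.
  const s1 = i + 1 < samples.length ? samples[i + 1] : 0;
  return s0 + frac * (s1 - s0);         // linear interpolation
}
```

For example, with a buffer of [0, 1, 0.5] at a 1000 Hz sample rate, a position of 0.0005 s falls halfway between the first two sample frames and interpolates to 0.5.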
@@ -5809,13 +5148,13 @@ apply:
* context sample rate is 1000 Hz
* {{AudioBuffer}} content is shown with the first sample frame
- at the x origin.
+ at the x origin.
* output signals are shown with the sample frame located at time
-     start
-     at the x origin.
+     start
+     at the x origin.
* linear interpolation is depicted throughout, although a UA
- could employ other interpolation techniques.
+ could employ other interpolation techniques.
* the duration values noted in the figures refer to the buffer, not arguments to {{AudioBufferSourceNode/start()}}
@@ -5823,10 +5162,10 @@ This figure illustrates basic playback of a buffer, with a simple
loop that ends after the last sample frame in the buffer:
This figure illustrates playbackRate
interpolation,
@@ -5836,10 +5175,10 @@ sample frame in the looped output, which is interpolated using the
loop start point:
This figure illustrates sample rate interpolation, showing playback
@@ -5850,10 +5189,10 @@ resulting output is the same as the preceding example, but for
different reasons.
This figure illustrates subsample offset playback, in which the
@@ -5861,10 +5200,10 @@ offset within the buffer begins at exactly half a sample frame.
Consequently, every output frame is interpolated:
This figure illustrates subsample loop playback, showing how
@@ -5873,10 +5212,10 @@ data points in the buffer that respect these offsets as if they
were references to exact sample frames:
@@ -5918,12 +5257,12 @@ For an {{AudioContext}}, the defaults are
path: audionode.include
macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-mode: explicit
-    cc-interp: speakers
-    tail-time: No
+    noi: 1
+    noo: 1
+    cc: 2
+    cc-mode: explicit
+    cc-interp: speakers
+    tail-time: No

The {{AudioNode/channelCount}} can be set to any

@@ -5938,12 +5277,12 @@ For an {{OfflineAudioContext}}, the defaults are
path: audionode.include
macros:
-    noi: 1
-    noo: 1
-    cc: numberOfChannels
-    cc-mode: explicit
-    cc-interp: speakers
-    tail-time: No
+    noi: 1
+    noo: 1
+    cc: numberOfChannels
+    cc-mode: explicit
+    cc-interp: speakers
+    tail-time: No

where numberOfChannels is the number of channels

@@ -5956,7 +5295,7 @@ different value.
[Exposed=Window]
interface AudioDestinationNode : AudioNode {
-    readonly attribute unsigned long maxChannelCount;
+    readonly attribute unsigned long maxChannelCount;
};

@@ -5964,14 +5303,14 @@ interface AudioDestinationNode : AudioNode {

Attributes
[Exposed=Window]
interface AudioListener {
-    readonly attribute AudioParam positionX;
-    readonly attribute AudioParam positionY;
-    readonly attribute AudioParam positionZ;
-    readonly attribute AudioParam forwardX;
-    readonly attribute AudioParam forwardY;
-    readonly attribute AudioParam forwardZ;
-    readonly attribute AudioParam upX;
-    readonly attribute AudioParam upY;
-    readonly attribute AudioParam upZ;
-    undefined setPosition (float x, float y, float z);
-    undefined setOrientation (float x, float y, float z, float xUp, float yUp, float zUp);
+    readonly attribute AudioParam positionX;
+    readonly attribute AudioParam positionY;
+    readonly attribute AudioParam positionZ;
+    readonly attribute AudioParam forwardX;
+    readonly attribute AudioParam forwardY;
+    readonly attribute AudioParam forwardZ;
+    readonly attribute AudioParam upX;
+    readonly attribute AudioParam upY;
+    readonly attribute AudioParam upZ;
+    void setPosition (float x, float y, float z);
+    void setOrientation (float x, float y, float z, float xUp, float yUp, float zUp);
};

@@ -6030,235 +5369,235 @@ interface AudioListener {

Attributes
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" -- - : forwardY - :: - Sets the y coordinate component of the forward direction the - listener is pointing in 3D Cartesian coordinate space. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" -- - : forwardZ - :: - Sets the z coordinate component of the forward direction the - listener is pointing in 3D Cartesian coordinate space. - -
- path: audioparam.include - macros: - default: -1 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" -- - : positionX - :: - Sets the x coordinate position of the audio listener in a 3D - Cartesian coordinate space. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" -- - : positionY - :: - Sets the y coordinate position of the audio listener in a 3D - Cartesian coordinate space. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" -- - : positionZ - :: - Sets the z coordinate position of the audio listener in a 3D - Cartesian coordinate space. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" -- - : upX - :: - Sets the x coordinate component of the up direction the - listener is pointing in 3D Cartesian coordinate space. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" -- - : upY - :: - Sets the y coordinate component of the up direction the - listener is pointing in 3D Cartesian coordinate space. - -
- path: audioparam.include - macros: - default: 1 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" -- - : upZ - :: - Sets the z coordinate component of the up direction the - listener is pointing in 3D Cartesian coordinate space. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" -+ : forwardX + :: + Sets the x coordinate component of the forward direction the + listener is pointing in 3D Cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" ++ + : forwardY + :: + Sets the y coordinate component of the forward direction the + listener is pointing in 3D Cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" ++ + : forwardZ + :: + Sets the z coordinate component of the forward direction the + listener is pointing in 3D Cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: -1 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" ++ + : positionX + :: + Sets the x coordinate position of the audio listener in a 3D + Cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" ++ + : positionY + :: + Sets the y coordinate position of the audio listener in a 3D + Cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" ++ + : positionZ + :: + Sets the z coordinate position of the audio listener in a 3D + Cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" ++ + : upX + :: + Sets the x coordinate component of the up direction the + listener is pointing in 3D Cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" ++ + : upY + :: + Sets the y coordinate component of the up direction the + listener is pointing in 3D Cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: 1 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" ++ + : upZ + :: + Sets the z coordinate component of the up direction the + listener is pointing in 3D Cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" +
x
, y
, z
,
- xUp
, yUp
, and zUp
- values, respectively.
-
- Consequently, if any of the {{forwardX}}, {{forwardY}},
- {{forwardZ}}, {{upX}}, {{upY}} and {{upZ}}
- {{AudioParam}}s have an automation curve set using
- {{AudioParam/setValueCurveAtTime()}} at the time
- this method is called, a {{NotSupportedError}} MUST be
- thrown.
-
- {{AudioListener/setOrientation()}} describes which direction the listener is pointing in the 3D
- cartesian coordinate space. Both a [=forward=] vector and an
- [=up=] vector are provided. In simple human terms, the
- forward vector represents which direction the person's
- nose is pointing. The up vector represents the direction
- the top of a person's head is pointing. These two vectors are
- expected to be linearly independent. For normative requirements
- of how these values are to be interpreted, see the [[#Spatialization]].
-
- The {{AudioListener/setOrientation()/x!!argument}}, {{AudioListener/setOrientation()/y!!argument}}, and {{AudioListener/setOrientation()/z!!argument}} parameters represent a forward
- direction vector in 3D space, with the default value being
- (0,0,-1).
-
- The {{AudioListener/setOrientation()/xUp!!argument}}, {{AudioListener/setOrientation()/yUp!!argument}}, and {{AudioListener/setOrientation()/zUp!!argument}} parameters represent an
- up direction vector in 3D space, with the default value
- being (0,1,0).
-
- - x: forward x direction fo the {{AudioListener}} - y: forward y direction fo the {{AudioListener}} - z: forward z direction fo the {{AudioListener}} - xUp: up x direction fo the {{AudioListener}} - yUp: up y direction fo the {{AudioListener}} - zUp: up z direction fo the {{AudioListener}} -- -
x
, y
, and
- z
values, respectively.
-
- Consequently, any of the {{AudioListener/positionX}}, {{AudioListener/positionY}},
- and {{AudioListener/positionZ}} {{AudioParam}}s for this
- {{AudioListener}} have an automation curve set using
- {{AudioParam/setValueCurveAtTime()}} at the time
- this method is called, a {{NotSupportedError}} MUST be
- thrown.
-
- {{AudioListener/setPosition()}} sets the position of the listener in a 3D cartesian coordinate
- space. {{PannerNode}} objects use this position
- relative to individual audio sources for spatialization.
-
- The {{AudioListener/setPosition()/x!!argument}}, {{AudioListener/setPosition()/y!!argument}}, and {{AudioListener/setPosition()/z!!argument}} parameters represent the coordinates
- in 3D space.
-
- The default value is (0,0,0).
-
-     x: x-coordinate of the position of the {{AudioListener}}
-     y: y-coordinate of the position of the {{AudioListener}}
-     z: z-coordinate of the position of the {{AudioListener}}
-
+ : setOrientation(x, y, z, xUp, yUp, zUp)
+ ::
+     This method is DEPRECATED. It is equivalent to setting
+     {{forwardX}}.{{AudioParam/value}},
+     {{forwardY}}.{{AudioParam/value}},
+     {{forwardZ}}.{{AudioParam/value}},
+     {{upX}}.{{AudioParam/value}},
+     {{upY}}.{{AudioParam/value}}, and
+     {{upZ}}.{{AudioParam/value}} directly
+     with the given
x
, y
, z
,
+ xUp
, yUp
, and zUp
+ values, respectively.
+
+ Consequently, if any of the {{forwardX}}, {{forwardY}},
+ {{forwardZ}}, {{upX}}, {{upY}} and {{upZ}}
+ {{AudioParam}}s have an automation curve set using
+ {{AudioParam/setValueCurveAtTime()}} at the time
+ this method is called, a {{NotSupportedError}} MUST be
+ thrown.
+
+ {{AudioListener/setOrientation()}} describes which direction the listener is pointing in the 3D
+ cartesian coordinate space. Both a [=forward=] vector and an
+ [=up=] vector are provided. In simple human terms, the
+ forward vector represents which direction the person's
+ nose is pointing. The up vector represents the direction
+ the top of a person's head is pointing. These two vectors are
+ expected to be linearly independent. For normative requirements
+ of how these values are to be interpreted, see the [[#Spatialization]].
+
+ The {{AudioListener/setOrientation()/x!!argument}}, {{AudioListener/setOrientation()/y!!argument}}, and {{AudioListener/setOrientation()/z!!argument}} parameters represent a forward
+ direction vector in 3D space, with the default value being
+ (0,0,-1).
+
+ The {{AudioListener/setOrientation()/xUp!!argument}}, {{AudioListener/setOrientation()/yUp!!argument}}, and {{AudioListener/setOrientation()/zUp!!argument}} parameters represent an
+ up direction vector in 3D space, with the default value
+ being (0,1,0).
+
+     x: forward x direction of the {{AudioListener}}
+     y: forward y direction of the {{AudioListener}}
+     z: forward z direction of the {{AudioListener}}
+     xUp: up x direction of the {{AudioListener}}
+     yUp: up y direction of the {{AudioListener}}
+     zUp: up z direction of the {{AudioListener}}
+
+ : setPosition(x, y, z)
+ ::
+     This method is DEPRECATED. It is equivalent to setting
+     {{AudioListener/positionX}}.{{AudioParam/value}},
+     {{AudioListener/positionY}}.{{AudioParam/value}}, and
+     {{AudioListener/positionZ}}.{{AudioParam/value}} directly with the given
+ x
, y
, and
+ z
values, respectively.
+
+ Consequently, if any of the {{AudioListener/positionX}}, {{AudioListener/positionY}},
+ and {{AudioListener/positionZ}} {{AudioParam}}s for this
+ {{AudioListener}} have an automation curve set using
+ {{AudioParam/setValueCurveAtTime()}} at the time
+ this method is called, a {{NotSupportedError}} MUST be
+ thrown.
+
+ {{AudioListener/setPosition()}} sets the position of the listener in a 3D cartesian coordinate
+ space. {{PannerNode}} objects use this position
+ relative to individual audio sources for spatialization.
+
+ The {{AudioListener/setPosition()/x!!argument}}, {{AudioListener/setPosition()/y!!argument}}, and {{AudioListener/setPosition()/z!!argument}} parameters represent the coordinates
+ in 3D space.
+
+ The default value is (0,0,0).
+
+     x: x-coordinate of the position of the {{AudioListener}}
+     y: y-coordinate of the position of the {{AudioListener}}
+     z: z-coordinate of the position of the {{AudioListener}}
+
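Because both methods above are deprecated, new code is expected to write the individual {{AudioParam}} values instead. A minimal non-normative sketch of equivalent helpers (the helper names are ours; in a real page, `listener` would be `context.listener`):

```javascript
// Sketch (not normative): helpers equivalent to the deprecated
// setPosition()/setOrientation() methods, written against the
// individual AudioParam values of an AudioListener-like object.
function setListenerPosition(listener, x, y, z) {
  listener.positionX.value = x;
  listener.positionY.value = y;
  listener.positionZ.value = z;
}

function setListenerOrientation(listener, x, y, z, xUp, yUp, zUp) {
  listener.forwardX.value = x;   // forward vector, default (0, 0, -1)
  listener.forwardY.value = y;
  listener.forwardZ.value = z;
  listener.upX.value = xUp;      // up vector, default (0, 1, 0)
  listener.upY.value = yUp;
  listener.upZ.value = zUp;
}
```

Writing the `AudioParam` values directly also allows the position and orientation to be automated over time, which the deprecated methods do not.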
[Exposed=Window]
interface AudioProcessingEvent : Event {
-    constructor (DOMString type, AudioProcessingEventInit eventInitDict);
-    readonly attribute double playbackTime;
-    readonly attribute AudioBuffer inputBuffer;
-    readonly attribute AudioBuffer outputBuffer;
+    constructor (DOMString type, AudioProcessingEventInit eventInitDict);
+    readonly attribute double playbackTime;
+    readonly attribute AudioBuffer inputBuffer;
+    readonly attribute AudioBuffer outputBuffer;
};

@@ -6308,42 +5647,42 @@ interface AudioProcessingEvent : Event {

Attributes
numberOfInputChannels
parameter of the
- createScriptProcessor() method. This AudioBuffer is only valid
- while in the scope of the {{ScriptProcessorNode/audioprocess}} event handler functions.
- Its values will be meaningless outside of this scope.
-
- : outputBuffer
- ::
- An AudioBuffer where the output audio data MUST be written. It
- will have a number of channels equal to the
- numberOfOutputChannels
parameter of the
- createScriptProcessor() method. Script code within the scope of
- the {{ScriptProcessorNode/audioprocess}} event handler functions are
- expected to modify the {{Float32Array}} arrays
- representing channel data in this AudioBuffer. Any script
- modifications to this AudioBuffer outside of this scope will not
- produce any audible effects.
-
- : playbackTime
- ::
- The time when the audio will be played in the same time
- coordinate system as the {{AudioContext}}'s
- {{BaseAudioContext/currentTime}}.
+ : inputBuffer
+ ::
+ An AudioBuffer containing the input audio data. It will have a
+ number of channels equal to the
+ numberOfInputChannels
parameter of the
+ createScriptProcessor() method. This AudioBuffer is only valid
+ while in the scope of the {{ScriptProcessorNode/onaudioprocess}} function.
+ Its values will be meaningless outside of this scope.
+
+ : outputBuffer
+ ::
+ An AudioBuffer where the output audio data MUST be written. It
+ will have a number of channels equal to the
+ numberOfOutputChannels
parameter of the
+ createScriptProcessor() method. Script code within the scope of
+ the {{ScriptProcessorNode/onaudioprocess}} function is
+ expected to modify the {{Float32Array}} arrays
+ representing channel data in this AudioBuffer. Any script
+ modifications to this AudioBuffer outside of this scope will not
+ produce any audible effects.
+
+ : playbackTime
+ ::
+ The time when the audio will be played in the same time
+ coordinate system as the {{AudioContext}}'s
+ {{BaseAudioContext/currentTime}}.
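A non-normative sketch of a handler using these attributes — here writing a half-gain copy of `inputBuffer` into `outputBuffer`. The handler body is illustrative; creating the node via `createScriptProcessor()` and assigning `onaudioprocess` is assumed, and matching input/output channel counts are assumed for brevity:

```javascript
// Sketch (not normative): an audioprocess handler that writes a
// half-gain copy of inputBuffer into outputBuffer. It relies only on
// the AudioProcessingEvent attributes described above, so it also
// works on any event-like object that exposes them.
function handleAudioProcess(event) {
  const input = event.inputBuffer;
  const output = event.outputBuffer;
  for (let ch = 0; ch < output.numberOfChannels; ch++) {
    const inData = input.getChannelData(ch);
    const outData = output.getChannelData(ch);
    // Output data must be written inside the handler's scope;
    // writes made after the handler returns have no audible effect.
    for (let i = 0; i < outData.length; i++) {
      outData[i] = 0.5 * inData[i];
    }
  }
}
// In a real page: node.onaudioprocess = handleAudioProcess;
```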
dictionary AudioProcessingEventInit : EventInit {
-    required double playbackTime;
-    required AudioBuffer inputBuffer;
-    required AudioBuffer outputBuffer;
+    required double playbackTime;
+    required AudioBuffer inputBuffer;
+    required AudioBuffer outputBuffer;
};

@@ -6351,23 +5690,23 @@ dictionary AudioProcessingEventInit : EventInit {

Dictionary {{AudioProcessingEventInit}} Members
-     computedFrequency(t) = frequency(t) * pow(2, detune(t) / 1200)
+     computedFrequency(t) = frequency(t) * pow(2, detune(t) / 1200)

The nominal range for this compound parameter is [0,

@@ -6400,13 +5739,13 @@ The nominal range for this compound parameter is [0,
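The compound-parameter computation can be sketched in script as a quick check. This non-normative helper (its name is ours) clamps to the Nyquist frequency, assuming the nominal range [0, sampleRate/2] stated for the compound parameter:

```javascript
// Sketch (not normative): the biquad compound frequency parameter.
// detune is in cents; 1200 cents is one octave, so detune scales the
// frequency by 2^(detune / 1200). The clamp assumes the nominal range
// [0, Nyquist] for the compound parameter.
function computedFrequency(frequency, detune, sampleRate) {
  const f = frequency * Math.pow(2, detune / 1200);
  return Math.min(Math.max(f, 0), sampleRate / 2);
}
```

For example, a frequency of 440 Hz detuned by +1200 cents yields 880 Hz.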
path: audionode.include
macros:
-    noi: 1
-    noo: 1
-    cc: 2
-    cc-mode: max
-    cc-interp: speakers
-    tail-time: Yes
-    tail-time-notes: Continues to output non-silent audio with zero input. Since this is an IIR filter, the filter produces non-zero input forever, but in practice, this can be limited after some finite time where the output is sufficiently close to zero. The actual time depends on the filter coefficients.
+    noi: 1
+    noo: 1
+    cc: 2
+    cc-mode: max
+    cc-interp: speakers
+    tail-time: Yes
+    tail-time-notes: Continues to output non-silent audio with zero input. Since this is an IIR filter, the filter produces non-zero output forever, but in practice, this can be limited after some finite time where the output is sufficiently close to zero. The actual time depends on the filter coefficients.

The number of channels of the output always equals the number of

@@ -6414,150 +5753,150 @@ channels of the input.
enum BiquadFilterType {
-    "lowpass",
-    "highpass",
-    "bandpass",
-    "lowshelf",
-    "highshelf",
-    "peaking",
-    "notch",
-    "allpass"
+    "lowpass",
+    "highpass",
+    "bandpass",
+    "lowshelf",
+    "highshelf",
+    "peaking",
+    "notch",
+    "allpass"
};
Enum value | Description | -||
---|---|---|---|
"lowpass" - | - A lowpass - filter allows frequencies below the cutoff frequency to - pass through and attenuates frequencies above the cutoff. It - implements a standard second-order resonant lowpass filter with - 12dB/octave rolloff. - - : frequency - :: The cutoff frequency - : Q - :: Controls how peaked the response will be at the cutoff - frequency. A large value makes the response more peaked. - : gain - :: Not used in this filter type - | ||
"highpass" - | - A highpass - filter is the opposite of a lowpass filter. Frequencies - above the cutoff frequency are passed through, but frequencies - below the cutoff are attenuated. It implements a standard - second-order resonant highpass filter with 12dB/octave rolloff. - - : frequency - :: The cutoff frequency below which the frequencies are - attenuated - : Q - :: Controls how peaked the response will be at the cutoff - frequency. A large value makes the response more peaked. - : gain - :: Not used in this filter type - | ||
"bandpass" - | - A bandpass - filter allows a range of frequencies to pass through and - attenuates the frequencies below and above this frequency - range. It implements a second-order bandpass filter. - - : frequency - :: The center of the frequency band - : Q - :: Controls the width of the band. The width becomes narrower - as the Q value increases. - : gain - :: Not used in this filter type - | ||
"lowshelf" - | - The lowshelf filter allows all frequencies through, but adds a - boost (or attenuation) to the lower frequencies. It implements - a second-order lowshelf filter. - - : frequency - :: The upper limit of the frequences where the boost (or - attenuation) is applied. - : Q - :: Not used in this filter type. - : gain - :: The boost, in dB, to be applied. If the value is negative, - the frequencies are attenuated. - | ||
"highshelf" - | - The highshelf filter is the opposite of the lowshelf filter and - allows all frequencies through, but adds a boost to the higher - frequencies. It implements a second-order highshelf filter - - : frequency - :: The lower limit of the frequences where the boost (or - attenuation) is applied. - : Q - :: Not used in this filter type. - : gain - :: The boost, in dB, to be applied. If the value is negative, - the frequencies are attenuated. - | ||
"peaking" - | - The peaking filter allows all frequencies through, but adds a - boost (or attenuation) to a range of frequencies. - - : frequency - :: The center frequency of where the boost is applied. - : Q - :: Controls the width of the band of frequencies that are - boosted. A large value implies a narrow width. - : gain - :: The boost, in dB, to be applied. If the value is negative, - the frequencies are attenuated. - | ||
"notch" - | - The notch filter (also known as a band-stop or - band-rejection filter) is the opposite of a bandpass - filter. It allows all frequencies through, except for a set of - frequencies. - - : frequency - :: The center frequency of where the notch is applied. - : Q - :: Controls the width of the band of frequencies that are - attenuated. A large value implies a narrow width. - : gain - :: Not used in this filter type. - | ||
"allpass" - |
- An
- allpass filter allows all frequencies through, but changes
- the phase relationship between the various frequencies. It
- implements a second-order allpass filter
-
- : frequency
- :: The frequency where the center of the phase transition
- occurs. Viewed another way, this is the frequency with
- maximal group
- delay.
- : Q
- :: Controls how sharp the phase transition is at the center
- frequency. A larger value implies a sharper transition and
- a larger group delay.
- : gain
- :: Not used in this filter type.
+
+
+ Enumeration description
+ | | |
"lowpass" + | + A lowpass + filter allows frequencies below the cutoff frequency to + pass through and attenuates frequencies above the cutoff. It + implements a standard second-order resonant lowpass filter with + 12dB/octave rolloff. + + : frequency + :: The cutoff frequency + : Q + :: Controls how peaked the response will be at the cutoff + frequency. A large value makes the response more peaked. + : gain + :: Not used in this filter type + | ||
"highpass" + | + A highpass + filter is the opposite of a lowpass filter. Frequencies + above the cutoff frequency are passed through, but frequencies + below the cutoff are attenuated. It implements a standard + second-order resonant highpass filter with 12dB/octave rolloff. + + : frequency + :: The cutoff frequency below which the frequencies are + attenuated + : Q + :: Controls how peaked the response will be at the cutoff + frequency. A large value makes the response more peaked. + : gain + :: Not used in this filter type + | ||
"bandpass" + | + A bandpass + filter allows a range of frequencies to pass through and + attenuates the frequencies below and above this frequency + range. It implements a second-order bandpass filter. + + : frequency + :: The center of the frequency band + : Q + :: Controls the width of the band. The width becomes narrower + as the Q value increases. + : gain + :: Not used in this filter type + | ||
"lowshelf"
+ |
+     The lowshelf filter allows all frequencies through, but adds a
+     boost (or attenuation) to the lower frequencies. It implements
+     a second-order lowshelf filter.
+
+     : frequency
+     :: The upper limit of the frequencies where the boost (or
+         attenuation) is applied.
+     : Q
+     :: Not used in this filter type.
+     : gain
+     :: The boost, in dB, to be applied. If the value is negative,
+         the frequencies are attenuated.
"highshelf"
+ |
+     The highshelf filter is the opposite of the lowshelf filter and
+     allows all frequencies through, but adds a boost to the higher
+     frequencies. It implements a second-order highshelf filter.
+
+     : frequency
+     :: The lower limit of the frequencies where the boost (or
+         attenuation) is applied.
+     : Q
+     :: Not used in this filter type.
+     : gain
+     :: The boost, in dB, to be applied. If the value is negative,
+         the frequencies are attenuated.
"peaking" + | + The peaking filter allows all frequencies through, but adds a + boost (or attenuation) to a range of frequencies. + + : frequency + :: The center frequency of where the boost is applied. + : Q + :: Controls the width of the band of frequencies that are + boosted. A large value implies a narrow width. + : gain + :: The boost, in dB, to be applied. If the value is negative, + the frequencies are attenuated. + | ||
"notch" + | + The notch filter (also known as a band-stop or + band-rejection filter) is the opposite of a bandpass + filter. It allows all frequencies through, except for a set of + frequencies. + + : frequency + :: The center frequency of where the notch is applied. + : Q + :: Controls the width of the band of frequencies that are + attenuated. A large value implies a narrow width. + : gain + :: Not used in this filter type. + | ||
"allpass"
+ |
+     An allpass filter allows all frequencies through, but changes
+     the phase relationship between the various frequencies. It
+     implements a second-order allpass filter.
+
+     : frequency
+     :: The frequency where the center of the phase transition
+         occurs. Viewed another way, this is the frequency with
+         maximal group delay.
+     : Q
+     :: Controls how sharp the phase transition is at the center
+         frequency. A larger value implies a sharper transition and
+         a larger group delay.
+     : gain
+     :: Not used in this filter type.
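The parameter usage in the table above can be summarized in a small non-normative lookup; the helper names and the set-based encoding are ours, derived only from the per-type descriptions:

```javascript
// Sketch (not normative): which AudioParams each BiquadFilterType
// actually uses, per the filter-type table above.
// gain is used only by the shelving and peaking types; Q is used by
// every type except the two shelving filters.
const GAIN_TYPES = new Set(["lowshelf", "highshelf", "peaking"]);
const Q_UNUSED_TYPES = new Set(["lowshelf", "highshelf"]);

function biquadUsesGain(type) {
  return GAIN_TYPES.has(type);
}

function biquadUsesQ(type) {
  return !Q_UNUSED_TYPES.has(type);
}
```

Such a check can be handy in UI code that greys out controls for parameters a given filter type ignores.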
[Exposed=Window] interface BiquadFilterNode : AudioNode { - constructor (BaseAudioContext context, optional BiquadFilterOptions options = {}); - attribute BiquadFilterType type; - readonly attribute AudioParam frequency; - readonly attribute AudioParam detune; - readonly attribute AudioParam Q; - readonly attribute AudioParam gain; - undefined getFrequencyResponse (Float32Array frequencyHz, - Float32Array magResponse, - Float32Array phaseResponse); + constructor (BaseAudioContext context, optional BiquadFilterOptions options = {}); + attribute BiquadFilterType type; + readonly attribute AudioParam frequency; + readonly attribute AudioParam detune; + readonly attribute AudioParam Q; + readonly attribute AudioParam gain; + undefined getFrequencyResponse (Float32Array frequencyHz, + Float32Array magResponse, + Float32Array phaseResponse); };@@ -6582,145 +5921,145 @@ interface BiquadFilterNode : AudioNode { Constructors
- path: audionode-init.include -+
+ path: audionode-init.include +-
- context: The {{BaseAudioContext}} this new {{BiquadFilterNode}} will be associated with. - options: Optional initial parameter value for this {{BiquadFilterNode}}. -+
+ context: The {{BaseAudioContext}} this new {{BiquadFilterNode}} will be associated with. + options: Optional initial parameter value for this {{BiquadFilterNode}}. +
- path: audioparam.include - macros: - default: 1 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38, but see above for the actual limits for different filters - max: most-positive-single-float - max-notes: Approximately 3.4028235e38, but see above for the actual limits for different filters - rate: "{{AutomationRate/a-rate}}" -- - : detune - :: - A detune value, in cents, for the frequency. It forms a - compound parameter with {{BiquadFilterNode/frequency}} to form the computedFrequency. - -
- path: audioparam.include - macros: - default: 0 - min: \(\approx -153600\) - min-notes: - max: \(\approx 153600\) - max-notes: This value is approximately \(1200\ \log_2 \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value. - rate: "{{AutomationRate/a-rate}}" -- - : frequency - :: - The frequency at which the {{BiquadFilterNode}} - will operate, in Hz. It forms a compound parameter with - {{BiquadFilterNode/detune}} to form the computedFrequency. - -
- path: audioparam.include - macros: - default: 350 - min: 0 - max: Nyquist frequency - rate: "{{AutomationRate/a-rate}}" -- - : gain - :: - The gain of the filter. Its value is in dB units. The gain is - only used for {{BiquadFilterType/lowshelf}}, - {{BiquadFilterType/highshelf}}, and - {{BiquadFilterType/peaking}} filters. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: \(\approx 1541\) - max-notes: This value is approximately \(40\ \log_{10} \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value. - rate: "{{AutomationRate/a-rate}}" -- - : type - :: - The type of this {{BiquadFilterNode}}. Its - default value is "{{BiquadFilterType/lowpass}}". The exact meaning of the other - parameters depend on the value of the {{BiquadFilterNode/type}} - attribute. + : Q + :: + The Q + factor of the filter. + + For {{BiquadFilterType/lowpass}} and + {{BiquadFilterType/highpass}} filters the + {{BiquadFilterNode/Q}} value is interpreted to be in + dB. For these filters the nominal range is + \([-Q_{lim}, Q_{lim}]\) where \(Q_{lim}\) is the largest + value for which \(10^{Q/20}\) does not overflow. This + is approximately \(770.63678\). + + For the {{BiquadFilterType/bandpass}}, + {{BiquadFilterType/notch}}, + {{BiquadFilterType/allpass}}, and + {{BiquadFilterType/peaking}} filters, this value is a + linear value. The value is related to the bandwidth + of the filter and hence should be a positive value. + The nominal range is \([0, 3.4028235e38]\), the upper + limit being the most-positive-single-float. + + This is not used for the {{BiquadFilterType/lowshelf}} + and {{BiquadFilterType/highshelf}} filters. + +
+ path: audioparam.include + macros: + default: 1 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38, but see above for the actual limits for different filters + max: most-positive-single-float + max-notes: Approximately 3.4028235e38, but see above for the actual limits for different filters + rate: "{{AutomationRate/a-rate}}" ++ + : detune + :: + A detune value, in cents, for the frequency. It forms a + compound parameter with {{BiquadFilterNode/frequency}} to form the computedFrequency. + +
+ path: audioparam.include + macros: + default: 0 + min: \(\approx -153600\) + min-notes: + max: \(\approx 153600\) + max-notes: This value is approximately \(1200\ \log_2 \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value. + rate: "{{AutomationRate/a-rate}}" ++ + : frequency + :: + The frequency at which the {{BiquadFilterNode}} + will operate, in Hz. It forms a compound parameter with + {{BiquadFilterNode/detune}} to form the computedFrequency. + +
+ path: audioparam.include + macros: + default: 350 + min: 0 + max: Nyquist frequency + rate: "{{AutomationRate/a-rate}}" ++ + : gain + :: + The gain of the filter. Its value is in dB units. The gain is + only used for {{BiquadFilterType/lowshelf}}, + {{BiquadFilterType/highshelf}}, and + {{BiquadFilterType/peaking}} filters. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: \(\approx 1541\) + max-notes: This value is approximately \(40\ \log_{10} \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value. + rate: "{{AutomationRate/a-rate}}" ++ + : type + :: + The type of this {{BiquadFilterNode}}. Its + default value is "{{BiquadFilterType/lowpass}}". The exact meaning of the other + parameters depends on the value of the {{BiquadFilterNode/type}} + attribute.
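As noted above, {{BiquadFilterNode/frequency}} and {{BiquadFilterNode/detune}} form a compound parameter yielding the computedFrequency. A minimal sketch of that relation, using the standard cents convention (1200 cents per octave):

```javascript
// computedFrequency combines the frequency AudioParam (Hz) with the
// detune AudioParam (cents); 1200 cents is one octave.
function computedFrequency(frequency, detune) {
  return frequency * Math.pow(2, detune / 1200);
}
```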
- frequencyHz: This parameter specifies an array of frequencies, in Hz, at which the response values will be calculated. - magResponse: This parameter specifies an output array receiving the linear magnitude response values. If a value in the- -frequencyHz
parameter is not within [0, sampleRate/2], wheresampleRate
is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of themagResponse
array MUST beNaN
. - phaseResponse: This parameter specifies an output array receiving the phase response values in radians. If a value in thefrequencyHz
parameter is not within [0; sampleRate/2], wheresampleRate
is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of thephaseResponse
array MUST beNaN
. -
+ frequencyHz: This parameter specifies an array of frequencies, in Hz, at which the response values will be calculated. + magResponse: This parameter specifies an output array receiving the linear magnitude response values. If a value in the+ +frequencyHz
parameter is not within [0, sampleRate/2], wheresampleRate
is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of themagResponse
array MUST beNaN
. + phaseResponse: This parameter specifies an output array receiving the phase response values in radians. If a value in thefrequencyHz
parameter is not within [0, sampleRate/2], where
is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of thephaseResponse
array MUST beNaN
. +
undefined
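A hedged usage sketch for getFrequencyResponse(). The `makeFrequencies` helper is ours, not part of the API, and the browser-only portion is guarded so the snippet is inert outside a document environment:

```javascript
// Build an array of linearly spaced analysis frequencies (Hz).
function makeFrequencies(start, end, n) {
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) out[i] = start + ((end - start) * i) / (n - 1);
  return out;
}

// Browser-only portion: query a BiquadFilterNode's response.
if (typeof AudioContext !== "undefined") {
  const ctx = new AudioContext();
  const filter = new BiquadFilterNode(ctx, { type: "peaking", frequency: 1000, Q: 5, gain: 6 });
  const freqs = makeFrequencies(20, ctx.sampleRate / 2, 32);
  const mag = new Float32Array(freqs.length);
  const phase = new Float32Array(freqs.length);
  filter.getFrequencyResponse(freqs, mag, phase);
  // Any frequency outside [0, sampleRate/2] yields NaN in mag/phase.
}
```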
+ dictionary BiquadFilterOptions : AudioNodeOptions { - BiquadFilterType type = "lowpass"; - float Q = 1; - float detune = 0; - float frequency = 350; - float gain = 0; + BiquadFilterType type = "lowpass"; + float Q = 1; + float detune = 0; + float frequency = 350; + float gain = 0; };@@ -6742,20 +6081,20 @@ dictionary BiquadFilterOptions : AudioNodeOptions { Dictionary {{BiquadFilterOptions}} Members
$$ a_0 y(n) + a_1 y(n-1) + a_2 y(n-2) = - b_0 x(n) + b_1 x(n-1) + b_2 x(n-2) + b_0 x(n) + b_1 x(n-1) + b_2 x(n-2) $$@@ -6811,142 +6150,142 @@ their computation, based on the computedValue of the * Let \(Q\) be the value of the {{BiquadFilterNode/Q}} {{AudioParam}}. * Finally let - -
- $$ - \begin{align*} - A &= 10^{\frac{G}{40}} \\ - \omega_0 &= 2\pi\frac{f_0}{F_s} \\ - \alpha_Q &= \frac{\sin\omega_0}{2Q} \\ - \alpha_{Q_{dB}} &= \frac{\sin\omega_0}{2 \cdot 10^{Q/20}} \\ - S &= 1 \\ - \alpha_S &= \frac{\sin\omega_0}{2}\sqrt{\left(A+\frac{1}{A}\right)\left(\frac{1}{S}-1\right)+2} - \end{align*} - $$ -+ +
+ $$ + \begin{align*} + A &= 10^{\frac{G}{40}} \\ + \omega_0 &= 2\pi\frac{f_0}{F_s} \\ + \alpha_Q &= \frac{\sin\omega_0}{2Q} \\ + \alpha_{Q_{dB}} &= \frac{\sin\omega_0}{2 \cdot 10^{Q/20}} \\ + S &= 1 \\ + \alpha_S &= \frac{\sin\omega_0}{2}\sqrt{\left(A+\frac{1}{A}\right)\left(\frac{1}{S}-1\right)+2} + \end{align*} + $$ +The six coefficients (\(b_0, b_1, b_2, a_0, a_1, a_2\)) for each filter type, are: : "{{lowpass}}" :: -
- $$ - \begin{align*} - b_0 &= \frac{1 - \cos\omega_0}{2} \\ - b_1 &= 1 - \cos\omega_0 \\ - b_2 &= \frac{1 - \cos\omega_0}{2} \\ - a_0 &= 1 + \alpha_{Q_{dB}} \\ - a_1 &= -2 \cos\omega_0 \\ - a_2 &= 1 - \alpha_{Q_{dB}} - \end{align*} - $$ -+
+ $$ + \begin{align*} + b_0 &= \frac{1 - \cos\omega_0}{2} \\ + b_1 &= 1 - \cos\omega_0 \\ + b_2 &= \frac{1 - \cos\omega_0}{2} \\ + a_0 &= 1 + \alpha_{Q_{dB}} \\ + a_1 &= -2 \cos\omega_0 \\ + a_2 &= 1 - \alpha_{Q_{dB}} + \end{align*} + $$ +: "{{highpass}}" :: -
- $$ - \begin{align*} - b_0 &= \frac{1 + \cos\omega_0}{2} \\ - b_1 &= -(1 + \cos\omega_0) \\ - b_2 &= \frac{1 + \cos\omega_0}{2} \\ - a_0 &= 1 + \alpha_{Q_{dB}} \\ - a_1 &= -2 \cos\omega_0 \\ - a_2 &= 1 - \alpha_{Q_{dB}} - \end{align*} - $$ -+
+ $$ + \begin{align*} + b_0 &= \frac{1 + \cos\omega_0}{2} \\ + b_1 &= -(1 + \cos\omega_0) \\ + b_2 &= \frac{1 + \cos\omega_0}{2} \\ + a_0 &= 1 + \alpha_{Q_{dB}} \\ + a_1 &= -2 \cos\omega_0 \\ + a_2 &= 1 - \alpha_{Q_{dB}} + \end{align*} + $$ +: "{{bandpass}}" :: -
- $$ - \begin{align*} - b_0 &= \alpha_Q \\ - b_1 &= 0 \\ - b_2 &= -\alpha_Q \\ - a_0 &= 1 + \alpha_Q \\ - a_1 &= -2 \cos\omega_0 \\ - a_2 &= 1 - \alpha_Q - \end{align*} - $$ -+
+ $$ + \begin{align*} + b_0 &= \alpha_Q \\ + b_1 &= 0 \\ + b_2 &= -\alpha_Q \\ + a_0 &= 1 + \alpha_Q \\ + a_1 &= -2 \cos\omega_0 \\ + a_2 &= 1 - \alpha_Q + \end{align*} + $$ +: "{{notch}}" :: -
- $$ - \begin{align*} - b_0 &= 1 \\ - b_1 &= -2\cos\omega_0 \\ - b_2 &= 1 \\ - a_0 &= 1 + \alpha_Q \\ - a_1 &= -2 \cos\omega_0 \\ - a_2 &= 1 - \alpha_Q - \end{align*} - $$ -+
+ $$ + \begin{align*} + b_0 &= 1 \\ + b_1 &= -2\cos\omega_0 \\ + b_2 &= 1 \\ + a_0 &= 1 + \alpha_Q \\ + a_1 &= -2 \cos\omega_0 \\ + a_2 &= 1 - \alpha_Q + \end{align*} + $$ +: "{{allpass}}" :: -
- $$ - \begin{align*} - b_0 &= 1 - \alpha_Q \\ - b_1 &= -2\cos\omega_0 \\ - b_2 &= 1 + \alpha_Q \\ - a_0 &= 1 + \alpha_Q \\ - a_1 &= -2 \cos\omega_0 \\ - a_2 &= 1 - \alpha_Q - \end{align*} - $$ -+
+ $$ + \begin{align*} + b_0 &= 1 - \alpha_Q \\ + b_1 &= -2\cos\omega_0 \\ + b_2 &= 1 + \alpha_Q \\ + a_0 &= 1 + \alpha_Q \\ + a_1 &= -2 \cos\omega_0 \\ + a_2 &= 1 - \alpha_Q + \end{align*} + $$ +: "{{peaking}}" :: -
- $$ - \begin{align*} - b_0 &= 1 + \alpha_Q\, A \\ - b_1 &= -2\cos\omega_0 \\ - b_2 &= 1 - \alpha_Q\,A \\ - a_0 &= 1 + \frac{\alpha_Q}{A} \\ - a_1 &= -2 \cos\omega_0 \\ - a_2 &= 1 - \frac{\alpha_Q}{A} - \end{align*} - $$ -+
+ $$ + \begin{align*} + b_0 &= 1 + \alpha_Q\, A \\ + b_1 &= -2\cos\omega_0 \\ + b_2 &= 1 - \alpha_Q\,A \\ + a_0 &= 1 + \frac{\alpha_Q}{A} \\ + a_1 &= -2 \cos\omega_0 \\ + a_2 &= 1 - \frac{\alpha_Q}{A} + \end{align*} + $$ +: "{{lowshelf}}" :: -
- $$ - \begin{align*} - b_0 &= A \left[ (A+1) - (A-1) \cos\omega_0 + 2 \alpha_S \sqrt{A})\right] \\ - b_1 &= 2 A \left[ (A-1) - (A+1) \cos\omega_0 )\right] \\ - b_2 &= A \left[ (A+1) - (A-1) \cos\omega_0 - 2 \alpha_S \sqrt{A}) \right] \\ - a_0 &= (A+1) + (A-1) \cos\omega_0 + 2 \alpha_S \sqrt{A} \\ - a_1 &= -2 \left[ (A-1) + (A+1) \cos\omega_0\right] \\ - a_2 &= (A+1) + (A-1) \cos\omega_0 - 2 \alpha_S \sqrt{A}) - \end{align*} - $$ -+
+ $$ + \begin{align*} + b_0 &= A \left[ (A+1) - (A-1) \cos\omega_0 + 2 \alpha_S \sqrt{A})\right] \\ + b_1 &= 2 A \left[ (A-1) - (A+1) \cos\omega_0 )\right] \\ + b_2 &= A \left[ (A+1) - (A-1) \cos\omega_0 - 2 \alpha_S \sqrt{A}) \right] \\ + a_0 &= (A+1) + (A-1) \cos\omega_0 + 2 \alpha_S \sqrt{A} \\ + a_1 &= -2 \left[ (A-1) + (A+1) \cos\omega_0\right] \\ + a_2 &= (A+1) + (A-1) \cos\omega_0 - 2 \alpha_S \sqrt{A}) + \end{align*} + $$ +: "{{highshelf}}" :: -
- $$ - \begin{align*} - b_0 &= A\left[ (A+1) + (A-1)\cos\omega_0 + 2\alpha_S\sqrt{A} )\right] \\ - b_1 &= -2A\left[ (A-1) + (A+1)\cos\omega_0 )\right] \\ - b_2 &= A\left[ (A+1) + (A-1)\cos\omega_0 - 2\alpha_S\sqrt{A} )\right] \\ - a_0 &= (A+1) - (A-1)\cos\omega_0 + 2\alpha_S\sqrt{A} \\ - a_1 &= 2\left[ (A-1) - (A+1)\cos\omega_0\right] \\ - a_2 &= (A+1) - (A-1)\cos\omega_0 - 2\alpha_S\sqrt{A} - \end{align*} - $$ -+
+ $$ + \begin{align*} + b_0 &= A\left[ (A+1) + (A-1)\cos\omega_0 + 2\alpha_S\sqrt{A} )\right] \\ + b_1 &= -2A\left[ (A-1) + (A+1)\cos\omega_0 )\right] \\ + b_2 &= A\left[ (A+1) + (A-1)\cos\omega_0 - 2\alpha_S\sqrt{A} )\right] \\ + a_0 &= (A+1) - (A-1)\cos\omega_0 + 2\alpha_S\sqrt{A} \\ + a_1 &= 2\left[ (A-1) - (A+1)\cos\omega_0\right] \\ + a_2 &= (A+1) - (A-1)\cos\omega_0 - 2\alpha_S\sqrt{A} + \end{align*} + $$ +-
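The lowpass recipe above can be transcribed directly into code. This is a sketch over plain numbers (no AudioParam machinery); note that lowpass and highpass use the dB-interpreted \(\alpha_{Q_{dB}}\) rather than the linear \(\alpha_Q\):

```javascript
// Lowpass biquad coefficients per the formulas above.
// f0: computedFrequency (Hz), Fs: sample rate (Hz), qDb: Q in dB.
function lowpassCoefficients(f0, Fs, qDb) {
  const w0 = (2 * Math.PI * f0) / Fs;           // normalized frequency
  const cosw0 = Math.cos(w0);
  const alphaQdB = Math.sin(w0) / (2 * Math.pow(10, qDb / 20));
  return {
    b0: (1 - cosw0) / 2,
    b1: 1 - cosw0,
    b2: (1 - cosw0) / 2,
    a0: 1 + alphaQdB,
    a1: -2 * cosw0,
    a2: 1 - alphaQdB,
  };
}
```

The other seven types follow the same pattern with their respective coefficient tables.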
path: audionode.include macros: - noi: see notes - noi-notes: Defaults to 6, but is determined by {{ChannelMergerOptions}},{{ChannelMergerOptions/numberOfInputs}} or the value specified by {{BaseAudioContext/createChannelMerger}}. - noo: 1 - cc: 1 - cc-notes: Has channelCount constraints - cc-mode: explicit - cc-mode-notes: Has channelCountMode constraints - cc-interp: speakers - tail-time: No + noi: see notes + noi-notes: Defaults to 6, but is determined by {{ChannelMergerOptions}},{{ChannelMergerOptions/numberOfInputs}} or the value specified by {{BaseAudioContext/createChannelMerger}}. + noo: 1 + cc: 1 + cc-notes: Has channelCount constraints + cc-mode: explicit + cc-mode-notes: Has channelCountMode constraints + cc-interp: speakers + tail-time: NoThis interface represents an {{AudioNode}} for @@ -7003,56 +6342,56 @@ output. Changing input streams does not affect the order of output channels.
[Exposed=Window] interface ChannelMergerNode : AudioNode { - constructor (BaseAudioContext context, optional ChannelMergerOptions options = {}); + constructor (BaseAudioContext context, optional ChannelMergerOptions options = {}); };
- path: audionode-init.include -+
+ path: audionode-init.include +-
- context: The {{BaseAudioContext}} this new {{ChannelMergerNode}} will be associated with. - options: Optional initial parameter value for this {{ChannelMergerNode}}. -+
+ context: The {{BaseAudioContext}} this new {{ChannelMergerNode}} will be associated with. + options: Optional initial parameter value for this {{ChannelMergerNode}}. +
dictionary ChannelMergerOptions : AudioNodeOptions { - unsigned long numberOfInputs = 6; + unsigned long numberOfInputs = 6; };@@ -7060,8 +6399,8 @@ dictionary ChannelMergerOptions : AudioNodeOptions { Dictionary {{ChannelMergerOptions}} Members
path: audionode.include macros: - noi: 1 - noo: see notes - noo-notes: This defaults to 6, but is otherwise determined from {{ChannelSplitterOptions/numberOfOutputs|ChannelSplitterOptions.numberOfOutputs}} or the value specified by {{BaseAudioContext/createChannelSplitter}} or the {{ChannelSplitterOptions/numberOfOutputs}} member of the {{ChannelSplitterOptions}} dictionary for the {{ChannelSplitterNode/ChannelSplitterNode()|constructor}}. - cc: {{AudioNode/numberOfOutputs}} - cc-notes: Has channelCount constraints - cc-mode: explicit - cc-mode-notes: Has channelCountMode constraints - cc-interp: discrete - cc-interp-notes: Has channelInterpretation constraints - tail-time: No + noi: 1 + noo: see notes + noo-notes: This defaults to 6, but is otherwise determined from {{ChannelSplitterOptions/numberOfOutputs|ChannelSplitterOptions.numberOfOutputs}} or the value specified by {{BaseAudioContext/createChannelSplitter}} or the {{ChannelSplitterOptions/numberOfOutputs}} member of the {{ChannelSplitterOptions}} dictionary for the {{ChannelSplitterNode/ChannelSplitterNode()|constructor}}. + cc: {{AudioNode/numberOfOutputs}} + cc-notes: Has channelCount constraints + cc-mode: explicit + cc-mode-notes: Has channelCountMode constraints + cc-interp: discrete + cc-interp-notes: Has channelInterpretation constraints + tail-time: NoThis interface represents an {{AudioNode}} for @@ -7120,16 +6459,16 @@ are not "active" will output silence and would typically not be connected to anything.
[Exposed=Window] interface ChannelSplitterNode : AudioNode { - constructor (BaseAudioContext context, optional ChannelSplitterOptions options = {}); + constructor (BaseAudioContext context, optional ChannelSplitterOptions options = {}); };
- path: audionode-init.include -+
+ path: audionode-init.include +-
- context: The {{BaseAudioContext}} this new {{ChannelSplitterNode}} will be associated with. - options: Optional initial parameter value for this {{ChannelSplitterNode}}. -+
+ context: The {{BaseAudioContext}} this new {{ChannelSplitterNode}} will be associated with. + options: Optional initial parameter value for this {{ChannelSplitterNode}}. +
dictionary ChannelSplitterOptions : AudioNodeOptions { - unsigned long numberOfOutputs = 6; + unsigned long numberOfOutputs = 6; };@@ -7173,8 +6512,8 @@ dictionary ChannelSplitterOptions : AudioNodeOptions { Dictionary {{ChannelSplitterOptions}} Members
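A conceptual model of the splitter/merger pair, with plain arrays standing in for audio channels. These helpers are illustrative only, not part of the API; in a real graph the routing is done by connecting the splitter's mono outputs to the merger's inputs:

```javascript
// ChannelSplitterNode conceptually routes each channel of one input
// to its own mono output; ChannelMergerNode does the reverse.
function splitChannels(multiChannel) {
  return multiChannel.map((ch) => [ch]); // each output is a 1-channel stream
}
function mergeChannels(monoStreams) {
  return monoStreams.map((s) => s[0]); // one N-channel stream
}

// Swapping left/right is then just reordering between split and merge:
function swapStereo(stereo) {
  const [left, right] = splitChannels(stereo);
  return mergeChannels([right, left]);
}
```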
path: audionode.include macros: - noi: 0 - noo: 1 - cc: 2 - cc-mode: max - cc-interp: speakers - tail-time: No + noi: 0 + noo: 1 + cc: 2 + cc-mode: max + cc-interp: speakers + tail-time: No
[Exposed=Window] interface ConstantSourceNode : AudioScheduledSourceNode { - constructor (BaseAudioContext context, optional ConstantSourceOptions options = {}); - readonly attribute AudioParam offset; + constructor (BaseAudioContext context, optional ConstantSourceOptions options = {}); + readonly attribute AudioParam offset; };
- path: audionode-init.include -+
+ path: audionode-init.include +-
- context: The {{BaseAudioContext}} this new {{ConstantSourceNode}} will be associated with. - options: Optional initial parameter value for this {{ConstantSourceNode}}. -+
+ context: The {{BaseAudioContext}} this new {{ConstantSourceNode}} will be associated with. + options: Optional initial parameter value for this {{ConstantSourceNode}}. +
- path: audioparam.include - macros: - default: 1 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" -+ : offset + :: + The constant value of the source. + +
+ path: audioparam.include + macros: + default: 1 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" +
dictionary ConstantSourceOptions { - float offset = 1; + float offset = 1; };@@ -7281,8 +6620,8 @@ dictionary ConstantSourceOptions { Dictionary {{ConstantSourceOptions}} Members
path: audionode.include macros: - noi: 1 - noo: 1 - cc: 2 - cc-notes: Has channelCount constraints - cc-mode: clamped-max - cc-mode-notes: Has channelCountMode constraints - cc-interp: speakers - tail-time: Yes - tail-time-notes: Continues to output non-silent audio with zero input for the length of the {{ConvolverNode/buffer}}. + noi: 1 + noo: 1 + cc: 2 + cc-notes: Has channelCount constraints + cc-mode: clamped-max + cc-mode-notes: Has channelCountMode constraints + cc-interp: speakers + tail-time: Yes + tail-time-notes: Continues to output non-silent audio with zero input for the length of the {{ConvolverNode/buffer}}.The input of this node is either mono (1 channel) or stereo (2 @@ -7327,178 +6666,180 @@ input to the node is either mono or stereo.
[Exposed=Window] interface ConvolverNode : AudioNode { - constructor (BaseAudioContext context, optional ConvolverOptions options = {}); - attribute AudioBuffer? buffer; - attribute boolean normalize; + constructor (BaseAudioContext context, optional ConvolverOptions options = {}); + attribute AudioBuffer? buffer; + attribute boolean normalize; };
- context: The {{BaseAudioContext}} this new {{ConvolverNode}} will be associated with. - options: Optional initial parameter value for this {{ConvolverNode}}. -+
+ context: The {{BaseAudioContext}} this new {{ConvolverNode}} will be associated with. + options: Optional initial parameter value for this {{ConvolverNode}}. +
false
, then the convolution will be rendered with
- no pre-processing/scaling of the impulse response. Changes to
- this value do not take effect until the next time the
- {{ConvolverNode/buffer}} attribute is set.
-
- If the {{normalize}} attribute is false when the
- {{ConvolverNode/buffer}} attribute is set then the
- {{ConvolverNode}} will perform a linear
- convolution given the exact impulse response contained within
- the {{ConvolverNode/buffer}}.
-
- Otherwise, if the {{normalize}} attribute is true when the
- {{ConvolverNode/buffer}} attribute is set then the
- {{ConvolverNode}} will first perform a scaled
- RMS-power analysis of the audio data contained within
- {{ConvolverNode/buffer}} to calculate a normalizationScale
- given this algorithm:
-
- - function calculateNormalizationScale(buffer) { - const GainCalibration = 0.00125; - const GainCalibrationSampleRate = 44100; - const MinPower = 0.000125; - - // Normalize by RMS power. - const numberOfChannels = buffer.numberOfChannels; - const length = buffer.length; - - let power = 0; - - for (let i = 0; i < numberOfChannels; i++) { - let channelPower = 0; - const channelData = buffer.getChannelData(i); - - for (let j = 0; j < length; j++) { - const sample = channelData[j]; - channelPower += sample * sample; - } - - power += channelPower; - } - - power = Math.sqrt(power / (numberOfChannels * length)); - - // Protect against accidental overload. - if (!isFinite(power) || isNaN(power) || power < MinPower) - power = MinPower; - - let scale = 1 / power; - - // Calibrate to make perceived volume same as unprocessed. - scale *= GainCalibration; - - // Scale depends on sample-rate. - if (buffer.sampleRate) - scale *= GainCalibrationSampleRate / buffer.sampleRate; - - // True-stereo compensation. - if (numberOfChannels == 4) - scale *= 0.5; - - return scale; - } -- - During processing, the ConvolverNode will then take this - calculated normalizationScale value and multiply it by - the result of the linear convolution resulting from processing - the input with the impulse response (represented by the - {{ConvolverNode/buffer}}) to produce the final output. Or any - mathematically equivalent operation may be used, such as - pre-multiplying the input by normalizationScale, or - pre-multiplying a version of the impulse-response by - normalizationScale. + : buffer + :: + + At the time when this attribute is set, the {{ConvolverNode/buffer}} and + the state of the {{normalize}} attribute will be used to + configure the {{ConvolverNode}} with this + impulse response having the given normalization. The initial + value of this attribute is null. + + :: +
false
, then the convolution will be rendered with
+ no pre-processing/scaling of the impulse response. Changes to
+ this value do not take effect until the next time the
+ {{ConvolverNode/buffer}} attribute is set.
+
+ If the {{normalize}} attribute is false when the
+ {{ConvolverNode/buffer}} attribute is set then the
+ {{ConvolverNode}} will perform a linear
+ convolution given the exact impulse response contained within
+ the {{ConvolverNode/buffer}}.
+
+ Otherwise, if the {{normalize}} attribute is true when the
+ {{ConvolverNode/buffer}} attribute is set then the
+ {{ConvolverNode}} will first perform a scaled
+ RMS-power analysis of the audio data contained within
+ {{ConvolverNode/buffer}} to calculate a normalizationScale
+ given this algorithm:
+
+ + function calculateNormalizationScale(buffer) { + const GainCalibration = 0.00125; + const GainCalibrationSampleRate = 44100; + const MinPower = 0.000125; + + // Normalize by RMS power. + const numberOfChannels = buffer.numberOfChannels; + const length = buffer.length; + + let power = 0; + + for (let i = 0; i < numberOfChannels; i++) { + let channelPower = 0; + const channelData = buffer.getChannelData(i); + + for (let j = 0; j < length; j++) { + const sample = channelData[j]; + channelPower += sample * sample; + } + + power += channelPower; + } + + power = Math.sqrt(power / (numberOfChannels * length)); + + // Protect against accidental overload. + if (!isFinite(power) || isNaN(power) || power < MinPower) + power = MinPower; + + let scale = 1 / power; + + // Calibrate to make perceived volume same as unprocessed. + scale *= GainCalibration; + + // Scale depends on sample-rate. + if (buffer.sampleRate) + scale *= GainCalibrationSampleRate / buffer.sampleRate; + + // True-stereo compensation. + if (numberOfChannels == 4) + scale *= 0.5; + + return scale; + } ++ + During processing, the ConvolverNode will then take this + calculated normalizationScale value and multiply it by + the result of the linear convolution resulting from processing + the input with the impulse response (represented by the + {{ConvolverNode/buffer}}) to produce the final output. Or any + mathematically equivalent operation may be used, such as + pre-multiplying the input by normalizationScale, or + pre-multiplying a version of the impulse-response by + normalizationScale.
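Since the normalization algorithm above is plain arithmetic over the buffer's samples, it can be exercised against a minimal mock object that mimics the {{AudioBuffer}} surface it touches. The mock and its sample values are ours, chosen so the expected scale is easy to verify by hand:

```javascript
// calculateNormalizationScale as given above.
function calculateNormalizationScale(buffer) {
  const GainCalibration = 0.00125;
  const GainCalibrationSampleRate = 44100;
  const MinPower = 0.000125;

  // Normalize by RMS power.
  const numberOfChannels = buffer.numberOfChannels;
  const length = buffer.length;
  let power = 0;
  for (let i = 0; i < numberOfChannels; i++) {
    let channelPower = 0;
    const channelData = buffer.getChannelData(i);
    for (let j = 0; j < length; j++) {
      const sample = channelData[j];
      channelPower += sample * sample;
    }
    power += channelPower;
  }
  power = Math.sqrt(power / (numberOfChannels * length));

  // Protect against accidental overload.
  if (!isFinite(power) || isNaN(power) || power < MinPower) power = MinPower;

  let scale = 1 / power;
  scale *= GainCalibration;                                   // perceived-volume calibration
  if (buffer.sampleRate) scale *= GainCalibrationSampleRate / buffer.sampleRate;
  if (numberOfChannels == 4) scale *= 0.5;                    // true-stereo compensation
  return scale;
}

// Mock mono impulse response at full scale: RMS power is exactly 1,
// so at 44100 Hz the scale collapses to GainCalibration (0.00125).
const mockBuffer = {
  numberOfChannels: 1,
  length: 8,
  sampleRate: 44100,
  getChannelData: () => new Float32Array(8).fill(1),
};
```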
dictionary ConvolverOptions : AudioNodeOptions { - AudioBuffer? buffer; - boolean disableNormalization = false; + AudioBuffer? buffer; + boolean disableNormalization = false; };@@ -7516,17 +6857,17 @@ dictionary ConvolverOptions : AudioNodeOptions { Dictionary {{ConvolverOptions}} Members
path: audionode.include macros: - noi: 1 - noo: 1 - cc: 2 - cc-mode: max - cc-interp: speakers - tail-time: Yes - tail-time-notes: Continues to output non-silent audio with zero input up to theThe number of channels of the output always equals the number of @@ -7614,56 +6955,56 @@ latency equal to the amount of the delay.{{DelayOptions/maxDelayTime}} of the node. + noi: 1 + noo: 1 + cc: 2 + cc-mode: max + cc-interp: speakers + tail-time: Yes + tail-time-notes: Continues to output non-silent audio with zero input up to the{{DelayOptions/maxDelayTime}} of the node.
[Exposed=Window] interface DelayNode : AudioNode { - constructor (BaseAudioContext context, optional DelayOptions options = {}); - readonly attribute AudioParam delayTime; + constructor (BaseAudioContext context, optional DelayOptions options = {}); + readonly attribute AudioParam delayTime; };
- path: audionode-init.include -+
+ path: audionode-init.include +-
- context: The {{BaseAudioContext}} this new {{DelayNode}} will be associated with. - options: Optional initial parameter value for this {{DelayNode}}. -+
+ context: The {{BaseAudioContext}} this new {{DelayNode}} will be associated with. + options: Optional initial parameter value for this {{DelayNode}}. +
value
is 0 (no delay). The minimum value is 0 and
- the maximum value is determined by the
- {{maxDelayTime!!argument}} argument to the
- {{AudioContext}} method {{createDelay()}} or the {{DelayOptions/maxDelayTime}} member of the {{DelayOptions}} dictionary for the {{DelayNode/DelayNode()|constructor}}.
-
- If {{DelayNode}} is part of a cycle,
- then the value of the {{DelayNode/delayTime}} attribute
- is clamped to a minimum of one render quantum.
-
- - path: audioparam.include - macros: - default: 0 - min: 0 - max:+ : delayTime + :: + An {{AudioParam}} object representing the + amount of delay (in seconds) to apply. Its default +{{DelayOptions/maxDelayTime}} - rate: "{{AutomationRate/a-rate}}" -
value
is 0 (no delay). The minimum value is 0 and
+ the maximum value is determined by the
+ {{maxDelayTime!!argument}} argument to the
+ {{AudioContext}} method {{createDelay()}} or the {{DelayOptions/maxDelayTime}} member of the {{DelayOptions}} dictionary for the {{DelayNode/DelayNode()|constructor}}.
+
+ If {{DelayNode}} is part of a cycle,
+ then the value of the {{DelayNode/delayTime}} attribute
+ is clamped to a minimum of one render quantum.
+
+ + path: audioparam.include + macros: + default: 0 + min: 0 + max:{{DelayOptions/maxDelayTime}} + rate: "{{AutomationRate/a-rate}}" +
dictionary DelayOptions : AudioNodeOptions { - double maxDelayTime = 1; - double delayTime = 0; + double maxDelayTime = 1; + double delayTime = 0; };@@ -7681,11 +7022,11 @@ dictionary DelayOptions : AudioNodeOptions { Dictionary {{DelayOptions}} Members
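Conceptually, a {{DelayNode}} with a fixed delayTime of \(d\) seconds outputs its input shifted by \(d \cdot F_s\) samples. A pure sketch of that core behavior (illustrative only; the real node interpolates fractional, a-rate-automated delays):

```javascript
// A fixed integer-sample delay line, the conceptual core of DelayNode:
// output[i] = input[i - n], with silence before the input starts.
// n corresponds to round(delayTime * sampleRate) for a fixed delayTime.
function delayBySamples(input, n) {
  const out = new Array(input.length).fill(0);
  for (let i = n; i < input.length; i++) out[i] = input[i - n];
  return out;
}
```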
path: audionode.include macros: - noi: 1 - noo: 1 - cc: 2 - cc-notes: Has channelCount constraints - cc-mode: clamped-max - cc-mode-notes: Has channelCountMode constraints - cc-interp: speakers - tail-time: Yes - tail-time-notes: This node has a tail-time such that this node continues to output non-silent audio with zero input due to the look-ahead delay. + noi: 1 + noo: 1 + cc: 2 + cc-notes: Has channelCount constraints + cc-mode: clamped-max + cc-mode-notes: Has channelCountMode constraints + cc-interp: speakers + tail-time: Yes + tail-time-notes: This node has a tail-time such that it continues to output non-silent audio with zero input due to the look-ahead delay.
[Exposed=Window] interface DynamicsCompressorNode : AudioNode { - constructor (BaseAudioContext context, - optional DynamicsCompressorOptions options = {}); - readonly attribute AudioParam threshold; - readonly attribute AudioParam knee; - readonly attribute AudioParam ratio; - readonly attribute float reduction; - readonly attribute AudioParam attack; - readonly attribute AudioParam release; + constructor (BaseAudioContext context, + optional DynamicsCompressorOptions options = {}); + readonly attribute AudioParam threshold; + readonly attribute AudioParam knee; + readonly attribute AudioParam ratio; + readonly attribute float reduction; + readonly attribute AudioParam attack; + readonly attribute AudioParam release; };
- path: audionode-init.include -+
+ path: audionode-init.include +- Let [[internal reduction]] - be a private slot on this, that holds a floating point number, in - decibels. Set {{[[internal reduction]]}} to 0.0. + Let [[internal reduction]] + be a private slot on this, that holds a floating point number, in + decibels. Set {{[[internal reduction]]}} to 0.0. -
- context: The {{BaseAudioContext}} this new {{DynamicsCompressorNode}} will be associated with. - options: Optional initial parameter value for this {{DynamicsCompressorNode}}. -+
+ context: The {{BaseAudioContext}} this new {{DynamicsCompressorNode}} will be associated with. + options: Optional initial parameter value for this {{DynamicsCompressorNode}}. +
: attack
::
    The amount of time (in seconds) to reduce the gain by 10dB.

path: audioparam.include
macros:
    default: .003
    min: 0
    max: 1
    rate: "{{AutomationRate/k-rate}}"
    rate-notes: Has [=automation rate constraints=]

: knee
::
    A decibel value representing the range above the threshold
    where the curve smoothly transitions to the "ratio" portion.

path: audioparam.include
macros:
    default: 30
    min: 0
    max: 40
    rate: "{{AutomationRate/k-rate}}"
    rate-notes: Has [=automation rate constraints=]

: ratio
::
    The amount of dB change in input for a 1 dB change in output.

path: audioparam.include
macros:
    default: 12
    min: 1
    max: 20
    rate: "{{AutomationRate/k-rate}}"
    rate-notes: Has [=automation rate constraints=]

: reduction
::
    A read-only decibel value for metering purposes, representing the
    current amount of gain reduction that the compressor is applying
    to the signal. If fed no signal the value will be 0 (no gain
    reduction). When this attribute is read, return the value of the
    private slot {{[[internal reduction]]}}.

: release
::
    The amount of time (in seconds) to increase the gain by 10dB.

path: audioparam.include
macros:
    default: .25
    min: 0
    max: 1
    rate: "{{AutomationRate/k-rate}}"
    rate-notes: Has [=automation rate constraints=]

: threshold
::
    The decibel value above which the compression will start taking effect.

path: audioparam.include
macros:
    default: -24
    min: -100
    max: 0
    rate: "{{AutomationRate/k-rate}}"
    rate-notes: Has [=automation rate constraints=]
dictionary DynamicsCompressorOptions : AudioNodeOptions {
    float attack = 0.003;
    float knee = 30;
    float ratio = 12;
    float release = 0.25;
    float threshold = -24;
};

Dictionary {{DynamicsCompressorOptions}} Members
reduction property on the
{{DynamicsCompressorNode}}.
* The compression curve has three parts:

    * The first part is the identity: \(f(x) = x\).
    * The second part is the soft-knee portion, which MUST be a
        monotonically increasing function.
    * The third part is a linear function: \(f(x) =
        \frac{1}{ratio} \cdot x \).

    This curve MUST be continuous and piece-wise differentiable,
    and corresponds to a target output level, based on the input
    level.
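Ignoring the soft-knee (second) portion, the first and third parts can be sketched as a hard-knee curve in the decibel domain. The function name and dB-domain framing below are illustrative, not part of the API:

```javascript
// Illustrative hard-knee sketch of the compression curve: the identity
// below `threshold`, and a line of slope 1/ratio above it (the soft-knee
// transition is omitted). Not part of the API.
function hardKneeCurve(inputDb, threshold, ratio) {
  if (inputDb <= threshold) {
    return inputDb; // first part: identity, f(x) = x
  }
  // third part: slope 1/ratio, shifted so the curve stays continuous
  return threshold + (inputDb - threshold) / ratio;
}
```

With the default threshold of -24 dB and ratio of 12, an input at -12 dB maps to -23 dB, i.e. 11 dB of gain reduction.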
Graphically, such a curve would look something like this:
Internally, the {{DynamicsCompressorNode}} is described with a
special object that behaves like an {{AudioNode}}, described
below:

const delay = new DelayNode(context, {delayTime: 0.006});
const gain = new GainNode(context);
const compression = new EnvelopeFollower();

input.connect(delay).connect(gain).connect(output);
input.connect(compression).connect(gain.gain);

Note: This implements the pre-delay and the application of the reduction gain.

The EnvelopeFollower processes the input signal to produce the gain reduction
value. It holds internal slots for the following values. Those values persist
across invocations of this algorithm.

* Let [[detector average]] be a floating point
    number, initialized to 0.0.
* Let [[compressor gain]] be a floating point
    number, initialized to 1.0.
2. Let releasing be `true` if
    attenuation is greater than compressor
    gain, false otherwise.

3. Let detector rate be the result of applying the
    detector curve to
    attenuation.

4. Subtract detector average from
    attenuation, and multiply the result by
    detector rate. Add this new result to detector average.

5. Clamp detector average to a maximum of 1.0.

6. Let envelope rate be the result of computing the envelope rate based on values of attack and release.

7. If releasing is `true`, set
    compressor gain to be the product of
    compressor gain and envelope rate, clamped
    to a maximum of 1.0.

8. Else, if releasing is false, let
    gain increment be detector average
    minus compressor gain. Multiply gain
    increment by envelope rate, and add the result
    to compressor gain.

9. Compute reduction gain to be compressor
    gain multiplied by the return value of computing the
    makeup gain.

10. Compute metering gain to be reduction gain, converted to
    decibels.

1. Set {{[[compressor gain]]}} to compressor gain.
1. Set {{[[detector average]]}} to detector average.
1. Atomically set the internal slot {{[[internal reduction]]}}
    to the value of metering gain.

    Note: This step makes the metering gain update once per block, at the
    end of the block processing.
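A much-simplified sketch of steps 2–8: the compressor gain chases a per-sample target attenuation, switching smoothing behavior depending on whether the gain is releasing or attacking. The fixed coefficients below stand in for the adaptive detector curve and envelope rate of the real algorithm; all names are illustrative:

```javascript
// Simplified gain smoothing in the spirit of steps 2-8: fixed attack and
// release coefficients replace the adaptive detector curve and envelope
// rate computed by the actual algorithm. Illustrative only.
function smoothGain(targets, attackCoef, releaseCoef) {
  let compressorGain = 1.0;
  const out = [];
  for (const target of targets) {
    // step 2: releasing when the target attenuation exceeds the current gain
    const releasing = target > compressorGain;
    const coef = releasing ? releaseCoef : attackCoef;
    // steps 7-8 (collapsed): move the gain toward the target
    compressorGain += (target - compressorGain) * coef;
    compressorGain = Math.min(compressorGain, 1.0); // clamp, as in step 7
    out.push(compressorGain);
  }
  return out;
}
```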
path: audionode.include
macros:
    noi: 1
    noo: 1
    cc: 2
    cc-mode: max
    cc-interp: speakers
    tail-time: No

Each sample of each channel of the input data of the
[Exposed=Window]
interface GainNode : AudioNode {
    constructor (BaseAudioContext context, optional GainOptions options = {});
    readonly attribute AudioParam gain;
};
path: audionode-init.include

context: The {{BaseAudioContext}} this new {{GainNode}} will be associated with.
options: Optional initial parameter values for this {{GainNode}}.

: gain
::
    Represents the amount of gain to apply.

path: audioparam.include
macros:
    default: 1
    min: most-negative-single-float
    min-notes: Approximately -3.4028235e38
    max: most-positive-single-float
    max-notes: Approximately 3.4028235e38
    rate: "{{AutomationRate/a-rate}}"
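Per block, a GainNode's processing amounts to multiplying each input sample by the gain value computed for that sample (the a-rate case). A minimal sketch with illustrative names:

```javascript
// Sketch of a-rate gain processing: output sample i is input sample i
// multiplied by the gain value computed for that sample.
// Names are illustrative, not part of the API.
function applyGain(inputBlock, gainValues) {
  return inputBlock.map((sample, i) => sample * gainValues[i]);
}
```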
dictionary GainOptions : AudioNodeOptions {
    float gain = 1.0;
};

Dictionary {{GainOptions}} Members
path: audionode.include
macros:
    noi: 1
    noo: 1
    cc: 2
    cc-mode: max
    cc-interp: speakers
    tail-time: Yes
    tail-time-notes: Continues to output non-silent audio with zero input. Since this is an IIR filter, the filter produces non-zero output forever, but in practice, this can be limited after some finite time where the output is sufficiently close to zero. The actual time depends on the filter coefficients.

The number of channels of the output always equals the number of
channels of the input.
[Exposed=Window]
interface IIRFilterNode : AudioNode {
    constructor (BaseAudioContext context, IIRFilterOptions options);
    undefined getFrequencyResponse (Float32Array frequencyHz,
                                    Float32Array magResponse,
                                    Float32Array phaseResponse);
};
path: audionode-init.include

context: The {{BaseAudioContext}} this new {{IIRFilterNode}} will be associated with.
options: Initial parameter value for this {{IIRFilterNode}}.
frequencyHz: This parameter specifies an array of frequencies, in Hz, at which the response values will be calculated.
magResponse: This parameter specifies an output array receiving the linear magnitude response values. If a value in the frequencyHz parameter is not within [0, sampleRate/2], where sampleRate is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of the magResponse array MUST be NaN.
phaseResponse: This parameter specifies an output array receiving the phase response values in radians. If a value in the frequencyHz parameter is not within [0, sampleRate/2], where sampleRate is the value of the {{BaseAudioContext/sampleRate}} property of the {{AudioContext}}, the corresponding value at the same index of the phaseResponse array MUST be NaN.
The IIRFilterOptions
dictionary is used to specify the
filter coefficients of the {{IIRFilterNode}}.
$$
    H(z) = \frac{\sum_{m=0}^{M} b_m z^{-m}}{\sum_{n=0}^{N} a_n z^{-n}}
$$

Equivalently, the time-domain equation is:
$$
    \sum_{k=0}^{N} a_k y(n-k) = \sum_{k=0}^{M} b_k x(n-k)
$$

Note: The UA may produce a warning to notify the user that NaN values have occurred.
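The time-domain relation above can be evaluated directly by solving for \(y(n)\), with \(a_0\) normalizing the output. A sketch with illustrative names (`feedforward` corresponds to the \(b\) coefficients, `feedback` to the \(a\) coefficients); this is not the UA's actual implementation:

```javascript
// Direct evaluation of: sum_{k=0}^{N} a_k y(n-k) = sum_{k=0}^{M} b_k x(n-k),
// solved for y(n). `feedforward` holds the b coefficients, `feedback` the
// a coefficients (feedback[0] must be non-zero). Illustrative only.
function iirFilter(feedforward, feedback, input) {
  const output = new Array(input.length).fill(0);
  for (let n = 0; n < input.length; n++) {
    let acc = 0;
    for (let m = 0; m < feedforward.length; m++) {
      if (n - m >= 0) acc += feedforward[m] * input[n - m]; // b_m x(n-m)
    }
    for (let k = 1; k < feedback.length; k++) {
      if (n - k >= 0) acc -= feedback[k] * output[n - k]; // -a_k y(n-k)
    }
    output[n] = acc / feedback[0];
  }
  return output;
}
```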
path: audionode-noinput.include
macros:
    noo: 1
    tail-time: No

The number of channels of the output corresponds to the number of
channels of the media referenced by the
src
attribute can change the number of channels output
by this node.
A {{MediaElementAudioSourceNode}} is created given an
{{HTMLMediaElement}} using the {{AudioContext}}
{{createMediaElementSource()}} method or the {{MediaElementAudioSourceOptions/mediaElement}} member of the {{MediaElementAudioSourceOptions}} dictionary for the {{MediaElementAudioSourceNode/MediaElementAudioSourceNode()|constructor}}.
attribute changes, and other aspects of the
not used with a {{MediaElementAudioSourceNode}}.
const mediaElement = document.getElementById('mediaElementID');
const sourceNode = context.createMediaElementSource(mediaElement);
sourceNode.connect(filterNode);
[Exposed=Window]
interface MediaElementAudioSourceNode : AudioNode {
    constructor (AudioContext context, MediaElementAudioSourceOptions options);
    [SameObject] readonly attribute HTMLMediaElement mediaElement;
};

Constructors
: MediaElementAudioSourceNode(context, options)
::
    1. Initialize the AudioNode
        this, with context and options as arguments.

context: The {{AudioContext}} this new {{MediaElementAudioSourceNode}} will be associated with.
options: Initial parameter value for this {{MediaElementAudioSourceNode}}.
dictionary MediaElementAudioSourceOptions {
    required HTMLMediaElement mediaElement;
};

Dictionary {{MediaElementAudioSourceOptions}} Members
RTCPeerConnection
(described in
path: audionode.include
macros:
    noi: 1
    noo: 0
    cc: 2
    cc-mode: explicit
    cc-interp: speakers
    tail-time: No

The number of channels of the input is by default 2 (stereo).
[Exposed=Window]
interface MediaStreamAudioDestinationNode : AudioNode {
    constructor (AudioContext context, optional AudioNodeOptions options = {});
    readonly attribute MediaStream stream;
};

Constructors
: MediaStreamAudioDestinationNode(context, options)
::
    1. Initialize the AudioNode
        this, with context and options as arguments.

context: The {{BaseAudioContext}} this new {{MediaStreamAudioDestinationNode}} will be associated with.
options: Optional initial parameter value for this {{MediaStreamAudioDestinationNode}}.
: stream
::
    A {{MediaStream}} containing a single {{MediaStreamTrack}} with the same
    number of channels as the node itself, and whose
    kind attribute has the value "audio".
path: audionode-noinput.include
macros:
    noo: 1
    tail-time: No

The number of channels of the output corresponds to the number of channels of
the {{MediaStreamTrack}}. When the {{MediaStreamTrack}} ends, this
{{AudioNode}} outputs one channel of silence.
[Exposed=Window]
interface MediaStreamAudioSourceNode : AudioNode {
    constructor (AudioContext context, MediaStreamAudioSourceOptions options);
    [SameObject] readonly attribute MediaStream mediaStream;
};

Constructors
: MediaStreamAudioSourceNode(context, options)
::
    1. If the {{MediaStreamAudioSourceOptions/mediaStream}} member of
        {{MediaStreamAudioSourceNode/MediaStreamAudioSourceNode()/options!!argument}} does not reference a
        {{MediaStream}} that has at least one
        {{MediaStreamTrack}} whose kind attribute has the value "audio",
        throw an {{InvalidStateError}} and abort these steps. Else, let
        this stream be inputStream.
    1. Let tracks be the list of all
        {{MediaStreamTrack}}s of
        inputStream that have a kind of
        "audio".
    1. Sort the elements in tracks based on their id
        attribute using an ordering on sequences of [=code unit=]
        values.
    1. Initialize the AudioNode
        this, with context and options as arguments.
    1. Set an internal slot [[input track]] on this
        {{MediaStreamAudioSourceNode}} to be the first element of
        tracks. This is the track used as the input audio for this
        {{MediaStreamAudioSourceNode}}.

    After construction, any change to the
    {{MediaStream}} that was passed to
    the constructor does not affect the underlying output of this {{AudioNode}}.

    The slot {{[[input track]]}} is only used to keep a reference to the
    {{MediaStreamTrack}}.

    Note: This means that when removing the track chosen by the constructor
    of the {{MediaStreamAudioSourceNode}} from the
    {{MediaStream}} passed into this
    constructor, the {{MediaStreamAudioSourceNode}} will still take its input
    from the same track.

    Note: The behaviour for picking the track to output is arbitrary for
    legacy reasons. {{MediaStreamTrackAudioSourceNode}} can be used
    instead to be explicit about which track to use as input.

context: The {{AudioContext}} this new {{MediaStreamAudioSourceNode}} will be associated with.
options: Initial parameter value for this {{MediaStreamAudioSourceNode}}.
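The constructor's selection steps (filter to "audio" tracks, sort by id in code-unit order, take the first) can be sketched with plain objects standing in for {{MediaStreamTrack}}s; the helper name is illustrative:

```javascript
// Sketch of the track-selection steps above, using plain objects as
// stand-ins for MediaStreamTracks. JavaScript's default string relational
// operators compare by code units, matching the ordering required above.
function pickInputTrack(tracks) {
  const audio = tracks.filter((t) => t.kind === 'audio');
  if (audio.length === 0) {
    throw new Error('InvalidStateError'); // no audio track: constructor throws
  }
  audio.sort((a, b) => (a.id < b.id ? -1 : a.id > b.id ? 1 : 0));
  return audio[0]; // becomes the [[input track]]
}
```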
dictionary MediaStreamAudioSourceOptions {
    required MediaStream mediaStream;
};

Dictionary {{MediaStreamAudioSourceOptions}} Members
path: audionode-noinput.include
macros:
    noo: 1
    tail-time: No

The number of channels of the output corresponds to the number of
channels of the {{MediaStreamTrack}}.
[Exposed=Window]
interface MediaStreamTrackAudioSourceNode : AudioNode {
    constructor (AudioContext context, MediaStreamTrackAudioSourceOptions options);
};

Constructors
: MediaStreamTrackAudioSourceNode(context, options)
::
    1. If the {{MediaStreamTrackAudioSourceOptions/mediaStreamTrack}}'s
        kind attribute is not "audio", throw an
        {{InvalidStateError}} and abort these steps.
    1. Initialize the AudioNode
        this, with context and options as arguments.

context: The {{AudioContext}} this new {{MediaStreamTrackAudioSourceNode}} will be associated with.
options: Initial parameter value for this {{MediaStreamTrackAudioSourceNode}}.
dictionary MediaStreamTrackAudioSourceOptions {
    required MediaStreamTrack mediaStreamTrack;
};

Dictionary {{MediaStreamTrackAudioSourceOptions}} Members
: mediaStreamTrack
::
    The media stream track that will act as a source. If this
    {{MediaStreamTrack}} kind attribute is
    not "audio", an {{InvalidStateError}}
    MUST be thrown.
computedOscFrequency(t) = frequency(t) * pow(2, detune(t) / 1200)

The OscillatorNode's instantaneous phase at each time is the definite
time integral of computedOscFrequency, assuming a phase angle of zero at
the node's exact start time. Its nominal range is [-Nyquist
frequency, Nyquist frequency].
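The relation above translates to a one-line helper (illustrative, not part of the API): detune is expressed in cents, so 1200 cents raises the frequency by one octave.

```javascript
// computedOscFrequency as defined above: the frequency parameter scaled
// by 2^(detune/1200). Illustrative helper, not part of the API.
function computedOscFrequency(frequency, detune) {
  return frequency * Math.pow(2, detune / 1200);
}
```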
path: audionode.include
macros:
    noi: 0
    noo: 1
    cc: 2
    cc-mode: max
    cc-interp: speakers
    tail-time: No
enum OscillatorType {
    "sine",
    "square",
    "sawtooth",
    "triangle",
    "custom"
};
Enumeration description

"sine"     | A sine wave
"square"   | A square wave of duty period 0.5
"sawtooth" | A sawtooth wave
"triangle" | A triangle wave
"custom"   | A custom periodic wave
[Exposed=Window]
interface OscillatorNode : AudioScheduledSourceNode {
    constructor (BaseAudioContext context, optional OscillatorOptions options = {});
    attribute OscillatorType type;
    readonly attribute AudioParam frequency;
    readonly attribute AudioParam detune;
    undefined setPeriodicWave (PeriodicWave periodicWave);
};

Constructors
path: audionode-init.include

context: The {{BaseAudioContext}} this new {{OscillatorNode}} will be associated with.
options: Optional initial parameter value for this {{OscillatorNode}}.
: detune
::
    A detuning value (in cents) which will offset the
    {{OscillatorNode/frequency}} by the given amount. Its default
    value is 0. This parameter is a-rate. It
    forms a compound parameter with {{OscillatorNode/frequency}}
    to form the computedOscFrequency. The nominal
    range listed below allows this parameter to detune the
    {{OscillatorNode/frequency}} over the entire possible
    range of frequencies.

path: audioparam.include
macros:
    default: 0
    min: \(\approx -153600\)
    min-notes:
    max: \(\approx 153600\)
    max-notes: This value is approximately \(1200\ \log_2 \mathrm{FLT\_MAX}\) where FLT_MAX is the largest {{float}} value.
    rate: "{{AutomationRate/a-rate}}"

: frequency
::
    The frequency (in Hertz) of the periodic waveform. Its default
    value is 440. This parameter is a-rate. It
    forms a compound parameter with {{OscillatorNode/detune}} to
    form the computedOscFrequency. Its nominal range
    is [-Nyquist frequency, Nyquist frequency].

path: audioparam.include
macros:
    default: 440
    min: -Nyquist frequency
    max: Nyquist frequency
    rate: "{{AutomationRate/a-rate}}"

: type
:: The shape of the periodic waveform. It may directly be set to any
    of the type constant values except for "{{OscillatorType/custom}}";
    setting it to "{{OscillatorType/custom}}" directly MUST throw an
    {{InvalidStateError}} exception. The
    {{OscillatorNode/setPeriodicWave()}} method can be
    used to set a custom waveform, which results in this attribute
    being set to "{{OscillatorType/custom}}". The default value is "{{OscillatorType/sine}}". When this
    attribute is set, the phase of the oscillator MUST be conserved.
periodicWave: custom waveform to be used by the oscillator

dictionary OscillatorOptions : AudioNodeOptions {
    OscillatorType type = "sine";
    float frequency = 440;
    float detune = 0;
    PeriodicWave periodicWave;
};

Dictionary {{OscillatorOptions}} Members
$$
    x(t) = \sin t
$$

: "{{OscillatorType/square}}"
::
    The waveform for the square wave oscillator is:

    $$
        x(t) = \begin{cases}
                    1 & \mbox{for } 0 ≤ t < \pi \\
                    -1 & \mbox{for } -\pi < t < 0.
               \end{cases}
    $$

    This is extended to all \(t\) by using the fact that the
    waveform is an odd function with period \(2\pi\).

: "{{OscillatorType/sawtooth}}"
::
    The waveform for the sawtooth oscillator is the ramp:

    $$
        x(t) = \frac{t}{\pi} \mbox{ for } -\pi < t ≤ \pi;
    $$

    This is extended to all \(t\) by using the fact that the
    waveform is an odd function with period \(2\pi\).

: "{{OscillatorType/triangle}}"
::
    The waveform for the triangle oscillator is:

    $$
        x(t) = \begin{cases}
                    \frac{2}{\pi} t & \mbox{for } 0 ≤ t ≤ \frac{\pi}{2} \\
                    1-\frac{2}{\pi} \left(t-\frac{\pi}{2}\right) & \mbox{for }
                        \frac{\pi}{2} < t ≤ \pi.
               \end{cases}
    $$

    This is extended to all \(t\) by using the fact that the
    waveform is an odd function with period \(2\pi\).
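Evaluated on the principal interval \(-\pi < t \le \pi\), the definitions above can be sketched as follows (ideal, non-band-limited; names are illustrative):

```javascript
// The ideal waveform definitions above, on the principal interval
// -π < t ≤ π. Real oscillators render band-limited versions of these.
// Names are illustrative, not part of the API.
function square(t) {
  return t >= 0 ? 1 : -1; // 1 for 0 ≤ t < π, -1 for -π < t < 0
}
function sawtooth(t) {
  return t / Math.PI; // the ramp x(t) = t/π
}
function triangle(t) {
  if (t < 0) return -triangle(-t); // odd symmetry
  return t <= Math.PI / 2
    ? (2 / Math.PI) * t
    : 1 - (2 / Math.PI) * (t - Math.PI / 2);
}
```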
path: audionode.include
macros:
    noi: 1
    noo: 1
    cc: 2
    cc-notes: Has channelCount constraints
    cc-mode: clamped-max
    cc-mode-notes: Has channelCountMode constraints
    cc-interp: speakers
    tail-time: Maybe
    tail-time-notes: If the {{PannerNode/panningModel}} is set to "{{PanningModelType/HRTF}}", the node will produce non-silent output for silent input due to the inherent processing for head responses. Otherwise the tail-time is zero.

The input of this node is either mono (1 channel) or stereo (2

space. The default is "{{PanningModelType/equalpower}}".
enum PanningModelType { - "equalpower", - "HRTF" + "equalpower", + "HRTF" };
Enum value | Description | - -||
---|---|---|---|
"equalpower" - | - A simple and efficient spatialization algorithm using equal-power - panning. - - Note: When this panning model is used, all the {{AudioParam}}s - used to compute the output of this node are a-rate. - - | ||
"HRTF" - |
- A higher quality spatialization algorithm using a convolution
- with measured impulse responses from human subjects. This panning
- method renders stereo output.
-
- Note:When this panning model is used, all the {{AudioParam}}s
- used to compute the output of this node are k-rate.
+
+ Enumeration description
+ | | |
"equalpower" + | + A simple and efficient spatialization algorithm using equal-power + panning. + + Note: When this panning model is used, all the {{AudioParam}}s + used to compute the output of this node are a-rate. + + | ||
"HRTF" + | + A higher quality spatialization algorithm using a convolution + with measured impulse responses from human subjects. This panning + method renders stereo output. + + Note:When this panning model is used, all the {{AudioParam}}s + used to compute the output of this node are k-rate. |
enum DistanceModelType { - "linear", - "inverse", - "exponential" + "linear", + "inverse", + "exponential" };
Enum value | Description | - -||
---|---|---|---|
"linear" - |
- A linear distance model which calculates distanceGain
- according to:
-
- - $$ - 1 - f\ \frac{\max\left[\min\left(d, d'_{max}\right), d'_{ref}\right] - d'_{ref}}{d'_{max} - d'_{ref}} - $$ -- - where \(d'_{ref} = \min\left(d_{ref}, d_{max}\right)\) and \(d'_{max} = - \max\left(d_{ref}, d_{max}\right)\). In the case where \(d'_{ref} = - d'_{max}\), the value of the linear model is taken to be - \(1-f\). - - Note that \(d\) is clamped to the interval \(\left[d'_{ref},\, - d'_{max}\right]\). - - | ||
"inverse" - |
- An inverse distance model which calculates
- distanceGain according to:
-
- - $$ - \frac{d_{ref}}{d_{ref} + f\ \left[\max\left(d, d_{ref}\right) - d_{ref}\right]} - $$ -- - That is, \(d\) is clamped to the interval \(\left[d_{ref},\, - \infty\right)\). If \(d_{ref} = 0\), the value of the inverse model - is taken to be 0, independent of the value of \(d\) and \(f\). - - | ||
"exponential" - |
- An exponential distance model which calculates
- distanceGain according to:
-
- - $$ - \left[\frac{\max\left(d, d_{ref}\right)}{d_{ref}}\right]^{-f} - $$ -- - That is, \(d\) is clamped to the interval \(\left[d_{ref},\, - \infty\right)\). If \(d_{ref} = 0\), the value of the exponential - model is taken to be 0, independent of \(d\) and \(f\). + + Enumeration description
+ | | |
"linear" + |
+ A linear distance model which calculates distanceGain
+ according to:
+
+ + $$ + 1 - f\ \frac{\max\left[\min\left(d, d'_{max}\right), d'_{ref}\right] - d'_{ref}}{d'_{max} - d'_{ref}} + $$ ++ + where \(d'_{ref} = \min\left(d_{ref}, d_{max}\right)\) and \(d'_{max} = + \max\left(d_{ref}, d_{max}\right)\). In the case where \(d'_{ref} = + d'_{max}\), the value of the linear model is taken to be + \(1-f\). + + Note that \(d\) is clamped to the interval \(\left[d'_{ref},\, + d'_{max}\right]\). + + | ||
"inverse" + |
+ An inverse distance model which calculates
+ distanceGain according to:
+
+ + $$ + \frac{d_{ref}}{d_{ref} + f\ \left[\max\left(d, d_{ref}\right) - d_{ref}\right]} + $$ ++ + That is, \(d\) is clamped to the interval \(\left[d_{ref},\, + \infty\right)\). If \(d_{ref} = 0\), the value of the inverse model + is taken to be 0, independent of the value of \(d\) and \(f\). + + | ||
"exponential" + |
+ An exponential distance model which calculates
+ distanceGain according to:
+
+ + $$ + \left[\frac{\max\left(d, d_{ref}\right)}{d_{ref}}\right]^{-f} + $$ ++ + That is, \(d\) is clamped to the interval \(\left[d_{ref},\, + \infty\right)\). If \(d_{ref} = 0\), the value of the exponential + model is taken to be 0, independent of \(d\) and \(f\). |
[Exposed=Window] interface PannerNode : AudioNode { - constructor (BaseAudioContext context, optional PannerOptions options = {}); - attribute PanningModelType panningModel; - readonly attribute AudioParam positionX; - readonly attribute AudioParam positionY; - readonly attribute AudioParam positionZ; - readonly attribute AudioParam orientationX; - readonly attribute AudioParam orientationY; - readonly attribute AudioParam orientationZ; - attribute DistanceModelType distanceModel; - attribute double refDistance; - attribute double maxDistance; - attribute double rolloffFactor; - attribute double coneInnerAngle; - attribute double coneOuterAngle; - attribute double coneOuterGain; - undefined setPosition (float x, float y, float z); - undefined setOrientation (float x, float y, float z); + constructor (BaseAudioContext context, optional PannerOptions options = {}); + attribute PanningModelType panningModel; + readonly attribute AudioParam positionX; + readonly attribute AudioParam positionY; + readonly attribute AudioParam positionZ; + readonly attribute AudioParam orientationX; + readonly attribute AudioParam orientationY; + readonly attribute AudioParam orientationZ; + attribute DistanceModelType distanceModel; + attribute double refDistance; + attribute double maxDistance; + attribute double rolloffFactor; + attribute double coneInnerAngle; + attribute double coneOuterAngle; + attribute double coneOuterGain; + void setPosition (float x, float y, float z); + void setOrientation (float x, float y, float z); };@@ -9334,278 +8646,274 @@ interface PannerNode : AudioNode { Constructors
- path: audionode-init.include -+
+ path: audionode-init.include +-
- context: The {{BaseAudioContext}} this new {{PannerNode}} will be associated with. - options: Optional initial parameter value for this {{PannerNode}}. -+
+ context: The {{BaseAudioContext}} this new {{PannerNode}} will be associated with. + options: Optional initial parameter value for this {{PannerNode}}. +
- path: audioparam.include - macros: - default: 1 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" - rate-notes: Has [=automation rate constraints=] -- - : orientationY - :: - Describes the \(y\)-component of the vector of the direction the - audio source is pointing in 3D cartesian coordinate space. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" - rate-notes: Has [=automation rate constraints=] -- - : orientationZ - :: - Describes the \(z\)-component of the vector of the direction the - audio source is pointing in 3D cartesian coordinate space. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" - rate-notes: Has [=automation rate constraints=] -- - : panningModel - :: - Specifies the panning model used by this - {{PannerNode}}. Defaults to - "{{PanningModelType/equalpower}}". - - : positionX - :: - Sets the \(x\)-coordinate position of the audio source in a 3D - Cartesian system. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" - rate-notes: Has [=automation rate constraints=] -- - : positionY - :: - Sets the \(y\)-coordinate position of the audio source in a 3D - Cartesian system. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" - rate-notes: Has [=automation rate constraints=] -- - : positionZ - :: - Sets the \(z\)-coordinate position of the audio source in a 3D - Cartesian system. - -
- path: audioparam.include - macros: - default: 0 - min: most-negative-single-float - min-notes: Approximately -3.4028235e38 - max: most-positive-single-float - max-notes: Approximately 3.4028235e38 - rate: "{{AutomationRate/a-rate}}" - rate-notes: Has [=automation rate constraints=] -- - : refDistance - :: - A reference distance for reducing volume as source moves further - from the listener. For distances less than this, the volume is not reduced. The default value is 1. A - {{RangeError}} exception MUST be thrown if this is set - to a negative value. - - : rolloffFactor - :: - Describes how quickly the volume is reduced as source moves - away from listener. The default value is 1. A - {{RangeError}} exception MUST be thrown if this is set to - a negative value. - - The nominal range for the {{PannerNode/rolloffFactor}} specifies - the minimum and maximum values the
rolloffFactor
- can have. Values outside the range are clamped to lie within
- this range. The nominal range depends on the {{PannerNode/distanceModel}} as follows:
-
- + path: audioparam.include + macros: + default: 1 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" + rate-notes: Has [=automation rate constraints=] ++ + : orientationY + :: + Describes the \(y\)-component of the vector of the direction the + audio source is pointing in 3D cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" + rate-notes: Has [=automation rate constraints=] ++ + : orientationZ + :: + Describes the \(z\)-component of the vector of the direction the + audio source is pointing in 3D cartesian coordinate space. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" + rate-notes: Has [=automation rate constraints=] ++ + : panningModel + :: + Specifies the panning model used by this + {{PannerNode}}. Defaults to + "{{PanningModelType/equalpower}}". + + : positionX + :: + Sets the \(x\)-coordinate position of the audio source in a 3D + Cartesian system. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" + rate-notes: Has [=automation rate constraints=] ++ + : positionY + :: + Sets the \(y\)-coordinate position of the audio source in a 3D + Cartesian system. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" + rate-notes: Has [=automation rate constraints=] ++ + : positionZ + :: + Sets the \(z\)-coordinate position of the audio source in a 3D + Cartesian system. + +
+ path: audioparam.include + macros: + default: 0 + min: most-negative-single-float + min-notes: Approximately -3.4028235e38 + max: most-positive-single-float + max-notes: Approximately 3.4028235e38 + rate: "{{AutomationRate/a-rate}}" + rate-notes: Has [=automation rate constraints=] ++ + : refDistance + :: + A reference distance for reducing volume as source moves further + from the listener. For distances less than this, the volume is not reduced. The default value is 1. A + {{RangeError}} exception MUST be thrown if this is set + to a negative value. + + : rolloffFactor + :: + Describes how quickly the volume is reduced as source moves + away from listener. The default value is 1. A + {{RangeError}} exception MUST be thrown if this is set to + a negative value. + + The nominal range for the
rolloffFactor
specifies
+ the minimum and maximum values the rolloffFactor
+ can have. Values outside the range are clamped to lie within
+ this range. The nominal range depends on the {{PannerNode/distanceModel}} as follows:
+
+ x
, y
and
- z
parameters, respectively.
-
- Consequently, if any of the {{PannerNode/orientationX}},
- {{PannerNode/orientationY}}, and {{PannerNode/orientationZ}} {{AudioParam}}s
- have an automation curve set using {{AudioParam/setValueCurveAtTime()}} at the time
- this method is called, a {{NotSupportedError}} MUST be
- thrown.
-
- Describes which direction the audio source is pointing in the
- 3D cartesian coordinate space. Depending on how directional the
- sound is (controlled by the cone attributes), a sound
- pointing away from the listener can be very quiet or completely
- silent.
-
- The x, y, z
parameters represent a direction
- vector in 3D space.
-
- The default value is (1,0,0).
-
- - x: - y: - z: -- -
x
, y
and
- z
parameters, respectively.
-
- Consequently, if any of the {{PannerNode/positionX}}, {{PannerNode/positionY}},
- and {{PannerNode/positionZ}} {{AudioParam}}s have an automation
- curve set using {{AudioParam/setValueCurveAtTime()}} at the time
- this method is called, a {{NotSupportedError}} MUST be
- thrown.
-
- Sets the position of the audio source relative to the
- {{BaseAudioContext/listener}} attribute. A 3D cartesian
- coordinate system is used.
-
- The x, y, z
parameters represent the coordinates
- in 3D space.
-
- The default value is (0,0,0).
-
- - x: - y: - z: -- -
x
, y
and
+ z
parameters, respectively.
+
+ Consequently, if any of the {{PannerNode/orientationX}},
+ {{PannerNode/orientationY}}, and {{PannerNode/orientationZ}} {{AudioParam}}s
+ have an automation curve set using {{AudioParam/setValueCurveAtTime()}} at the time
+ this method is called, a {{NotSupportedError}} MUST be
+ thrown.
+
+ Describes which direction the audio source is pointing in the
+ 3D cartesian coordinate space. Depending on how directional the
+ sound is (controlled by the cone attributes), a sound
+ pointing away from the listener can be very quiet or completely
+ silent.
+
+ The x, y, z
parameters represent a direction
+ vector in 3D space.
+
+ The default value is (1,0,0).
+
+ + x: + y: + z: ++ +
undefined
+ x
, y
and
+ z
parameters, respectively.
+
+ Consequently, if any of the {{PannerNode/positionX}}, {{PannerNode/positionY}},
+ and {{PannerNode/positionZ}} {{AudioParam}}s have an automation
+ curve set using {{AudioParam/setValueCurveAtTime()}} at the time
+ this method is called, a {{NotSupportedError}} MUST be
+ thrown.
+
+ Sets the position of the audio source relative to the
+ {{BaseAudioContext/listener}} attribute. A 3D cartesian
+ coordinate system is used.
+
+ The x, y, z
parameters represent the coordinates
+ in 3D space.
+
+ The default value is (0,0,0).
+
+ + x: + y: + z: ++ +
undefined
+ dictionary PannerOptions : AudioNodeOptions { - PanningModelType panningModel = "equalpower"; - DistanceModelType distanceModel = "inverse"; - float positionX = 0; - float positionY = 0; - float positionZ = 0; - float orientationX = 1; - float orientationY = 0; - float orientationZ = 0; - double refDistance = 1; - double maxDistance = 10000; - double rolloffFactor = 1; - double coneInnerAngle = 360; - double coneOuterAngle = 360; - double coneOuterGain = 0; + PanningModelType panningModel = "equalpower"; + DistanceModelType distanceModel = "inverse"; + float positionX = 0; + float positionY = 0; + float positionZ = 0; + float orientationX = 1; + float orientationY = 0; + float orientationZ = 0; + double refDistance = 1; + double maxDistance = 10000; + double rolloffFactor = 1; + double coneInnerAngle = 360; + double coneOuterAngle = 360; + double coneOuterGain = 0; };@@ -9635,47 +8943,47 @@ dictionary PannerOptions : AudioNodeOptions { Dictionary {{PannerOptions}} Members
[Exposed=Window] interface PeriodicWave { - constructor (BaseAudioContext context, optional PeriodicWaveOptions options = {}); + constructor (BaseAudioContext context, optional PeriodicWaveOptions options = {}); };@@ -9707,46 +9015,46 @@ interface PeriodicWave { Constructors
- context: The {{BaseAudioContext}} this new {{PeriodicWave}} will be associated with. Unlike {{AudioBuffer}}, {{PeriodicWave}}s can't be shared accross {{AudioContext}}s or {{OfflineAudioContext}}s. It is associated with a particular {{BaseAudioContext}}. - options: Optional initial parameter value for this {{PeriodicWave}}. -+ : PeriodicWave(context, options) + :: +
+ context: The {{BaseAudioContext}} this new {{PeriodicWave}} will be associated with. Unlike {{AudioBuffer}}, {{PeriodicWave}}s can't be shared across {{AudioContext}}s or {{OfflineAudioContext}}s. It is associated with a particular {{BaseAudioContext}}. + options: Optional initial parameter value for this {{PeriodicWave}}. +</div>
dictionary PeriodicWaveConstraints { - boolean disableNormalization = false; + boolean disableNormalization = false; };@@ -9762,14 +9070,14 @@ dictionary PeriodicWaveConstraints { Dictionary {{PeriodicWaveConstraints}} Members
sine
terms. The first element (index 0) does not
- exist in the Fourier series. The second element
- (index 1) represents the fundamental frequency. The
- third represents the first overtone and so on.
-
- : real
- ::
- The {{PeriodicWaveOptions/real}} parameter represents an array of
- cosine
terms. The first element (index 0) is the
- DC-offset of the periodic waveform. The second element
- (index 1) represents the fundmental frequency. The
- third represents the first overtone and so on.
+ : imag
+ ::
+ The {{PeriodicWaveOptions/imag}} parameter represents an array of
+ sine
terms. The first element (index 0) does not
+ exist in the Fourier series. The second element
+ (index 1) represents the fundamental frequency. The
+ third represents the first overtone and so on.
+
+ : real
+ ::
+ The {{PeriodicWaveOptions/real}} parameter represents an array of
+ cosine
terms. The first element (index 0) is the
+ DC-offset of the periodic waveform. The second element
+ (index 1) represents the fundamental frequency. The
+ third represents the first overtone and so on.
$$ - x(t) = \sum_{k=1}^{L-1} \left[a[k]\cos2\pi k t + b[k]\sin2\pi k t\right] + x(t) = \sum_{k=1}^{L-1} \left[a[k]\cos2\pi k t + b[k]\sin2\pi k t\right] $$@@ -9843,7 +9151,7 @@ Let
$$ - \tilde{x}(n) = \sum_{k=1}^{L-1} \left(a[k]\cos\frac{2\pi k n}{N} + b[k]\sin\frac{2\pi k n}{N}\right) + \tilde{x}(n) = \sum_{k=1}^{L-1} \left(a[k]\cos\frac{2\pi k n}{N} + b[k]\sin\frac{2\pi k n}{N}\right) $$@@ -9853,7 +9161,7 @@ normalization factor \(f\) is computed as follows.
$$ - f = \max_{n = 0, \ldots, N - 1} |\tilde{x}(n)| + f = \max_{n = 0, \ldots, N - 1} |\tilde{x}(n)| $$@@ -9861,7 +9169,7 @@ Thus, the actual normalized waveform \(\hat{x}(n)\) is:
$$ - \hat{x}(n) = \frac{\tilde{x}(n)}{f} + \hat{x}(n) = \frac{\tilde{x}(n)}{f} $$@@ -9884,47 +9192,47 @@ functions. Also, \(b[0] = 0\) in all cases. Hence, only \(b[n]\) for \(n \ge 1\) is specified below.
- $$ - b[n] = \begin{cases} - 1 & \mbox{for } n = 1 \\ - 0 & \mbox{otherwise} - \end{cases} - $$ -- - : "{{square}}" - :: -
- $$ - b[n] = \frac{2}{n\pi}\left[1 - (-1)^n\right] - $$ -- - : "{{sawtooth}}" - :: -
- $$ - b[n] = (-1)^{n+1} \dfrac{2}{n\pi} - $$ -- - : "{{triangle}}" - :: -
- $$ - b[n] = \frac{8\sin\dfrac{n\pi}{2}}{(\pi n)^2} - $$ -+ : "{{sine}}" + :: +
+ $$ + b[n] = \begin{cases} + 1 & \mbox{for } n = 1 \\ + 0 & \mbox{otherwise} + \end{cases} + $$ ++ + : "{{square}}" + :: +
+ $$ + b[n] = \frac{2}{n\pi}\left[1 - (-1)^n\right] + $$ ++ + : "{{sawtooth}}" + :: +
+ $$ + b[n] = (-1)^{n+1} \dfrac{2}{n\pi} + $$ ++ + : "{{triangle}}" + :: +
+ $$ + b[n] = \frac{8\sin\dfrac{n\pi}{2}}{(\pi n)^2} + $$ +
path: audionode.include macros: - noi: 1 - noo: 1 - cc: {{BaseAudioContext/createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels)/numberOfInputChannels}} - cc-notes: This is the number of channels specified when constructing this node. There are channelCount constraints. - cc-mode: explicit - cc-mode-notes: Has channelCountMode constraints - cc-interp: speakers - tail-time: No + noi: 1 + noo: 1 + cc: {{BaseAudioContext/createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels)/numberOfInputChannels}} + cc-notes: This is the number of channels specified when constructing this node. There are channelCount constraints. + cc-mode: explicit + cc-mode-notes: Has channelCountMode constraints + cc-interp: speakers + tail-time: NoThe {{ScriptProcessorNode}} is constructed with a {{BaseAudioContext/createScriptProcessor(bufferSize, numberOfInputChannels, numberOfOutputChannels)/bufferSize}} which MUST be one of the following values: 256, 512, 1024, 2048, 4096, 8192, 16384. This value controls how -frequently the {{ScriptProcessorNode/audioprocess}} event is dispatched and how -many sample-frames need to be processed each call. {{ScriptProcessorNode/audioprocess}} events are only +frequently the {{ScriptProcessorNode/onaudioprocess}} event is dispatched and how +many sample-frames need to be processed each call. {{ScriptProcessorNode/onaudioprocess}} events are only dispatched if the {{ScriptProcessorNode}} has at least one input or one output connected. Lower numbers for {{ScriptProcessorNode/bufferSize}} will result in @@ -9967,8 +9275,8 @@ to be zero.
[Exposed=Window] interface ScriptProcessorNode : AudioNode { - attribute EventHandler onaudioprocess; - readonly attribute long bufferSize; + attribute EventHandler onaudioprocess; + readonly attribute long bufferSize; };@@ -9976,25 +9284,27 @@ interface ScriptProcessorNode : AudioNode { Attributes
EventHandler
(described
+ in
+ HTML [[!HTML]]) for the {{ScriptProcessorNode/onaudioprocess}} event that
+ is dispatched to {{ScriptProcessorNode}} node
+ types. An event of type {{AudioProcessingEvent}}
+ will be dispatched to the event handler.
path: audionode.include macros: - noi: 1 - noo: 1 - cc: 2 - cc-notes: Has channelCount constraints - cc-mode: clamped-max - cc-mode-notes: Has channelCountMode constraints - cc-interp: speakers - tail-time: No + noi: 1 + noo: 1 + cc: 2 + cc-notes: Has channelCount constraints + cc-mode: clamped-max + cc-mode-notes: Has channelCountMode constraints + cc-interp: speakers + tail-time: NoThe input of this node is stereo (2 channels) and cannot be @@ -10026,8 +9336,8 @@ cannot be configured.
[Exposed=Window] interface StereoPannerNode : AudioNode { - constructor (BaseAudioContext context, optional StereoPannerOptions options = {}); - readonly attribute AudioParam pan; + constructor (BaseAudioContext context, optional StereoPannerOptions options = {}); + readonly attribute AudioParam pan; };@@ -10035,39 +9345,39 @@ interface StereoPannerNode : AudioNode { Constructors
- path: audionode-init.include -+
+ path: audionode-init.include +-
- context: The {{BaseAudioContext}} this new {{StereoPannerNode}} will be associated with. - options: Optional initial parameter value for this {{StereoPannerNode}}. -+
+ context: The {{BaseAudioContext}} this new {{StereoPannerNode}} will be associated with. + options: Optional initial parameter value for this {{StereoPannerNode}}. +
- path: audioparam.include - macros: - default: 0 - min: -1 - max: 1 - rate: "{{AutomationRate/a-rate}}" -+ : pan + :: + The position of the input in the output's stereo image. -1 + represents full left, +1 represents full right. + +
+ path: audioparam.include + macros: + default: 0 + min: -1 + max: 1 + rate: "{{AutomationRate/a-rate}}" +
dictionary StereoPannerOptions : AudioNodeOptions { - float pan = 0; + float pan = 0; };@@ -10084,8 +9394,8 @@ dictionary StereoPannerOptions : AudioNodeOptions { Dictionary {{StereoPannerOptions}} Members
path: audionode.include macros: - noi: 1 - noo: 1 - cc: 2 - cc-mode: max - cc-interp: speakers - tail-time: Maybe - tail-time-notes: There is a tail-time only if the {{WaveShaperNode/oversample}} attribute is set to "{{OverSampleType/2x}}" or "{{OverSampleType/4x}}". The actual duration of this tail-time depends on the implementation. + noi: 1 + noo: 1 + cc: 2 + cc-mode: max + cc-interp: speakers + tail-time: Maybe + tail-time-notes: There is a tail-time only if the {{WaveShaperNode/oversample}} attribute is set to "{{OverSampleType/2x}}" or "{{OverSampleType/4x}}". The actual duration of this tail-time depends on the implementation.The number of channels of the output always equals the number of @@ -10132,35 +9442,34 @@ channels of the input.
enum OverSampleType { - "none", - "2x", - "4x" + "none", + "2x", + "4x" };
Enum value | Description | - -||
---|---|---|---|
"none" - | Don't oversample - - | ||
"2x" - | Oversample two times - - | ||
"4x" - | Oversample four times
+
+ Enumeration description
+ | | |
"none" + | Don't oversample + + | ||
"2x" + | Oversample two times + + | ||
"4x" + | Oversample four times |
[Exposed=Window] interface WaveShaperNode : AudioNode { - constructor (BaseAudioContext context, optional WaveShaperOptions options = {}); - attribute Float32Array? curve; - attribute OverSampleType oversample; + constructor (BaseAudioContext context, optional WaveShaperOptions options = {}); + attribute Float32Array? curve; + attribute OverSampleType oversample; };@@ -10168,157 +9477,150 @@ interface WaveShaperNode : AudioNode { Constructors
- path: audionode-init.include -- - Also, let [[curve set]] be an internal - slot of this {{WaveShaperNode}}. Initialize this slot to
false
. If
- {{WaveShaperNode/constructor(context, options)/options}} is given and specifies a
- {{WaveShaperOptions/curve}}, set {{[[curve set]]}} to true
.
-
- - context: The {{BaseAudioContext}} this new {{WaveShaperNode}} will be associated with. - options: Optional initial parameter value for this {{WaveShaperNode}}. -+ : WaveShaperNode(context, options) + :: + +
+ path: audionode-init.include ++ +
+ context: The {{BaseAudioContext}} this new {{WaveShaperNode}} will be associated with. + options: Optional initial parameter value for this {{WaveShaperNode}}. +
- $$ - \begin{align*} - v &= \frac{N-1}{2}(x + 1) \\ - k &= \lfloor v \rfloor \\ - f &= v - k - \end{align*} - $$ -- 1. Then -
- $$ - \begin{align*} - y &= - \begin{cases} - c_0 & v \lt 0 \\ - c_{N-1} & v \ge N - 1 \\ - (1-f)\,c_k + fc_{k+1} & \mathrm{otherwise} - \end{cases} - \end{align*} - $$ --
length
less than 2.
-
- When this attribute is set, an internal copy of the curve is
- created by the {{WaveShaperNode}}. Subsequent
- modifications of the contents of the array used to set the
- attribute therefore have no effect.
-
- null
.
- .
-
- 2. If new curve is not null
and
- {{WaveShaperNode/[[curve set]]}} is true, throw an
- {{InvalidStateError}} and abort these steps.
-
- 3. If new curve is not null
, set
- {{WaveShaperNode/[[curve set]]}} to true.
-
- 4. Assign new curve to the {{WaveShaperNode/curve}}
- attribute.
- + $$ + \begin{align*} + v &= \frac{N-1}{2}(x + 1) \\ + k &= \lfloor v \rfloor \\ + f &= v - k + \end{align*} + $$ ++ 1. Then +
+ $$ + \begin{align*} + y &= + \begin{cases} + c_0 & v \lt 0 \\ + c_{N-1} & v \ge N - 1 \\ + (1-f)\,c_k + fc_{k+1} & \mathrm{otherwise} + \end{cases} + \end{align*} + $$ ++
length
less than 2.
+
+ When this attribute is set, an internal copy of the curve is
+ created by the {{WaveShaperNode}}. Subsequent
+ modifications of the contents of the array used to set the
+ attribute therefore have no effect.
+
+ null
.
+ .
+
+ 2. If new curve is not null
and
+ {{WaveShaperNode/[[curve set]]}} is true, throw an
+ {{InvalidStateError}} and abort these steps.
+
+ 3. If new curve is not null
, set
+ {{WaveShaperNode/[[curve set]]}} to true.
+
+ 4. Assign new curve to the {{WaveShaperNode/curve}}
+ attribute.
+ [Exposed=Window, SecureContext] interface AudioWorklet : Worklet { - readonly attribute MessagePort port; };-
"message"
event of this {{AudioWorklet/port}} should
- call {{MessagePort/close}} on either end of the {{MessageChannel}}
- (either in the {{AudioWorklet}} or the {{AudioWorkletGlobalScope}}
- side) to allow for resources to be
- [[html#ports-and-garbage-collection|collected]].
-// bypass-processor.js script file, runs on AudioWorkletGlobalScope class BypassProcessor extends AudioWorkletProcessor { - process (inputs, outputs) { - // Single input, single channel. - const input = inputs[0]; - const output = outputs[0]; - output[0].set(input[0]); - - // Process only while there are active inputs. - return false; - } + process (inputs, outputs) { + // Single input, single channel. + const input = inputs[0]; + const output = outputs[0]; + output[0].set(input[0]); + + // Process only while there are active inputs. + return false; + } }; registerProcessor('bypass-processor', BypassProcessor); @@ -10444,7 +9729,7 @@ registerProcessor('bypass-processor', BypassProcessor); // The main global scope const context = new AudioContext(); context.audioWorklet.addModule('bypass-processor.js').then(() => { - const bypassNode = new AudioWorkletNode(context, 'bypass-processor'); + const bypassNode = new AudioWorkletNode(context, 'bypass-processor'); });@@ -10454,7 +9739,7 @@ created in {{AudioWorkletGlobalScope}}. These two objects communicate via the asynchronous message passing described in [[#processing-model]]. -
"message"
- event of this {{AudioWorkletGlobalScope/port}} should call
- {{MessagePort/close}} on either end of the {{MessageChannel}} (either
- in the {{AudioWorklet}} or the {{AudioWorkletGlobalScope}} side) to
- allow for resources to be
- [[html#ports-and-garbage-collection|collected]].
+ : currentFrame
+ ::
+ The current frame of the block of audio being
+ processed. This must be equal to the value of the
+ {{[[current frame]]}} internal slot of the
+ {{BaseAudioContext}}.
+
+ : currentTime
+ ::
+ The context time of the block of audio being processed. By
+ definition this will be equal to the value of
+ {{BaseAudioContext}}'s {{BaseAudioContext/currentTime}} attribute that was most
+ recently observable in the control thread.
+
+ : sampleRate
+ ::
+ The sample rate of the associated {{BaseAudioContext}}.
IsConstructor(argument=processorCtor)
- is false
,
- throw a {{TypeError}}
- .
-
- 1. Let prototype
be the result of
-
- Get(O=processorCtor,
- P="prototype")
.
-
- 1. If the result of
- Type(argument=prototype)
- is not Object
,
- throw a {{TypeError}}
- .
-
- 1. Let parameterDescriptorsValue be the
- result of
- Get(O=processorCtor, P="parameterDescriptors")
.
-
- 1. If parameterDescriptorsValue is not {{undefined}},
- execute the following steps:
-
- 1. Let parameterDescriptorSequence
- be the result of
-
- the conversion from
- parameterDescriptorsValue
- to an IDL value of type
- sequence<AudioParamDescriptor>
.
-
- 1. Let paramNames be an empty Array.
-
- 1.
- For each descriptor of
- parameterDescriptorSequence:
- 1. Let paramName be the value of
- the member {{AudioParamDescriptor/name}}
- in descriptor. Throw
- a {{NotSupportedError}} if
- paramNames already
- contains paramName value.
-
- 1. Append paramName to
- the paramNames array.
-
- 1. Let defaultValue be the value of
- the member
- {{AudioParamDescriptor/defaultValue}}
- in descriptor.
-
- 1. Let minValue be the value of
- the member
- {{AudioParamDescriptor/minValue}}
- in descriptor.
-
- 1. Let maxValue be the value of
- the member
- {{AudioParamDescriptor/maxValue}}
- in descriptor.
-
- 1. If the expresstion
- minValue <=
- defaultValue <=
- maxValue is false,
- throw
- an {{InvalidStateError}}.
-
- 1. Append the key-value pair name →
- processorCtor to
- node name to processor constructor map
- of the associated {{AudioWorkletGlobalScope}}.
-
- 1.
- queue a media element task to append the key-value pair |name| →
- |parameterDescriptorSequence| to the node name to parameter descriptor map of the
- associated {{BaseAudioContext}}.
- - name: A string key that represents a class constructor to be registered. This key is used to look up the constructor of {{AudioWorkletProcessor}} during construction of an {{AudioWorkletNode}}. - processorCtor: A class constructor extended from {{AudioWorkletProcessor}}. -- -
IsConstructor(argument=processorCtor)
+ is false
,
+ throw a {{TypeError}}
+ .
+
+ 1. Let prototype
be the result of
+
+ Get(O=processorCtor,
+ P="prototype")
.
+
+ 1. If the result of
+ Type(argument=prototype)
+ is not Object
,
+ throw a {{TypeError}}
+ .
+
+ 1. Let parameterDescriptorsValue be the
+ result of
+ Get(O=processorCtor, P="parameterDescriptors")
.
+
+ 1. If parameterDescriptorsValue is not undefined
,
+ execute the following steps:
+
+ 1. Let parameterDescriptorSequence
+ be the result of
+
+ the conversion from
+ parameterDescriptorsValue
+ to an IDL value of type
+ sequence<AudioParamDescriptor>
.
+
+ 1. Let paramNames be an empty Array.
+
+ 1. For each descriptor of
+ parameterDescriptorSequence:
+
+ 1. Let paramName be the value of
+ the member {{AudioParamDescriptor/name}}
+ in descriptor. Throw
+ a {{NotSupportedError}} if
+ paramNames already
+                    contains paramName.
+
+ 1. Append paramName to
+ the paramNames array.
+
+ 1. Let defaultValue be the value of
+ the member
+ {{AudioParamDescriptor/defaultValue}}
+ in descriptor.
+
+ 1. Let minValue be the value of
+ the member
+ {{AudioParamDescriptor/minValue}}
+ in descriptor.
+
+ 1. Let maxValue be the value of
+ the member
+ {{AudioParamDescriptor/maxValue}}
+ in descriptor.
+
+ 1. If defaultValue is less than
+ minValue or greater than
+ maxValue,
+                    throw an
+ {{InvalidStateError}}.
+
+ 1. Append the key-value pair name →
+ processorCtor to
+ node name to processor constructor map
+ of the associated {{AudioWorkletGlobalScope}}.
+
+ 1. Queue a task to the control thread to
+ append the key-value pair name →
+ parameterDescriptorSequence to
+ the node name to parameter descriptor map
+ of the associated {{BaseAudioContext}}.
+ + name: A string key that represents a class constructor to be registered. This key is used to look up the constructor of {{AudioWorkletProcessor}} during construction of an {{AudioWorkletNode}}. + processorCtor: A class constructor extended from {{AudioWorkletProcessor}}. ++ +
void
+ path: audionode.include macros: - noi: 1 - noo: 1 - cc: 2 - cc-mode: max - cc-interp: speakers - tail-time: See notes - tail-time-notes: Any tail-time is handled by the node itself + noi: 1 + noo: 1 + cc: 2 + cc-mode: max + cc-interp: speakers + tail-time: See notes + tail-time-notes: Any tail-time is handled by the node itselfEvery {{AudioWorkletProcessor}} has an associated active source flag, initially `true`. This flag causes the node to be retained in memory and perform audio processing in the absence of any connected inputs. -All tasks posted from an {{AudioWorkletNode}} are posted to the task queue of -its associated {{BaseAudioContext}}. -
[Exposed=Window, SecureContext] interface AudioWorkletNode : AudioNode { - constructor (BaseAudioContext context, DOMString name, + constructor (BaseAudioContext context, DOMString name, optional AudioWorkletNodeOptions options = {}); - readonly attribute AudioParamMap parameters; - readonly attribute MessagePort port; - attribute EventHandler onprocessorerror; + readonly attribute AudioParamMap parameters; + readonly attribute MessagePort port; + attribute EventHandler onprocessorerror; };@@ -10799,167 +10057,169 @@ interface AudioWorkletNode : AudioNode { Constructors
- context: The {{BaseAudioContext}} this new {{AudioWorkletNode}} will be associated with. - name: A string that is a key for the {{BaseAudioContext}}’s node name to parameter descriptor map. - options: Optional initial parameters value for this {{AudioWorkletNode}}. -- - When the constructor is called, the user agent MUST perform the - following steps on the control thread: - -
+ context: The {{BaseAudioContext}} this new {{AudioWorkletNode}} will be associated with. + name: A string that is a key for the {{BaseAudioContext}}’s node name to parameter descriptor map. + options: Optional initial parameters value for this {{AudioWorkletNode}}. ++ + When the constructor is called, the user agent MUST perform the + following steps on the control thread: + +
constructor
, process
method,
- or any user-defined class method, the processor will
- [=queue a media element task=] to [=fire an event=] named processorerror
- at the associated {{AudioWorkletNode}} using {{ErrorEvent}}.
-
- The ErrorEvent
is created and initialized
- appropriately with its message
,
- filename
, lineno
, colno
- attributes on the control thread.
-
- Note that once a unhandled exception is thrown, the processor
- will output silence throughout its lifetime.
-
- : parameters
- ::
- The parameters
attribute is a collection of
- {{AudioParam}} objects with associated names. This maplike
- object is populated from a list of {{AudioParamDescriptor}}s
- in the {{AudioWorkletProcessor}} class constructor at the
- instantiation.
-
- : port
- ::
- Every {{AudioWorkletNode}} has an associated
- port
which is the
- {{MessagePort}}. It is connected to the port on the
- corresponding {{AudioWorkletProcessor}} object allowing
- bidirectional communication between the
- {{AudioWorkletNode}} and its {{AudioWorkletProcessor}}.
-
- Note: Authors that register an event listener on the "message"
- event of this {{AudioWorkletNode/port}} should call {{MessagePort/close}} on
- either end of the {{MessageChannel}} (either in the
- {{AudioWorkletProcessor}} or the {{AudioWorkletNode}} side) to allow for
- resources to be [[html#ports-and-garbage-collection|collected]].
+ : onprocessorerror
+ ::
+ When an unhandled exception is thrown from the processor's
+ constructor
, process
method,
+ or any user-defined class method, the processor will
+ queue a task to
+ fire an event named processorerror
using
+
+ ErrorEvent at the associated {{AudioWorkletNode}}.
+
+ The ErrorEvent
is created and initialized
+ appropriately with its message
,
+ filename
, lineno
, colno
+ attributes on the control thread.
+
+        Note that once an unhandled exception is thrown, the processor
+ will output silence throughout its lifetime.
+
+ : parameters
+ ::
+ The parameters
attribute is a collection of
+ {{AudioParam}} objects with associated names. This maplike
+ object is populated from a list of {{AudioParamDescriptor}}s
+        in the {{AudioWorkletProcessor}} class constructor at the
+        time of instantiation.
+
+ : port
+ ::
+ Every {{AudioWorkletNode}} has an associated
+ port
which is the
+ {{MessagePort}}. It is connected to the port on the
+ corresponding {{AudioWorkletProcessor}} object allowing
+ bidirectional communication between the
+ {{AudioWorkletNode}} and its {{AudioWorkletProcessor}}.
+
+        Note: Authors that register an event listener on the "message"
+ event of this {{AudioWorkletNode/port}} should call {{MessagePort/close}} on
+ either end of the {{MessageChannel}} (either in the
+ {{AudioWorkletProcessor}} or the {{AudioWorkletNode}} side) to allow for
+ resources to be [[html#ports-and-garbage-collection|collected]].
[Exposed=AudioWorklet] interface AudioWorkletProcessor { - constructor (); - readonly attribute MessagePort port; + constructor (); + readonly attribute MessagePort port; }; - -callback AudioWorkletProcessCallback = - boolean (FrozenArray> inputs, - FrozenArray > outputs, - object parameters); -
processorerror
using
+ ErrorEvent
+ at nodeReference.
+
+ 1. Let processor be the
+ this value.
+
+ 1. Set processor's {{[[node reference]]}} to
+ nodeReference.
+
+ 1. Set processor's {{[[callable process]]}}
+ to `true`.
+
+ 1. Let deserializedPort be the result of
+ looking up
+ [=pending processor construction data/transferred port=]
+ from the
+ [=pending processor construction data=].
+
+ 1. Set processor’s
+ {{AudioWorkletProcessor/port}}
+ to deserializedPort.
+
+ 1. Empty the [=pending processor construction data=]
+ slot.
+ port
which is a {{MessagePort}}. It is connected to
- the port on the corresponding {{AudioWorkletNode}} object
- allowing bidirectional communication between an
- {{AudioWorkletNode}} and its {{AudioWorkletProcessor}}.
-
- Note: Authors that register an event listener on the "message"
- event of this {{AudioWorkletProcessor/port}} should call
- {{MessagePort/close}} on either end of the {{MessageChannel}} (either in the
- {{AudioWorkletProcessor}} or the {{AudioWorkletNode}} side) to allow for
- resources to be [[html#ports-and-garbage-collection|collected]].
+ : port
+ ::
+ Every {{AudioWorkletProcessor}} has an associated
+ port
which is a {{MessagePort}}. It is connected to
+ the port on the corresponding {{AudioWorkletNode}} object
+ allowing bidirectional communication between an
+ {{AudioWorkletNode}} and its {{AudioWorkletProcessor}}.
+
+        Note: Authors that register an event listener on the "message"
+ event of this {{AudioWorkletProcessor/port}} should call
+ {{MessagePort/close}} on either end of the {{MessageChannel}} (either in the
+ {{AudioWorkletProcessor}} or the {{AudioWorkletNode}} side) to allow for
+ resources to be [[html#ports-and-garbage-collection|collected]].
process()
that implements the audio processing
+{{AudioWorkletProcessor}}. The subclass MUST define a method
+named {{process()}} that implements the audio processing
algorithm and may have a static property named
parameterDescriptors
which is an iterable
of {{AudioParamDescriptor}}s.
-The [=process()=] callback function is handled as specified when rendering a graph.
-
-false
from
- [=process()=] which allows the presence or absence of
- connected inputs to determine whether the {{AudioWorkletNode}} is
- [=actively processing=].
-
- * Nodes that transform their inputs, but which remain active
- for a tail-time after their inputs are disconnected. In
- this case, [=process()=] SHOULD return
- `true` for some period of time after
- inputs
is found to contain zero channels. The
- current time may be obtained from the global scope's
- {{AudioWorkletGlobalScope/currentTime}} to
- measure the start and end of this tail-time interval, or the
- interval could be calculated dynamically depending on the
- processor's internal state.
-
- * Nodes that act as sources of output, typically with a
- lifetime. Such nodes SHOULD return `true` from
- [=process()=] until the point at which they are no
- longer producing an output.
-
- Note that the preceding definition implies that when no
- return value is provided from an implementation of
- [=process()=], the effect is identical to returning
- false
(since the effective return value is the falsy
- value {{undefined}}). This is a reasonable behavior for
- any {{AudioWorkletProcessor}} that is active only when it has
- active inputs.
-false
from
+ {{process()}} which allows the presence or absence of
+ connected inputs to determine whether the {{AudioWorkletNode}} is
+ [=actively processing=].
+
+ * Nodes that transform their inputs, but which remain active
+ for a tail-time after their inputs are disconnected. In
+ this case, {{process()}} SHOULD return
+ `true` for some period of time after
+ inputs
is found to contain zero channels. The
+ current time may be obtained from the global scope's
+ {{AudioWorkletGlobalScope/currentTime}} to
+ measure the start and end of this tail-time interval, or the
+ interval could be calculated dynamically depending on the
+ processor's internal state.
+
+ * Nodes that act as sources of output, typically with a
+ lifetime. Such nodes SHOULD return `true` from
+ {{process()}} until the point at which they are no
+ longer producing an output.
+
+ Note that the preceding definition implies that when no
+ return value is provided from an implementation of
+ {{process()}}, the effect is identical to returning
+ false
(since the effective return value is the falsy
+ value undefined
). This is a reasonable behavior for
+ any {{AudioWorkletProcessor}} that is active only when it has
+ active inputs.
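+    Non-normative. The tail-time guidance above can be sketched as a small
+    helper that a processor might use to decide its {{process()}} return
+    value; the function name and parameters (`currentTime`, the time the
+    inputs were last non-empty, and the node's tail length in seconds) are
+    illustrative, not part of the API.

```javascript
// Non-normative sketch: keep returning true from process() until tailTime
// seconds have elapsed since the inputs were last found to contain channels,
// so the node stays actively processing while its tail rings out.
function shouldKeepAlive(currentTime, lastInputTime, tailTime) {
  return currentTime - lastInputTime <= tailTime;
}
```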
+ + inputs: + The input audio buffer from the incoming connections provided by the user agent. It has type+ +sequence<sequence<Float32Array>>
.inputs[n][m]
is a {{Float32Array}} of audio samples for the \(m\)th channel of the \(n\)th input. While the number of inputs is fixed at construction, the number of channels can be changed dynamically based on [=computedNumberOfChannels=]. + + If there are no [=actively processing=] {{AudioNode}}s connected to the \(n\)th input of the {{AudioWorkletNode}} for the current render quantum, then the content ofinputs[n]
is an empty array, indicating that zero channels of input are available. This is the only circumstance under which the number of elements ofinputs[n]
can be zero. + + outputs: + The output audio buffer that is to be consumed by the user agent. It has typesequence<sequence<Float32Array>>
.outputs[n][m]
is a {{Float32Array}} object containing the audio samples for \(m\)th channel of \(n\)th output. Each of the {{Float32Array}}s are zero-filled. The number of channels in the output will match [=computedNumberOfChannels=] only when the node has a single output. + + parameters: + An [=ordered map=] of name → parameterValues.parameters["name"]
returns parameterValues, which is a {{Float32Array}} with the automation values of the name {{AudioParam}}. + + For each array, the array contains the [=computedValue=] of the parameter for all frames in the [=render quantum=]. However, if no automation is scheduled during this render quantum, the array MAY have length 1 with the array element being the constant value of the {{AudioParam}} for the [=render quantum=]. +
{{FrozenArray}}<{{FrozenArray}}<{{Float32Array}}>>
- :: The input audio buffer from the incoming connections provided by the user agent. inputs[n][m]
is a {{Float32Array}} of audio samples for the \(m\)th channel of the \(n\)th input. While the number of inputs is fixed at construction, the number of channels can be changed dynamically based on [=computedNumberOfChannels=].
-
- If there are no [=actively processing=] {{AudioNode}}s connected to the \(n\)th input of the {{AudioWorkletNode}} for the current render quantum, then the content of inputs[n]
is an empty array, indicating that zero channels of input are available. This is the only circumstance under which the number of elements of inputs[n]
can be zero.
-
- : {{AudioWorkletProcessCallback/outputs!!argument}}, of type {{FrozenArray}}<{{FrozenArray}}<{{Float32Array}}>>
- :: The output audio buffer that is to be consumed by the user agent. outputs[n][m]
is a {{Float32Array}} object containing the audio samples for \(m\)th channel of \(n\)th output. Each of the {{Float32Array}}s are zero-filled. The number of channels in the output will match [=computedNumberOfChannels=] only when the node has a single output.
-
- : {{AudioWorkletProcessCallback/parameters!!argument}}, of type {{object}}
- :: An [=ordered map=] of name → parameterValues. parameters["name"]
returns parameterValues, which is a {{FrozenArray}}<{{Float32Array}}> with the automation values of the name {{AudioParam}}.
-
- For each array, the array contains the [=computedValue=] of the parameter for all frames in the [=render quantum=]. However, if no automation is scheduled during this render quantum, the array MAY have length 1 with the array element being the constant value of the {{AudioParam}} for the [=render quantum=].
-
- This object is frozen according the the following steps
- dictionary AudioParamDescriptor { - required DOMString name; - float defaultValue = 0; - float minValue = -3.4028235e38; - float maxValue = 3.4028235e38; - AutomationRate automationRate = "a-rate"; + required DOMString name; + float defaultValue = 0; + float minValue = -3.4028235e38; + float maxValue = 3.4028235e38; + AutomationRate automationRate = "a-rate"; };
minValue
. This value is the most
+ positive finite single precision floating-point number.
+
+ : minValue
+ ::
+ Represents the minimum value. A
+ {{NotSupportedError}} exception MUST be thrown if
+        this value is outside the range of the float data type or it is
+ greater than maxValue
. This value is the most
+ negative finite single precision floating-point number.
+
+ : name
+ ::
+ Represents the name of the parameter. A
+ {{NotSupportedError}} exception MUST be thrown when
+ a duplicated name is found when registering the class
+ definition.
context.audioWorklet
is requested to add a script module.
+ 2. In the main scope, context.audioWorklet
is requested to add a script module.
- 2. Since none exists yet, a new {{AudioWorkletGlobalScope}} is created in association with the context. This is the global scope in which {{AudioWorkletProcessor}} class definitions will be evaluated. (On subsequent calls, this previously created scope will be used.)
+ 2. Since none exists yet, a new {{AudioWorkletGlobalScope}} is created in association with the context. This is the global scope in which {{AudioWorkletProcessor}} class definitions will be evaluated. (On subsequent calls, this previously created scope will be used.)
- 2. The imported script is run in the newly created global scope.
+ 2. The imported script is run in the newly created global scope.
- 3. As part of running the imported script, an {{AudioWorkletProcessor}} is registered under
- a key ("custom"
in the above diagram) within the {{AudioWorkletGlobalScope}}.
- This populates maps both in the global scope and in the {{AudioContext}}.
+ 3. As part of running the imported script, an {{AudioWorkletProcessor}} is registered under
+ a key ("custom"
in the above diagram) within the {{AudioWorkletGlobalScope}}.
+ This populates maps both in the global scope and in the {{AudioContext}}.
- 3. The promise for the {{addModule()}} call is resolved.
+ 3. The promise for the {{addModule()}} call is resolved.
- 6. In the main scope, an {{AudioWorkletNode}} is created using
- the user-specified key along with a
- dictionary of options.
+ 6. In the main scope, an {{AudioWorkletNode}} is created using
+ the user-specified key along with a
+ dictionary of options.
- 7. As part of the node's creation, this key is used to look up the
- correct {{AudioWorkletProcessor}} subclass for instantiation.
+ 7. As part of the node's creation, this key is used to look up the
+ correct {{AudioWorkletProcessor}} subclass for instantiation.
- 8. An instance of the {{AudioWorkletProcessor}} subclass is
- instantiated with a structured clone of the same options
- dictionary. This instance is paired with the previously created
- {{AudioWorkletNode}}.
+ 8. An instance of the {{AudioWorkletProcessor}} subclass is
+ instantiated with a structured clone of the same options
+ dictionary. This instance is paired with the previously created
+ {{AudioWorkletNode}}.
++/* vumeter-node.js: Main global scope */ export default class VUMeterNode extends AudioWorkletNode { - constructor (context, updateIntervalInMS) { - super(context, 'vumeter', { - numberOfInputs: 1, - numberOfOutputs: 0, - channelCount: 1, - processorOptions: { - updateIntervalInMS: updateIntervalInMS || 16.67 - } - }); - - // States in AudioWorkletNode - this._updateIntervalInMS = updateIntervalInMS; - this._volume = 0; - - // Handles updated values from AudioWorkletProcessor - this.port.onmessage = event => { - if (event.data.volume) - this._volume = event.data.volume; - } - this.port.start(); - } - - get updateInterval() { - return this._updateIntervalInMS; - } - - set updateInterval(updateIntervalInMS) { - this._updateIntervalInMS = updateIntervalInMS; - this.port.postMessage({updateIntervalInMS: updateIntervalInMS}); - } - - draw () { - // Draws the VU meter based on the volume value - // every |this._updateIntervalInMS| milliseconds. - } + constructor (context, updateIntervalInMS) { + super(context, 'vumeter', { + numberOfInputs: 1, + numberOfOutputs: 0, + channelCount: 1, + processorOptions: { + updateIntervalInMS: updateIntervalInMS || 16.67; + } + }); + + // States in AudioWorkletNode + this._updateIntervalInMS = updateIntervalInMS; + this._volume = 0; + + // Handles updated values from AudioWorkletProcessor + this.port.onmessage = event => { + if (event.data.volume) + this._volume = event.data.volume; + } + this.port.start(); + } + + get updateInterval() { + return this._updateIntervalInMS; + } + + set updateInterval(updateIntervalInMS) { + this._updateIntervalInMS = updateIntervalInMS; + this.port.postMessage({updateIntervalInMS: updateIntervalInMS}); + } + + draw () { + // Draws the VU meter based on the volume value + // every |this._updateIntervalInMS| milliseconds. + } }; -
false
.
-
- 2. Process the [=control message queue=].
-
- 1. Let Qrendering be an empty [=control message
- queue=]. [=Atomically=] [=swap=] Qrendering
- with the current [=control message queue=].
-
- 2. While there are messages in Qrendering, execute the
- following steps:
-
- 1. Execute the asynchronous section of the [=oldest message=] of
- Qrendering.
-
- 2. Remove the [=oldest message=] of Qrendering.
-
-
- 3. Process the {{BaseAudioContext}}'s [=associated task queue=].
-
-
- 1. Let task queue be the {{BaseAudioContext}}'s [=associated task queue=].
- 2. Let task count be the number of tasks in the in task queue
- 3. While task count is not equal to 0, execute the following steps:
- 1. Let oldest task be the first runnable task in task queue, and remove it from task queue.
- 2. Set the rendering loop's currently running task to oldest task.
- 3. Perform oldest task's steps.
- 4. Set the rendering loop currently running task back to null
.
- 6. Decrement task count
- 5. Perform a microtask checkpoint.
-
- 4. Process a render quantum.
-
- 1. If the {{[[rendering thread state]]}} of the {{BaseAudioContext}} is not
- running
, return false.
-
- 2. Order the {{AudioNode}}s of the {{BaseAudioContext}} to be processed.
-
- 1. Let ordered node list be an empty list of {{AudioNode}}s and
- {{AudioListener}}. It will contain an ordered list of {{AudioNode}}s and
- the {{AudioListener}} when this ordering algorithm terminates.
-
- 2. Let nodes be the set of all nodes created by this
- {{BaseAudioContext}}, and still alive.
-
- 3. Add the {{AudioListener}} to nodes.
-
- 4. Let cycle breakers be an empty set of {{DelayNode}}s. It will
- contain all the {{DelayNode}}s that are part of a cycle.
-
- 5. For each {{AudioNode}} node in nodes:
-
- 1. If node is a {{DelayNode}} that is part of a cycle, add it
- to cycle breakers and remove it from nodes.
-
- 6. For each {{DelayNode}} delay in cycle breakers:
+ 1. Let render result be false
.
- 1. Let delayWriter and delayReader respectively be a
- DelayWriter and a DelayReader, for delay.
- Add delayWriter and delayReader to
- nodes. Disconnect delay from all its input and
- outputs.
+ 2. Process the [=control message queue=].
- Note: This breaks the cycle: if a DelayNode
is in a
- cycle, its two ends can be considered separately, because delay lines
- cannot be smaller than one render quantum when in a cycle.
+ 1. Let Qrendering be an empty [=control message
+ queue=]. [=Atomically=] [=swap=] Qrendering
+ with the current [=control message queue=].
- 7. If nodes contains cycles, [=mute=] all the
- {{AudioNode}}s that are part of this cycle, and remove them from
- nodes.
+ 2. While there are messages in Qrendering, execute the
+ following steps:
- 8. Consider all elements in nodes to be unmarked. While there are unmarked elements in nodes:
+ 1. Execute the asynchronous section of the [=oldest message=] of
+ Qrendering.
- 1. Choose an element node in nodes.
+ 2. Remove the [=oldest message=] of Qrendering.
- 2. [=Visit=] node.
+ 3. Process a render quantum.
- running
, return false.
- 1. If node is marked, abort these steps.
+ 2. Order the {{AudioNode}}s of the {{BaseAudioContext}} to be processed.
- 2. Mark node.
+ 1. Let ordered node list be an empty list of {{AudioNode}}s and
+ {{AudioListener}}. It will contain an ordered list of {{AudioNode}}s and
+ the {{AudioListener}} when this ordering algorithm terminates.
- 3. If node is an {{AudioNode}}, [=Visit=] each
- {{AudioNode}} connected to the input of node.
+ 2. Let nodes be the set of all nodes created by this
+ {{BaseAudioContext}}, and still alive.
- 4. For each {{AudioParam}} param of node:
- 1. For each {{AudioNode}} param input node connected to param:
- 1. [=Visit=] param input node
+ 3. Add the {{AudioListener}} to nodes.
- 5. Add node to the beginning of ordered node list.
- DelayNode
is in a
+ cycle, its two ends can be considered separately, because delay lines
+ cannot be smaller than one render quantum when in a cycle.
- 2. [[#computation-of-value|Compute the value(s)]] of this
- {{AudioParam}} for this block.
+ 7. If nodes contains cycles, [=mute=] all the
+ {{AudioNode}}s that are part of this cycle, and remove them from
+ nodes.
- 3. [=Queue a control message=] to set the {{[[current value]]}} slot
- of this {{AudioParam}} according to [[#computation-of-value]].
+ 8. Consider all elements in nodes to be unmarked. While there are unmarked elements in nodes:
- 2. If this {{AudioNode}} has any {{AudioNode}}s connected to its input,
- [[#channel-up-mixing-and-down-mixing|sum]] the buffers
- [=Making a buffer available for reading|made available for reading=] by all
- {{AudioNode}}s connected to this {{AudioNode}}. The resulting buffer is
- called the input buffer.
- [[#channel-up-mixing-and-down-mixing|Up or down-mix]] it to
- match if number of input channels of this {{AudioNode}}.
+ 1. Choose an element node in nodes.
- 3. If this {{AudioNode}} is a source node,
- [=Computing a block of audio|compute a block of audio=], and
- [=Making a buffer available for reading|make it available for reading=].
+ 2. [=Visit=] node.
- 4. If this {{AudioNode}} is an {{AudioWorkletNode}}, execute these substeps:
+
+ Get(O=processor, P="process")
.
- 1. Set |processor|’s active source flag to
-
- ToBoolean(|callResult|.\[[Value]]).
+ 1. Set {{[[callable process]]}} to be the return value of
+
+ IsCallable(argument=processFunction)
.
- 1. Return: at this point |completion|
- will be set to an ECMAScript
- completion value.
+ 1. If {{[[callable process]]}} is `true`,
+ invoke processFunction to
+ [=Computing a block of audio|compute a block of audio=] with the
+                    arguments of [=input buffer=], output buffer, and
+ [=input AudioParam buffer=]. A buffer containing copies of the
+ elements of the {{Float32Array}}s passed via the
+
+ outputs parameter to processFunction is
+ made available for reading.
- 1. [=Clean up after running a callback=] with the [=current settings object=].
+ 1. At the conclusion of processFunction,
+ ToBoolean
+ is applied to the return value and the result is
+ assigned to the associated {{AudioWorkletProcessor}}'s
+ active source flag. This in turn affects whether
+ subsequent invocations of {{process()}} occur, and has
+ an impact on the lifetime of the node.
- 1. [=Clean up after running script=] with the [=current settings object=].
+ 1. Else if {{[[callable process]]}} is `false`,
+                queue a task to the control thread to
+ fire an
+ ErrorEvent
+ named processorerror
at the associated
+ {{AudioWorkletNode}}.
- 1. If |completion| is an
-
- abrupt completion:
+ 1. If {{[[callable process]]}} of processor is `false`,
+ execute the following steps:
- 1. Set {{[[callable process]]}} to `false`.
+ 1. [=Making a buffer available for reading|Make a silent output buffer available for reading=].
- 1. Set |processor|'s active source flag to `false`.
+            1. Any {{Promise}} resolved within the execution of the process method will
+ be queued into the microtask queue in the {{AudioWorkletGlobalScope}}.
- 1. [=Making a buffer available for reading|Make a silent output buffer available for reading=].
+ 5. If this {{AudioNode}} is a destination node,
+ [=Recording the input|record the input=] of this {{AudioNode}}.
- 1. Queue a task to the control thread to [=fire an event=]
- named {{AudioWorkletNode/processorerror}} at the associated
- {{AudioWorkletNode}} using {{ErrorEvent}}.
+ 6. Else, process the input buffer, and
+ [=Making a buffer available for reading|make available for reading=] the
+ resulting buffer.
- 5. If this {{AudioNode}} is a destination node,
- [=Recording the input|record the input=] of this {{AudioNode}}.
+ 6. [=Atomically=] perform the following steps:
- 6. Else, [=processing an input buffer|process=] the input buffer, and
- [=Making a buffer available for reading|make available for reading=] the
- resulting buffer.
+ 1. Increment {{[[current frame]]}} by the [=render quantum size=].
- 6. [=Atomically=] perform the following steps:
+ 2. Set {{BaseAudioContext/currentTime}} to {{[[current frame]]}} divided
+ by {{BaseAudioContext/sampleRate}}.
- 1. Increment {{[[current frame]]}} by the [=render quantum size=].
+ 7. Set render result to true
.
- 2. Set {{BaseAudioContext/currentTime}} to {{[[current frame]]}} divided
- by {{BaseAudioContext/sampleRate}}.
+ 4. [=Perform a microtask checkpoint=].
- 7. Set render result to true
.
-
- 5. [=Perform a microtask checkpoint=].
-
- 6. Return render result.
+ 5. Return render result.
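+    Non-normative. The node-ordering step above can be sketched as a
+    depth-first traversal that follows the [=Visit=] steps literally:
+    mark the node, visit the nodes connected to its inputs, then add the
+    node to the beginning of the ordered list. The function names and the
+    callback-based graph encoding are illustrative only.

```javascript
// Non-normative sketch of the ordering algorithm's Visit step.
// "inputsOf" maps a node to the nodes connected to its inputs.
function orderNodes(nodes, inputsOf) {
  const ordered = [];
  const marked = new Set();
  function visit(node) {
    if (marked.has(node)) return;       // "If node is marked, abort"
    marked.add(node);                   // "Mark node"
    for (const input of inputsOf(node)) // "Visit each AudioNode connected
      visit(input);                     //  to the input of node"
    ordered.unshift(node);              // "Add node to the beginning"
  }
  for (const node of nodes) visit(node);
  return ordered;
}
```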
running
:
-
- 1. Attempt to release system resources.
-
- 1. Set the |audioContext|'s {{[[rendering thread state]]}} to suspended
.
-
- 1. [=Queue a media element task=] to execute the following steps:
-
- 1. [=Fire an event=] named {{AudioContext/error}} at |audioContext|.
-
- 1. Set the |audioContext|'s {{[[suspended by user]]}} to false
.
-
- 1. Set the |audioContext|'s {{[[control thread state]]}} to suspended
.
-
- 1. Set the |audioContext|'s {{BaseAudioContext/state}} attribute to
- "{{AudioContextState/suspended}}".
-
- 1. [=Fire an event=] named {{BaseAudioContext/statechange}} at the |audioContext|.
-
- 1. Abort these steps.
-
-1. If the |audioContext|'s {{[[rendering thread state]]}} is suspended
:
-
- 1. [=Queue a media element task=]to execute the following steps:
-
- 1. [=Fire an event=] named {{AudioContext/error}} at |audioContext|.
-
-Note: An example of system audio resource errors would be when an external or wireless audio device
- becoming disconnected during the active rendering of the {{AudioContext}}.
-
InvalidStateError
, for each {{AudioContext}} and
- {{OfflineAudioContext}} whose relevant global object is the same as
- the document's associated Window.
+ InvalidStateError
, for each {{AudioContext}} and
+ {{OfflineAudioContext}} whose relevant global object is the same as
+ the document's associated Window.
2. Stop all {{decoding thread}}s.
3. Queue a control message to {{AudioContext/close()}} the
- {{AudioContext}} or {{OfflineAudioContext}}.
+ {{AudioContext}} or {{OfflineAudioContext}}.
Order | Label | Mono | Stereo | Quad | 5.1
---|---|---|---|---|---
0 | SPEAKER_FRONT_LEFT | 0 | 0 | 0 | 0
1 | SPEAKER_FRONT_RIGHT | | 1 | 1 | 1
2 | SPEAKER_FRONT_CENTER | | | | 2
3 | SPEAKER_LOW_FREQUENCY | | | | 3
4 | SPEAKER_BACK_LEFT | | | 2 | 4
5 | SPEAKER_BACK_RIGHT | | | 3 | 5
6 | SPEAKER_FRONT_LEFT_OF_CENTER | | | |
7 | SPEAKER_FRONT_RIGHT_OF_CENTER | | | |
8 | SPEAKER_BACK_CENTER | | | |
9 | SPEAKER_SIDE_LEFT | | | |
10 | SPEAKER_SIDE_RIGHT | | | |
11 | SPEAKER_TOP_CENTER | | | |
12 | SPEAKER_TOP_FRONT_LEFT | | | |
13 | SPEAKER_TOP_FRONT_CENTER | | | |
14 | SPEAKER_TOP_FRONT_RIGHT | | | |
15 | SPEAKER_TOP_BACK_LEFT | | | |
16 | SPEAKER_TOP_BACK_CENTER | | | |
17 | SPEAKER_TOP_BACK_RIGHT | | | |
Mono up-mix:

    1 -> 2 : up-mix from mono to stereo
        output.L = input;
        output.R = input;

    1 -> 4 : up-mix from mono to quad
        output.L = input;
        output.R = input;
        output.SL = 0;
        output.SR = 0;

    1 -> 5.1 : up-mix from mono to 5.1
        output.L = 0;
        output.R = 0;
        output.C = input; // put in center channel
        output.LFE = 0;
        output.SL = 0;
        output.SR = 0;

Stereo up-mix:

    2 -> 4 : up-mix from stereo to quad
        output.L = input.L;
        output.R = input.R;
        output.SL = 0;
        output.SR = 0;

    2 -> 5.1 : up-mix from stereo to 5.1
        output.L = input.L;
        output.R = input.R;
        output.C = 0;
        output.LFE = 0;
        output.SL = 0;
        output.SR = 0;

Quad up-mix:

    4 -> 5.1 : up-mix from quad to 5.1
        output.L = input.L;
        output.R = input.R;
        output.C = 0;
        output.LFE = 0;
        output.SL = input.SL;
        output.SR = input.SR;
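As a rough illustration, the up-mix rules above can be written as plain functions over per-channel sample arrays. The function names and the object-of-Float32Arrays representation are illustrative only, not part of the API:

```javascript
// Illustrative sketch of two of the up-mix rules above; each channel is
// a Float32Array of samples. Names are not part of the Web Audio API.

// 1 -> 2 : up-mix from mono to stereo (copy input to both channels).
function upMixMonoToStereo(input) {
  return { L: Float32Array.from(input), R: Float32Array.from(input) };
}

// 2 -> 5.1 : up-mix from stereo to 5.1 (new channels are silent).
function upMixStereoTo5_1(L, R) {
  return {
    L: Float32Array.from(L),
    R: Float32Array.from(R),
    C: new Float32Array(L.length),
    LFE: new Float32Array(L.length),
    SL: new Float32Array(L.length),
    SR: new Float32Array(L.length),
  };
}
```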
Mono down-mix:

    2 -> 1 : stereo to mono
        output = 0.5 * (input.L + input.R);

    4 -> 1 : quad to mono
        output = 0.25 * (input.L + input.R + input.SL + input.SR);

    5.1 -> 1 : 5.1 to mono
        output = sqrt(0.5) * (input.L + input.R) + input.C + 0.5 * (input.SL + input.SR)

Stereo down-mix:

    4 -> 2 : quad to stereo
        output.L = 0.5 * (input.L + input.SL);
        output.R = 0.5 * (input.R + input.SR);

    5.1 -> 2 : 5.1 to stereo
        output.L = input.L + sqrt(0.5) * (input.C + input.SL)
        output.R = input.R + sqrt(0.5) * (input.C + input.SR)

Quad down-mix:

    5.1 -> 4 : 5.1 to quad
        output.L = input.L + sqrt(0.5) * input.C
        output.R = input.R + sqrt(0.5) * input.C
        output.SL = input.SL
        output.SR = input.SR
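The down-mix rules can be sketched the same way. Again, the function names and channel representation are illustrative only:

```javascript
// Illustrative sketch of two of the down-mix rules above; each channel
// is a Float32Array. Names are not part of the Web Audio API.

// 5.1 -> 2 : fold center and surrounds into left/right at sqrt(0.5) gain.
function downMix5_1ToStereo({ L, R, C, SL, SR }) {
  const k = Math.sqrt(0.5);
  const outL = new Float32Array(L.length);
  const outR = new Float32Array(R.length);
  for (let i = 0; i < L.length; i++) {
    outL[i] = L[i] + k * (C[i] + SL[i]);
    outR[i] = R[i] + k * (C[i] + SR[i]);
  }
  return { L: outL, R: outR };
}

// 2 -> 1 : average left and right.
function downMixStereoToMono(L, R) {
  const out = new Float32Array(L.length);
  for (let i = 0; i < L.length; i++) {
    out[i] = 0.5 * (L[i] + R[i]);
  }
  return out;
}
```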
// Set gain node to explicit 2-channels (stereo).
gain.channelCount = 2;
gain.channelCountMode = "explicit";
gain.channelInterpretation = "speakers";

// Set "hardware output" to 4-channels for DJ-app with two stereo output busses.
context.destination.channelCount = 4;
context.destination.channelCountMode = "explicit";
context.destination.channelInterpretation = "discrete";

// Set "hardware output" to 8-channels for custom multi-channel speaker array
// with custom matrix mixing.
context.destination.channelCount = 8;
context.destination.channelCountMode = "explicit";
context.destination.channelInterpretation = "discrete";

// Set "hardware output" to 5.1 to play an HTMLAudioElement.
context.destination.channelCount = 6;
context.destination.channelCountMode = "explicit";
context.destination.channelInterpretation = "speakers";

// Explicitly down-mix to mono.
gain.channelCount = 1;
gain.channelCountMode = "explicit";
gain.channelInterpretation = "speakers";
1. For each sample to be computed by this {{AudioNode}}:

    1. Let azimuth be the value computed in the azimuth and elevation section.

    2. The azimuth value is first constrained to be within the range
        [-90, 90] according to:

        <pre>
        // First, clamp azimuth to allowed range of [-180, 180].
        azimuth = max(-180, azimuth);
        azimuth = min(180, azimuth);

        // Then wrap to range [-90, 90].
        if (azimuth < -90)
            azimuth = -180 - azimuth;
        else if (azimuth > 90)
            azimuth = 180 - azimuth;
        </pre>

    3. A normalized value x is calculated from azimuth for a mono input as:

        <pre>
        x = (azimuth + 90) / 180;
        </pre>

        Or for a stereo input as:

        <pre>
        if (azimuth <= 0) { // -90 -> 0
            // Transform the azimuth value from [-90, 0] degrees into the range [0, 1].
            x = (azimuth + 90) / 90;
        } else { // 0 -> 90
            // Transform the azimuth value from [0, 90] degrees into the range [0, 1].
            x = azimuth / 90;
        }
        </pre>

    4. Left and right gain values are calculated as:

        <pre>
        gainL = cos(x * Math.PI / 2);
        gainR = sin(x * Math.PI / 2);
        </pre>

    5. For mono input, the stereo output is calculated as:

        <pre>
        outputL = input * gainL;
        outputR = input * gainR;
        </pre>

        Else for stereo input, the output is calculated as:

        <pre>
        if (azimuth <= 0) {
            outputL = inputL + inputR * gainL;
            outputR = inputR * gainR;
        } else {
            outputL = inputL * gainL;
            outputR = inputR + inputL * gainR;
        }
        </pre>

    6. Apply the distance gain and cone gain where the computation of the
        distance is described in [[#Spatialization-distance-effects|Distance
        Effects]] and the cone gain is described in
        [[#Spatialization-sound-cones|Sound Cones]]:

        <pre>
        let distance = distance();
        let distanceGain = distanceModel(distance);
        let totalGain = coneGain() * distanceGain;
        outputL = totalGain * outputL;
        outputR = totalGain * outputR;
        </pre>
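For the mono-input case, steps 2 through 5 above can be sketched as a single standalone function (the name is illustrative, and the distance/cone gains of step 6 are omitted):

```javascript
// Sketch of the mono equal-power panning steps above: clamp and wrap
// the azimuth, normalize it to x in [0, 1], then apply cosine/sine
// gains. Illustrative only; not part of the Web Audio API.
function equalPowerPanMono(input, azimuth) {
  // First, clamp azimuth to allowed range of [-180, 180].
  azimuth = Math.min(180, Math.max(-180, azimuth));

  // Then wrap to range [-90, 90].
  if (azimuth < -90) azimuth = -180 - azimuth;
  else if (azimuth > 90) azimuth = 180 - azimuth;

  // Normalize and compute the equal-power gains.
  const x = (azimuth + 90) / 180;
  const gainL = Math.cos(x * Math.PI / 2);
  const gainR = Math.sin(x * Math.PI / 2);
  return [input * gainL, input * gainR];
}
```

At azimuth 0 both gains equal cos(π/4), so the total power (gainL² + gainR² = 1) is preserved across the pan range.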
For a {{StereoPannerNode}}, the following algorithm MUST be implemented.

1. For each sample to be computed by this {{AudioNode}}:

    1. Let pan be the computedValue of the {{StereoPannerNode/pan}}
        {{AudioParam}} of this {{StereoPannerNode}}.

    2. Clamp pan to [-1, 1].

        <pre>
        pan = max(-1, pan);
        pan = min(1, pan);
        </pre>

    3. Calculate x by normalizing pan value to [0, 1]. For mono input:

        <pre>
        x = (pan + 1) / 2;
        </pre>

        For stereo input:

        <pre>
        if (pan <= 0)
            x = pan + 1;
        else
            x = pan;
        </pre>

    4. Left and right gain values are calculated as:

        <pre>
        gainL = cos(x * Math.PI / 2);
        gainR = sin(x * Math.PI / 2);
        </pre>

    5. For mono input, the stereo output is calculated as:

        <pre>
        outputL = input * gainL;
        outputR = input * gainR;
        </pre>

        Else for stereo input, the output is calculated as:

        <pre>
        if (pan <= 0) {
            outputL = inputL + inputR * gainL;
            outputR = inputR * gainR;
        } else {
            outputL = inputL * gainL;
            outputR = inputR + inputL * gainR;
        }
        </pre>
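The stereo-input branch of the algorithm above can be sketched as one function per sample frame (the name is illustrative, not part of the API):

```javascript
// Sketch of the StereoPannerNode steps above for a stereo input frame:
// clamp pan, normalize to x, compute equal-power gains, then cross-mix.
// Illustrative only; not part of the Web Audio API.
function stereoPan(inputL, inputR, pan) {
  // Clamp pan to [-1, 1].
  pan = Math.min(1, Math.max(-1, pan));

  // Normalize pan to x in [0, 1] for stereo input.
  const x = pan <= 0 ? pan + 1 : pan;
  const gainL = Math.cos(x * Math.PI / 2);
  const gainR = Math.sin(x * Math.PI / 2);

  // Panning left folds right into left; panning right folds left into right.
  if (pan <= 0) {
    return [inputL + inputR * gainL, inputR * gainR];
  }
  return [inputL * gainL, inputR + inputL * gainR];
}
```

Note that pan = 0 gives x = 1, so gainL ≈ 0 and gainR = 1 and the stereo input passes through unchanged; pan = -1 sums both channels into the left output.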
// Three dimensional vector class.
class Vec3 {
    // Construct from 3 coordinates.
    constructor(x, y, z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    // Dot product with another vector.
    dot(v) {
        return (this.x * v.x) + (this.y * v.y) + (this.z * v.z);
    }

    // Cross product with another vector.
    cross(v) {
        return new Vec3((this.y * v.z) - (this.z * v.y),
                        (this.z * v.x) - (this.x * v.z),
                        (this.x * v.y) - (this.y * v.x));
    }

    // Difference with another vector.
    diff(v) {
        return new Vec3(this.x - v.x, this.y - v.y, this.z - v.z);
    }

    // Get the magnitude of this vector.
    get magnitude() {
        return Math.sqrt(this.dot(this));
    }

    // Get a copy of this vector multiplied by a scalar.
    scale(s) {
        return new Vec3(this.x * s, this.y * s, this.z * s);
    }

    // Get a normalized copy of this vector.
    normalize() {
        const m = this.magnitude;
        if (m == 0) {
            return new Vec3(0, 0, 0);
        }
        return this.scale(1 / m);
    }
}
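As an example of how such a helper is used in spatialization math, a listener's right vector can be derived as the normalized cross product of its forward and up vectors. The compact Vec3 re-declaration below (dot, cross, magnitude, scale, normalize only) is included so the snippet runs standalone:

```javascript
// Compact re-declaration of the Vec3 helper above so this example is
// self-contained; only the methods the example needs are included.
class Vec3 {
  constructor(x, y, z) { this.x = x; this.y = y; this.z = z; }
  dot(v) { return this.x * v.x + this.y * v.y + this.z * v.z; }
  cross(v) {
    return new Vec3(this.y * v.z - this.z * v.y,
                    this.z * v.x - this.x * v.z,
                    this.x * v.y - this.y * v.x);
  }
  get magnitude() { return Math.sqrt(this.dot(this)); }
  scale(s) { return new Vec3(this.x * s, this.y * s, this.z * s); }
  normalize() {
    const m = this.magnitude;
    return m === 0 ? new Vec3(0, 0, 0) : this.scale(1 / m);
  }
}

// Default listener orientation: forward is -z, up is +y.
const forward = new Vec3(0, 0, -1);
const up = new Vec3(0, 1, 0);

// The listener's right vector is forward x up, normalized.
const right = forward.cross(up).normalize();
```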