BaseAudioContext class abstract
The BaseAudioContext interface of the Web Audio API acts as a base definition for online and offline audio-processing graphs, as represented by AudioContext and OfflineAudioContext respectively. You wouldn't use BaseAudioContext directly; you'd use its features via one of these two inheriting interfaces.

A BaseAudioContext can be a target of events, so it implements the EventTarget interface.
API documentation sourced from MDN Web Docs.
- Implemented types: EventTarget
- Implementers: AudioContext, OfflineAudioContext
Properties
- audioWorklet → AudioWorklet
  The audioWorklet read-only property of the BaseAudioContext returns an instance of AudioWorklet that can be used for adding AudioWorkletProcessor-derived classes, which implement custom audio processing.
  no setter
- currentTime → TauTime
  The currentTime read-only property of the BaseAudioContext returns a double representing an ever-increasing hardware timestamp in seconds that can be used for scheduling audio playback, visualizing timelines, etc. It starts at 0.
  no setter
- destination → AudioDestinationNode
  The destination property of the BaseAudioContext returns an AudioDestinationNode representing the final destination of all audio in the context. It often represents an actual audio-rendering device such as your device's speakers.
  no setter
- hashCode → int
  The hash code for this object.
  no setter, inherited
- listener → AudioListener
  The listener property of the BaseAudioContext returns an AudioListener object that can then be used for implementing 3D audio spatialization.
  no setter
- onstatechange ↔ EventHandler
  getter/setter pair
- runtimeType → Type
  A representation of the runtime type of the object.
  no setter, inherited
- sampleRate → TauSampleRate
  The sampleRate property of the BaseAudioContext returns a floating-point number representing the sample rate, in samples per second, used by all nodes in this context. The sample rate of a context cannot be changed; sample-rate converters are not supported.
  no setter
- state → AudioContextState
  The state read-only property of the BaseAudioContext returns the current state of the AudioContext.
  no setter
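Since currentTime is expressed in seconds while buffers are addressed in sample frames, scheduling code often converts between the two using sampleRate. A minimal sketch of that relationship; the helper names here are ours, not part of this API:

```javascript
// Convert between seconds (the unit of currentTime) and sample frames,
// using sampleRate (samples per second). Hypothetical helper names.
function secondsToFrames(seconds, sampleRate) {
  return Math.round(seconds * sampleRate);
}

function framesToSeconds(frames, sampleRate) {
  return frames / sampleRate;
}

const sampleRate = 44100; // a typical value for BaseAudioContext.sampleRate
console.log(secondsToFrames(0.5, sampleRate)); // 22050
console.log(framesToSeconds(44100, sampleRate)); // 1
```

Because the sample rate is fixed for the lifetime of a context, this conversion is stable for all nodes in the graph.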
Methods
- createAnalyser() → AnalyserNode
  The createAnalyser() method of the BaseAudioContext creates an AnalyserNode, which can be used to expose audio time and frequency data and create data visualizations.
- createBiquadFilter() → BiquadFilterNode
  The createBiquadFilter() method of the BaseAudioContext creates a BiquadFilterNode, which represents a second-order filter configurable as several different common filter types.
- createBuffer(int numberOfChannels, int length, TauSampleRate sampleRate) → AudioBuffer
  The createBuffer() method of the BaseAudioContext is used to create a new, empty AudioBuffer object, which can then be populated with data and played via an AudioBufferSourceNode.
- createBufferSource() → AudioBufferSourceNode
  The createBufferSource() method of the BaseAudioContext is used to create a new AudioBufferSourceNode, which can be used to play audio data contained within an AudioBuffer object. AudioBuffers are created using BaseAudioContext.createBuffer or returned by BaseAudioContext.decodeAudioData when it successfully decodes an audio track.
- createChannelMerger([int numberOfInputs]) → ChannelMergerNode
  The createChannelMerger() method of the BaseAudioContext creates a ChannelMergerNode, which combines channels from multiple audio streams into a single audio stream.
- createChannelSplitter([int numberOfOutputs]) → ChannelSplitterNode
  The createChannelSplitter() method of the BaseAudioContext is used to create a ChannelSplitterNode, which is used to access the individual channels of an audio stream and process them separately.
- createConstantSource() → ConstantSourceNode
  The createConstantSource() method of the BaseAudioContext creates a ConstantSourceNode object, which is an audio source that continuously outputs a monaural (one-channel) sound signal whose samples all have the same value.
- createConvolver() → ConvolverNode
  The createConvolver() method of the BaseAudioContext creates a ConvolverNode, which is commonly used to apply reverb effects to your audio. See the spec definition of Convolution for more information.
- createDelay([TauTime maxDelayTime]) → DelayNode
  The createDelay() method of the BaseAudioContext is used to create a DelayNode, which is used to delay the incoming audio signal by a certain amount of time.
- createDynamicsCompressor() → DynamicsCompressorNode
  The createDynamicsCompressor() method of the BaseAudioContext is used to create a DynamicsCompressorNode, which can be used to apply compression to an audio signal.
- createGain() → GainNode
  The createGain() method of the BaseAudioContext creates a GainNode, which can be used to control the overall gain (or volume) of the audio graph.
- createIIRFilter(TauArray<TauNumber> feedforward, TauArray<TauNumber> feedback) → IIRFilterNode
  The createIIRFilter() method of the BaseAudioContext creates an IIRFilterNode, which represents a general infinite impulse response (IIR) filter that can be configured to serve as various types of filter.
- createOscillator() → OscillatorNode
  The createOscillator() method of the BaseAudioContext creates an OscillatorNode, a source representing a periodic waveform. It basically generates a constant tone.
- createPanner() → PannerNode
  The createPanner() method of the BaseAudioContext is used to create a new PannerNode, which is used to spatialize an incoming audio stream in 3D space.
- createPeriodicWave(TauArray<TauNumber> real, TauArray<TauNumber> imag, [PeriodicWaveConstraints constraints]) → PeriodicWave
  The createPeriodicWave() method of the BaseAudioContext is used to create a PeriodicWave, which is used to define a periodic waveform that can be used to shape the output of an OscillatorNode.
- createScriptProcessor([int bufferSize, int numberOfInputChannels, int numberOfOutputChannels]) → ScriptProcessorNode
  The createScriptProcessor() method of the BaseAudioContext creates a ScriptProcessorNode used for direct audio processing.
- createStereoPanner() → StereoPannerNode
  The createStereoPanner() method of the BaseAudioContext creates a StereoPannerNode, which can be used to apply stereo panning to an audio source. It positions an incoming audio stream in a stereo image using a low-cost panning algorithm.
- createWaveShaper() → WaveShaperNode
  The createWaveShaper() method of the BaseAudioContext creates a WaveShaperNode, which represents a non-linear distortion. It is used to apply distortion effects to your audio.
- decodeAudioData(TauArrayBuffer audioData, [DecodeSuccessCallback? successCallback, DecodeErrorCallback? errorCallback]) → TauPromise<AudioBuffer>
  The decodeAudioData() method of the BaseAudioContext is used to asynchronously decode audio file data contained in an ArrayBuffer that is loaded from fetch, XMLHttpRequest, or FileReader. The decoded AudioBuffer is resampled to the AudioContext's sampling rate, then passed to a callback or promise.
- dispose() → void
- noSuchMethod(Invocation invocation) → dynamic
  Invoked when a nonexistent method or property is accessed.
  inherited
- toString() → String
  A string representation of this object.
  inherited
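A minimal sketch of how createBuffer()'s arguments fit together: the length argument is a frame count derived from the desired duration and the context's sample rate. Only the plain typed-array math below runs anywhere; the calls against a real context (shown in comments) are browser-side and assume a Web Audio AudioContext named ctx:

```javascript
// Compute the arguments for createBuffer(numberOfChannels, length, sampleRate)
// and synthesize one channel of a 440 Hz sine wave into a plain Float32Array.
const sampleRate = 44100;
const durationSeconds = 0.5;
const length = Math.round(durationSeconds * sampleRate); // frames per channel

const channel = new Float32Array(length);
for (let i = 0; i < length; i++) {
  channel[i] = Math.sin(2 * Math.PI * 440 * (i / sampleRate));
}

// Against a real context (browser-side), you would then do something like:
//   const buffer = ctx.createBuffer(1, length, sampleRate);
//   buffer.copyToChannel(channel, 0);
//   const source = ctx.createBufferSource();
//   source.buffer = buffer;
//   source.connect(ctx.destination);
//   source.start();
console.log(length); // 22050
```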
Operators
- operator ==(Object other) → bool
  The equality operator.
  inherited
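The real and imag arguments to createPeriodicWave() hold Fourier coefficients: cosine and sine terms respectively, with index 0 the DC offset and index n the nth harmonic. A sketch building the coefficients of a band-limited square wave (odd sine harmonics with amplitude 4/(π·n)); in the underlying Web Audio API these parameters are Float32Arrays, and the createPeriodicWave call itself is assumed to run against a real context:

```javascript
// Square-wave Fourier coefficients for createPeriodicWave(real, imag):
// only odd sine harmonics, with amplitude 4/(pi * n). Index 0 (DC) stays 0.
const harmonics = 8;
const real = new Float32Array(harmonics); // cosine terms: all zero here
const imag = new Float32Array(harmonics); // sine terms
for (let n = 1; n < harmonics; n += 2) {
  imag[n] = 4 / (Math.PI * n);
}

// Browser-side usage sketch:
//   const wave = ctx.createPeriodicWave(real, imag);
//   const osc = ctx.createOscillator();
//   osc.setPeriodicWave(wave);
//   osc.connect(ctx.destination);
//   osc.start();
```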