diff --git a/packages/audiodocs/static/llms-full.txt b/packages/audiodocs/static/llms-full.txt
deleted file mode 100644
index 89e821d15..000000000
--- a/packages/audiodocs/static/llms-full.txt
+++ /dev/null
@@ -1,6523 +0,0 @@
-# Documentation (Full)
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/analysis/analyser-node
-# Title: analyser-node
-
-# AnalyserNode
-
-The `AnalyserNode` interface represents a node providing two core functionalities: extracting time-domain data and frequency-domain data from audio signals.
-It is an [`AudioNode`](/docs/core/audio-node) that passes the audio data unchanged from input to output, while allowing you to capture and process the passed data.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-#### Time domain vs Frequency domain
-
-
-
-A time-domain graph illustrates how a signal evolves over time, displaying changes in amplitude or intensity as time progresses.
-In contrast, a frequency-domain graph reveals how the signal's energy or power is distributed across different frequency bands, highlighting the presence and strength of various frequency components over a specified range.
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: AnalyserOptions)
-```
-
-### `AnalyserOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `fftSize` | `number` | 2048 | Number representing the size of the Fast Fourier Transform |
-| `minDecibels` | `number` | -100 | Initial minimum power in dB for FFT analysis |
-| `maxDecibels` | `number` | -30 | Initial maximum power in dB for FFT analysis |
-| `smoothingTimeConstant` | `number` | 0.8 | Initial smoothing constant for the FFT analysis |
-
-Alternatively, the node can be created with the `BaseAudioContext` factory method
-[`BaseAudioContext.createAnalyser()`](/docs/core/base-audio-context#createanalyser), which uses the default values.
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :-------- |
-| `fftSize` | `number` | Integer value representing the size of the [Fast Fourier Transform](https://en.wikipedia.org/wiki/Fast_Fourier_transform) used to determine the frequency domain. In general, it is the size of the returned time-domain data. |
-| `minDecibels` | `number` | Float value representing the minimum value for the range of results from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata). |
-| `maxDecibels` | `number` | Float value representing the maximum value for the range of results from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata). |
-| `smoothingTimeConstant` | `number` | Float value representing the averaging constant with the last analysis frame. In general, the higher the value, the smoother the transition between values over time. |
-| `frequencyBinCount` | `number` | Integer value representing the amount of data obtained in the frequency domain; half of the `fftSize` property. |
-
-## Methods
-
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-### `getFloatFrequencyData`
-
-Copies the current frequency data into the given array.
-Each value in the array represents the decibel value for a specific frequency.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `array` | `Float32Array` | The array to which frequency data will be copied. |
-
-#### Returns `undefined`.
-
-### `getByteFrequencyData`
-
-Copies the current frequency data into the given array.
-Each value in the array is within the range 0 to 255.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `array` | `Uint8Array` | The array to which frequency data will be copied. |
-
-#### Returns `undefined`.
-
-### `getFloatTimeDomainData`
-
-Copies the current time-domain data into the given array.
-Each value in the array is the magnitude of the signal at a particular time.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `array` | `Float32Array` | The array to which time-domain data will be copied. |
-
-#### Returns `undefined`.
-
-### `getByteTimeDomainData`
-
-Copies the current time-domain data into the given array.
-Each value in the array is within the range 0 to 255, where a value of 127 indicates silence.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `array` | `Uint8Array` | The array to which time-domain data will be copied. |
-
-#### Returns `undefined`.
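
As a rough orientation for interpreting the returned frequency arrays: bin `i` corresponds to frequencies around `i * sampleRate / fftSize`. A plain TypeScript sketch of that relationship (the helper name is ours, not part of the library API):

```typescript
// Map an FFT bin index to its approximate frequency in Hz.
// frequencyBinCount bins cover the range from 0 Hz up to sampleRate / 2.
function binToFrequency(bin: number, sampleRate: number, fftSize: number): number {
  return (bin * sampleRate) / fftSize;
}

const sampleRate = 44100;
const fftSize = 2048; // default value
const frequencyBinCount = fftSize / 2; // 1024 bins

binToFrequency(0, sampleRate, fftSize); // bin 0 is DC (0 Hz)
binToFrequency(frequencyBinCount, sampleRate, fftSize); // Nyquist: 22050 Hz
```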
-
-## Remarks
-
-#### `fftSize`
-
-* Must be a power of 2 between 32 and 32768.
-* Throws `IndexSizeError` if the set value is not a power of 2, or is outside the allowed range.
-
-#### `minDecibels`
-
-* 0 dB ([decibel](https://en.wikipedia.org/wiki/Decibel)) is the loudest possible sound; -10 dB is a tenth of that.
-* When getting data from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata), any frequency with an amplitude lower than `minDecibels` will be returned as 0.
-* Throws `IndexSizeError` if the set value is greater than or equal to `maxDecibels`.
-
-#### `maxDecibels`
-
-* 0 dB ([decibel](https://en.wikipedia.org/wiki/Decibel)) is the loudest possible sound; -10 dB is a tenth of that.
-* When getting data from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata), any frequency with an amplitude higher than `maxDecibels` will be returned as 255.
-* Throws `IndexSizeError` if the set value is less than or equal to `minDecibels`.
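
The role `minDecibels` and `maxDecibels` play in the byte conversion can be sketched as follows (an illustrative formula modeled on the Web Audio specification, not necessarily the library's exact implementation):

```typescript
// Scale a decibel value into the 0..255 range used by getByteFrequencyData.
// Values below minDecibels clamp to 0; values above maxDecibels clamp to 255.
function decibelsToByte(db: number, minDecibels: number, maxDecibels: number): number {
  const scaled = (255 * (db - minDecibels)) / (maxDecibels - minDecibels);
  return Math.max(0, Math.min(255, Math.floor(scaled)));
}

decibelsToByte(-120, -100, -30); // 0   (below minDecibels)
decibelsToByte(-65, -100, -30);  // 127 (mid-range)
decibelsToByte(-20, -100, -30);  // 255 (above maxDecibels)
```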
-
-#### `smoothingTimeConstant`
-
-* Nominal range is 0 to 1.
-* 0 means no averaging; 1 means the previous and current buffers overlap heavily while computing the value.
-* Throws `IndexSizeError` if the set value is outside the allowed range.
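
The averaging described above can be sketched per frequency bin as follows (an assumed formula, modeled on the Web Audio specification's smoothing over time):

```typescript
// Blend the previous analysis frame with the current one.
// tau = 0 uses only the current frame; values near 1 change very slowly.
function smoothBin(tau: number, previous: number, current: number): number {
  return tau * previous + (1 - tau) * current;
}

smoothBin(0, 2, 5); // 5 - no averaging, the current value wins
smoothBin(1, 2, 5); // 2 - the previous value is kept
```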
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/core/audio-context
-# Title: audio-context
-
-# AudioContext
-
-The `AudioContext` interface inherits from [`BaseAudioContext`](/docs/core/base-audio-context).
-It is responsible for supervising and managing an audio-processing graph.
-
-## Constructor
-
-`new AudioContext(options: AudioContextOptions)`
-
-```typescript
-interface AudioContextOptions {
- sampleRate: number;
-}
-```
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `NotSupportedError` | `sampleRate` is outside the nominal range \[8000, 96000]. |
-
-## Properties
-
-`AudioContext` does not define any additional properties.
-It inherits all properties from [`BaseAudioContext`](/docs/core/base-audio-context#properties).
-
-## Methods
-
-It inherits all methods from [`BaseAudioContext`](/docs/core/base-audio-context#methods).
-
-### `close`
-
-Closes the audio context, releasing any system audio resources that it uses.
-
-#### Returns `Promise`.
-
-### `suspend`
-
-Suspends time progression in the audio context.
-It is useful when your application will not use audio for a while.
-
-#### Returns `Promise`.
-
-### `resume`
-
-Resumes a previously suspended audio context.
-
-#### Returns `Promise`.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/core/audio-node
-# Title: audio-node
-
-# AudioNode
-
-The `AudioNode` interface is a versatile building block for constructing an audio processing graph, representing an individual unit of audio processing functionality.
-Each `AudioNode` is associated with a certain number of audio channels that facilitate the transfer of audio data through the processing graph.
-
-We usually represent the channels with the standard abbreviations detailed in the table below:
-
-| Name | Number of channels | Channels |
-| :----: | :------: | :-------- |
-| Mono | 1 | 0: M - mono |
-| Stereo | 2 | 0: L - left 1: R - right |
-| Quad | 4 | 0: L - left 1: R - right 2: SL - surround left 3: SR - surround right |
-| 5.1 | 6 | 0: L - left 1: R - right 2: C - center 3: LFE - subwoofer 4: SL - surround left 5: SR - surround right |
-
-#### Mixing
-
-When a node has more than one input, or the number of input channels differs from the number of output channels, up-mixing or down-mixing must be performed.
-Three properties are involved in the mixing process: `channelCount`, [`ChannelCountMode`](/docs/types/channel-count-mode), and [`ChannelInterpretation`](/docs/types/channel-interpretation).
-Based on them, the number of output channels and the mixing strategy are determined.
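
For example, with the `speakers` interpretation, mono-to-stereo up-mixing copies the mono channel to both outputs, and stereo-to-mono down-mixing averages the two channels. A sketch of those two cases (illustrative helpers, not library API; the full rules live in [`ChannelInterpretation`](/docs/types/channel-interpretation)):

```typescript
// Stereo -> mono down-mix: M = 0.5 * (L + R).
function stereoToMono(left: number, right: number): number {
  return 0.5 * (left + right);
}

// Mono -> stereo up-mix: the mono signal feeds both output channels.
function monoToStereo(mono: number): [number, number] {
  return [mono, mono];
}

stereoToMono(1.0, 0.0); // 0.5
monoToStereo(0.25); // [0.25, 0.25]
```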
-
-## Properties
-
-| Name | Type | Description | |
-| :----: | :----: | :-------- | :-: |
-| `context` | [`BaseAudioContext`](/docs/core/base-audio-context) | Associated context. | |
-| `numberOfInputs` | `number` | Integer value representing the number of input connections for the node. | |
-| `numberOfOutputs` | `number` | Integer value representing the number of output connections for the node. | |
-| `channelCount` | `number` | Integer used to determine how many channels are used when up-mixing or down-mixing node's inputs. | |
-| `channelCountMode` | [`ChannelCountMode`](/docs/types/channel-count-mode) | Enumerated value that specifies the method by which channels are mixed between the node's inputs and outputs. | |
-| `channelInterpretation` | [`ChannelInterpretation`](/docs/types/channel-interpretation) | Enumerated value that specifies how input channels are mapped to output channels when number of them is different. | |
-
-## Examples
-
-### Connecting node to node
-
-```tsx
-import { OscillatorNode, GainNode, AudioContext } from 'react-native-audio-api';
-
-function App() {
- const audioContext = new AudioContext();
- const oscillatorNode = audioContext.createOscillator();
- const gainNode = audioContext.createGain();
-
- gainNode.gain.value = 0.5; //lower volume to 0.5
- oscillatorNode.connect(gainNode);
- gainNode.connect(audioContext.destination);
- oscillatorNode.start(audioContext.currentTime);
-}
-```
-
-### Connecting node to audio param (LFO-controlled parameter)
-
-```tsx
-import { OscillatorNode, GainNode, AudioContext } from 'react-native-audio-api';
-
-function App() {
- const audioContext = new AudioContext();
- const oscillatorNode = audioContext.createOscillator();
- const lfo = audioContext.createOscillator();
- const gainNode = audioContext.createGain();
-
- gainNode.gain.value = 0.5; // lower volume to 0.5
- lfo.frequency.value = 2; // low-frequency oscillator at 2 Hz
-
- // by default, oscillator wave values range from -1 to 1
- // connecting the lfo to the gain param makes the gain oscillate at 2 Hz, with values ranging from 0.5 - 1 to 0.5 + 1
- // you can modulate the amplitude by connecting the lfo to another gain node responsible for this value
- lfo.connect(gainNode.gain);
-
- oscillatorNode.connect(gainNode);
- gainNode.connect(audioContext.destination);
- oscillatorNode.start(audioContext.currentTime);
- lfo.start(audioContext.currentTime);
-}
-```
-
-## Methods
-
-### `connect`
-
-Connects one of the node's outputs to a destination.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `destination` | [`AudioNode`](/docs/core/audio-node) or [`AudioParam`](/docs/core/audio-param) | `AudioNode` or `AudioParam` to which to connect. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `InvalidAccessError` | If `destination` is not part of the same audio context as the node. |
-
-#### Returns `undefined`.
-
-### `disconnect`
-
-Disconnects one or more nodes from the node.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `destination` | [`AudioNode`](/docs/core/audio-node) or [`AudioParam`](/docs/core/audio-param) | `AudioNode` or `AudioParam` from which to disconnect. |
-
-If no arguments are provided, the node disconnects from all outgoing connections.
-
-#### Returns `undefined`.
-
-### `AudioNodeOptions`
-
-It is used to construct the majority of `AudioNode`s.
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `channelCount` | `number` | 2 | Indicates the number of channels used when up-mixing or down-mixing the node's inputs. |
-| `channelCountMode` | [`ChannelCountMode`](/docs/types/channel-count-mode) | `max` | Determines how the number of input channels affects the number of output channels in an audio node. |
-| `channelInterpretation` | [`ChannelInterpretation`](/docs/types/channel-interpretation) | `speakers` | Specifies how input channels are mapped to output channels when their numbers differ. |
-
-If any of these values are not provided, default values are used.
-
-## Remarks
-
-#### `numberOfInputs`
-
-* Source nodes are characterized by having a `numberOfInputs` value of 0.
-
-#### `numberOfOutputs`
-
-* Destination nodes are characterized by having a `numberOfOutputs` value of 0.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/core/audio-param
-# Title: audio-param
-
-# AudioParam
-
-The `AudioParam` interface represents an audio-related parameter (such as the `gain` property of [`GainNode`](/docs/effects/gain-node)).
-It can be set to a specific value, or a value change can be scheduled to happen at a specific time and follow a specific pattern.
-
-#### a-rate vs k-rate
-
-* `a-rate` - takes the current audio parameter value for each sample frame of the audio signal.
-* `k-rate` - uses the same initial audio parameter value for the whole block processed.
-
-## Properties
-
-| Name | Type | Description |
-| :----: | :----: | :-------- |
-| `defaultValue` | `number` | Initial value of the parameter. |
-| `minValue` | `number` | Minimum possible value of the parameter. |
-| `maxValue` | `number` | Maximum possible value of the parameter. |
-| `value` | `number` | Current value of the parameter. Initially set to `defaultValue`. |
-
-## Methods
-
-### `setValueAtTime`
-
-Schedules an instant change to the `value` at the given `startTime`.
-
-> **Caution**
->
-> If you need to call this function many times (especially more than 31 times), it is recommended to use the methods described below
-> (such as [`linearRampToValueAtTime`](/docs/core/audio-param#linearramptovalueattime) or [`exponentialRampToValueAtTime`](/docs/core/audio-param#exponentialramptovalueattime)),
-> as they are more efficient for continuous changes. For more specific use cases, you can schedule multiple value changes using [`setValueCurveAtTime`](/docs/core/audio-param#setvaluecurveattime).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `value` | `number` | A float representing the value the `AudioParam` will be set to at the given time. |
-| `startTime` | `number` | The time, in seconds, at which the change in value is going to happen. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties). |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `startTime` is a negative number. |
-
-#### Returns `AudioParam`.
-
-### `linearRampToValueAtTime`
-
-Schedules a gradual linear change to the new value.
-The change begins at the time designated for the previous event. It follows a linear ramp to the `value`, achieving it by the specified `endTime`.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `value` | `number` | A float representing the value the `AudioParam` will ramp to by the given time. |
-| `endTime` | `number` | The time, in seconds, at which the value ramp will end. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties). |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `endTime` is a negative number. |
-
-#### Returns `AudioParam`.
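
The value during a linear ramp can be sketched as follows (a plain formula illustration; `v0` and `t0` stand for the previous event's value and time):

```typescript
// Linear interpolation between the previous event (v0 at t0)
// and the ramp target (v1 at t1), for t0 <= t <= t1.
function linearRampValue(v0: number, v1: number, t0: number, t1: number, t: number): number {
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0));
}

linearRampValue(0, 1, 0, 2, 1); // 0.5 - halfway through the ramp
```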
-
-### `exponentialRampToValueAtTime`
-
-Schedules a gradual exponential change to the new value.
-The change begins at the time designated for the previous event. It follows an exponential ramp to the `value`, achieving it by the specified `endTime`.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `value` | `number` | A float representing the value the `AudioParam` will ramp to by the given time. |
-| `endTime` | `number` | The time, in seconds, at which the value ramp will end. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties).|
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `endTime` is a negative number. |
-
-#### Returns `AudioParam`.
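
The exponential ramp can be sketched the same way (an illustrative formula; as in the Web Audio model, it only behaves sensibly when `v0` and `v1` are non-zero and share the same sign):

```typescript
// Exponential interpolation between the previous event (v0 at t0)
// and the ramp target (v1 at t1), for t0 <= t <= t1.
function exponentialRampValue(v0: number, v1: number, t0: number, t1: number, t: number): number {
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}

exponentialRampValue(1, 4, 0, 2, 1); // 2 - the geometric midpoint
```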
-
-### `setTargetAtTime`
-
-Schedules a gradual change to the new value at the start time.
-This method is useful for decay or release portions of [ADSR envelopes](/docs/effects/gain-node#envelope---adsr).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `target` | `number` | A float representing the value to which the `AudioParam` will start transitioning. |
-| `startTime` | `number` | The time, in seconds, at which exponential transition will begin. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties). |
-| `timeConstant` | `number` | A double representing the time-constant value of an exponential approach to the `target`. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `startTime` is a negative number. |
-| `RangeError` | `timeConstant` is a negative number. |
-
-#### Returns `AudioParam`.
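
The exponential approach towards the target can be sketched as follows (an illustrative formula; `v0` stands for the parameter's value when the transition starts):

```typescript
// Exponential decay towards `target`, starting from v0 at startTime.
// After one timeConstant, roughly 63.2% of the distance is covered.
function targetValue(v0: number, target: number, startTime: number, timeConstant: number, t: number): number {
  return target + (v0 - target) * Math.exp(-(t - startTime) / timeConstant);
}

targetValue(1, 0, 0, 0.5, 0); // 1 - the transition has not progressed yet
```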
-
-### `setValueCurveAtTime`
-
-Schedules the parameter's value to change following a curve defined by the given array.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `values` | `Float32Array` | The array of values defining the curve that the change will follow. |
-| `startTime` | `number` | The time, in seconds, at which change will begin. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties). |
-| `duration` | `number` | A double representing total time over which the change will happen. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `startTime` is a negative number. |
-
-#### Returns `AudioParam`.
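
Between the curve's points the value is interpolated over `duration`; a sketch of that lookup (an illustrative helper, modeled on the Web Audio behavior of linear interpolation between curve points):

```typescript
// Sample the curve at time t, with the curve stretched over
// the interval [startTime, startTime + duration].
function curveValueAt(values: number[], startTime: number, duration: number, t: number): number {
  const position = ((values.length - 1) * (t - startTime)) / duration;
  const index = Math.floor(position);
  if (index >= values.length - 1) return values[values.length - 1];
  const frac = position - index;
  return values[index] * (1 - frac) + values[index + 1] * frac;
}

curveValueAt([0, 1], 0, 2, 1); // 0.5 - halfway between the two points
```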
-
-### `cancelScheduledValues`
-
-Cancels all scheduled changes after the given cancel time.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `cancelTime` | `number` | The time, in seconds, after which all scheduled changes will be cancelled. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties). |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `cancelTime` is a negative number. |
-
-#### Returns `AudioParam`.
-
-### `cancelAndHoldAtTime`
-
-Cancels all scheduled changes after the given cancel time, but holds the parameter's value at the given cancel time until further changes occur.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `cancelTime` | `number` | The time, in seconds, after which all scheduled changes will be cancelled. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties).|
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `cancelTime` is a negative number. |
-
-#### Returns `AudioParam`.
-
-## Remarks
-
-All time parameters should be in the same time coordinate system as [`BaseAudioContext.currentTime`](/docs/core/base-audio-context).
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context
-# Title: base-audio-context
-
-# BaseAudioContext
-
-The `BaseAudioContext` interface acts as a supervisor of audio-processing graphs. It provides key processing parameters such as the current time, output destination, or sample rate.
-Additionally, it is responsible for nodes creation and audio-processing graph's lifecycle management.
-However, `BaseAudioContext` itself cannot be used directly; instead, its functionality must be accessed through one of its derived interfaces: [`AudioContext`](/docs/core/audio-context) or [`OfflineAudioContext`](/docs/core/offline-audio-context).
-
-#### Audio graph
-
-An audio graph is a structured representation of audio processing elements and their connections within an audio context.
-The graph consists of various types of nodes, each performing specific audio operations, connected in a network that defines the audio signal flow.
-In general we can distinguish four types of nodes:
-
-* Source nodes (e.g [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node), [`OscillatorNode`](/docs/sources/oscillator-node))
-* Effect nodes (e.g [`GainNode`](/docs/effects/gain-node), [`BiquadFilterNode`](/docs/effects/biquad-filter-node))
-* Analysis nodes (e.g [`AnalyserNode`](/docs/analysis/analyser-node))
-* Destination nodes (e.g [`AudioDestinationNode`](/docs/destinations/audio-destination-node))
-
-
-
-#### Rendering audio graph
-
-Audio graph rendering is done in blocks of sample-frames. The number of sample-frames in a block is called the render quantum size, and the block itself is called a render quantum.
-By default, the render quantum size is 128 and it is constant.
-
-The [`AudioContext`](/docs/core/audio-context) rendering thread is driven by a system-level audio callback.
-Each callback provides a buffer of a varying number of sample-frames that must be computed in time, before the next system-level audio callback arrives;
-the render quantum size does not have to be a divisor of the system-level audio callback buffer size.
-
-> **Info**
->
-> The concept of a system-level audio callback does not apply to [`OfflineAudioContext`](/docs/core/offline-audio-context).
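
For example, the number of render quanta needed to cover one callback buffer can be sketched as follows (an illustration only; leftover frames carry over into the next callback):

```typescript
// How many 128-frame render quanta must be produced to cover one
// system-level audio callback buffer.
function quantaPerCallback(callbackBufferSize: number, renderQuantumSize: number = 128): number {
  return Math.ceil(callbackBufferSize / renderQuantumSize);
}

quantaPerCallback(128); // 1 - sizes match exactly
quantaPerCallback(192); // 2 - the 64 leftover frames are kept for the next callback
```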
-
-## Properties
-
-| Name | Type | Description | |
-| :----: | :----: | :-------- | :-: |
-| `currentTime` | `number` | Double value representing an ever-increasing hardware time in seconds, starting from 0. | |
-| `destination` | [`AudioDestinationNode`](/docs/destinations/audio-destination-node) | Final output destination associated with the context. | |
-| `sampleRate` | `number` | Float value representing the sample rate (in samples per second) used by all nodes in this context. | |
-| `state` | [`ContextState`](/docs/core/base-audio-context#contextstate) | Enumerated value representing the current state of the context. | |
-
-## Methods
-
-### `createAnalyser`
-
-Creates [`AnalyserNode`](/docs/analysis/analyser-node).
-
-#### Returns `AnalyserNode`.
-
-### `createBiquadFilter`
-
-Creates [`BiquadFilterNode`](/docs/effects/biquad-filter-node).
-
-#### Returns `BiquadFilterNode`.
-
-### `createBuffer`
-
-Creates [`AudioBuffer`](/docs/sources/audio-buffer).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `numOfChannels` | `number` | An integer representing the number of channels of the buffer. |
-| `length` | `number` | An integer representing the length of the buffer in sample-frames. A two-second buffer has a length equal to `2 * sampleRate`. |
-| `sampleRate` | `number` | A float representing the sample rate of the buffer. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `NotSupportedError` | `numOfChannels` is outside the nominal range \[1, 32]. |
-| `NotSupportedError` | `sampleRate` is outside the nominal range \[8000, 96000]. |
-| `NotSupportedError` | `length` is less than 1. |
-
-#### Returns `AudioBuffer`.
-
-### `createBufferSource`
-
-Creates [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `options` | `{ pitchCorrection: boolean }` | An object specifying whether pitch correction should be available. |
-
-#### Returns `AudioBufferSourceNode`.
-
-### `createBufferQueueSource`
-
-Creates [`AudioBufferQueueSourceNode`](/docs/sources/audio-buffer-queue-source-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `options` | `{ pitchCorrection: boolean }` | An object specifying whether pitch correction should be available. |
-
-#### Returns `AudioBufferQueueSourceNode`.
-
-### `createConstantSource`
-
-Creates [`ConstantSourceNode`](/docs/sources/constant-source-node).
-
-#### Returns `ConstantSourceNode`.
-
-### `createConvolver`
-
-Creates [`ConvolverNode`](/docs/effects/convolver-node).
-
-#### Returns `ConvolverNode`.
-
-### `createDelay`
-
-Creates [`DelayNode`](/docs/effects/delay-node)
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `maxDelayTime` | `number` | Maximum amount of time to buffer delayed values. |
-
-#### Returns `DelayNode`.
-
-### `createGain`
-
-Creates [`GainNode`](/docs/effects/gain-node).
-
-#### Returns `GainNode`.
-
-### `createIIRFilter`
-
-Creates [`IIRFilterNode`](/docs/effects/iir-filter-node).
-
-#### Returns `IIRFilterNode`.
-
-### `createOscillator`
-
-Creates [`OscillatorNode`](/docs/sources/oscillator-node).
-
-#### Returns `OscillatorNode`.
-
-### `createPeriodicWave`
-
-Creates [`PeriodicWave`](/docs/effects/periodic-wave). This waveform specifies a repeating pattern that an [`OscillatorNode`](/docs/sources/oscillator-node) can use to generate its output sound.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `real` | `Float32Array` | An array of cosine terms. |
-| `imag` | `Float32Array` | An array of sine terms. |
-| `constraints` | [`PeriodicWaveConstraints`](/docs/core/base-audio-context#periodicwaveconstraints) | An object that specifies whether normalization is disabled. With normalization enabled, the resulting periodic wave has a maximum peak value of 1 and a minimum peak value of -1. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `InvalidAccessError` | `real` and `imag` arrays do not have the same length. |
-
-#### Returns `PeriodicWave`.
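
The `real` (cosine) and `imag` (sine) coefficients describe one period of the waveform. A sketch of evaluating it at a given phase (an illustrative helper; index 0 is the DC term, index 1 the fundamental):

```typescript
// x(phase) = sum over k of real[k]*cos(2*pi*k*phase) + imag[k]*sin(2*pi*k*phase)
function evaluatePeriodicWave(real: number[], imag: number[], phase: number): number {
  let x = 0;
  for (let k = 0; k < real.length; k++) {
    x += real[k] * Math.cos(2 * Math.PI * k * phase)
       + imag[k] * Math.sin(2 * Math.PI * k * phase);
  }
  return x;
}

// real = [0, 0], imag = [0, 1] describes a plain sine wave,
// which peaks at a quarter of the period:
evaluatePeriodicWave([0, 0], [0, 1], 0.25);
```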
-
-### `createRecorderAdapter`
-
-Creates [`RecorderAdapterNode`](/docs/sources/recorder-adapter-node).
-
-#### Returns `RecorderAdapterNode`.
-
-### `createStereoPanner`
-
-Creates [`StereoPannerNode`](/docs/effects/stereo-panner-node).
-
-#### Returns `StereoPannerNode`.
-
-### `createStreamer`
-
-Creates [`StreamerNode`](/docs/sources/streamer-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `options` | [`StreamerOptions`](/docs/sources/streamer-node#streameroptions) | Streamer options to initialize. |
-
-#### Returns `StreamerNode`.
-
-### `createWaveShaper`
-
-Creates [`WaveShaperNode`](/docs/effects/wave-shaper-node).
-
-#### Returns `WaveShaperNode`.
-
-### `createWorkletNode`
-
-Creates [`WorkletNode`](/docs/worklets/worklet-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `worklet` | `(Array, number) => void` | The worklet to be executed. |
-| `bufferLength` | `number` | The size of the buffer that will be passed to the worklet on each call. |
-| `inputChannelCount` | `number` | The number of channels that the node expects as input (it will get min(expected, provided)). |
-| `workletRuntime` | `AudioWorkletRuntime` | The kind of runtime to use for the worklet. See [worklet runtimes](/docs/worklets/worklets-introduction#what-kind-of-worklets-are-used-in-react-native-audio-api) for details. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `Error` | `react-native-worklet` is not found as a dependency. |
-| `NotSupportedError` | `bufferLength` \< 1. |
-| `NotSupportedError` | `inputChannelCount` is not in range \[1, 32]. |
-
-#### Returns `WorkletNode`.
-
-### `createWorkletSourceNode`
-
-Creates [`WorkletSourceNode`](/docs/worklets/worklet-source-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `worklet` | `(Array, number, number, number) => void` | The worklet to be executed. |
-| `workletRuntime` | `AudioWorkletRuntime` | The kind of runtime to use for the worklet. See [worklet runtimes](/docs/worklets/worklets-introduction#what-kind-of-worklets-are-used-in-react-native-audio-api) for details. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `Error` | `react-native-worklet` is not found as a dependency. |
-
-#### Returns `WorkletSourceNode`.
-
-### `createWorkletProcessingNode`
-
-Creates [`WorkletProcessingNode`](/docs/worklets/worklet-processing-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `worklet` | `(Array, Array, number, number) => void` | The worklet to be executed. |
-| `workletRuntime` | `AudioWorkletRuntime` | The kind of runtime to use for the worklet. See [worklet runtimes](/docs/worklets/worklets-introduction#what-kind-of-worklets-are-used-in-react-native-audio-api) for details. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `Error` | `react-native-worklet` is not found as a dependency. |
-
-#### Returns `WorkletProcessingNode`.
-
-### `decodeAudioData`
-
-Decodes audio data from either a file path or an ArrayBuffer. The optional `sampleRate` parameter lets you resample the decoded audio.
-If not provided, the audio will be automatically resampled to match the audio context's `sampleRate`.
-
-**For the list of supported formats visit [this page](/docs/utils/decoding).**
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `input` | `ArrayBuffer` | `ArrayBuffer` with audio data. |
-| | `string` | Path to a remote or local audio file. |
-| | `number` | Asset module id. |
-| `fetchOptions` | [`RequestInit`](https://github.com/facebook/react-native/blob/ac06f3bdc76a9fd7c65ab899e82bff5cad9b94b6/packages/react-native/src/types/globals.d.ts#L265) | Additional headers parameters when passing a url to fetch. |
-
-#### Returns `Promise`.
-
-Example decoding
-
-```tsx
-const url = ... // url to an audio
-
-const buffer = await audioContext.decodeAudioData(url);
-```
-
-### `decodePCMInBase64`
-
-Decodes base64-encoded PCM audio data.
-
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `base64String` | `string` | Base64-encoded PCM audio data. |
-| `inputSampleRate` | `number` | Sample rate of the input PCM data. |
-| `inputChannelCount` | `number` | Number of channels in the input PCM data. |
-| `isInterleaved` | `boolean` | Whether the PCM data is interleaved. Default is `true`. |
-
-#### Returns `Promise`.
-
-Example decoding with data in base64 format
-
-```tsx
-const data = ... // data encoded in base64 string
-// data is not interleaved (Channel1, Channel1, ..., Channel2, Channel2, ...)
-const buffer = await audioContext.decodePCMInBase64(data, 4800, 2, false);
-```
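
The difference between interleaved and non-interleaved layouts, which the `isInterleaved` flag refers to, can be sketched in plain TypeScript (illustrative code, not part of the library):

```typescript
// Interleaved stereo PCM stores frames as [L0, R0, L1, R1, ...];
// non-interleaved (planar) data stores all of channel 1, then all of channel 2.
function deinterleave(samples: number[], channelCount: number): number[][] {
  const channels: number[][] = Array.from({ length: channelCount }, () => []);
  for (let i = 0; i < samples.length; i++) {
    channels[i % channelCount].push(samples[i]);
  }
  return channels;
}

deinterleave([1, 2, 3, 4], 2); // [[1, 3], [2, 4]]
```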
-
-## Remarks
-
-#### `currentTime`
-
-* The timer starts when the context is created and stops when the context is suspended.
-
-### `ContextState`
-
-**Acceptable values:**
-
-* `suspended` - the audio context has been suspended (with [`suspend`](/docs/core/audio-context#suspend) or [`OfflineAudioContext.suspend`](/docs/core/offline-audio-context#suspend)).
-* `running` - the audio context is running normally.
-* `closed` - the audio context has been closed (with the [`close`](/docs/core/audio-context#close) method).
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/core/offline-audio-context
-# Title: offline-audio-context
-
-# OfflineAudioContext
-
-The `OfflineAudioContext` interface inherits from [`BaseAudioContext`](/docs/core/base-audio-context).
-In contrast with a standard [`AudioContext`](/docs/core/audio-context), it doesn't render audio to the device hardware.
-Instead, it processes the audio as quickly as possible and outputs the result to an [`AudioBuffer`](/docs/sources/audio-buffer).
-
-## Constructor
-
-`OfflineAudioContext(options: OfflineAudioContextOptions)`
-
-```typescript
-interface OfflineAudioContextOptions {
- numberOfChannels: number;
- length: number; // The length of the rendered AudioBuffer, in sample-frames
- sampleRate: number;
-}
-```
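-
-Note that `length` is expressed in sample-frames, not seconds, so to render a given duration you multiply by the sample rate. A small sketch (the `framesForDuration` helper is ours, for illustration):
-
-```tsx
-// Convert a duration in seconds to a length in sample-frames.
-function framesForDuration(seconds: number, sampleRate: number): number {
-  return Math.ceil(seconds * sampleRate);
-}
-
-// 2 seconds of stereo audio at 44.1 kHz:
-const options = {
-  numberOfChannels: 2,
-  length: framesForDuration(2, 44100), // 88200 sample-frames
-  sampleRate: 44100,
-};
-```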
-
-## Properties
-
-`OfflineAudioContext` does not define any additional properties.
-It inherits all properties from [`BaseAudioContext`](/docs/core/base-audio-context#properties).
-
-## Methods
-
-It inherits all methods from [`BaseAudioContext`](/docs/core/base-audio-context#methods).
-
-### `suspend`
-
-Schedules a suspension of the time progression in audio context at the specified time.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `suspendTime` | `number` | A floating-point number specifying the suspend time, in seconds. |
-
-#### Returns `Promise`.
-
-### `resume`
-
-Resume time progression in audio context when it has been suspended.
-
-#### Returns `Promise`
-
-### `startRendering`
-
-Starts rendering the audio, taking into account the current connections and the current scheduled changes.
-
-#### Returns `Promise`.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/destinations/audio-destination-node
-# Title: audio-destination-node
-
-# AudioDestinationNode
-
-The `AudioDestinationNode` interface represents the final destination of an audio graph, where all processed audio is ultimately directed.
-
-In most cases, this means the sound is sent to the system’s default output device, such as speakers or headphones.
-When used with an [`OfflineAudioContext`](/docs/core/offline-audio-context) the rendered audio isn’t played back immediately—instead,
-it is stored in an [`AudioBuffer`](/docs/sources/audio-buffer).
-
-Each `AudioContext` has exactly one AudioDestinationNode, which can be accessed through its
-[`AudioContext.destination`](/docs/core/base-audio-context/#properties) property.
-
-#### [`AudioNode`](/docs/core/audio-node#read-only-properties) properties
-
-## Properties
-
-`AudioDestinationNode` does not define any additional properties.
-It inherits all properties from [`AudioNode`](/docs/core/audio-node), listed above.
-
-## Methods
-
-`AudioDestinationNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node).
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/effects/biquad-filter-node
-# Title: biquad-filter-node
-
-# BiquadFilterNode
-
-The `BiquadFilterNode` interface represents a low-order filter. It is an [`AudioNode`](/docs/core/audio-node) used for tone controls, graphic equalizers, and other audio effects.
-Multiple `BiquadFilterNode` instances can be combined to create more complex filtering chains.
-
-#### [`AudioNode`](/docs/core/audio-node#read-only-properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: BiquadFilterOptions)
-```
-
-### `BiquadFilterOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `Q` | `number` | 1 | Initial value for [`Q`](/docs/effects/biquad-filter-node#properties) |
-| `detune` | `number` | 0 | Initial value for [`detune`](/docs/effects/biquad-filter-node#properties) |
-| `frequency` | `number` | 350 | Initial value for [`frequency`](/docs/effects/biquad-filter-node#properties) |
-| `gain` | `number` | 0 | Initial value for [`gain`](/docs/effects/biquad-filter-node#properties) |
-| `type` | `BiquadFilterType` | `lowpass` | Initial value for [`type`](/docs/effects/biquad-filter-node#properties) |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createBiquadFilter()`](/docs/core/base-audio-context#createbiquadfilter) that creates node with default values.
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Rate | Description |
-| :--: | :--: | :----------: | :-- |
-| `frequency` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) | The filter’s cutoff or center frequency in hertz (Hz). |
-| `detune` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) | Amount by which the frequency is detuned in cents. |
-| `Q` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) | The filter’s Q factor (quality factor). |
-| `gain` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) | Gain applied by specific filter types, in decibels (dB). |
-| `type` | [`BiquadFilterType`](#biquadfiltertype-enumeration-description) | — | Defines the kind of filtering algorithm the node applies (e.g. `"lowpass"`, `"highpass"`). |
-
-#### BiquadFilterType enumeration description
-
-Note: The detune parameter behaves the same way for all filter types, so it is not repeated below.
-
-| `type` | Description | `frequency` | `Q` | `gain` |
-|:------:|:-----------:|:-----------:|:---:|:------:|
-| `lowpass` | Second-order resonant lowpass filter with 12dB/octave rolloff. Frequencies below the cutoff pass through; higher frequencies are attenuated. | The cutoff frequency. | Determines how peaked the frequency is around the cutoff. Higher values result in a sharper peak. | Not used |
-| `highpass` | Second-order resonant highpass filter with 12dB/octave rolloff. Frequencies above the cutoff pass through; lower frequencies are attenuated. | The cutoff frequency. | Determines how peaked the frequency is around the cutoff. Higher values result in a sharper peak. | Not used |
-| `bandpass` | Second-order bandpass filter. Frequencies within a given range pass through; others are attenuated. | The center of the frequency band. | Controls the bandwidth. Higher values result in a narrower band. | Not used |
-| `lowshelf` | Second-order lowshelf filter. Frequencies below the cutoff are boosted or attenuated; others remain unchanged. | The upper limit of the frequencies where the boost (or attenuation) is applied. | Not used | The boost (in dB) to be applied. Negative values attenuate the frequencies.|
-| `highshelf` | Second-order highshelf filter. Frequencies above the cutoff are boosted or attenuated; others remain unchanged. | The lower limit of the frequencies where the boost (or attenuation) is applied. | Not used | The boost (in dB) to be applied. Negative values attenuate the frequencies. |
-| `peaking` | Frequencies around a center frequency are boosted or attenuated; others remain unchanged. | The center of the frequency range where the boost (or an attenuation) is applied. | Controls the bandwidth. Higher values result in a narrower band. | The boost (in dB) to be applied. Negative values attenuate the frequencies. |
-| `notch` | Notch (band-stop) filter. Opposite of a bandpass filter: frequencies around the center are attenuated; others remain unchanged. | The center of the frequency range where the notch is applied. | Controls the bandwidth. Higher values result in a narrower band. | Not used |
-| `allpass` | Second-order allpass filter. All frequencies pass through, but changes the phase relationship between the various frequencies. | The frequency where the center of the phase transition occurs (maximum group delay). | Controls how sharp the phase transition is at the center frequency. Higher values result in a sharper transition and a larger group delay. | Not used |
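-
-To build intuition for what the node computes internally, here is a self-contained sketch of second-order lowpass coefficients in the style of the widely used Audio EQ Cookbook (an illustration, not necessarily the library's exact implementation):
-
-```tsx
-// Second-order lowpass coefficients (Audio EQ Cookbook style), normalized by a0.
-function lowpassCoefficients(frequency: number, Q: number, sampleRate: number) {
-  const w0 = (2 * Math.PI * frequency) / sampleRate;
-  const alpha = Math.sin(w0) / (2 * Q);
-  const cosW0 = Math.cos(w0);
-  const a0 = 1 + alpha;
-  return {
-    b0: (1 - cosW0) / 2 / a0,
-    b1: (1 - cosW0) / a0,
-    b2: (1 - cosW0) / 2 / a0,
-    a1: (-2 * cosW0) / a0,
-    a2: (1 - alpha) / a0,
-  };
-}
-
-// At 0 Hz (DC) a lowpass filter should pass the signal unchanged:
-// gain at DC = (b0 + b1 + b2) / (1 + a1 + a2) = 1
-const c = lowpassCoefficients(350, 1, 44100);
-const dcGain = (c.b0 + c.b1 + c.b2) / (1 + c.a1 + c.a2); // 1
-```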
-
-## Methods
-
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-### `getFrequencyResponse`
-
-| Parameter | Type | Description |
-| :--------: | :--: | :---------- |
-| `frequencyArray` | `Float32Array` | Array of frequencies (in Hz), which you want to filter. |
-| `magResponseOutput` | `Float32Array` | Output array to store the computed linear magnitude values for each frequency. For frequencies outside the range \[0, $\frac{sampleRate}{2}$], the corresponding results are NaN. |
-| `phaseResponseOutput` | `Float32Array` | Output array to store the computed phase response values (in radians) for each frequency. For frequencies outside the range \[0, $\frac{sampleRate}{2}$], the corresponding results are NaN. |
-
-#### Returns `undefined`.
-
-## Remarks
-
-#### `frequency`
-
-* Range: \[10, $\frac{sampleRate}{2}$].
-
-#### `Q`
-
-* Range:
-  * For `lowpass` and `highpass`: \[-Q, Q], where Q is the largest value for which $10^{Q/20}$ does not overflow the single-precision floating-point representation.
-    Numerically: Q ≈ 770.63678.
-  * For `bandpass`, `notch`, `allpass`, and `peaking`: Q is related to the filter’s bandwidth and should be positive.
-  * Not used for `lowshelf` and `highshelf`.
-
-#### `gain`
-
-* Range: \[-40, 40].
-* Positive values correspond to amplification; negative to attenuation.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/effects/convolver-node
-# Title: convolver-node
-
-# ConvolverNode
-
-The `ConvolverNode` interface represents a linear convolution effect that can be applied to a signal given an impulse response.
-This is the easiest way to achieve `echo` or [`reverb`](https://en.wikipedia.org/wiki/Reverb_effect) effects.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-> **Info**
->
-> Convolver is a node with tail-time, which means that it continues to output non-silent audio with zero input for the length of the buffer.
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: ConvolverOptions)
-```
-
-### `ConvolverOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `buffer` | `AudioBuffer` | `null` | Initial value for [`buffer`](/docs/effects/convolver-node#properties). |
-| `normalize` | `boolean` | true | Initial value for [`normalize`](/docs/effects/convolver-node#properties). |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createConvolver()`](/docs/core/base-audio-context#createconvolver)
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :-------- |
-| `buffer` | [`AudioBuffer`](/docs/sources/audio-buffer) | Associated AudioBuffer. |
-| `normalize` | `boolean` | Whether the impulse response from the buffer will be scaled by an equal-power normalization when the buffer attribute is set. |
-
-> **Caution**
->
-> Linear convolution is a computationally heavy process, so if your audio has artefacts that should not be there, try decreasing the duration of the impulse response buffer.
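-
-Conceptually, convolution sums delayed, scaled copies of the impulse response for every input sample. A naive, self-contained sketch of the operation (real implementations typically use FFT-based convolution for performance):
-
-```tsx
-// Naive linear convolution: output length = input.length + impulse.length - 1.
-function convolve(input: number[], impulse: number[]): number[] {
-  const output = new Array(input.length + impulse.length - 1).fill(0);
-  for (let i = 0; i < input.length; i++) {
-    for (let j = 0; j < impulse.length; j++) {
-      output[i + j] += input[i] * impulse[j];
-    }
-  }
-  return output;
-}
-
-// Convolving a unit impulse with an impulse response reproduces the response:
-const echoed = convolve([1, 0, 0], [1, 0.5, 0.25]); // [1, 0.5, 0.25, 0, 0]
-```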
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/effects/delay-node
-# Title: delay-node
-
-# DelayNode
-
-The `DelayNode` interface delays the incoming audio signal by a given amount of time. It is an [`AudioNode`](/docs/core/audio-node) that applies a time shift to the incoming signal, e.g.
-if the `delayTime` value is 0.5, the audio will be played back 0.5 seconds later.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-> **Info**
->
-> Delay is a node with tail-time, which means that it continues to output non-silent audio with zero input for the duration of `delayTime`.
-
-## Constructor
-
-[`BaseAudioContext.createDelay(maxDelayTime?: number)`](/docs/core/base-audio-context#createdelay)
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :-------- |
-| `delayTime`| [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing value of time shift to apply. |
-
-> **Warning**
->
-> In the Web Audio API specification, `delayTime` is an `a-rate` param.
-
-## Methods
-
-`DelayNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-## Remarks
-
-#### `maxDelayTime`
-
-* Default value is 1.0.
-* Nominal range is 0 - 180.
-
-#### `delayTime`
-
-* Default value is 0.
-* Nominal range is 0 - `maxDelayTime`.
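-
-Under the hood, a delay is essentially a circular buffer: each sample is written in and read back `delayTime * sampleRate` frames later. A minimal sketch (illustrative only, not the library's implementation):
-
-```tsx
-// Fixed-length circular delay line: each sample comes out `delayFrames` later.
-class DelayLine {
-  private buffer: number[];
-  private index = 0;
-
-  constructor(delayFrames: number) {
-    this.buffer = new Array(delayFrames).fill(0);
-  }
-
-  process(input: number): number {
-    const output = this.buffer[this.index]; // sample written delayFrames ago
-    this.buffer[this.index] = input;
-    this.index = (this.index + 1) % this.buffer.length;
-    return output;
-  }
-}
-
-// A 3-frame delay shifts an impulse by 3 samples:
-const delay = new DelayLine(3);
-const out = [1, 0, 0, 0, 0].map((s) => delay.process(s)); // [0, 0, 0, 1, 0]
-```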
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/effects/gain-node
-# Title: gain-node
-
-# GainNode
-
-The `GainNode` interface represents a change in volume (amplitude) of the audio signal. It is an [`AudioNode`](/docs/core/audio-node) with a single `gain` [`AudioParam`](/docs/core/audio-param) that multiplies every sample passing through it.
-
-> **Tip**
->
-> Direct, immediate gain changes often cause audible clicks. Use the scheduling methods of [`AudioParam`](/docs/core/audio-param) (e.g. `linearRampToValueAtTime`, `exponentialRampToValueAtTime`) to smoothly interpolate volume transitions.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: GainOptions)
-```
-
-### `GainOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `gain` | `number` | `1.0` | Initial value for [`gain`](/docs/effects/gain-node#properties) |
-
-You can also create a `GainNode` via the [`BaseAudioContext.createGain()`](/docs/core/base-audio-context#creategain) factory method, which uses default values.
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description | |
-| :----: | :----: | :-------- | :-: |
-| `gain` | [`AudioParam`](/docs/core/audio-param) | [`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing the gain value to apply. | |
-
-## Methods
-
-`GainNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-## Usage
-
-A common use case is controlling the master volume of an audio graph:
-
-```tsx
-const audioContext = new AudioContext();
-const gainNode = audioContext.createGain();
-
-// Set volume to 50%
-gainNode.gain.setValueAtTime(0.5, audioContext.currentTime);
-
-// Connect source → gain → output
-source.connect(gainNode);
-gainNode.connect(audioContext.destination);
-```
-
-To fade in a sound over 2 seconds:
-
-```tsx
-gainNode.gain.setValueAtTime(0, audioContext.currentTime);
-gainNode.gain.linearRampToValueAtTime(1, audioContext.currentTime + 2);
-```
-
-## Remarks
-
-#### `gain`
-
-* Nominal range is -∞ to ∞.
-* Values greater than `1.0` amplify the signal; values between `0` and `1.0` attenuate it.
-* A value of `0` silences the signal. Negative values invert the signal phase.
-
-## Advanced usage — Envelope (ADSR)
-
-`GainNode` is the key building block for implementing sound envelopes. For a practical, step-by-step walkthrough of ADSR envelopes and how to apply them in a real app, see the [Making a piano keyboard](/docs/guides/making-a-piano-keyboard#envelopes-) guide.
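-
-As a taste of what the guide covers, the attack/decay/sustain portion of an envelope can be expressed as a plain function of time since note-on (a simplified sketch; the `adsHoldGain` helper name is ours):
-
-```tsx
-// Envelope gain while a note is held: attack ramps 0 -> 1, decay ramps 1 -> sustain.
-function adsHoldGain(t: number, attack: number, decay: number, sustain: number): number {
-  if (t < attack) return t / attack;
-  if (t < attack + decay) return 1 - ((1 - sustain) * (t - attack)) / decay;
-  return sustain;
-}
-
-// attack = 0.1 s, decay = 0.2 s, sustain level = 0.5:
-adsHoldGain(0.05, 0.1, 0.2, 0.5); // 0.5 (halfway through the attack ramp)
-adsHoldGain(1.0, 0.1, 0.2, 0.5); // 0.5 (holding at the sustain level)
-```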
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/effects/iir-filter-node
-# Title: iir-filter-node
-
-# IIRFilterNode
-
-The `IIRFilterNode` interface represents a general infinite impulse response (IIR) filter.
-It is an [`AudioNode`](/docs/core/audio-node) used for tone controls, graphic equalizers, and other audio effects.
-`IIRFilterNode` lets the parameters of the filter response be specified, so that it can be tuned as needed.
-
-In general, it is recommended to use [`BiquadFilterNode`](/docs/effects/biquad-filter-node) for implementing higher-order filters,
-as it is less sensitive to numeric issues and its parameters can be automated. You can create all even-order IIR filters with `BiquadFilterNode`,
-but if odd-ordered filters are needed or automation is not needed, then `IIRFilterNode` may be appropriate.
-
-## Constructor
-
-[`BaseAudioContext.createIIRFilter(options: IIRFilterNodeOptions)`](/docs/core/base-audio-context#createiirfilter)
-
-```typescript
-interface IIRFilterNodeOptions {
- feedforward: number[]; // array of floating-point values specifying the feedforward (numerator) coefficients
- feedback: number[]; // array of floating-point values specifying the feedback (denominator) coefficients
-}
-```
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `NotSupportedError` | One or both of the input arrays exceeds 20 members. |
-| `InvalidStateError` | All of the feedforward coefficients are 0, or the first feedback coefficient is 0. |
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-## Methods
-
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-### `getFrequencyResponse`
-
-| Parameter | Type | Description |
-| :--------: | :--: | :---------- |
-| `frequencyArray` | `Float32Array` | Array of frequencies (in Hz), which you want to filter. |
-| `magResponseOutput` | `Float32Array` | Output array to store the computed linear magnitude values for each frequency. For frequencies outside the range \[0, $\frac{sampleRate}{2}$], the corresponding results are NaN. |
-| `phaseResponseOutput` | `Float32Array` | Output array to store the computed phase response values (in radians) for each frequency. For frequencies outside the range \[0, $\frac{sampleRate}{2}$], the corresponding results are NaN. |
-
-#### Returns `undefined`.
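-
-The values that `getFrequencyResponse` computes follow from evaluating the filter's transfer function on the unit circle. A self-contained sketch of the underlying math (not the library's implementation):
-
-```tsx
-// |H(e^jw)| for H(z) = (sum of b[n] z^-n) / (sum of a[n] z^-n), w = 2*pi*f / sampleRate.
-function magnitudeResponse(
-  feedforward: number[],
-  feedback: number[],
-  frequency: number,
-  sampleRate: number
-): number {
-  const w = (2 * Math.PI * frequency) / sampleRate;
-  const evaluate = (coeffs: number[]) => {
-    let re = 0;
-    let im = 0;
-    coeffs.forEach((coeff, n) => {
-      re += coeff * Math.cos(n * w);
-      im -= coeff * Math.sin(n * w);
-    });
-    return Math.hypot(re, im);
-  };
-  return evaluate(feedforward) / evaluate(feedback);
-}
-
-// feedforward = [1], feedback = [1] is the identity filter: gain 1 at any frequency.
-magnitudeResponse([1], [1], 1000, 44100); // 1
-```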
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/effects/periodic-wave
-# Title: periodic-wave
-
-# PeriodicWave
-
-The `PeriodicWave` interface defines a periodic waveform that can be used to shape the output of an OscillatorNode.
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options: PeriodicWaveOptions)
-```
-
-### `PeriodicWaveOptions`
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `real` | `Float32Array` | - | [Cosine terms](/docs/core/base-audio-context#createperiodicwave) |
-| `imag` | `Float32Array` | - | [Sine terms](/docs/core/base-audio-context#createperiodicwave) |
-| `disableNormalization` | `boolean` | false | Whether the periodic wave is [normalized](/docs/core/base-audio-context#createperiodicwave) or not. |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createPeriodicWave(real, imag, constraints?: PeriodicWaveConstraints)`](/docs/core/base-audio-context#createperiodicwave)
-
-## Properties
-
-None. `PeriodicWave` has no own or inherited properties.
-
-## Methods
-
-None. `PeriodicWave` has no own or inherited methods.
-
-## Remarks
-
-#### `real` and `imag`
-
-* If only one of them is specified, the other is treated as an array of zeros of the same length.
-* If neither is given, the result is equivalent to a sine wave.
-* If both are given, they must have the same length.
-* To see how the values correspond to the output waveform, see the [Web Audio API specification](https://webaudio.github.io/web-audio-api/#waveform-generation).
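-
-The mapping from coefficients to an output waveform can be sketched directly: harmonic k contributes `real[k] * cos(2*pi*k*t) + imag[k] * sin(2*pi*k*t)` over one period (a simplified illustration, omitting the spec's normalization step; the `periodicWaveAt` helper is ours):
-
-```tsx
-// Evaluate one period (t in [0, 1)) of a wave defined by Fourier coefficients.
-// Element 0 (the DC offset) is ignored, as in the Web Audio API.
-function periodicWaveAt(real: number[], imag: number[], t: number): number {
-  let value = 0;
-  for (let k = 1; k < Math.max(real.length, imag.length); k++) {
-    value += (real[k] ?? 0) * Math.cos(2 * Math.PI * k * t);
-    value += (imag[k] ?? 0) * Math.sin(2 * Math.PI * k * t);
-  }
-  return value;
-}
-
-// real = [0, 0], imag = [0, 1] describes a plain sine wave:
-periodicWaveAt([0, 0], [0, 1], 0.25); // 1 (sine peak at a quarter period)
-```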
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/effects/stereo-panner-node
-# Title: stereo-panner-node
-
-# StereoPannerNode
-
-The `StereoPannerNode` interface represents the change in ratio between two output channels (e.g. the left and right speakers).
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, stereoPannerOptions?: StereoPannerOptions)
-```
-
-### `StereoPannerOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `pan` | `number` | 0 | Number representing the pan value |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createStereoPanner()`](/docs/core/base-audio-context#createstereopanner)
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description |
-| :--: | :--: | :---------- |
-| `pan` | [`AudioParam`](/docs/core/audio-param) | [`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing how the audio signal is distributed between the left and right channels. |
-
-## Methods
-
-`StereoPannerNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-## Remarks
-
-#### `pan`
-
-* Default value is 0
-* Nominal range is -1 (only left channel) to 1 (only right channel).
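-
-For a mono input, the Web Audio specification maps `pan` onto left/right gains with an equal-power curve, which keeps the perceived loudness constant as the sound moves. A sketch of that mapping:
-
-```tsx
-// Equal-power pan gains for a mono input, pan in [-1, 1].
-function panGains(pan: number): { left: number; right: number } {
-  const x = ((pan + 1) / 2) * (Math.PI / 2);
-  return { left: Math.cos(x), right: Math.sin(x) };
-}
-
-panGains(-1); // { left: 1, right: 0 } (only the left channel)
-panGains(0); // left and right both ~0.707, keeping total power constant
-```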
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/effects/wave-shaper-node
-# Title: wave-shaper-node
-
-# WaveShaperNode
-
-The `WaveShaperNode` interface represents a non-linear signal distortion effect.
-Non-linear distortion is commonly used for anything from subtle warming of the signal to heavy, obvious distortion effects.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, waveShaperOptions?: WaveShaperOptions)
-```
-
-### `WaveShaperOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `curve` | `Float32Array` | `null` | Array representing curve values |
-| `oversample` | [`OverSampleType`](/docs/effects/wave-shaper-node#oversampletype) | `none` | Value representing oversample property |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createWaveShaper()`](/docs/core/base-audio-context#createwaveshaper)
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description |
-| :--: | :--: | :---------- |
-| `curve` | `Float32Array \| null` | The shaping curve used for waveshaping effect. |
-| `oversample` | [`OverSampleType`](/docs/effects/wave-shaper-node#oversampletype) | Specifies what type of oversampling should be used when applying shaping curve. |
-
-## Methods
-
-`WaveShaperNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-## Remarks
-
-#### `curve`
-
-* Default value is `null`.
-* Must contain at least two values.
-* Modifying the array after it is assigned has no effect. To change the curve, assign a new `Float32Array` object to this property.
-
-#### `oversample`
-
-* Default value is `none`.
-* A value of `2x` or `4x` can increase the quality of the effect, but in some cases, when a very accurate shaping curve is needed, it is better not to use oversampling.
-
-### `OverSampleType`
-
-Type definitions
-
-```typescript
-// Do not oversample | Oversample two times | Oversample four times
-type OverSampleType = 'none' | '2x' | '4x';
-```
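-
-The curve is applied by mapping each input sample from [-1, 1] onto an index into the curve array and interpolating between neighbouring values. A simplified sketch of that mapping (edge handling in the library may differ; the `shapeSample` helper is ours):
-
-```tsx
-// Map an input sample in [-1, 1] through a shaping curve with linear interpolation.
-function shapeSample(sample: number, curve: Float32Array): number {
-  const n = curve.length;
-  const position = ((sample + 1) / 2) * (n - 1); // -1 -> 0, 1 -> n - 1
-  const low = Math.max(0, Math.min(n - 2, Math.floor(position)));
-  const fraction = position - low;
-  return curve[low] * (1 - fraction) + curve[low + 1] * fraction;
-}
-
-// A straight line from -1 to 1 leaves the signal unchanged:
-const identity = new Float32Array([-1, 0, 1]);
-shapeSample(0.5, identity); // 0.5
-```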
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/fundamentals/best-practices
-# Title: best-practices
-
-# Best Practices
-
-When working with audio in a web or mobile application, following best practices ensures optimal performance,
-user experience, and maintainability. Here are some key best practices to consider when using the React Native Audio API:
-
-## [**AudioContext**](/docs/core/audio-context) Management
-
-* **Single Audio Context**: Create one instance of `AudioContext` in order to easily and efficiently manage the audio layer's state in your application.
-  Creating many instances could lead to undefined behavior. Some of them could still be in the [`running`](/docs/core/base-audio-context#contextstate) state while others could be
-  [`suspended`](/docs/core/base-audio-context#contextstate) or [`closed`](/docs/core/base-audio-context#contextstate), if you do not manage them yourself.
-
-* **Clean up**: Always close the `AudioContext` using the [`close()`](/docs/core/audio-context#close) method when it is no longer needed.
- This releases system audio resources and prevents memory leaks.
-
-* **Suspend when not in use**: Suspend the `AudioContext` when audio is not needed to save system resources and battery life, especially on mobile devices.
- Running `AudioContext` is still playing silence even if there is no playing source node connected to the [`destination`](/docs/core/base-audio-context#properties).
-  Additionally, on iOS devices, the state of the `AudioContext` is directly related to the state of the lock screen. If a running `AudioContext` exists, it is impossible to set the lock screen state to [`state_paused`](/docs/system/audio-manager#lockscreeninfo).
-
-## React hooks vs React Native Audio API
-
-* **Create singleton class to manage audio layer**: Instead of storing `AudioContext` or nodes directly in your React components using `useState` or `useRef`,
- consider creating a singleton class that encapsulates the audio layer logic using React Native Audio API.
- This class can manage the lifecycle of the `AudioContext`, handle audio nodes, and provide methods for playing, pausing, and stopping audio.
- This approach promotes separation of concerns and makes it easier to manage audio state across your application.
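-
-A minimal sketch of that singleton pattern (the `AudioEngine` class and its members are hypothetical; a real implementation would create and own the `AudioContext` and expose play, pause, and stop methods):
-
-```tsx
-// Hypothetical singleton wrapping the app's audio layer.
-class AudioEngine {
-  private static instance: AudioEngine | null = null;
-
-  // In a real app this constructor would create the AudioContext
-  // and any long-lived nodes (e.g. a master GainNode).
-  private constructor() {}
-
-  static getInstance(): AudioEngine {
-    if (AudioEngine.instance === null) {
-      AudioEngine.instance = new AudioEngine();
-    }
-    return AudioEngine.instance;
-  }
-}
-
-// Every call returns the same instance, so the whole app shares one audio layer:
-AudioEngine.getInstance() === AudioEngine.getInstance(); // true
-```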
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/fundamentals/getting-started
-# Title: getting-started
-
-# Getting started
-
-The goal of *Fundamentals* is to guide you through the setup process of the Audio API, as well as to show the basic concepts behind audio programming using a web audio framework, giving you the confidence to explore more advanced use cases on your own. This section is packed with interactive examples, code snippets, and explanations. Are you ready? Let's make some noise!
-
-## Installation
-
-It takes only a few steps to add Audio API to your project:
-
-### Step 1: Install the package
-
-Install the `react-native-audio-api` package from npm:
-
-```sh
-npx expo install react-native-audio-api
-```
-
-```sh
-npm install react-native-audio-api
-```
-
-```sh
-yarn add react-native-audio-api
-```
-
-### Step 2: Add Audio API expo plugin (optional)
-
-Add `react-native-audio-api` expo plugin to your `app.json` or `app.config.js`.
-
-app.json
-
-```javascript
-{
- "plugins": [
- [
- "react-native-audio-api",
- {
- "iosBackgroundMode": true,
- "iosMicrophonePermission": "This app requires access to the microphone to record audio.",
- "androidPermissions" : [
- "android.permission.MODIFY_AUDIO_SETTINGS",
- "android.permission.FOREGROUND_SERVICE",
- "android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK"
- ],
- "androidForegroundService": true,
- "androidFSTypes": [
- "mediaPlayback"
- ]
- }
- ]
- ]
-}
-```
-
-app.config.js
-
-```javascript
-export default {
- ...
- "plugins": [
- [
- "react-native-audio-api",
- {
- "iosBackgroundMode": true,
- "iosMicrophonePermission": "This app requires access to the microphone to record audio.",
- "androidPermissions" : [
- "android.permission.MODIFY_AUDIO_SETTINGS",
- "android.permission.FOREGROUND_SERVICE",
- "android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK"
- ],
- "androidForegroundService": true,
- "androidFSTypes": [
- "mediaPlayback"
- ]
- }
- ]
- ]
-};
-```
-
-#### Special permissions
-
-If you plan to use [`AudioRecorder`](/docs/inputs/audio-recorder), the `iosMicrophonePermission` entry and `android.permission.RECORD_AUDIO` in the `androidPermissions` section are **MANDATORY**.
-
-> **Info**
->
-> If your app is not managed by Expo, see the [non-expo-permissions page](/docs/other/non-expo-permissions) for how to handle permissions.
-
-Read more about the plugin [here](/docs/other/audio-api-plugin)!
-
-### Step 3: Install system-wide bash (only Windows OS)
-
-There are many ways to do that, e.g. using Git Bash. To verify, simply test whether a Unix command works:
-
-```bash
-bash -c 'echo Hello World!'
-```
-
-### Possible additional dependencies
-
-If you plan to use any of [`WorkletNode`](/docs/worklets/worklet-node), [`WorkletSourceNode`](/docs/worklets/worklet-source-node), or [`WorkletProcessingNode`](/docs/worklets/worklet-processing-node), you are required to have the
-`react-native-worklets` library set up, version 0.6.0 or higher. See the [worklets getting-started page](https://docs.swmansion.com/react-native-worklets/docs/) for instructions.
-
-> **Info**
->
-> If you are not planning to use any of the mentioned nodes, the `react-native-worklets` dependency is **OPTIONAL** and your app will build successfully without it.
-
-### Usage with expo
-
-`react-native-audio-api` contains custom native code and isn't part of the Expo Go application. To be available in Expo managed builds, you have to use an Expo development build. The simplest way to start a local Expo development build is to run:
-
-```sh
-npx expo run:ios
-```
-
-```sh
-npx expo run:android
-```
-
-To learn more about expo development builds, please check out [Development Builds Documentation](https://docs.expo.dev/develop/development-builds/introduction/).
-
-#### Android
-
-No further steps are necessary.
-
-#### iOS
-
-While developing for iOS, make sure to install [pods](https://cocoapods.org) first before running the app:
-
-```sh
-cd ios && pod install && cd ..
-```
-
-#### Web
-
-No further steps are necessary.
-
-> **Caution**
->
-> `react-native-audio-api` on the web exposes the browser's built-in Web Audio API, but for compatibility between platforms, it limits the available interfaces to APIs that are implemented on iOS and Android.
-
-### Clear Metro bundler cache (recommended)
-
-```sh
-npx expo start -c
-```
-
-```sh
-npm start -- --reset-cache
-```
-
-```sh
-yarn start --reset-cache
-```
-
-## What's next?
-
-In [the next section](/docs/guides/lets-make-some-noise), we will learn how to prepare the Audio API and play some sound!
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/fundamentals/introduction
-# Title: introduction
-
-# Introduction
-
-React Native Audio API is an imperative, high-level API for processing and synthesizing audio in React Native Applications. React Native Audio API follows the [Web Audio Specification](https://www.w3.org/TR/webaudio-1.1/) making it easier to write audio-heavy applications for iOS, Android and Web with just one codebase.
-
-## Highlights
-
-* Supports react-native, react-native-web or any web react based project
-* API strictly follows the Web Audio API standard
-* Blazingly fast, all of the Audio API core is written in C++ to deliver the best performance possible
-* Truly native, we use the most up-to-date native APIs such as AVFoundation, CoreAudio or Oboe
-* Modular routing architecture to fit simple (and complex) use-cases
-* Sample-accurate scheduled sound playback with low-latency for musical applications requiring the highest degree of rhythmic precision.
-* Efficient real-time time-domain and frequency-domain analysis / visualization
-* Efficient biquad filters for the most common filtering methods.
-* Support for computational audio synthesis
-
-## Motivation
-
-By aligning with the Web Audio specification, we're creating a single API that works seamlessly across native iOS, Android, browsers, and even standalone desktop applications. The React Native ecosystem currently lacks a high-performance API for creating audio, adding effects, or controlling basic parameters like volume for each audio separately - and we're here to bridge that gap!
-
-## Alternatives
-
-### Expo Audio
-
-[Expo Audio](https://docs.expo.dev/versions/latest/sdk/audio/) might be a better fit for you, if you are looking for simple playback functionality, as its simple and well documented API makes it easy to use.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/guides/create-your-own-effect
-# Title: create-your-own-effect
-
-# Create your own effect
-
-In this section, we will create our own [`pure C++ turbo-module`](https://reactnative.dev/docs/the-new-architecture/pure-cxx-modules) and use it to build a custom processing node that can alter the sound however you want.
-
-### Prerequisites
-
-We highly encourage you to get familiar with [this guide](https://reactnative.dev/docs/the-new-architecture/pure-cxx-modules), since we will be using many similar concepts that are explained there.
-
-## Generate files
-
-We prepared a script that generates all of the boilerplate code for you.
-The only parts left for you are:
-
-* customizing the processor to your needs
-* configuring [`codegen`](https://reactnative.dev/docs/the-new-architecture/what-is-codegen) with your project
-* writing native-specific code to compile those files
-
-```bash
-# -o: path where you want the files to be generated, usually at the same level as android/ and ios/
-npx rn-audioapi-custom-node-generator create -o <output-path>
-```
-
-## Analyzing generated files
-
-You should see two directories:
-
-* `shared/` - contains the C++ files (the source code for the custom effect and the JSI layer - Host Objects needed to communicate with JavaScript)
-* `specs/` - defines the TypeScript interface that will invoke the C++ code from JavaScript
-
-> **Caution**
->
-> The name of the file in `specs/` has to start with `Native` to be picked up by codegen.
-
-The most important file is `MyProcessorNode.cpp`; it contains the main processing logic that directly manipulates the raw audio data.
-
-In this guide, we will edit files in order to achieve [`GainNode`](/docs/effects/gain-node) functionality.
-For the sake of simplicity, we will store the value as a raw `double`, not wrapped in an [`AudioParam`](/docs/core/audio-param).
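-Conceptually, the processing we are about to implement boils down to multiplying every sample by the gain value. As a plain TypeScript sketch of the same idea (the `applyGain` helper is purely illustrative, not part of the library):
-
-```typescript
-// Multiply every sample in a block by a constant gain factor -
-// the same per-channel operation the C++ processNode below performs.
-function applyGain(samples: Float32Array, gain: number): Float32Array {
-  const out = new Float32Array(samples.length);
-  for (let i = 0; i < samples.length; i += 1) {
-    out[i] = samples[i] * gain;
-  }
-  return out;
-}
-```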
-
-MyProcessorNode.h
-
-```cpp
-#pragma once
-#include <memory>
-
-#include "AudioNode.h"
-
-namespace audioapi {
-class AudioBus;
-
-class MyProcessorNode : public AudioNode {
-public:
-  explicit MyProcessorNode(const std::shared_ptr<BaseAudioContext> &context);
-
-protected:
-  std::shared_ptr<AudioBus>
-  processNode(const std::shared_ptr<AudioBus> &bus,
-              int framesToProcess) override;
-
-// highlight-start
-private:
-  double gain; // value responsible for the gain amount
-// highlight-end
-};
-} // namespace audioapi
-```
-
-MyProcessorNode.cpp
-
-```cpp
-#include "MyProcessorNode.h"
-#include "AudioBus.h"
-#include "BaseAudioContext.h"
-
-namespace audioapi {
-MyProcessorNode::MyProcessorNode(const std::shared_ptr<BaseAudioContext> &context)
-    //highlight-next-line
-    : AudioNode(context), gain(0.5) {
-  isInitialized_.store(true, std::memory_order_release);
-}
-
-std::shared_ptr<AudioBus> MyProcessorNode::processNode(
-    const std::shared_ptr<AudioBus> &bus, int framesToProcess) {
-  // highlight-start
-  for (int channel = 0; channel < bus->getNumberOfChannels(); ++channel) {
-    auto *audioArray = bus->getChannel(channel);
-    for (int i = 0; i < framesToProcess; ++i) {
-      // Apply gain to each sample in the channel
-      (*audioArray)[i] *= gain;
-    }
-  }
-  // highlight-end
-
-  return bus;
-}
-} // namespace audioapi
-```
-
-MyProcessorNodeHostObject.h
-
-```cpp
-#pragma once
-
-#include "MyProcessorNode.h"
-#include "AudioNodeHostObject.h"
-
-#include <memory>
-#include <utility>
-
-namespace audioapi {
-using namespace facebook;
-
-class MyProcessorNodeHostObject : public AudioNodeHostObject {
-public:
- explicit MyProcessorNodeHostObject(
-      const std::shared_ptr<MyProcessorNode> &node)
- : AudioNodeHostObject(node) {
- // highlight-start
- addGetters(JSI_EXPORT_PROPERTY_GETTER(MyProcessorNodeHostObject, getter));
- addSetters(JSI_EXPORT_PROPERTY_SETTER(MyProcessorNodeHostObject, setter));
- // highlight-end
- }
-
- // highlight-start
- JSI_PROPERTY_GETTER(getter) {
- auto processorNode = std::static_pointer_cast<MyProcessorNode>(node_);
- return {processorNode->someGetter()};
- }
- // highlight-end
-
- // highlight-start
- JSI_PROPERTY_SETTER(setter) {
- auto processorNode = std::static_pointer_cast<MyProcessorNode>(node_);
- processorNode->someSetter(value.getNumber());
- }
- // highlight-end
-};
-} // namespace audioapi
-```
-
-## Codegen
-
-Configuring codegen requires nothing beyond the steps in the basic [react-native tutorial](https://reactnative.dev/docs/the-new-architecture/pure-cxx-modules#2-configure-codegen).
-
-## Native files
-
-### iOS
-
-On iOS, there is likewise nothing more to do than follow the [react-native tutorial](https://reactnative.dev/docs/the-new-architecture/pure-cxx-modules#ios).
-
-### Android
-
-The Android case is different: because of the way Android projects are compiled, we need to compile our library together with the whole turbo-module.
-First, follow [the guide](https://reactnative.dev/docs/the-new-architecture/pure-cxx-modules#android), but replace `CMakeLists.txt` with this content:
-
-```cmake
-cmake_minimum_required(VERSION 3.13)
-
-project(appmodules)
-
-set(ROOT ${CMAKE_SOURCE_DIR}/../../../../..)
-set(AUDIO_API_DIR ${ROOT}/node_modules/react-native-audio-api)
-
-include(${REACT_ANDROID_DIR}/cmake-utils/ReactNative-application.cmake)
-
-target_sources(${CMAKE_PROJECT_NAME} PRIVATE
- ${ROOT}/shared/NativeAudioProcessingModule.cpp
- ${ROOT}/shared/MyProcessorNode.cpp
- ${ROOT}/shared/MyProcessorNodeHostObject.cpp
-)
-
-target_include_directories(${CMAKE_PROJECT_NAME} PUBLIC
- ${ROOT}/shared
- ${AUDIO_API_DIR}/common/cpp
-)
-
-add_library(react-native-audio-api SHARED IMPORTED)
-string(TOLOWER ${CMAKE_BUILD_TYPE} BUILD_TYPE_LOWER)
-# we need to import built library from android directory
-set_target_properties(react-native-audio-api PROPERTIES IMPORTED_LOCATION
- ${AUDIO_API_DIR}/android/build/intermediates/merged_native_libs/${BUILD_TYPE_LOWER}/merge${CMAKE_BUILD_TYPE}NativeLibs/out/lib/${CMAKE_ANDROID_ARCH_ABI}/libreact-native-audio-api.so
-)
-target_link_libraries(${CMAKE_PROJECT_NAME} react-native-audio-api android log)
-```
-
-The last required step is to add the following lines to the `build.gradle` file located in the `android/app` directory.
-
-```groovy
-evaluationDependsOn(":react-native-audio-api")
-
-afterEvaluate {
- tasks.getByName("buildCMakeDebug").dependsOn(findProject(":react-native-audio-api").tasks.getByName("mergeDebugNativeLibs"))
- tasks.getByName("buildCMakeRelWithDebInfo").dependsOn(findProject(":react-native-audio-api").tasks.getByName("mergeReleaseNativeLibs"))
-}
-```
-
-Since `CMakeLists.txt` depends on `libreact-native-audio-api.so`, we need to make sure the app build runs only after the library has been built.
-
-## Final touches
-
-The last step is to onboard your custom module into your app by creating a TypeScript wrapper that maps onto the C++ layer.
-
-```typescript
-// types.ts
-import { AudioNode, BaseAudioContext } from "react-native-audio-api";
-import { IAudioNode, IBaseAudioContext } from "react-native-audio-api/lib/typescript/interfaces";
-
-export interface IMyProcessorNode extends IAudioNode {
- gain: number;
-}
-
-export class MyProcessorNode extends AudioNode {
- constructor(context: BaseAudioContext, node: IMyProcessorNode) {
- super(context, node);
- }
-
- public set gain(value: number) {
- (this.node as IMyProcessorNode).gain = value;
- }
-
- public get gain(): number {
- return (this.node as IMyProcessorNode).gain;
- }
-}
-
-declare global {
- var createCustomProcessorNode: (context: IBaseAudioContext) => IMyProcessorNode;
-}
-```
-
-## Example
-
-```tsx
-import {
- AudioContext,
- OscillatorNode,
-} from 'react-native-audio-api';
-import { MyProcessorNode } from './types';
-
-function App() {
- const audioContext = new AudioContext();
- const oscillator = audioContext.createOscillator();
- // constructor is put in global scope
- const processor = new MyProcessorNode(audioContext, global.createCustomProcessorNode(audioContext.context));
- oscillator.connect(processor);
- processor.connect(audioContext.destination);
- oscillator.start(audioContext.currentTime);
-}
-```
-
-**Check out fully working [demo app](https://github.com/software-mansion-labs/custom-processor-node-example)**
-
-## What's next?
-
-I’m not sure, but give yourself a pat on the back – you’ve earned it! More guides are on the way, so stay tuned! 🎼
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/guides/lets-make-some-noise
-# Title: lets-make-some-noise
-
-# Let's make some noise!
-
-In this section, we will guide you through the basic concepts of Audio API. We are going to use core audio components such as [`AudioContext`](/docs/core/audio-context) and [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) to simply play sound from a file, which will help you develop a basic understanding of the library.
-
-## Using audio context
-
-Let's start by bootstrapping a simple application with a play button and creating our first instance of `AudioContext` object.
-
-```jsx
-import React from 'react';
-import { View, Button } from 'react-native';
-// highlight-next-line
-import { AudioContext } from 'react-native-audio-api';
-
-export default function App() {
- const handlePlay = async () => {
- // highlight-next-line
- const audioContext = new AudioContext();
- };
-
-  return (
-    <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
-      <Button onPress={handlePlay} title="Play sound!" />
-    </View>
-  );
-}
-```
-
-`AudioContext` is an object that controls both the creation of the nodes and the execution of the audio processing or decoding.
-
-## Loading an audio file
-
-Before we can play anything, we need to gain access to some audio data. For the purpose of this guide, we will first download it from a remote source.
-
-```jsx
-import React from 'react';
-import { View, Button } from 'react-native';
-import { AudioContext } from 'react-native-audio-api';
-
-export default function App() {
- const handlePlay = async () => {
- const audioContext = new AudioContext();
- // highlight-start
- const audioBuffer = await audioContext.decodeAudioData('https://software-mansion.github.io/react-native-audio-api/audio/music/example-music-02.mp3');
- // highlight-end
- };
-
-  return (
-    <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
-      <Button onPress={handlePlay} title="Play sound!" />
-    </View>
-  );
-}
-```
-
-We have used the [`decodeAudioData`](/docs/core/base-audio-context#decodeaudiodata) method of the [`BaseAudioContext`](/docs/core/base-audio-context), which takes a URL to an audio asset (local, bundled, or remote) and decodes it into raw audio data that can be used within our system.
-
-## Play the audio
-
-The final step is to create an [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node), connect it to the `AudioContext's` destination, and start playing the sound. For the purpose of this guide, we will play the sound for just 10 seconds.
-
-```jsx {10-11,13-15}
-import React from 'react';
-import { View, Button } from 'react-native';
-import { AudioContext } from 'react-native-audio-api';
-
-export default function App() {
- const handlePlay = async () => {
- const audioContext = new AudioContext();
- const audioBuffer = await audioContext.decodeAudioData('https://software-mansion.github.io/react-native-audio-api/audio/music/example-music-02.mp3');
-
- const playerNode = audioContext.createBufferSource();
- playerNode.buffer = audioBuffer;
-
- playerNode.connect(audioContext.destination);
- playerNode.start(audioContext.currentTime);
- playerNode.stop(audioContext.currentTime + 10);
- };
-
-  return (
-    <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
-      <Button onPress={handlePlay} title="Play sound!" />
-    </View>
-  );
-}
-```
-
-And that's it! You have just played your first sound using react-native-audio-api. You can hear how it works in the live example below:
-
-## Summary
-
-In this guide, we have learned how to create a simple audio player using [`AudioContext`](/docs/core/audio-context) and [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) as well as how we can load audio data from a remote source. To sum up:
-
-* `AudioContext` is the main object that controls the audio graph.
-* the [`decodeAudioData`](/docs/core/base-audio-context#decodeaudiodata) method can be used to load audio data from a remote resource in the form of an [`AudioBuffer`](/docs/sources/audio-buffer).
-* `AudioBufferSourceNode` can be used with any `AudioBuffer`.
-* In order to hear the sounds, we need to connect the source node to the destination node exposed by `AudioContext`.
-* We can control the playback of the sound using [`start`](/docs/sources/audio-buffer-source-node#start) and [`stop`](/docs/sources/audio-scheduled-source-node#stop) methods of the `AudioBufferSourceNode` (and other source nodes, which we will show later).
-
-## What's next?
-
-In [the next section](/docs/guides/making-a-piano-keyboard), we will learn more about how the audio graph works, what audio parameters are, and how we can use them to create a simple piano keyboard.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/guides/making-a-piano-keyboard
-# Title: making-a-piano-keyboard
-
-# Making a piano keyboard
-
-In this section, we will use some of the core Audio API interfaces to create a simple piano keyboard. We will learn what an [`AudioParam`](/docs/core/audio-param) is and how to use it to change the pitch of the sound.
-
-## Base application
-
-Like in the previous example, we will start with a simple app with a couple of buttons so we don't need to worry about the UI later.
-You can just copy and paste the code below to your project.
-
-```tsx
-import React from 'react';
-import { View, Text, Pressable } from 'react-native';
-
-type KeyName = 'A' | 'B' | 'C' | 'D' | 'E';
-
-interface ButtonProps {
- keyName: KeyName;
- onPressIn: (key: KeyName) => void;
- onPressOut: (key: KeyName) => void;
-}
-
-const Keys: KeyName[] = ['A', 'B', 'C', 'D', 'E'];
-
-const Button = ({ onPressIn, onPressOut, keyName }: ButtonProps) => (
-  <Pressable
-    onPressIn={() => onPressIn(keyName)}
-    onPressOut={() => onPressOut(keyName)}
-    style={({ pressed }) => ({
-      margin: 4,
-      padding: 12,
-      borderRadius: 2,
-      backgroundColor: pressed ? '#d2e6ff' : '#abcdef',
-    })}
-  >
-    <Text>{`${keyName}`}</Text>
-  </Pressable>
-);
-
-export default function SimplePiano() {
-  const onKeyPressIn = (which: KeyName) => {};
-  const onKeyPressOut = (which: KeyName) => {};
-
-  return (
-    <View style={{ flex: 1, flexDirection: 'row', justifyContent: 'center', alignItems: 'center' }}>
-      {Keys.map((key) => (
-        <Button
-          key={key}
-          keyName={key}
-          onPressIn={onKeyPressIn}
-          onPressOut={onKeyPressOut}
-        />
-      ))}
-    </View>
-  );
-}
-```
-
-## Create audio context and preload the data
-
-Like previously, we will need to preload the audio files in order to be able to play them. Using the interfaces we already know, we will download them and store them in memory using the good old `useRef` hook.
-
-First, we have the import section and the list of sources we will be using. Let’s also make things easier by using type shorthand for the partial record:
-
-```tsx
-import { AudioBuffer, AudioContext } from 'react-native-audio-api';
-
-/* ... */
-
-type PR = Partial<Record<KeyName, string>>;
-
-const sourceList: PR = {
- A: 'https://software-mansion.github.io/react-native-audio-api/audio/sounds/C4.mp3',
- C: 'https://software-mansion.github.io/react-native-audio-api/audio/sounds/Ds4.mp3',
- E: 'https://software-mansion.github.io/react-native-audio-api/audio/sounds/Fs4.mp3',
-};
-```
-
-Then, we will want to fetch the audio files and store them. We want the audio data to be available to play as soon as possible, so we will use the `useEffect` hook to download them and store them in the `useRef` hook for simplicity.
-
-```tsx
-export default function SimplePiano() {
- const audioContextRef = useRef<AudioContext | null>(null);
- const bufferMapRef = useRef<Partial<Record<KeyName, AudioBuffer>>>({});
-
- useEffect(() => {
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
-
- Object.entries(sourceList).forEach(async ([key, url]) => {
- bufferMapRef.current[key as KeyName] = await audioContextRef.current!.decodeAudioData(url);
- });
- }, []);
-}
-```
-
-## Playing the sounds
-
-Now it is finally time to play the sounds. We will use the [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) and simply play the buffers.
-
-```tsx
-export default function SimplePiano() {
- const onKeyPressIn = (which: KeyName) => {
- const audioContext = audioContextRef.current;
- const buffer = bufferMapRef.current[which];
-
- if (!audioContext || !buffer) {
- return;
- }
-
- const source = new AudioBufferSourceNode(audioContext, {
- buffer,
- });
-
- source.connect(audioContext.destination);
- source.start();
- };
-}
-```
-
-When we put everything together, we will get something like this:
-
-Great! But there are a few things off here:
-
-* We are not stopping the sound when the button is released, which is how a piano should work, right? 🙃
-* As you may have noticed in the previous section, we are missing sounds for the 'B' and 'D' keys.
-
-Let’s see how we can address these issues using the Audio API. We will go through them one by one. Ready?
-
-## Key release
-
-To stop the sound when keys are released, we need to store the source nodes somewhere, so that we can call [`stop`](/docs/sources/audio-scheduled-source-node#stop) on them later. Just like with the audio context, let's use the `useRef` hook for this.
-
-```tsx
-const playingNotesRef = useRef<Partial<Record<KeyName, AudioBufferSourceNode>>>({});
-```
-
-Now we need to modify the `onKeyPressIn` function a bit
-
-```tsx
-const onKeyPressIn = (which: KeyName) => {
- const audioContext = audioContextRef.current!;
- const buffer = bufferMapRef.current[which];
-
- const source = new AudioBufferSourceNode(audioContext, {
- buffer,
- });
-
- source.connect(audioContext.destination);
- source.start();
-
- playingNotesRef.current[which] = source;
-};
-```
-
-And finally, we can implement the `onKeyPressOut` function
-
-```tsx
-const onKeyPressOut = (which: KeyName) => {
- const source = playingNotesRef.current[which];
-
- if (source) {
- source.stop();
- }
-};
-```
-
-Putting it all together again, we get:
-
-And they stop on release, just as we wanted. But if we hold the keys for a short time, it sounds a bit strange. Also, have you noticed that the sound is simply cut off when we release the key? 🤔
-It leaves a bit of an unpleasant feeling, right? So let’s try to make it a bit smoother.
-
-## Envelopes ✉️
-
-We will start from the end this time, and finally, we will use a new type of audio node - [`GainNode`](/docs/effects/gain-node) :tada:
-`GainNode` is a simple node that can change the volume of any node (or nodes) connected to it. It has a single [`AudioParam`](/docs/core/audio-param), named `gain`.
-
-## What is an AudioParam?
-
-An `AudioParam` is an interface that controls various aspects of most audio nodes, like volume (in the `GainNode` described above), pan or frequency. It allows us to control these aspects over time, enabling smooth transitions and complex audio effects.
-For our use case, we are interested in two methods of an AudioParam:
-
-* [`setValueAtTime`](/docs/core/audio-param/#setvalueattime)
-* [`exponentialRampToValueAtTime`](/docs/core/audio-param/#exponentialramptovalueattime).
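-To build some intuition for the second method: per the Web Audio specification, an exponential ramp interpolates from the previous event's value $v_0$ at time $t_0$ to the target $v_1$ at $t_1$ as $v(t) = v_0 \cdot (v_1 / v_0)^{(t - t_0)/(t_1 - t_0)}$. A sketch of that formula (the `expRampValue` helper is illustrative only, not part of the library):
-
-```typescript
-// Value of an exponential ramp at time t, given the previous scheduled value
-// (v0 at t0) and the ramp target (v1 at t1). v0 and v1 must be non-zero and
-// share the same sign - an exponential curve can never pass through zero.
-function expRampValue(v0: number, v1: number, t0: number, t1: number, t: number): number {
-  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
-}
-```
-
-This is also why envelope code typically starts the gain at a small positive value like `0.001` rather than `0`.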
-
-## What is an Envelope?
-
-An envelope describes how a sound's amplitude changes over time. The most widely used model is **ADSR**, which stands for **Attack**, **Decay**, **Sustain**, and **Release**:
-
-* **Attack** — time to ramp from silence to peak volume.
-* **Decay** — time to fall from peak down to the sustain level.
-* **Sustain** — volume level held while the note is active.
-* **Release** — time to fade out after the note ends.
-
-You can read more about envelopes and ADSR on [Wikipedia](https://en.wikipedia.org/wiki/Envelope_\(music\)).
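-As a rough numeric model (piecewise-linear and purely illustrative - real instruments, and the exponential ramps we use later, behave differently), the four ADSR phases can be sketched as:
-
-```typescript
-interface ADSR {
-  attack: number;  // seconds to ramp from silence to peak
-  decay: number;   // seconds to fall from peak to the sustain level
-  sustain: number; // gain level held while the key is down (0..1)
-  release: number; // seconds to fade out after the key is released
-}
-
-// Linear ADSR gain at time t (seconds since note start); heldFor is how long
-// the key stays pressed. Assumes the key is held past the attack and decay.
-function adsrGain(t: number, heldFor: number, env: ADSR): number {
-  if (t < 0) return 0;
-  if (t >= heldFor) {
-    // Release: fade from the sustain level down to silence.
-    const r = (t - heldFor) / env.release;
-    return r >= 1 ? 0 : env.sustain * (1 - r);
-  }
-  if (t < env.attack) return t / env.attack; // Attack: 0 -> 1
-  if (t < env.attack + env.decay) {
-    // Decay: 1 -> sustain
-    return 1 - ((t - env.attack) / env.decay) * (1 - env.sustain);
-  }
-  return env.sustain; // Sustain
-}
-```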
-
-## Implementing the envelope
-
-With all the knowledge we have gathered, let's get back to the code. In our `onKeyPressIn` function, besides creating the source node, we will create a [`GainNode`](/docs/effects/gain-node) which will stand in the middle between the source and destination nodes, acting as our envelope.
-We want to implement the **attack** in `onKeyPressIn` function, and **release** in `onKeyPressOut`. In order to be able to access the envelope in both functions we will have to store it somewhere, so let's modify the `playingNotesRef` introduced earlier.
-Also, let’s not forget about the issue with short key presses. We will address that by enforcing a minimal duration for each note (which works nicely with the samples we have 😉).
-
-Let’s start with the types:
-
-```tsx
-interface PlayingNote {
- source: AudioBufferSourceNode;
- envelope: GainNode;
- startedAt: number;
-}
-```
-
-and the `useRef` hook:
-
-```tsx
-const playingNotesRef = useRef<Partial<Record<KeyName, PlayingNote>>>({});
-```
-
-Now we can modify the `onKeyPressIn` function:
-
-```tsx
-const onKeyPressIn = (which: KeyName) => {
- const audioContext = audioContextRef.current;
- const buffer = bufferMapRef.current[which];
-
- if (!audioContext || !buffer) {
-   return;
- }
-
- const tNow = audioContext.currentTime;
-
- const source = new AudioBufferSourceNode(audioContext, {
- buffer,
- });
-
- const envelope = audioContext.createGain();
-
- source.connect(envelope);
- envelope.connect(audioContext.destination);
-
- envelope.gain.setValueAtTime(0.001, tNow);
- envelope.gain.exponentialRampToValueAtTime(1, tNow + 0.1);
-
- source.start(tNow);
- playingNotesRef.current[which] = { source, envelope, startedAt: tNow };
-};
-```
-
-and the `onKeyPressOut` function:
-
-```tsx
-const onKeyPressOut = (which: KeyName) => {
- const audioContext = audioContextRef.current;
- const playingNote = playingNotesRef.current[which];
-
- if (!playingNote || !audioContext) {
- return;
- }
-
- const { source, envelope, startedAt } = playingNote;
-
- const tStop = Math.max(audioContext.currentTime, startedAt + 5);
-
- envelope.gain.exponentialRampToValueAtTime(0.0001, tStop + 0.08);
- envelope.gain.setValueAtTime(0, tStop + 0.09);
- source.stop(tStop + 0.1);
-
- playingNotesRef.current[which] = undefined;
-};
-```
-
-As a result, we can hear something like this:
-
-And it finally sounds smooth and nice. But what about the decay and sustain phases? Both are handled by the audio samples themselves, so we do not need to worry about them. To be honest, the same goes for the attack phase, but we have implemented it for the sake of this guide. 🙂
-So, the only piece left is addressing the missing sample files for the 'B' and 'D' keys. What can we do about that?
-
-## Tampering with the playback rate
-
-The [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) also has its own [`AudioParam`](/docs/core/audio-param), called `playbackRate` as the title suggests. It allows us to change the speed of the playback of the audio buffer.
-Yay! Nice. But how can we use that to make the missing keys sound? I will keep this short, as this guide is already quite long, so let’s wrap up!
-
-When we change the speed of a sound, it will also change its pitch (frequency). So, we can use that to make the missing keys sound.
-Each piano key has its own dominant frequency (e.g., the frequency of the `A4` key is `440Hz`). We can check the frequency of each key, calculate the ratio between them, and use that ratio to adjust the playback rate of the buffers we have.
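-Since perceived pitch scales linearly with playback speed, the required rate is simply the ratio of the two frequencies (the `playbackRateFor` helper is illustrative only):
-
-```typescript
-// Playback rate that makes a sample whose pitch is sourceHz sound like targetHz.
-const playbackRateFor = (targetHz: number, sourceHz: number): number =>
-  targetHz / sourceHz;
-```
-
-For example, playing a 440 Hz (`A4`) sample at rate `2` yields 880 Hz (`A5`), one octave up, while a rate of $2^{1/12} \approx 1.0595$ shifts the pitch up by a single semitone.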
-
-
-
-For our example, let's use these frequencies as the base for our calculations:
-
-```tsx
-const noteToFrequency = {
- A: 261.626, // real piano middle C
- B: 277.193, // Db
- C: 311.127, // Eb
- D: 329.628, // E
- E: 369.994, // Gb
-};
-```
-
-First, we need to find the closest sourced key to the missing one. We can do this with a simple for loop:
-
-```tsx
-function getClosest(key: KeyName) {
- let closestKey: KeyName = 'A';
- let minDiff = noteToFrequency.A - noteToFrequency[key];
-
- for (const sourcedKey of Object.keys(sourceList) as KeyName[]) {
- const diff = noteToFrequency[sourcedKey] - noteToFrequency[key];
-
- if (Math.abs(diff) < Math.abs(minDiff)) {
- minDiff = diff;
- closestKey = sourcedKey;
- }
- }
-
- return closestKey;
-}
-```
-
-Now, we simply use the function in `onKeyPressIn` when the buffer is not found and adjust the playback rate for the source node accordingly:
-
-```tsx
-const onKeyPressIn = (which: KeyName) => {
-  let buffer = bufferMapRef.current[which];
-  const aCtx = audioContextRef.current;
-  let playbackRate = 1;
-
-  if (!aCtx) {
-    return;
-  }
-
-  if (!buffer) {
-    const closestKey = getClosest(which);
-    buffer = bufferMapRef.current[closestKey];
-    playbackRate = noteToFrequency[which] / noteToFrequency[closestKey];
-  }
-
-  const source = aCtx.createBufferSource();
-  const envelope = aCtx.createGain();
-  source.buffer = buffer;
-  source.playbackRate.value = playbackRate;
-};
-```
-
-## Final effects
-
-As before, you can see the final results in the live example below, along with the full source code.
-
-## Summary
-
-In this guide, we have learned how to create a simple piano keyboard with the help of the GainNode and AudioParams. To sum up:
-
-* [`AudioParam`](/docs/core/audio-param) is an interface that provides ways to control various aspects of audio nodes over time.
-* [`GainNode`](/docs/effects/gain-node) is a simple node that can change the volume of any node connected to it.
-* [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) has a parameter called `playbackRate` that allows us to change the speed of the audio buffer's playback, thereby altering the pitch of the sound.
-* We can use `GainNode` to create envelopes, making the sound transitions smoother and more pleasant.
-* We have learned how to use the Audio API in the React environment, simulating a more production-like scenario.
-
-## What's next?
-
-In [the next section](/docs/guides/noise-generation), we will learn how we can generate noise using the audio buffer source node.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/guides/noise-generation
-# Title: noise-generation
-
-
-import InteractiveExample from '@site/src/components/InteractiveExample';
-
-# Noise generation
-
-Noise is one of the most basic and common tools in digital audio processing. In this guide, we will go through the most common noise types and how to implement them using the Web Audio API.
-
-## White noise
-
-The most commonly used type of noise. White noise is a random signal having equal intensity at different frequencies, giving it a constant [power spectral density (Wikipedia)](https://en.wikipedia.org/wiki/Spectral_density#Power_spectral_density).
-
-To produce white noise, we simply create an [`AudioBuffer`](/docs/sources/audio-buffer) containing random samples in the range `[-1; 1]` (in which the audio api operates),
-which can then be used by an [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) for playback, further filtering, or modification.
-
-```tsx
-function createWhiteNoise() {
- const aCtx = new AudioContext();
- const bufferSize = aCtx.sampleRate * 2;
- const output = new Float32Array(bufferSize);
-
- for (let i = 0; i < bufferSize; i += 1) {
- output[i] = Math.random() * 2 - 1;
- }
-
- const noiseBuffer = aCtx.createBuffer(1, bufferSize, aCtx.sampleRate);
- noiseBuffer.copyToChannel(output, 0, 0);
-
- return noiseBuffer;
-}
-```
-
-Usually we want to be able to play the noise continuously. To achieve this, we generate 2 seconds of noise, which we will later loop using the `AudioBufferSourceNode` properties. In audio processing, `sampleRate` is the number of samples played during one second, so we simply multiply it by `2` to get the desired buffer length.
-
-import WhiteNoise from '@site/src/examples/NoiseGeneration/WhiteNoiseComponent';
-import WhiteNoiseSrc from '!!raw-loader!@site/src/examples/NoiseGeneration/WhiteNoiseSource';
-
-
-
-## Pink noise
-
-Pink noise, also known as 1/f noise (where "f" stands for frequency), is a type of signal or sound that has equal energy per octave. This means that the power spectral density (PSD) decreases inversely with frequency. In simpler terms, pink noise has more energy at lower frequencies and less energy at higher frequencies, which makes it sound softer and more balanced to the human ear than white noise.
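-To see why a $1/f$ spectrum implies equal energy per octave: the energy in the octave $[f_0, 2f_0]$ is $\int_{f_0}^{2f_0} \frac{df}{f} = \ln 2$, independent of $f_0$. A quick numerical check of this claim (illustrative only):
-
-```typescript
-// Energy of a 1/f power spectrum integrated over one octave [f0, 2 * f0],
-// using the midpoint rule. The result is ln(2) regardless of f0.
-function octaveEnergy(f0: number, steps: number = 100000): number {
-  const df = f0 / steps; // the octave [f0, 2 * f0] has width f0
-  let sum = 0;
-  for (let i = 0; i < steps; i += 1) {
-    const f = f0 + (i + 0.5) * df;
-    sum += (1 / f) * df;
-  }
-  return sum;
-}
-```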
-
-To generate pink noise, we will approximate a $\frac{-3dB}{octave}$ filter using [Paul Kellet's refined method](https://www.musicdsp.org/en/latest/Filters/76-pink-noise-filter.html):
-
-```tsx
-const createPinkNoise = () => {
- const aCtx = new AudioContext();
-
- const bufferSize = 2 * aCtx.sampleRate;
- const output = new Float32Array(bufferSize);
-
- let b0, b1, b2, b3, b4, b5, b6;
- b0 = b1 = b2 = b3 = b4 = b5 = b6 = 0.0;
-
- for (let i = 0; i < bufferSize; i += 1) {
- const white = Math.random() * 2 - 1;
-
- b0 = 0.99886 * b0 + white * 0.0555179;
- b1 = 0.99332 * b1 + white * 0.0750759;
- b2 = 0.969 * b2 + white * 0.153852;
- b3 = 0.8665 * b3 + white * 0.3104856;
- b4 = 0.55 * b4 + white * 0.5329522;
- b5 = -0.7616 * b5 - white * 0.016898;
-
- output[i] = 0.11 * (b0 + b1 + b2 + b3 + b4 + b5 + b6 + white * 0.5362);
- b6 = white * 0.115926;
- }
-
- const noiseBuffer = aCtx.createBuffer(1, bufferSize, aCtx.sampleRate);
- noiseBuffer.copyToChannel(output, 0, 0);
-
- return noiseBuffer;
-}
-```
-
-You can find more information about pink noise generation here: [https://www.firstpr.com.au/dsp/pink-noise/](https://www.firstpr.com.au/dsp/pink-noise/)
-
-import PinkNoise from '@site/src/examples/NoiseGeneration/PinkNoiseComponent';
-import PinkNoiseSrc from '!!raw-loader!@site/src/examples/NoiseGeneration/PinkNoiseSource';
-
-
-
-## Brownian noise
-
-The last noise type I would like to describe is Brownian noise (also known as brown or red noise). It is named after the Brownian motion phenomenon, where particles inside a fluid move randomly due to collisions with other particles. Its sonic counterpart is characterized by a significant presence of low frequencies, with energy decreasing as the frequency increases, and is often said to sound like a waterfall.
-
-Brownian noise falls off at roughly $\frac{6dB}{octave}$, twice as steeply as pink noise. The implementation is taken from an article by Zach Denton, [How to Generate Noise with the Web Audio API](https://noisehack.com/generate-noise-web-audio-api/):
-
-
-```tsx
- const createBrownianNoise = () => {
- const aCtx = new AudioContext();
-
- const bufferSize = 2 * aCtx.sampleRate;
- const output = new Float32Array(bufferSize);
- let lastOut = 0.0;
-
- for (let i = 0; i < bufferSize; i += 1) {
- const white = Math.random() * 2 - 1;
- output[i] = (lastOut + 0.02 * white) / 1.02;
- lastOut = output[i];
- output[i] *= 3.5;
- }
-
- const noiseBuffer = aCtx.createBuffer(1, bufferSize, aCtx.sampleRate);
- noiseBuffer.copyToChannel(output, 0, 0);
-
- return noiseBuffer;
- }
-```
-
-import BrownianNoise from '@site/src/examples/NoiseGeneration/BrownianNoiseComponent';
-import BrownianNoiseSrc from '!!raw-loader!@site/src/examples/NoiseGeneration/BrownianNoiseSource';
-
-
-
-## What's next?
-
-In [the next section](/docs/guides/see-your-sound), we will explore how to capture audio data, visualize this data effectively, and utilize it to create basic animations.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/guides/see-your-sound
-# Title: see-your-sound
-
-# See your sound
-
-In this section, we will get familiar with capabilities of the [`AnalyserNode`](/docs/analysis/analyser-node) interface,
-focusing on how to extract audio data in order to create a simple real-time visualization of the sounds.
-
-## Base application
-
-To kick-start things a bit, let's use code based on the previous tutorials.
-It is a simple application that can load and play a sound from a file.
-As before, if you would like to code along with the tutorial, copy and paste the code provided below into your project.
-
-```tsx
-import React, {
- useState,
- useEffect,
- useRef,
- useMemo,
-} from 'react';
-import {
- AudioContext,
- AudioBuffer,
- AudioBufferSourceNode,
-} from 'react-native-audio-api';
-import { ActivityIndicator, View, Button, LayoutChangeEvent } from 'react-native';
-
-const AudioVisualizer: React.FC = () => {
- const [isPlaying, setIsPlaying] = useState(false);
- const [isLoading, setIsLoading] = useState(false);
-
- const audioContextRef = useRef<AudioContext | null>(null);
- const bufferSourceRef = useRef<AudioBufferSourceNode | null>(null);
- const audioBufferRef = useRef<AudioBuffer | null>(null);
-
- const handlePlayPause = () => {
- if (isPlaying) {
- bufferSourceRef.current?.stop();
- } else {
- if (!audioContextRef.current) {
- return;
- }
-
- bufferSourceRef.current = audioContextRef.current.createBufferSource();
- bufferSourceRef.current.buffer = audioBufferRef.current;
- bufferSourceRef.current.connect(audioContextRef.current.destination);
-
- bufferSourceRef.current.start();
- }
-
- setIsPlaying((prev) => !prev);
- };
-
- useEffect(() => {
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
-
- const fetchBuffer = async () => {
- setIsLoading(true);
- const url = 'https://software-mansion.github.io/react-native-audio-api/audio/music/example-music-02.mp3';
- audioBufferRef.current = await audioContextRef.current!.decodeAudioData(url);
- setIsLoading(false);
- };
-
- fetchBuffer();
-
- return () => {
- audioContextRef.current?.close();
- };
- }, []);
-
-  return (
-    <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
-      <Button
-        title={isPlaying ? 'Pause' : 'Play'}
-        onPress={handlePlayPause}
-        disabled={isLoading}
-      />
-      {isLoading && <ActivityIndicator />}
-    </View>
-  );
-};
-
-export default AudioVisualizer;
-```
-
-## Create an analyzer to capture and process audio data
-
-To obtain frequency and time-domain data, we need to utilize the [`AnalyserNode`](/docs/analysis/analyser-node).
-It is an [`AudioNode`](/docs/core/audio-node) that passes data unchanged from input to output while enabling the extraction of this data in two domains: time and frequency.
-
-We will use two of the `AnalyserNode`'s methods:
-
-* [`getByteTimeDomainData`](/docs/analysis/analyser-node#getbytetimedomaindata)
-* [`getByteFrequencyData`](/docs/analysis/analyser-node#getbytefrequencydata)
-
-These methods will allow us to acquire the necessary data for our analysis.
-
-```jsx {7,12,17-22,27,33,39,43,49-66,73-79}
-/* ... */
-
-import {
- AudioContext,
- AudioBuffer,
- AudioBufferSourceNode,
- AnalyserNode,
-} from 'react-native-audio-api';
-
-/* ... */
-
-const FFT_SIZE = 512;
-
-const AudioVisualizer: React.FC = () => {
- const [isPlaying, setIsPlaying] = useState(false);
- const [isLoading, setIsLoading] = useState(false);
- const [times, setTimes] = useState(
- new Uint8Array(FFT_SIZE).fill(127)
- );
- const [freqs, setFreqs] = useState(
- new Uint8Array(FFT_SIZE / 2).fill(0)
- );
-
- const audioContextRef = useRef<AudioContext | null>(null);
- const bufferSourceRef = useRef<AudioBufferSourceNode | null>(null);
- const audioBufferRef = useRef<AudioBuffer | null>(null);
- const analyserRef = useRef<AnalyserNode | null>(null);
-
- const handlePlayPause = () => {
- if (isPlaying) {
- bufferSourceRef.current?.stop();
- } else {
- if (!audioContextRef.current || !analyserRef.current) {
- return;
- }
-
- bufferSourceRef.current = audioContextRef.current.createBufferSource();
- bufferSourceRef.current.buffer = audioBufferRef.current;
- bufferSourceRef.current.connect(analyserRef.current);
-
- bufferSourceRef.current.start();
-
- requestAnimationFrame(draw);
- }
-
- setIsPlaying((prev) => !prev);
- };
-
- const draw = () => {
- if (!analyserRef.current) {
- return;
- }
-
- const timesArrayLength = analyserRef.current.fftSize;
- const frequencyArrayLength = analyserRef.current.frequencyBinCount;
-
- const timesArray = new Uint8Array(timesArrayLength);
- analyserRef.current.getByteTimeDomainData(timesArray);
- setTimes(timesArray);
-
- const freqsArray = new Uint8Array(frequencyArrayLength);
- analyserRef.current.getByteFrequencyData(freqsArray);
- setFreqs(freqsArray);
-
- requestAnimationFrame(draw);
- };
-
- useEffect(() => {
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
-
- if (!analyserRef.current) {
- analyserRef.current = audioContextRef.current.createAnalyser();
- analyserRef.current.fftSize = FFT_SIZE;
- analyserRef.current.smoothingTimeConstant = 0.8;
-
- analyserRef.current.connect(audioContextRef.current.destination);
- }
-
- const fetchBuffer = async () => {
- setIsLoading(true);
- const url = 'https://software-mansion.github.io/react-native-audio-api/audio/music/example-music-02.mp3';
- audioBufferRef.current = await audioContextRef.current!.decodeAudioData(url);
- setIsLoading(false);
- };
-
- fetchBuffer();
-
- return () => {
- audioContextRef.current?.close();
- };
- }, []);
-
- return (
-   <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
-     {isLoading && <ActivityIndicator />}
-     <Button
-       title={isPlaying ? 'Stop' : 'Play'}
-       onPress={handlePlayPause}
-       disabled={isLoading}
-     />
-   </View>
- );
-};
-
-export default AudioVisualizer;
-```
-
-We utilize the [`requestAnimationFrame`](https://reactnative.dev/docs/timers) method to continuously fetch and update real-time audio visualization data.
-
-## Visualize time-domain and frequency data
-
-To render both the time- and frequency-domain visualizations, we will use our beloved graphics library - [`react-native-skia`](https://shopify.github.io/react-native-skia/).
-
-If you would like to know more about what the time and frequency domains are, have a look at the [Time domain vs Frequency domain](/docs/analysis/analyser-node#time-domain-vs-frequency-domain) section of the AnalyserNode documentation,
-which explains those terms in detail. Otherwise, here is the code:
-
-**Time domain**
-
-**Frequency domain**
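-As a starting point, the mapping from analyser data to drawable geometry can be sketched as a pure helper. Everything below is our own illustration (a name like `buildWaveformPath` is not part of the library); it only assumes the 0-255 byte layout that `getByteTimeDomainData` produces, and the resulting SVG path string can be handed to Skia via `Skia.Path.MakeFromSVGString` and a `<Path />` element.
-
-```typescript
-// Sketch: turn AnalyserNode byte time-domain data into an SVG path string.
-// A byte value of 128 (silence) lands in the vertical middle of the canvas.
-function buildWaveformPath(
-  times: Uint8Array,
-  width: number,
-  height: number
-): string {
-  const step = width / times.length;
-  let path = '';
-  times.forEach((v, i) => {
-    const x = i * step;
-    const y = (v / 255) * height;
-    path += `${i === 0 ? 'M' : 'L'} ${x} ${y} `;
-  });
-  return path.trim();
-}
-```
-
-The frequency data can be drawn the same way, except each `freqs[i]` value usually becomes the height of a vertical bar rather than a point on a line.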
-
-## Summary
-
-In this guide, we have learned how to extract audio data using [`AnalyserNode`](/docs/analysis/analyser-node), what types of data we can obtain, and how to visualize them. To sum up:
-
-* `AnalyserNode` is a sniffer node that extracts audio data without modifying it.
-* There are two domains of audio data: `frequency` and `time`.
-* We have learned how to use this data to create a simple animation.
-
-## What's next?
-
-In [the next section](/docs/guides/create-your-own-effect), we will learn how to create our own processing node, utilizing react native turbo-modules.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/inputs/audio-recorder
-# Title: audio-recorder
-
-# AudioRecorder
-
-AudioRecorder is the primary interface for capturing audio. It supports three main modes of operation:
-
-* **File recording:** Writing audio data directly to the filesystem.
-* **Data callback:** Emitting raw audio buffers that can be used for further processing or streamed.
-* **Graph processing:** Connecting the recorder to either `AudioContext` or `OfflineAudioContext` for more advanced and/or real-time processing.
-
-## Configuration
-
-To access the microphone you need to make sure your app has the required permission configuration - check the [getting started permission section](/docs/fundamentals/getting-started#special-permissions) for more information.
-
-Additionally, to be able to record audio while the application is in the background, you need to enable background mode on iOS and configure a foreground service on Android.
-
-In an Expo application you can do so through the `react-native-audio-api` Expo plugin, e.g.:
-
-```json
-{
- "plugins": [
- [
- "react-native-audio-api",
- {
- "iosBackgroundMode": true,
- "iosMicrophonePermission": "[YOUR_APP_NAME] requires access to the microphone to record audio.",
- "androidPermissions": [
- "android.permission.RECORD_AUDIO",
- "android.permission.FOREGROUND_SERVICE",
- "android.permission.FOREGROUND_SERVICE_MICROPHONE"
- ],
- "androidForegroundService": true,
- "androidFSTypes": ["microphone"]
- }
- ]
- ]
-}
-```
-
-For more configuration options, check out the [Expo plugin section](/docs/other/audio-api-plugin).
-
-For bare React Native applications, background mode is configurable through the `Signing & Capabilities` section of your app target config in Xcode.
-
-
-
-The microphone permission can be created or modified through the `Info.plist` file.
-
-
-
-Alternatively, you can modify the `Info.plist` file directly in your editor of choice by adding these lines:
-
-```xml
-<key>NSMicrophoneUsageDescription</key>
-<string>$(PRODUCT_NAME) wants to access your microphone in order to use voice memo recording</string>
-<key>UIBackgroundModes</key>
-<array>
- <string>audio</string>
-</array>
-```
-
-To enable the required permissions or the foreground service you have to manually edit the `AndroidManifest.xml` file:
-
-```xml
-<uses-permission android:name="android.permission.RECORD_AUDIO" />
-<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
-<uses-permission android:name="android.permission.FOREGROUND_SERVICE_MICROPHONE" />
-
-<application>
- <!-- Declare a foreground service with
-      android:foregroundServiceType="microphone" here;
-      the service class itself depends on your setup. -->
-</application>
-```
-
-## Usage
-
-```tsx
-import React, { useState } from 'react';
-import { View, Pressable, Text } from 'react-native';
-import { AudioRecorder, AudioManager } from 'react-native-audio-api';
-
-AudioManager.setAudioSessionOptions({
- iosCategory: 'record',
- iosMode: 'default',
- iosOptions: [],
-});
-
-const audioRecorder = new AudioRecorder();
-
-// Enables recording to file with default configuration
-audioRecorder.enableFileOutput();
-
-const MyRecorder: React.FC = () => {
- const [isRecording, setIsRecording] = useState(false);
-
- const onStart = async () => {
- if (isRecording) {
- return;
- }
-
- // Make sure the permissions are granted
- const permissions = await AudioManager.requestRecordingPermissions();
-
- if (permissions !== 'Granted') {
- console.warn('Permissions are not granted');
- return;
- }
-
- // Activate audio session
- const success = await AudioManager.setAudioSessionActivity(true);
-
- if (!success) {
- console.warn('Could not activate the audio session');
- return;
- }
-
- const result = audioRecorder.start();
- if (result.status === 'error') {
- console.warn(result.message);
- return;
- }
-
- console.log('Recording started to file:', result.path);
- setIsRecording(true);
- };
-
- const onStop = () => {
- if (!isRecording) {
- return;
- }
-
- const result = audioRecorder.stop();
- console.log(result);
- setIsRecording(false);
- AudioManager.setAudioSessionActivity(false);
- };
-
- return (
-   <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
-     <Pressable onPress={isRecording ? onStop : onStart}>
-       <Text>{isRecording ? 'Stop' : 'Record'}</Text>
-     </Pressable>
-   </View>
- );
-};
-
-export default MyRecorder;
-```
-
-```tsx
-import React, { useState, useEffect } from 'react';
-import { View, Pressable, Text } from 'react-native';
-import { AudioRecorder, AudioManager } from 'react-native-audio-api';
-
-AudioManager.setAudioSessionOptions({
- iosCategory: 'record',
- iosMode: 'default',
- iosOptions: [],
-});
-
-const audioRecorder = new AudioRecorder();
-const sampleRate = 16000;
-
-const MyRecorder: React.FC = () => {
- const [isRecording, setIsRecording] = useState(false);
-
- useEffect(() => {
- audioRecorder.onAudioReady(
- {
- sampleRate,
- bufferLength: sampleRate * 0.1, // 0.1s of audio each batch
- channelCount: 1,
- },
- ({ buffer, numFrames, when }) => {
- // do something with the data, i.e. stream it
- }
- );
-
- return () => {
- audioRecorder.clearOnAudioReady();
- };
- }, []);
-
- const onStart = async () => {
- if (isRecording) {
- return;
- }
-
- // Make sure the permissions are granted
- const permissions = await AudioManager.requestRecordingPermissions();
-
- if (permissions !== 'Granted') {
- console.warn('Permissions are not granted');
- return;
- }
-
- // Activate audio session
- const success = await AudioManager.setAudioSessionActivity(true);
-
- if (!success) {
- console.warn('Could not activate the audio session');
- return;
- }
-
- const result = audioRecorder.start();
-
- if (result.status === 'error') {
- console.warn(result.message);
- return;
- }
-
- setIsRecording(true);
- };
-
- const onStop = () => {
- if (!isRecording) {
- return;
- }
-
- audioRecorder.stop();
- setIsRecording(false);
- AudioManager.setAudioSessionActivity(false);
- };
-
- return (
-   <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
-     <Pressable onPress={isRecording ? onStop : onStart}>
-       <Text>{isRecording ? 'Stop' : 'Record'}</Text>
-     </Pressable>
-   </View>
- );
-};
-
-export default MyRecorder;
-```
-
-```tsx
-import React, { useState } from 'react';
-import { View, Pressable, Text } from 'react-native';
-import {
- AudioRecorder,
- AudioContext,
- AudioManager,
-} from 'react-native-audio-api';
-
-AudioManager.setAudioSessionOptions({
- iosCategory: 'playAndRecord',
- iosMode: 'default',
- iosOptions: [],
-});
-
-const audioRecorder = new AudioRecorder();
-const audioContext = new AudioContext();
-
-const MyRecorder: React.FC = () => {
- const [isRecording, setIsRecording] = useState(false);
-
- const onStart = async () => {
- if (isRecording) {
- return;
- }
-
- // Make sure the permissions are granted
- const permissions = await AudioManager.requestRecordingPermissions();
-
- if (permissions !== 'Granted') {
- console.warn('Permissions are not granted');
- return;
- }
-
- // Activate audio session
- const success = await AudioManager.setAudioSessionActivity(true);
-
- if (!success) {
- console.warn('Could not activate the audio session');
- return;
- }
-
- const adapter = audioContext.createRecorderAdapter();
- adapter.connect(audioContext.destination);
- audioRecorder.connect(adapter);
-
- if (audioContext.state === 'suspended') {
- await audioContext.resume();
- }
-
- const result = audioRecorder.start();
-
- if (result.status === 'error') {
- console.warn(result.message);
- return;
- }
-
- setIsRecording(true);
- };
-
- const onStop = () => {
- if (!isRecording) {
- return;
- }
-
- audioRecorder.stop();
- audioContext.suspend();
- setIsRecording(false);
- AudioManager.setAudioSessionActivity(false);
- };
-
- return (
-   <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
-     <Pressable onPress={isRecording ? onStop : onStart}>
-       <Text>{isRecording ? 'Stop' : 'Record'}</Text>
-     </Pressable>
-   </View>
- );
-};
-
-export default MyRecorder;
-```
-
-## API
-
-
-##### Constructor
-
-Creates a new instance of AudioRecorder. For best performance, memory, and battery usage, prefer creating only a single instance of the AudioRecorder class. While an idle recorder has minimal impact, switching between separate recorder instances might have a noticeable impact on the device.
-
-```tsx
-import { AudioRecorder } from 'react-native-audio-api';
-
-const audioRecorder = new AudioRecorder();
-```
-
-##### start
-
-Starts the stream from the system audio input device.
-You can pass an optional object with a `fileNameOverride` string to provide your own file name.
-
-```tsx
-const result = audioRecorder.start({
- fileNameOverride: `my_audio_${mySessionId}`
-});
-
-if (result.status === 'success') {
- const openedFilePath = result.path;
-} else if (result.status === 'error') {
- console.error(result.message);
-}
-```
-
-##### stop
-
-Stops the input stream and cleans up each input access method.
-
-```tsx
-const result = audioRecorder.stop();
-
-if (result.status === 'success') {
- const { path, duration, size } = result;
-} else if (result.status === 'error') {
- console.error(result.message);
-}
-```
-
-##### pause
-
-Pauses the recording. This is useful when recording to a file is active, but you don't want to finalize the file yet.
-
-```tsx
- audioRecorder.pause();
-```
-
-##### resume
-
-Resumes the recording if it was previously paused, otherwise does nothing.
-
-```tsx
- audioRecorder.resume();
-```
-
-##### isRecording
-
-Returns `true` if the recorder is in the active/recording state.
-
-```tsx
- const isRecording = audioRecorder.isRecording();
-```
-
-##### isPaused
-
-Returns `true` if the recorder is in paused state.
-
-```tsx
- const isPaused = audioRecorder.isPaused();
-```
-
-##### onError
-
-Sets an error callback for any internal error that might happen during file writing, callback invocation, or adapter access.
-
-For details check: [OnRecorderErrorEventType](#onrecordererroreventtype)
-
-```tsx
- audioRecorder.onError((error: OnRecorderErrorEventType) => {
- console.log(error);
- });
-```
-
-##### clearOnError
-
-Removes the error callback.
-
-```tsx
- audioRecorder.clearOnError();
-```
-
-### Recording to file
-
-
-##### enableFileOutput
-
-Configures and enables the file output with the defined options and stream properties. The options allow for configuration of the output file structure and quality. By default the recorder writes to the cache directory as a high-quality `M4A` file.
-
-For further information check: [AudioRecorderFileOptions](#audiorecorderfileoptions)
-
-```tsx
- audioRecorder.enableFileOutput();
-```
-
-##### disableFileOutput
-
-Disables the file output and finalizes the currently recorded file if the recorder is active.
-
-```tsx
- audioRecorder.disableFileOutput();
-```
-
-##### getCurrentDuration
-
-Returns the current recording duration if recording to file is enabled.
-
-```tsx
- const duration = audioRecorder.getCurrentDuration();
-```
-
-### Data callback
-
-
-##### onAudioReady
-
-The callback is periodically invoked with audio buffers that match the preferred configuration provided in `options`. These parameters (sample rate, buffer length, and channel count) guide how audio data is chunked and delivered, though the exact values may vary depending on device capabilities.
-
-For further information check:
-
-* [AudioRecorderCallbackOptions](#audiorecordercallbackoptions)
-* [OnAudioReadyEventType](#onaudioreadyeventtype)
-
-```tsx
- const sampleRate = 16000;
-
- audioRecorder.onAudioReady(
- {
- sampleRate,
- bufferLength: 0.1 * sampleRate, // 0.1s of data
- channelCount: 1,
- },
- ({ buffer, numFrames, when }) => {
- // do something with the data
- });
-```
-
-##### clearOnAudioReady
-
-Disables the callback and flushes the remaining audio data through the `onAudioReady` callback described above.
-
-```tsx
- audioRecorder.clearOnAudioReady();
-```
-
-### Graph processing
-
-
-##### connect
-
-Connects the AudioRecorder to a [RecorderAdapterNode](/docs/sources/recorder-adapter-node) instance that can be used for further audio processing.
-
-```tsx
- const adapter = audioContext.createRecorderAdapter();
- audioRecorder.connect(adapter);
-```
-
-##### disconnect
-
-Disconnects AudioRecorder from the audio graph.
-
-```tsx
- audioRecorder.disconnect();
-```
-
-## Types
-
-#### AudioRecorderCallbackOptions
-
-```tsx
-interface AudioRecorderCallbackOptions {
- sampleRate: number;
- bufferLength: number;
- channelCount: number;
-}
-```
-
-* `sampleRate` - The desired sample rate (in Hz) for audio buffers delivered to the
- recording callback. Common values include 44100 or 48000 Hz. The actual
- sample rate may differ depending on hardware and system capabilities.
-
-* `bufferLength` - The preferred size of each audio buffer, expressed as the number of samples per channel. Smaller buffers reduce latency but increase CPU load, while larger buffers improve efficiency at the cost of higher latency.
-
-* `channelCount` - The desired number of audio channels per buffer. Typically 1 for mono or 2 for stereo recordings.
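-The latency/efficiency trade-off in `bufferLength` is easiest to reason about in time units. A small helper (our own, not part of the library) converts a target callback interval into a sample count:
-
-```typescript
-// Sketch: pick a bufferLength for a desired callback interval.
-// latencyMs is the target time between onAudioReady invocations.
-function bufferLengthForInterval(sampleRate: number, latencyMs: number): number {
-  return Math.round(sampleRate * (latencyMs / 1000));
-}
-
-// e.g. 16 kHz capture with ~100 ms batches, as in the usage example above:
-const bufferLength = bufferLengthForInterval(16000, 100); // 1600 samples
-```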
-
-#### OnRecorderErrorEventType
-
-```tsx
-interface OnRecorderErrorEventType {
- message: string;
-}
-```
-
-#### OnAudioReadyEventType
-
-Represents the data payload received by the audio recorder callback each time a new audio buffer becomes available during recording.
-
-```tsx
-interface OnAudioReadyEventType {
- buffer: AudioBuffer;
- numFrames: number;
- when: number;
-}
-```
-
-* `buffer` - The audio buffer containing the recorded PCM data. This buffer includes one or more channels of floating-point samples in the range of -1.0 to 1.0.
-* `numFrames` - The number of audio frames contained in this buffer. A frame contains one sample from each channel.
-* `when` - The timestamp (in seconds) indicating when this buffer was captured, relative to the start of the recording session.
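-Since `buffer` holds floating-point samples in the -1.0 to 1.0 range, a common first step when streaming is converting a channel to 16-bit PCM. Below is a minimal sketch; the helper name is ours, and it assumes the buffer exposes channel data as a `Float32Array`, as in the Web Audio `AudioBuffer`:
-
-```typescript
-// Sketch: convert one channel of float samples (-1.0..1.0) to 16-bit PCM.
-function floatTo16BitPCM(samples: Float32Array): Int16Array {
-  const out = new Int16Array(samples.length);
-  for (let i = 0; i < samples.length; i++) {
-    const s = Math.max(-1, Math.min(1, samples[i])); // clamp out-of-range input
-    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff; // scale to -32768..32767
-  }
-  return out;
-}
-```
-
-Inside `onAudioReady` this could be applied as `floatTo16BitPCM(buffer.getChannelData(0))` before handing the bytes to a socket or encoder.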
-
-### File handling
-
-#### AudioRecorderFileOptions
-
-```tsx
-interface AudioRecorderFileOptions {
- channelCount?: number;
-
- format?: FileFormat;
- preset?: FilePresetType;
-
- directory?: FileDirectory;
- subDirectory?: string;
- fileNamePrefix?: string;
- androidFlushIntervalMs?: number;
-}
-```
-
-* `channelCount` - The desired channel count in the resulting file. Not all file formats support all possible channel counts.
-* `format` - The desired extension and file format of the recorder file. Check: [FileFormat](#fileformat) below.
-* `preset` - The desired recorder file properties, you can use either one of built-in properties or tweak low-level parameters yourself. Check [FilePresetType](#filepresettype) for more details.
-* `directory` - Either `FileDirectory.Cache` or `FileDirectory.Document` (default: `FileDirectory.Cache`). Determines the system directory that the file will be saved to.
-* `subDirectory` - If configured, the recording is created inside the requested subdirectory (default: `undefined`).
-* `fileNamePrefix` - Prefix of the recording files without the unique ID (default: `recording`).
-* `androidFlushIntervalMs` - How often the recorder should force the system to write data to the device storage (default: `500`).
- * Lower values are good for crash-resilience and are more memory friendly.
- * Higher values are more battery- and storage-efficient.
-
-#### FileFormat
-
-Describes the desired file extension as well as the codecs, containers (and muxers!) used to encode the file.
-
-```tsx
-enum FileFormat {
- Wav,
- Caf,
- M4A,
- Flac,
-}
-```
-
-#### FilePresetType
-
-Describes the audio format used when writing to the file, as well as the properties of the final encoded file. You can use one of the predefined presets or fully customize the resulting file. Be aware that the properties are not limited to valid configurations only - some property pairs will produce an error result when recording starts (or when enabling the file output during an active input session)!
-
-##### Built-in file presets
-
-For convenience we have provided a set of basic file configurations that should cover most cases (or at least we hope they do - please raise an issue if you find something lacking or misconfigured!).
-
-###### Usage
-
-```tsx
-import { AudioRecorder, FileFormat, FilePreset } from 'react-native-audio-api';
-
-const audioRecorder = new AudioRecorder();
-
-audioRecorder.enableFileOutput({
- format: FileFormat.M4A,
- preset: FilePreset.High,
-});
-```
-
-
-##### Lossless
-
-Writes audio data directly to file without encoding, preserving the maximum audio quality supported by the device. This results in large file sizes, particularly for longer recordings. Available only when using WAV or CAF file formats.
-
-```tsx
-audioRecorder.enableFileOutput({
- format: FileFormat.Caf,
- preset: FilePreset.Lossless,
-});
-```
-
-##### High Quality
-
-Uses high-fidelity audio parameters with efficient encoding to deliver near-lossless perceptual quality while producing smaller files than fully uncompressed recordings. Suitable for music and high-quality voice capture.
-
-```tsx
-audioRecorder.enableFileOutput({
- format: FileFormat.Flac,
- preset: FilePreset.High,
-});
-```
-
-##### Medium Quality
-
-Uses balanced audio parameters that provide good perceptual quality while keeping file sizes moderate. Intended for everyday recording scenarios such as voice notes, podcasts, and general in-app audio, where efficiency and compatibility outweigh maximum fidelity.
-
-```tsx
-audioRecorder.enableFileOutput({
- format: FileFormat.M4A,
- preset: FilePreset.Medium,
-});
-```
-
-##### Low Quality
-
-Uses reduced audio parameters to minimize file size and processing overhead. Designed for cases where speech intelligibility is sufficient and audio fidelity is not critical, such as quick voice notes, background recording, or diagnostic capture.
-
-```tsx
-audioRecorder.enableFileOutput({
- format: FileFormat.M4A,
- preset: FilePreset.Low,
-});
-```
-
-#### Preset customization
-
-In addition to the predefined presets, you may supply a custom `FilePresetType` to fine-tune how audio data is written and encoded. This allows you to optimize for specific use cases such as speech-only recording, a reduced storage footprint, or faster encoding.
-
-```tsx
-export interface FilePresetType {
- bitRate: number;
- sampleRate: number;
- bitDepth: BitDepth;
- iosQuality: IOSAudioQuality;
- flacCompressionLevel: FlacCompressionLevel;
-}
-```
-
-
-##### bitRate
-
-Defines the target bitrate for lossy encoders (for example AAC or M4A). Higher values generally improve perceptual quality at the cost of larger file sizes. This value may be ignored when using lossless formats.
-
-| Use case | Bitrate (bps) | Notes |
-| :- | - | :- |
-| Very low quality / telemetry | 32000 | Bare minimum for speech intelligibility |
-| Low quality voice notes | 48000 | Optimized for small files and fast encoding |
-| Standard speech / podcasts | 64000 – 96000 | Good balance of clarity and size |
-| Medium quality general audio | 128000 | Common default for consumer audio |
-| High quality music / voice | 160000 – 192000 | Near-transparent for most listeners |
-| Very high quality | 256000 – 320000 | Large files, minimal perceptual loss |
-
-##### sampleRate
-
-Specifies the sampling frequency used during recording. Higher sample rates capture a wider frequency range but increase processing and storage requirements.
-
-##### bitDepth
-
-Controls the PCM bit depth of the recorded audio. Higher bit depths increase dynamic range and precision, primarily affecting uncompressed or lossless output formats.
-
-##### iosQuality
-
-Maps the preset to the closest matching quality level provided by iOS native audio APIs, ensuring consistent behavior across Apple devices.
-
-```tsx
-enum IOSAudioQuality {
- Min,
- Low,
- Medium,
- High,
- Max,
-}
-```
-
-##### flacCompressionLevel
-
-Determines the compression level used when encoding FLAC files. Higher levels reduce file size at the cost of increased CPU usage, without affecting audio quality.
-
-```tsx
-enum FlacCompressionLevel {
- L0,
- L1,
- L2,
- L3,
- L4,
- L5,
- L6,
- L7,
- L8,
-}
-```
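-Putting these pieces together, a fully custom preset might look like the following sketch. The concrete numbers are illustrative only, and the plain-number `bitDepth` value is an assumption (the `BitDepth` enum is not shown on this page - check the library typings):
-
-```tsx
-import {
-  AudioRecorder,
-  FileFormat,
-  IOSAudioQuality,
-  FlacCompressionLevel,
-} from 'react-native-audio-api';
-
-const audioRecorder = new AudioRecorder();
-
-// Speech-oriented output: modest bitrate and sample rate keep files small.
-audioRecorder.enableFileOutput({
-  format: FileFormat.M4A,
-  preset: {
-    bitRate: 64000,
-    sampleRate: 16000,
-    bitDepth: 16, // assumption: BitDepth maps to plain bit counts
-    iosQuality: IOSAudioQuality.Medium,
-    flacCompressionLevel: FlacCompressionLevel.L5, // ignored for non-FLAC formats
-  },
-});
-```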
-
-## Remarks & known issues
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/other/audio-api-plugin
-# Title: audio-api-plugin
-
-# Audio API Expo plugin
-
-## What is Audio API Expo plugin
-
-The Audio API Expo plugin allows you to set certain permissions and
-background-audio-related settings in a developer-friendly way.
-
-Type definitions
-
-```typescript
-interface Options {
- iosMicrophonePermission?: string;
- iosBackgroundMode: boolean;
- androidPermissions: string[];
- androidForegroundService: boolean;
- androidFSTypes: string[];
-}
-```
-
-## How to use it?
-
-Add the `react-native-audio-api` Expo plugin to your `app.json` or `app.config.js`.
-
-app.json
-
-```javascript
-{
- "plugins": [
- [
- "react-native-audio-api",
- {
- "iosBackgroundMode": true,
- "iosMicrophonePermission": "This app requires access to the microphone to record audio.",
- "androidPermissions" : [
- "android.permission.MODIFY_AUDIO_SETTINGS",
- "android.permission.FOREGROUND_SERVICE",
- "android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK"
- ],
- "androidForegroundService": true,
- "androidFSTypes": [
- "mediaPlayback"
- ]
- }
- ]
- ]
-}
-```
-
-app.config.js
-
-```javascript
-export default {
- ...
- "plugins": [
- [
- "react-native-audio-api",
- {
- "iosBackgroundMode": true,
- "iosMicrophonePermission": "This app requires access to the microphone to record audio.",
- "androidPermissions" : [
- "android.permission.MODIFY_AUDIO_SETTINGS",
- "android.permission.FOREGROUND_SERVICE",
- "android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK"
- ],
- "androidForegroundService": true,
- "androidFSTypes": [
- "mediaPlayback"
- ]
- }
- ]
- ]
-};
-```
-
-## Options
-
-### iosBackgroundMode
-
-Defaults to `true`.
-
-Allows the app to play audio in the background on iOS.
-
-### iosMicrophonePermission
-
-Defaults to `undefined`.
-
-Allows you to specify a custom microphone permission message for iOS. If not specified, it will be omitted from the `Info.plist`.
-
-### androidPermissions
-
-Defaults to
-
-```
-[
- 'android.permission.FOREGROUND_SERVICE',
- 'android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK'
-]
-```
-
-Allows you to specify which Android app permissions to apply.
-
-##### Permissions:
-
-* `android.permission.POST_NOTIFICATIONS` - Required by Foreground Services on Android 13+ to post notifications.
-
-* `android.permission.FOREGROUND_SERVICE` - Allows an app to run a Foreground Service.
-
-* `android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK` - Allows an app to run a Foreground Service specifically for continuous audio or video playback.
-
-* `android.permission.FOREGROUND_SERVICE_MICROPHONE` - Allows an app to run a Foreground Service specifically for continuous microphone capture from the background.
-
-* `android.permission.MODIFY_AUDIO_SETTINGS` - Allows an app to modify global audio settings.
-
-* `android.permission.INTERNET` - Allows applications to open network sockets.
-
-* `android.permission.RECORD_AUDIO` - Allows an application to record audio.
-
-### androidForegroundService
-
-Defaults to `true`.
-
-Allows the app to use the Foreground Service options specified by the user;
-it permits the app to play audio in the background on Android.
-
-### androidFSTypes
-
-Allows you to specify the appropriate Foreground Service type.
-
-##### Types description
-
-* `mediaPlayback` - Continue audio or video playback from the background.
-
-* `microphone` - Continue microphone capture from the background, such as voice recorders or communication apps.
-
- Runtime prerequisites:
-
- * Request and be granted the RECORD\_AUDIO runtime permission.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/other/compatibility
-# Title: compatibility
-
-# React Native compatibility table
-
-### Supported React Native versions on [the New Architecture](https://reactnative.dev/docs/the-new-architecture/landing-page) (Fabric)
-
-| | 0.74 | 0.75 | 0.76 | 0.77 | 0.78 | 0.79 | 0.80 | 0.81 | 0.82 | 0.83 | 0.84 |
-| ----------------------------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
-
-### Supported React Native versions on the Old Architecture (Paper)
-
-| | 0.74 | 0.75 | 0.76 | 0.77 | 0.78 | 0.79 | 0.80 | 0.81 |
-| ----------------------------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/other/ffmpeg-info
-# Title: ffmpeg-info
-
-# FFmpeg additional information
-
-We use [`ffmpeg`](https://github.com/FFmpeg/FFmpeg) for a few components:
-
-* [`StreamerNode`](/docs/sources/streamer-node)
-* decoding `aac`, `mp4`, `m4a` files
-
-## Disabling FFmpeg
-
-FFmpeg usage is enabled by default. However, if you would like to avoid using it (e.g. because of name clashes with other FFmpeg
-binaries in your project), you can easily disable it by adding a single flag in the corresponding file.
-
-> **Info**
->
-> FFmpeg is enabled by default
-
-Add an entry in the [expo plugin](/docs/fundamentals/getting-started#step-2-add-audio-api-expo-plugin-optional) configuration.
-
-```
-"disableFFmpeg": true
-```
-
-Podfile
-
-```
-ENV['DISABLE_AUDIOAPI_FFMPEG'] = '1'
-```
-
-gradle.properties
-
-```
-disableAudioapiFFmpeg=true
-```
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/other/non-expo-permissions
-# Title: non-expo-permissions
-
-# Non-expo app permissions
-
-If your app needs to access non-trivial resources, such as the microphone, or needs to run in the background, there have to be explicit entries about it in special places.
-
-## iOS
-
-On iOS the file that handles special permissions is named [`Info.plist`](https://developer.apple.com/documentation/bundleresources/information-property-list?language=objc).
-This file is placed in `ios/YourAppName` directory.
-For example, to tell the system that our app wants to use the microphone, we would need to add this entry to the file:
-
-```
-<key>NSMicrophoneUsageDescription</key>
-<string>App wants to access your microphone in order to use voice memo recording</string>
-```
-
-## Android
-
-On Android the file that handles special permissions is named [`AndroidManifest.xml`](https://developer.android.com/guide/topics/manifest/manifest-intro).
-This file is placed in `android/app/src/main` directory.
-For example, to tell the system that our app wants to use the microphone, we would need to add this entry to the file:
-
-```
-<uses-permission android:name="android.permission.RECORD_AUDIO" />
-```
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/other/running_with_mac_catalyst
-# Title: running_with_mac_catalyst
-
-# Running with Mac Catalyst
-
-Mac Catalyst allows you to run your iOS apps natively on macOS. This guide covers the necessary changes to your Podfile to enable Mac Catalyst support for your React Native app with `react-native-audio-api`.
-
-## Podfile Configuration
-
-To build your app for Mac Catalyst, you need to make several changes to your `ios/Podfile`:
-
-### 1. Enable building React Native from source
-
-Add this environment variable at the top of your Podfile:
-
-```ruby
-ENV['RCT_BUILD_FROM_SOURCE'] = '1'
-```
-
-### 2. Enable static frameworks
-
-Add `use_frameworks!` with static linkage inside your target block:
-
-```ruby
-target 'YourApp' do
- config = use_native_modules!
- use_frameworks! :linkage => :static
-
- # ... rest of your configuration
-end
-```
-
-### 3. Update post\_install with Mac Catalyst support
-
-Replace your existing `post_install` block with one that enables Mac Catalyst:
-
-```ruby
-post_install do |installer|
- react_native_post_install(
- installer,
- config[:reactNativePath],
- :mac_catalyst_enabled => true,
- )
-end
-```
-
-### 4. Hermes Framework Fix (RN 0.83.x only)
-
-> **Note**
->
-> This step is only required for React Native 0.83.x. There's a [known issue](https://github.com/facebook/react-native/issues/55540) where the Hermes framework bundle structure is ambiguous on Mac Catalyst. If you're on a different version, you can skip this step.
-
-If you're using React Native 0.83.x, extend your `post_install` block with the following fix that restructures the Hermes framework to follow the correct macOS bundle layout:
-
-```ruby
-post_install do |installer|
-  react_native_post_install(
-    installer,
-    config[:reactNativePath],
-    :mac_catalyst_enabled => true,
-  )
-
-  # Hermes Mac Catalyst framework layout fix (RN 0.83.x)
-  require 'fileutils'
-
-  hermes_fw = File.join(__dir__,
-    'Pods/hermes-engine/destroot/Library/Frameworks/universal/hermesvm.xcframework',
-    'ios-arm64_x86_64-maccatalyst/hermesvm.framework'
-  )
-
-  if File.directory?(hermes_fw)
-    Dir.chdir(hermes_fw) do
-      FileUtils.mkdir_p('Versions/A')
-      File.symlink('A', 'Versions/Current') unless File.exist?('Versions/Current')
-
-      if File.exist?('hermesvm') && !File.symlink?('hermesvm')
-        FileUtils.mkdir_p('Versions/Current')
-        FileUtils.mv('hermesvm', 'Versions/Current/hermesvm')
-        File.symlink('Versions/Current/hermesvm', 'hermesvm')
-      end
-
-      FileUtils.mkdir_p('Versions/Current/Resources')
-      if File.exist?('Resources') && !File.symlink?('Resources')
-        FileUtils.rm_rf('Resources')
-      end
-      File.symlink('Versions/Current/Resources', 'Resources') unless File.exist?('Resources')
-    end
-  end
-  # ⬆️ End of Hermes fix ⬆️
-end
-```
-
-## Complete Example
-
-Here's a complete Podfile configured for Mac Catalyst (includes Hermes fix for RN 0.83.x — remove the Hermes section if you're on a different version):
-
-```ruby
-ENV['RCT_NEW_ARCH_ENABLED'] = '1'
-ENV['RCT_BUILD_FROM_SOURCE'] = '1'
-
-require Pod::Executable.execute_command('node', ['-p',
-  'require.resolve(
-    "react-native/scripts/react_native_pods.rb",
-    {paths: [process.argv[1]]},
-  )', __dir__]).strip
-
-platform :ios, min_ios_version_supported
-prepare_react_native_project!
-
-target 'YourApp' do
-  config = use_native_modules!
-  use_frameworks! :linkage => :static
-
-  use_react_native!(
-    :path => config[:reactNativePath],
-    :hermes_enabled => true,
-    :app_path => "#{Pod::Config.instance.installation_root}/..",
-    :privacy_file_aggregation_enabled => true
-  )
-
-  post_install do |installer|
-    react_native_post_install(
-      installer,
-      config[:reactNativePath],
-      :mac_catalyst_enabled => true,
-    )
-
-    # ⬇️ Hermes fix for RN 0.83.x only - remove if using different version ⬇️
-    require 'fileutils'
-
-    hermes_fw = File.join(__dir__,
-      'Pods/hermes-engine/destroot/Library/Frameworks/universal/hermesvm.xcframework',
-      'ios-arm64_x86_64-maccatalyst/hermesvm.framework'
-    )
-
-    if File.directory?(hermes_fw)
-      Dir.chdir(hermes_fw) do
-        FileUtils.mkdir_p('Versions/A')
-        File.symlink('A', 'Versions/Current') unless File.exist?('Versions/Current')
-
-        if File.exist?('hermesvm') && !File.symlink?('hermesvm')
-          FileUtils.mkdir_p('Versions/Current')
-          FileUtils.mv('hermesvm', 'Versions/Current/hermesvm')
-          File.symlink('Versions/Current/hermesvm', 'hermesvm')
-        end
-
-        FileUtils.mkdir_p('Versions/Current/Resources')
-        if File.exist?('Resources') && !File.symlink?('Resources')
-          FileUtils.rm_rf('Resources')
-        end
-        File.symlink('Versions/Current/Resources', 'Resources') unless File.exist?('Resources')
-      end
-    end
-    # ⬆️ End of Hermes fix ⬆️
-  end
-end
-```
-
-## Building for Mac Catalyst
-
-After updating your Podfile:
-
-1. Run `pod install` to regenerate the Pods project
-2. Open your `.xcworkspace` in Xcode
-3. Select your target and go to **General** → **Deployment Info**
-4. Check **Mac (Designed for iPad)** or **Mac Catalyst** depending on your Xcode version
-5. Build and run targeting "My Mac"
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/other/testing
-# Title: testing
-
-# Testing
-
-React Native Audio API provides a comprehensive mock implementation to help you test your audio-related code without requiring actual audio hardware or platform-specific implementations.
-
-## Mock Implementation
-
-The mock implementation provides the same API surface as the real library but with no-op or simplified implementations that are perfect for unit testing.
-
-### Importing Mocks
-
-```typescript
-import * as MockAudioAPI from 'react-native-audio-api/mock';
-
-// Or import specific components
-import { AudioContext, AudioRecorder } from 'react-native-audio-api/mock';
-```
-
-Alternatively, map the real package to the mock in your Jest setup so application code can keep importing from `react-native-audio-api`:
-
-```typescript
-// In your test setup file
-jest.mock('react-native-audio-api', () =>
- require('react-native-audio-api/mock')
-);
-
-// Then in your tests
-import { AudioContext, AudioRecorder } from 'react-native-audio-api';
-```
-
-## Basic Usage
-
-### Audio Context Testing
-
-```typescript
-import { AudioContext } from 'react-native-audio-api/mock';
-
-describe('Audio Graph Tests', () => {
- it('should create and connect audio nodes', () => {
- const context = new AudioContext();
-
- // Create nodes
- const oscillator = context.createOscillator();
- const gainNode = context.createGain();
-
- // Configure properties
- oscillator.frequency.value = 440; // A4 note
- gainNode.gain.value = 0.5; // 50% volume
-
- // Connect the audio graph
- oscillator.connect(gainNode);
- gainNode.connect(context.destination);
-
- // Test the configuration
- expect(oscillator.frequency.value).toBe(440);
- expect(gainNode.gain.value).toBe(0.5);
- });
-
- it('should support context state management', async () => {
- const context = new AudioContext();
- expect(context.state).toBe('running');
-
- await context.suspend();
- expect(context.state).toBe('suspended');
-
- await context.resume();
- expect(context.state).toBe('running');
- });
-});
-```
-
-### Audio Recording Testing
-
-```typescript
-import { AudioContext, AudioRecorder, FileFormat, FileDirectory } from 'react-native-audio-api/mock';
-
-describe('Audio Recording Tests', () => {
- it('should configure and control recording', () => {
- const context = new AudioContext();
- const recorder = new AudioRecorder();
-
- // Configure file output
- const result = recorder.enableFileOutput({
- format: FileFormat.M4A,
- channelCount: 2,
- directory: FileDirectory.Document,
- });
-
- expect(result.status).toBe('success');
-
- // Set up recording chain
- const oscillator = context.createOscillator();
- const recorderAdapter = context.createRecorderAdapter();
-
- oscillator.connect(recorderAdapter);
- recorder.connect(recorderAdapter);
-
- // Test recording workflow
- const startResult = recorder.start();
- expect(startResult.status).toBe('success');
- expect(recorder.isRecording()).toBe(true);
-
- const stopResult = recorder.stop();
- expect(stopResult.status).toBe('success');
- expect(recorder.isRecording()).toBe(false);
- });
-});
-```
-
-### Offline Audio Processing
-
-```typescript
-import { OfflineAudioContext } from 'react-native-audio-api/mock';
-
-describe('Offline Processing Tests', () => {
- it('should render offline audio', async () => {
- const offlineContext = new OfflineAudioContext({
- numberOfChannels: 2,
- length: 44100, // 1 second at 44.1kHz
- sampleRate: 44100,
- });
-
- // Create a simple tone
- const oscillator = offlineContext.createOscillator();
- oscillator.frequency.value = 440;
- oscillator.connect(offlineContext.destination);
-
- // Render the audio
- const renderedBuffer = await offlineContext.startRendering();
-
- expect(renderedBuffer.numberOfChannels).toBe(2);
- expect(renderedBuffer.length).toBe(44100);
- expect(renderedBuffer.sampleRate).toBe(44100);
- });
-});
-```
-
-## Advanced Testing Scenarios
-
-### Custom Worklet Testing
-
-```typescript
-import { AudioContext, WorkletProcessingNode } from 'react-native-audio-api/mock';
-
-describe('Worklet Tests', () => {
- it('should create custom audio processing', () => {
- const context = new AudioContext();
-
- const processingCallback = jest.fn((inputData, outputData, framesToProcess) => {
- // Mock audio processing logic
- for (let channel = 0; channel < outputData.length; channel++) {
- for (let i = 0; i < framesToProcess; i++) {
- outputData[channel][i] = inputData[channel][i] * 0.5; // Simple gain
- }
- }
- });
-
- const workletNode = new WorkletProcessingNode(
- context,
- 'AudioRuntime',
- processingCallback
- );
-
- expect(workletNode.context).toBe(context);
- });
-});
-```
-
-### Audio Streaming Testing
-
-```typescript
-import { AudioContext } from 'react-native-audio-api/mock';
-
-describe('Streaming Tests', () => {
- it('should handle audio streaming', () => {
- const context = new AudioContext();
-
- const streamer = context.createStreamer({
- streamPath: 'https://example.com/audio-stream',
- });
-
- expect(streamer.streamPath).toBe('https://example.com/audio-stream');
-
- // Test streaming controls
- streamer.start();
- streamer.pause();
- streamer.resume();
- streamer.stop();
- });
-});
-```
-
-### Error Handling Testing
-
-```typescript
-import {
- AudioRecorder,
- NotSupportedError,
- InvalidStateError
-} from 'react-native-audio-api/mock';
-
-describe('Error Handling Tests', () => {
- it('should handle various error conditions', () => {
- // Test error creation
- expect(() => {
- throw new NotSupportedError('Feature not supported');
- }).toThrow('Feature not supported');
-
- // Test recorder connection errors
- const recorder = new AudioRecorder();
- const context = new AudioContext();
- const adapter = context.createRecorderAdapter();
-
- // First connection should work
- recorder.connect(adapter);
-
- // Second connection should throw
- expect(() => recorder.connect(adapter)).toThrow();
- });
-});
-```
-
-## Mock Configuration
-
-### System Volume Testing
-
-```typescript
-import { useSystemVolume, setMockSystemVolume, AudioManager } from 'react-native-audio-api/mock';
-
-describe('System Integration Tests', () => {
- it('should mock system audio management', () => {
- // Test system sample rate
- const preferredRate = AudioManager.getDevicePreferredSampleRate();
- expect(preferredRate).toBe(44100);
-
- // Test volume management
- setMockSystemVolume(0.7);
- const currentVolume = useSystemVolume();
- expect(currentVolume).toBe(0.7);
-
- // Test event listeners
- const volumeCallback = jest.fn();
- const listener = AudioManager.addSystemEventListener(
- 'volumeChange',
- volumeCallback
- );
-
- expect(listener.remove).toBeDefined();
- listener.remove();
- });
-});
-```
-
-### Audio Callback Testing
-
-```typescript
-import { AudioRecorder } from 'react-native-audio-api/mock';
-
-describe('Callback Tests', () => {
- it('should handle audio data callbacks', () => {
- const recorder = new AudioRecorder();
- const audioDataCallback = jest.fn();
-
- const result = recorder.onAudioReady(
- {
- sampleRate: 44100,
- bufferLength: 1024,
- channelCount: 2,
- },
- audioDataCallback
- );
-
- expect(result.status).toBe('success');
-
- // Test callback cleanup
- recorder.clearOnAudioReady();
- expect(() => recorder.clearOnAudioReady()).not.toThrow();
- });
-});
-```
-
-## Type Safety
-
-The mock implementation provides full TypeScript support with the same types as the real library:
-
-```typescript
-import type { AudioContext, AudioParam, GainNode } from 'react-native-audio-api/mock';
-
-// All types are available and identical to the real implementation
-function processAudioNode(node: GainNode): void {
- node.gain.value = 0.5;
-}
-```
-
-## Testing Best Practices
-
-1. **Isolate Audio Logic**: Test audio processing logic separately from UI components
-2. **Mock External Dependencies**: Use mocks for file system, network, and platform-specific operations
-3. **Test Error Scenarios**: Verify your code handles various error conditions gracefully
-4. **Validate Audio Graph Structure**: Ensure nodes are connected correctly
-5. **Test Async Operations**: Use proper async/await patterns for operations like rendering
-
-## Example Test Suite
-
-```typescript
-import {
- AudioContext,
- AudioRecorder,
- FileFormat,
- decodeAudioData
-} from 'react-native-audio-api/mock';
-
-describe('Audio Application Tests', () => {
- let context: AudioContext;
-
- beforeEach(() => {
- context = new AudioContext();
- });
-
- afterEach(() => {
- // Clean up if needed
- context.close();
- });
-
- describe('Audio Graph', () => {
- it('should create complex audio processing chain', () => {
- const oscillator = context.createOscillator();
- const filter = context.createBiquadFilter();
- const delay = context.createDelay();
- const gainNode = context.createGain();
-
- // Configure effects chain
- filter.type = 'lowpass';
- filter.frequency.value = 2000;
- delay.delayTime.value = 0.3;
- gainNode.gain.value = 0.8;
-
- // Connect the chain
- oscillator.connect(filter);
- filter.connect(delay);
- delay.connect(gainNode);
- gainNode.connect(context.destination);
-
- // Verify configuration
- expect(filter.type).toBe('lowpass');
- expect(delay.delayTime.value).toBe(0.3);
- expect(gainNode.gain.value).toBe(0.8);
- });
- });
-
- describe('File Operations', () => {
- it('should handle audio file processing', async () => {
- const mockAudioData = new ArrayBuffer(1024);
-
- // Test audio decoding
- const decodedBuffer = await decodeAudioData(mockAudioData);
- expect(decodedBuffer.numberOfChannels).toBe(2);
- expect(decodedBuffer.sampleRate).toBe(44100);
- });
- });
-});
-```
-
-The mock implementation provides a complete testing environment that allows you to thoroughly test your audio applications without requiring real audio hardware or complex setup.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/other/web-audio-api-coverage
-# Title: web-audio-api-coverage
-
-# [Web Audio API coverage](https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API)
-
-### Coverage table
-
-| Interface | Status | Remarks |
-| :-------: | :----: | :------ |
-| AnalyserNode | ✅ |
-| AudioBuffer | ✅ |
-| AudioBufferSourceNode | ✅ |
-| AudioDestinationNode | ✅ |
-| AudioNode | ✅ |
-| AudioParam | ✅ |
-| AudioScheduledSourceNode | ✅ |
-| BiquadFilterNode | ✅ |
-| ConstantSourceNode | ✅ |
-| ConvolverNode | ✅ |
-| DelayNode | ✅ |
-| GainNode | ✅ |
-| IIRFilterNode | ✅ |
-| OfflineAudioContext | ✅ |
-| OscillatorNode | ✅ |
-| PeriodicWave | ✅ |
-| StereoPannerNode | ✅ |
-| WaveShaperNode | ✅ |
-| AudioContext | 🚧 | Available props and methods: `close`, `suspend`, `resume` |
-| BaseAudioContext | 🚧 | Available props and methods: `currentTime`, `destination`, `sampleRate`, `state`, `decodeAudioData`, all create methods for available or partially implemented nodes |
-| AudioListener | ❌ |
-| AudioSinkInfo | ❌ |
-| AudioWorklet | ❌ |
-| AudioWorkletGlobalScope | ❌ |
-| AudioWorkletNode | ❌ |
-| AudioWorkletProcessor | ❌ |
-| ChannelMergerNode | ❌ |
-| ChannelSplitterNode | ❌ |
-| DynamicsCompressorNode | ❌ |
-| MediaElementAudioSourceNode | ❌ |
-| MediaStreamAudioDestinationNode | ❌ |
-| MediaStreamAudioSourceNode | ❌ |
-| PannerNode | ❌ |
-
-### Description
-
-✅ - Completed
-
-🚧 - Partially implemented
-
-❌ - Not yet available
-
-> **Info**
->
-> If you have a use case for any of the not-yet-available interfaces,
-> contact us or [create an issue](https://github.com/software-mansion/react-native-audio-api).
-> We will do our best to ship it as soon as possible!
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/react/select-input
-# Title: select-input
-
-# useAudioInput
-
-React hook for managing audio input device selection and monitoring available audio input devices. The current input becomes available after the first activation of the audio session. Not all connected devices may be listed as available inputs; some might be filtered out as incompatible with the current session configuration.
-
-The `useAudioInput` hook provides an interface for:
-
-* Retrieving all available audio input devices
-* Getting the currently active input device
-* Switching between different input devices
-
- **Platform support:** Input device selection is currently only supported on iOS. On Android, `useAudioInput` is implemented as a no-op: the hook will not list or switch input devices, and any selection calls will effectively be ignored.
-
-## Usage
-
-```tsx
-import React from 'react';
-import { View, Text, Button } from 'react-native';
-import { useAudioInput } from 'react-native-audio-api';
-
-function AudioInputSelector() {
-  const { availableInputs, currentInput, onSelectInput } = useAudioInput();
-
-  return (
-    <View>
-      <Text>Current Input: {currentInput?.name || 'None'}</Text>
-
-      {availableInputs.map((input) => (
-        <Button
-          key={input.id}
-          title={input.name}
-          onPress={() => onSelectInput(input)}
-        />
-      ))}
-    </View>
-  );
-}
-```
-
-## Return Value
-
-The hook returns an object with the following properties:
-
-### `availableInputs: AudioDeviceInfo[]`
-
-An array of all available audio input devices. Each device contains:
-
-* `id: string` - Unique device identifier
-* `name: string` - Human-readable device name
-* `category: string` - Device category (e.g., "Built-In Microphone", "Bluetooth")
-
-### `currentInput: AudioDeviceInfo | null`
-
-The currently active audio input device, or `null` if no device is selected.
-
-### `onSelectInput: (device: AudioDeviceInfo) => Promise<void>`
-
-Function to programmatically select an audio input device. Takes an `AudioDeviceInfo` object and attempts to set it as the active input device.
-
-## Related
-
-* [AudioManager](/docs/system/audio-manager) - For managing audio sessions and permissions
-* [AudioRecorder](/docs/inputs/audio-recorder) - For capturing audio from the selected input device
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/sources/audio-buffer-base-source-node
-# Title: audio-buffer-base-source-node
-
-# AudioBufferBaseSourceNode
-
-The `AudioBufferBaseSourceNode` interface is an [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node) which aggregates the behavior of nodes that require an [`AudioBuffer`](/docs/sources/audio-buffer).
-
-Child classes:
-
-* [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node)
-* [`AudioBufferQueueSourceNode`](/docs/sources/audio-buffer-queue-source-node)
-
-## Properties
-
-It inherits all properties from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :-------- |
-| `detune` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing detuning of oscillation in cents. |
-| `playbackRate` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` defining speed factor at which the audio will be played. |
-
-## Methods
-
-It inherits all methods from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#methods).
-
-### `getLatency`
-
-Returns the playback latency introduced by the pitch correction algorithm, in seconds.
-When scheduling precise playback times, start input samples this many seconds earlier to compensate for processing delay.
-Typically around `0.06s` when pitch correction is enabled, and `0` otherwise.
-
-#### Returns `number`.
-
-Example usage
-
-```tsx
-const source = audioContext.createBufferSource({ pitchCorrection: true });
-source.buffer = buffer;
-source.connect(audioContext.destination);
-
-const latency = source.getLatency();
-
-// Schedule playback slightly earlier to compensate for latency
-const startTime = audioContext.currentTime + 1.0; // play in 1 second
-source.start(startTime - latency);
-```
-
-## Events
-
-### `onPositionChanged`
-
-Allows you to set (or remove) a callback that is fired periodically while the audio is being processed.
-The frequency is defined by `onPositionChangedInterval`. By tracking the playback position in this callback you can implement pause functionality.
-You can remove the callback by passing `null`.
-
-### `onPositionChangedInterval`
-
-Allows you to set the interval, in milliseconds, for the `onPositionChanged` event. An interval of `x` milliseconds results in a callback frequency of roughly `1000/x` Hz.
-
-```ts
-import { AudioContext } from 'react-native-audio-api';
-
-function App() {
-  const ctx = new AudioContext();
-  const sourceNode = ctx.createBufferSource();
-  sourceNode.buffer = null; // set your buffer here
-  let offset = 0;
-
-  // setting the callback
-  sourceNode.onPositionChanged = (event) => {
-    offset = event.value;
-  };
-
-  sourceNode.onPositionChangedInterval = 100; // setting frequency to ~10 Hz
-
-  sourceNode.start();
-}
-```
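-
-The interval-to-frequency relationship described above can be expressed as plain arithmetic (an illustrative helper, not part of the library API):
-
-```typescript
-// Callback frequency (Hz) resulting from a given onPositionChangedInterval (ms).
-// An interval of 100 ms yields roughly 10 callbacks per second.
-function intervalToFrequency(intervalMs: number): number {
-  return 1000 / intervalMs;
-}
-```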
-
-## Remarks
-
-#### `detune`
-
-* Default value is 0.0.
-* Nominal range is -∞ to ∞.
-* For example, a value of 100 detunes the source up by one semitone, whereas -1200 detunes it down by one octave.
-* When created with `createBufferSource({ pitchCorrection: true })` it is clamped to the range -1200 to 1200.
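-
-The semitone and octave examples above follow the standard cents-to-rate relationship, sketched here for illustration (this helper is not part of the library):
-
-```typescript
-// Detune in cents maps to a playback-rate multiplier of 2^(cents / 1200).
-// 1200 cents doubles the rate (one octave up); -1200 halves it.
-function detuneToRate(cents: number): number {
-  return Math.pow(2, cents / 1200);
-}
-```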
-
-#### `playbackRate`
-
-* Default value is 1.0.
-* Nominal range is -∞ to ∞.
-* For example, a value of 1.0 plays the audio at normal speed, whereas a value of 2.0 plays it twice as fast.
-* When created with `createBufferSource({ pitchCorrection: true })` it is clamped to the range 0 to 3 and uses the pitch correction algorithm.
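-
-Since `playbackRate` rescales playback time linearly, the effective playback duration can be sketched as follows (an illustrative helper, not library code; assumes a non-zero rate):
-
-```typescript
-// How long a buffer of the given duration takes to play at a given rate.
-// A 10-second buffer at rate 2.0 plays in 5 seconds.
-function effectivePlaybackDuration(bufferSeconds: number, playbackRate: number): number {
-  return bufferSeconds / Math.abs(playbackRate);
-}
-```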
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/sources/audio-buffer-queue-source-node
-# Title: audio-buffer-queue-source-node
-
-
-import { Optional, Experimental, Overridden, MobileOnly } from '@site/src/components/Badges';
-
-# AudioBufferQueueSourceNode
-
-The `AudioBufferQueueSourceNode` is an [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node) which represents a player that plays through a queue of many short buffers.
-
-## Constructor
-
-[`BaseAudioContext.createBufferQueueSource(options: AudioBufferBaseSourceNodeOptions)`](/docs/core/base-audio-context#createbufferqueuesource)
-
-```jsx
-interface AudioBufferBaseSourceNodeOptions {
-  pitchCorrection: boolean; // specifies whether the pitch correction algorithm should be available
-}
-```
-
-:::caution
-The pitch correction algorithm introduces processing latency.
-As a result, when scheduling precise playback times, you should start input samples slightly ahead of the intended playback time.
-For more details, see [getLatency()](/docs/sources/audio-buffer-base-source-node#getlatency).
-:::
-
-## Example
-
-```tsx
-import React, { useRef } from 'react';
-import { AudioContext } from 'react-native-audio-api';
-
-function App() {
-  const audioContextRef = useRef(null);
-  if (!audioContextRef.current) {
-    audioContextRef.current = new AudioContext();
-  }
-  const audioBufferQueue = audioContextRef.current.createBufferQueueSource();
-  const buffer1 = ...; // Load your audio buffer here
-  const buffer2 = ...; // Load another audio buffer if needed
-  audioBufferQueue.enqueueBuffer(buffer1);
-  audioBufferQueue.enqueueBuffer(buffer2);
-  audioBufferQueue.connect(audioContextRef.current.destination);
-  audioBufferQueue.start(audioContextRef.current.currentTime);
-}
-```
-
-## Properties
-
-`AudioBufferQueueSourceNode` does not define any additional properties.
-It inherits all properties from [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node#properties).
-
-## Methods
-
-It inherits all methods from [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node#methods).
-
-### `enqueueBuffer`
-
-Adds another buffer to the queue. Returns a `bufferId` that can be used to identify the buffer in the [`onBufferEnded`](audio-buffer-queue-source-node#onbufferended) event.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `buffer` | [`AudioBuffer`](/docs/sources/audio-buffer) | Buffer with next data. |
-
-#### Returns `string`.
-
-### `dequeueBuffer`
-
-Removes a buffer from the queue. Note that [`onBufferEnded`](audio-buffer-queue-source-node#onbufferended) event will not be fired for buffers that were removed.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `bufferId` | `string` | ID of the buffer to remove from the queue. It should be a valid id provided by the `enqueueBuffer` method. |
-
-#### Returns `undefined`.
-
-### `clearBuffers`
-
-Removes all buffers from the queue. Note that [`onBufferEnded`](audio-buffer-queue-source-node#onbufferended) event will not be fired for buffers that were removed.
-
-#### Returns `undefined`.
-
-### `start` {#start}
-
-Schedules the `AudioBufferQueueSourceNode` to start playback of enqueued [`AudioBuffers`](/docs/sources/audio-buffer), or starts to play immediately.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `when` | `number` | The time, in seconds, at which playback is scheduled to start. If `when` is less than [`AudioContext.currentTime`](/docs/core/base-audio-context#properties) or set to 0, the node starts playing immediately. Default: `0`. |
-| `offset` | `number` | The position, in seconds, within the first enqueued audio buffer where playback begins. The default value is `0`, which starts playback from the beginning of the first enqueued buffer. If the offset exceeds the buffer’s [`duration`](/docs/sources/audio-buffer#properties), it’s automatically clamped to the valid range. |
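-
-The automatic clamping of `offset` described above can be sketched as a pure function (illustrative only, not the library's implementation; negative handling here is an assumption):
-
-```typescript
-// Clamp a requested start offset (seconds) to the valid range of the
-// first enqueued buffer: [0, bufferDuration].
-function clampStartOffset(offset: number, bufferDuration: number): number {
-  return Math.min(Math.max(offset, 0), bufferDuration);
-}
-```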
-
-
-### `pause`
-
-Stops the audio immediately. Unlike [`stop()`](/docs/sources/audio-scheduled-source-node#stop), which fully stops playback and clears the queued buffers,
-`pause()` halts the audio while keeping the current playback position, allowing you to resume from the same point later.
-
-#### Returns `undefined`.
-
-## Events
-
-### `onBufferEnded`
-
-Sets (or removes) a callback that will be fired when a specific buffer has ended, with a payload of type [`OnBufferEndEventType`](audio-buffer-queue-source-node#onbufferendeventtype).
-
-You can remove callback by passing `null`.
-
-```ts
-audioBufferQueueSourceNode.onBufferEnded = (event) => { // setting the callback
-  console.log(`buffer with id ${event.bufferId} ended`);
-
-  if (event.isLastBufferInQueue) {
-    console.log('That was the last buffer in the queue');
-  }
-};
-```
-
-## Remarks
-
-### `OnBufferEndEventType`
-
-
-Type definitions
-```typescript
-interface OnBufferEndEventType {
-  bufferId: string; // the ID of the buffer that has ended
-  isLastBufferInQueue: boolean; // a boolean indicating whether it was the last buffer in the queue
-}
-```
-
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/sources/audio-buffer-source-node
-# Title: audio-buffer-source-node
-
-
-import AudioNodePropsTable from "@site/src/components/AudioNodePropsTable"
-import { Optional, Overridden } from '@site/src/components/Badges';
-import AudioApiExample from '@site/src/components/AudioApiExample'
-import InteractivePlayground from '@site/src/components/InteractivePlayground';
-import { useAudioBufferSourcePlayground } from '@site/src/components/InteractivePlayground/AudioBufferSourceExample/useAudioBufferSourcePlayground';
-import { useGainAdsrPlayground } from '@site/src/components/InteractivePlayground/GainAdsrExample/useGainAdsrPlayground';
-
-
-# AudioBufferSourceNode
-
-The `AudioBufferSourceNode` is an [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node) which represents an audio source with in-memory audio data stored in an
-[`AudioBuffer`](/docs/sources/audio-buffer). You can use it for audio playback, including standard pause and resume functionalities.
-
-An `AudioBufferSourceNode` can be started only once, so if you want to play the same sound again you have to create a new one.
-However, this node is very inexpensive to create, and crucially, you can reuse the same [`AudioBuffer`](/docs/sources/audio-buffer).
-
-
-AudioBufferSourceNode interactive playground
-
-
-
-
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-
-
-## Constructor
-
-[`BaseAudioContext.createBufferSource(options: AudioBufferBaseSourceNodeOptions)`](/docs/core/base-audio-context#createbuffersource)
-
-```jsx
-interface AudioBufferBaseSourceNodeOptions {
-  pitchCorrection: boolean; // specifies whether the pitch correction algorithm should be available
-}
-```
-
-:::caution
-The pitch correction algorithm introduces processing latency.
-As a result, when scheduling precise playback times, you should start input samples slightly ahead of the intended playback time.
-For more details, see [getLatency()](/docs/sources/audio-buffer-base-source-node#getlatency).
-
-If you plan to play multiple buffers one after another, consider using [AudioBufferQueueSourceNode](/docs/sources/audio-buffer-queue-source-node).
-:::
-
-## Example
-
-```tsx
-import React, { useRef } from 'react';
-import { AudioContext } from 'react-native-audio-api';
-
-function App() {
-  const audioContextRef = useRef(null);
-  if (!audioContextRef.current) {
-    audioContextRef.current = new AudioContext();
-  }
-  const audioBufferSource = audioContextRef.current.createBufferSource();
-  const buffer = ...; // Load your audio buffer here
-  audioBufferSource.buffer = buffer;
-  audioBufferSource.connect(audioContextRef.current.destination);
-  audioBufferSource.start(audioContextRef.current.currentTime);
-}
-```
-
-## Properties
-
-It inherits all properties from [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :-------- |
-| `buffer` | [`AudioBuffer`](/docs/sources/audio-buffer) | Associated `AudioBuffer`. |
-| `loop` | `boolean` | Boolean indicating if the audio data must be replayed when the end of the associated `AudioBuffer` is reached. |
-| `loopSkip` | `boolean` | Boolean indicating if, upon setting `loopStart`, playback should skip immediately to the loop start. |
-| `loopStart` | `number` | Float value indicating the time, in seconds, at which playback of the audio must begin, if loop is true. |
-| `loopEnd` | `number` | Float value indicating the time, in seconds, at which playback of the audio must end and loop back to `loopStart`, if loop is true. |
-
-## Methods
-
-It inherits all methods from [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node#methods).
-
-### `start` {#start}
-
-Schedules the `AudioBufferSourceNode` to start playback of audio data contained in the associated [`AudioBuffer`](/docs/sources/audio-buffer), or starts to play immediately.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `when` | `number` | The time, in seconds, at which playback is scheduled to start. If `when` is less than [`AudioContext.currentTime`](/docs/core/base-audio-context#properties) or set to 0, the node starts playing immediately. Default: `0`. |
-| `offset` | `number` | The position, in seconds, within the audio buffer where playback begins. The default value is `0`, which starts playback from the beginning of the buffer. If the offset exceeds the buffer’s [`duration`](/docs/sources/audio-buffer#properties) (or the defined [`loopEnd`](/docs/sources/audio-buffer-source-node#properties) value), it’s automatically clamped to the valid range. Offsets are calculated using the buffer’s natural sample rate rather than the current playback rate — so even if the sound is played at double speed, halfway through a 10-second buffer is still 5 seconds. |
-| `duration` | `number` | The playback duration, in seconds. If not provided, playback continues until the sound ends naturally or is manually stopped with [`stop() method`](/docs/sources/audio-scheduled-source-node#stop). Equivalent to calling `start(when, offset)` followed by `stop(when + duration)`. |
-
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `when` is negative number. |
-| `RangeError` | `offset` is negative number. |
-| `RangeError` | `duration` is negative number. |
-| `InvalidStateError` | If node has already been started once. |
-
-#### Returns `undefined`.
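-
-The note in the `offset` row above, that offsets are calculated in the buffer's own timeline regardless of `playbackRate`, can be illustrated with a small conversion helper (hypothetical, for illustration only):
-
-```typescript
-// Convert an offset in seconds to a frame index in the buffer's natural timeline.
-// Halfway through a 10 s, 44100 Hz buffer is frame 220500, regardless of playback speed.
-function offsetToFrame(offsetSeconds: number, sampleRate: number): number {
-  return Math.round(offsetSeconds * sampleRate);
-}
-```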
-
-
-## Events
-
-### `onLoopEnded`
-
-Sets (or removes) a callback that will be fired when the buffer source node reaches the end of the loop and loops back to `loopStart`.
-You can remove the callback either by passing `null` or by calling `remove` on the returned subscription.
-
-```ts
-const subscription = audioBufferSourceNode.onLoopEnded = () => { // setting the callback
-  console.log('loop ended');
-};
-
-subscription.remove(); // removal of the subscription
-```
-
-## Remarks
-
-#### `buffer`
-- If it is null, the node outputs a single channel of silence (all samples equal to 0).
-
-#### `loop`
-- Default value is false.
-
-#### `loopStart`
-- Default value is 0.
-
-#### `loopEnd`
-- Default value is `buffer.duration`.
-
-#### `playbackRate`
-- Default value is 1.0.
-- Nominal range is -∞ to ∞.
-- For example, a value of 1.0 plays the audio at normal speed, whereas a value of 2.0 plays it twice as fast.
-- When created with `createBufferSource({ pitchCorrection: true })` it is clamped to the range 0 to 3 and uses the pitch correction algorithm.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/sources/audio-buffer
-# Title: audio-buffer
-
-# AudioBuffer
-
-The `AudioBuffer` interface represents a short audio asset, commonly shorter than one minute.
-It can consist of one or more channels, each appearing to be 32-bit floating-point linear [PCM](https://en.wikipedia.org/wiki/Pulse-code_modulation) values with a nominal range of \[−1, 1] (but not limited to that range).
-The buffer also has a specific sample rate, which is the number of frames that play in one second, and a length.
-
-
-
-It can be created from an audio file using [`decodeAudioData`](/docs/utils/decoding#decodeaudiodata) or from raw data using the `constructor`.
-Once you have data in `AudioBuffer`, audio can be played by passing it to [`AudioBufferSourceNode`](audio-buffer-source-node).
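The relationship between length, sample rate, and duration described above can be shown in plain TypeScript, filling a channel with a 440 Hz sine wave in the nominal \[−1, 1] range (a self-contained sketch; no library calls are assumed):

```typescript
// duration (seconds) = length (frames) / sampleRate (frames per second)
const sampleRate = 44100;
const length = sampleRate; // one second of audio
const channelData = new Float32Array(length);
for (let i = 0; i < length; i++) {
  channelData[i] = Math.sin((2 * Math.PI * 440 * i) / sampleRate);
}
const duration = length / sampleRate; // 1 second
```

Data shaped like `channelData` is what you would pass into a buffer channel via `copyToChannel` before handing the buffer to an `AudioBufferSourceNode`.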
-
-## Constructor
-
-```tsx
-constructor(options: AudioBufferOptions)
-```
-
-### `AudioBufferOptions`
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `length` | `number` | - | [`Length`](/docs/sources/audio-buffer#properties) of the buffer |
-| `numberOfChannels` | `number` | 1 | Number of [`channels`](/docs/sources/audio-buffer#properties) in the buffer |
-| `sampleRate` | `number` | - | [`Sample rate`](/docs/sources/audio-buffer#properties) of the buffer in Hz |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createBuffer(numChannels, length, sampleRate)`](/docs/core/base-audio-context#createbuffer) that creates a buffer with default values.
-
-## Decoding
-
-See example implementations in [`BaseAudioContext`](/docs/core/base-audio-context#decodeaudiodata) on how to decode data in various ways.
-
-## Properties
-
-| Name | Type | Description | |
-| :----: | :----: | :-------- | :-: |
-| `sampleRate` | `number` | Float value representing sample rate of the PCM data stored in the buffer. | |
-| `length` | `number` | Integer value representing length of the PCM data stored in the buffer. | |
-| `duration` | `number` | Double value representing duration, in seconds, of the PCM data stored in the buffer. | |
-| `numberOfChannels` | `number` | Integer value representing the number of audio channels of the PCM data stored in the buffer. | |
-
-## Methods
-
-### `getChannelData`
-
-Gets a modifiable array with the PCM data from the given channel.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `channel` | `number` | Index of the `AudioBuffer's` channel, from which data will be returned. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `IndexSizeError` | `channel` specifies a nonexistent audio channel. |
-
-#### Returns `Float32Array`.
-
-### `copyFromChannel`
-
-Copies data from the given channel of the `AudioBuffer` to an array.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `destination` | `Float32Array` | The array to which data will be copied. |
-| `channelNumber` | `number` | Index of the `AudioBuffer's` channel, from which data will be copied. |
-| `startInChannel` | `number` | Channel's offset from which to start copying data. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `IndexSizeError` | `channelNumber` specifies a nonexistent audio channel. |
-| `IndexSizeError` | `startInChannel` is greater than the `AudioBuffer` length. |
-
-#### Returns `undefined`.
-
-### `copyToChannel`
-
-Copies data from the given array to the specified channel of the `AudioBuffer`.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `source` | `Float32Array` | The array from which data will be copied. |
-| `channelNumber` | `number` | Index of the `AudioBuffer's` channel to which data will be copied. |
-| `startInChannel` | `number` | Channel's offset from which to start copying data. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `IndexSizeError` | `channelNumber` specifies a nonexistent audio channel. |
-| `IndexSizeError` | `startInChannel` is greater than the `AudioBuffer` length. |
-
-#### Returns `undefined`.
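The `copyFromChannel`/`copyToChannel` semantics above can be illustrated with plain `Float32Array`s (`copyFrom` and `copyTo` are hypothetical stand-ins for illustration, not the library methods):

```typescript
// Reading: copy from channel data into `destination`, starting at
// `startInChannel` within the channel.
function copyFrom(channelData: Float32Array, destination: Float32Array, startInChannel = 0): void {
  destination.set(channelData.subarray(startInChannel, startInChannel + destination.length));
}

// Writing: copy `source` into channel data, starting at `startInChannel`.
function copyTo(channelData: Float32Array, source: Float32Array, startInChannel = 0): void {
  channelData.set(source, startInChannel);
}
```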
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/sources/audio-scheduled-source-node
-# Title: audio-scheduled-source-node
-
-# AudioScheduledSourceNode
-
-The `AudioScheduledSourceNode` interface is an [`AudioNode`](/docs/core/audio-node) which serves as a parent interface for several types of audio source nodes.
-It provides the ability to start and stop audio playback.
-
-Child classes:
-
-* [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node)
-* [`OscillatorNode`](/docs/sources/oscillator-node)
-* [`StreamerNode`](/docs/sources/streamer-node)
-
-## Properties
-
-`AudioScheduledSourceNode` does not define any additional properties.
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-## Methods
-
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-### `start`
-
-Schedules the node to start audio playback at the specified time. If no time is given, playback starts immediately.
-You can invoke this method only once during the node's lifetime.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `when` | `number` | The time, in seconds, at which the node will start to play. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `when` is a negative number. |
-| `InvalidStateError` | The node has already been started once. |
-
-#### Returns `undefined`.
-
-### `stop`
-
-Schedules the node to stop audio playback at the specified time. If no time is given, playback stops immediately.
-If you invoke this method multiple times on the same node before the designated stop time, the most recent call overwrites the previous one.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `when` | `number` | The time, in seconds, at which the node will stop playing. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `when` is a negative number. |
-| `InvalidStateError` | The node has not been started yet. |
-
-#### Returns `undefined`.
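The scheduling rules above (start only once, negative-time errors, most recent `stop` wins) can be sketched as a tiny state machine; `ScheduledSource` is an illustrative stand-in, not the actual implementation:

```typescript
class ScheduledSource {
  private started = false;
  stopTime: number | null = null;

  start(when = 0): void {
    if (when < 0) throw new RangeError('when is a negative number');
    if (this.started) throw new Error('InvalidStateError: node already started');
    this.started = true;
  }

  stop(when = 0): void {
    if (when < 0) throw new RangeError('when is a negative number');
    if (!this.started) throw new Error('InvalidStateError: node not started yet');
    this.stopTime = when; // the most recent call overwrites the previous one
  }
}
```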
-
-## Events
-
-### `onEnded`
-
-Sets (or removes) a callback that will be fired when the source node has stopped playing,
-either because it has reached a predetermined stop time, the full duration of the audio has been played, or the entire buffer has been played.
-You can remove the callback either by passing `null` or by calling `remove` on the returned subscription.
-
-```ts
-const subscription = audioBufferSourceNode.onEnded = () => { // setting the callback
-  console.log("audio ended");
-};
-
-subscription.remove(); // removal of the subscription
-```
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/sources/constant-source-node
-# Title: constant-source-node
-
-# ConstantSourceNode
-
-The `ConstantSourceNode` is an [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node) which represents an audio source that outputs a single constant value.
-The `offset` parameter controls this value. Although the node is called "constant", its `offset` value can be automated to change over time, which makes it a powerful tool
-for controlling multiple other [`AudioParam`](/docs/core/audio-param) values in an audio graph.
-Just like any `AudioScheduledSourceNode`, it can be started only once.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: ConstantSourceOptions)
-```
-
-### `ConstantSourceOptions`
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `offset` | `number` | 1 | Initial value for [`offset`](/docs/sources/constant-source-node#properties) |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createConstantSource()`](/docs/core/base-audio-context#createconstantsource) that creates a node with default values.
-
-## Example
-
-```tsx
-import React, { useRef } from 'react';
-import { Text } from 'react-native';
-import {
- AudioContext,
- OscillatorNode,
- GainNode,
- ConstantSourceNode
-} from 'react-native-audio-api';
-
-function App() {
- const audioContextRef = useRef(null);
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
- const audioContext = audioContextRef.current;
-
- const oscillator1 = audioContext.createOscillator();
- const oscillator2 = audioContext.createOscillator();
- const gainNode1 = audioContext.createGain();
- const gainNode2 = audioContext.createGain();
- const constantSource = audioContext.createConstantSource();
-
- oscillator1.frequency.value = 440;
- oscillator2.frequency.value = 392;
- constantSource.offset.value = 0.5;
-
- oscillator1.connect(gainNode1);
- gainNode1.connect(audioContext.destination);
-
- oscillator2.connect(gainNode2);
- gainNode2.connect(audioContext.destination);
-
- // We connect the constant source to the gain nodes gain AudioParams
- // to control both of them at the same time
- constantSource.connect(gainNode1.gain);
- constantSource.connect(gainNode2.gain);
-
- oscillator1.start(audioContext.currentTime);
- oscillator2.start(audioContext.currentTime);
- constantSource.start(audioContext.currentTime);
-}
-```
-
-## Properties
-
-It inherits all properties from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#properties).
-
-| Name | Type | Default value | Description |
-| :----: | :----: | :--------: | :------- |
-| `offset` | [`AudioParam`](/docs/core/audio-param) | 1.0 | [`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing the value which the node constantly outputs. |
-
-## Methods
-
-It inherits all methods from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#methods).
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/sources/oscillator-node
-# Title: oscillator-node
-
-
-import AudioNodePropsTable from "@site/src/components/AudioNodePropsTable"
-import { Optional, ReadOnly } from '@site/src/components/Badges';
-import InteractivePlayground from '@site/src/components/InteractivePlayground';
-import { useOscillatorPlayground } from '@site/src/components/InteractivePlayground/OscillatorExample/useOscilatorPlayground';
-
-# OscillatorNode
-
-The `OscillatorNode` is an [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node) which represents a simple periodic wave signal.
-Like all `AudioScheduledSourceNode`s, it can be started only once. If you want to play the same sound again, you have to create a new one.
-
-
-OscillatorNode interactive playground
-
-
-
-
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: OscillatorOptions)
-```
-
-### `OscillatorOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `type` | [`OscillatorType`](/docs/types/oscillator-type) | `sine` | Initial value for [`type`](/docs/sources/oscillator-node#properties). |
-| `frequency` | `number` | 440 | Initial value for [`frequency`](/docs/sources/oscillator-node#properties). |
-| `detune` | `number` | 0 | Initial value for [`detune`](/docs/sources/oscillator-node#properties). |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createOscillator()`](/docs/core/base-audio-context#createoscillator)
-
-## Example
-
-```tsx
-import React, { useRef } from 'react';
-import {
- AudioContext,
- OscillatorNode,
-} from 'react-native-audio-api';
-
-function App() {
- const audioContextRef = useRef(null);
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
- const oscillator = audioContextRef.current.createOscillator();
- oscillator.connect(audioContextRef.current.destination);
- oscillator.start(audioContextRef.current.currentTime);
-}
-```
-
-## Properties
-
-It inherits all properties from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#properties).
-
-| Name | Type | Default value | Description |
-| :----: | :----: | :-------- | :------- |
-| `detune` | [`AudioParam`](/docs/core/audio-param) | 0 | [`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing the detuning of oscillation in cents. |
-| `frequency` | [`AudioParam`](/docs/core/audio-param) | 440 | [`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing the frequency of the wave in hertz. |
-| `type` | [`OscillatorType`](/docs/types/oscillator-type) | `sine` | String value representing the type of wave. |
-
-## Methods
-
-It inherits all methods from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#methods).
-
-### `setPeriodicWave`
-
-Sets a custom periodic wave as the oscillator's waveform.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `wave` | [`PeriodicWave`](/docs/effects/periodic-wave) | Data representing the custom wave. [See for reference](/docs/core/base-audio-context#createperiodicwave) |
-
-#### Returns `undefined`.
-
-## Remarks
-
-#### `detune`
-- Nominal range is: -∞ to ∞.
-- For example, a value of 100 detunes the source up by one semitone, whereas -1200 detunes it down by one octave.
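The detune remark follows from the standard cents formula, effective frequency = frequency · 2^(detune / 1200); `effectiveFrequency` below is a hypothetical helper for illustration:

```typescript
// 100 cents is one semitone; 1200 cents is one octave.
function effectiveFrequency(frequency: number, detuneCents: number): number {
  return frequency * Math.pow(2, detuneCents / 1200);
}

// effectiveFrequency(440, -1200) -> 220 (one octave down)
// effectiveFrequency(440, 1200)  -> 880 (one octave up)
```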
-
-#### `frequency`
-- 440 Hz is equivalent to piano note A4.
-- Nominal range is: $-\frac{\text{sampleRate}}{2}$ to $\frac{\text{sampleRate}}{2}$
-(`sampleRate` value is taken from [`AudioContext`](/docs/core/base-audio-context#properties))
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/sources/recorder-adapter-node
-# Title: recorder-adapter-node
-
-# RecorderAdapterNode
-
-The `RecorderAdapterNode` is an [`AudioNode`](/docs/core/audio-node) which is an adapter for [`AudioRecorder`](/docs/inputs/audio-recorder).
-It lets you compose audio input from the recorder into an audio graph.
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext)
-```
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createRecorderAdapter()`](/docs/core/base-audio-context#createrecorderadapter)
-
-## Example
-
-```tsx
-const recorder = new AudioRecorder({
- sampleRate: 48000,
- bufferLengthInSamples: 48000,
-});
-const audioContext = new AudioContext({ sampleRate: 48000 });
-const recorderAdapterNode = audioContext.createRecorderAdapter();
-
-recorder.connect(recorderAdapterNode);
-recorderAdapterNode.connect(audioContext.destination);
-```
-
-## Properties
-
-`RecorderAdapterNode` does not define any additional properties.
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-## Methods
-
-`RecorderAdapterNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-## Remarks
-
-* An adapter without a connected recorder will produce silence.
-* An adapter connected only to a recorder will function correctly and keep a small buffer of recorded data.
-* The adapter will not be garbage collected as long as it remains connected to either a destination or a recorder.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/sources/streamer-node
-# Title: streamer-node
-
-# StreamerNode
-
-> **Caution**
->
-> Mobile only.
-
-The `StreamerNode` is an [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node) which represents a node that can decode and play [HTTP Live Streaming](https://developer.apple.com/streaming/) data.
-Like all `AudioScheduledSourceNode`s, it can be started only once. If you want to play the same sound again, you have to create a new one.
-
-#### [`AudioNode`](/docs/core/audio-node#read-only-properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options: StreamerOptions)
-```
-
-### `StreamerOptions`
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `streamPath` | `string` | - | Value for [`streamPath`](/docs/sources/streamer-node#properties) |
-
-Or by using `BaseAudioContext` factory method:
-
-[`BaseAudioContext.createStreamer()`](/docs/core/base-audio-context#createstreamer).
-
-## Example
-
-```tsx
-import React, { useRef } from 'react';
-import {
- AudioContext,
- StreamerNode,
-} from 'react-native-audio-api';
-
-function App() {
- const audioContextRef = useRef(null);
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
- const streamer = audioContextRef.current.createStreamer();
- streamer.initialize('link/to/your/hls/source');
- streamer.connect(audioContextRef.current.destination);
- streamer.start(audioContextRef.current.currentTime);
-}
-```
-
-## Properties
-
-It inherits all properties from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :------- |
-| `streamPath` | `string` | String value representing the URL of the stream. |
-
-## Methods
-
-It inherits all methods from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#methods).
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/system/audio-manager
-# Title: audio-manager
-
-# AudioManager
-
-The `AudioManager` is a layer of abstraction between the user and the system.
-It provides a set of system-specific functions that are invoked directly in native code by the related system.
-
-## Example
-
-```tsx
-import { AudioManager } from 'react-native-audio-api';
-import { useEffect } from 'react';
-
-function App() {
- // set AVAudioSession example options (iOS)
- AudioManager.setAudioSessionOptions({
- iosCategory: 'playback',
- iosMode: 'default',
- iosOptions: ['defaultToSpeaker', 'allowBluetoothA2DP'],
- })
- // enabling emission of events
- AudioManager.observeAudioInterruptions(true);
- AudioManager.getDevicesInfo().then(console.log);
-
- useEffect(() => {
- // callback to be invoked on 'interruption' event
- const interruptionSubscription = AudioManager.addSystemEventListener(
- 'interruption',
- (event) => {
- console.log('Interruption event:', event);
- }
- );
-
- return () => {
- interruptionSubscription?.remove();
- };
- }, []);
-}
-```
-
-## Methods
-
-### `setAudioSessionOptions`
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| options | [`SessionOptions`](/docs/system/audio-manager#sessionoptions) | Options to be set for [AVAudioSession](https://developer.apple.com/documentation/avfaudio/avaudiosession?language=objc#Configuring-standard-audio-behaviors) |
-
-#### Returns `undefined`
-
-### `setAudioSessionActivity`
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| enabled | `boolean` | It is used to set/unset [AVAudioSession](https://developer.apple.com/documentation/avfaudio/avaudiosession?language=objc#Activating-the-audio-configuration) activity |
-
-#### Returns a promise of `boolean` type, which resolves to `true` if the invocation succeeded, `false` otherwise.
-
-### `disableSessionManagement`
-
-#### Returns `undefined`.
-
-Disables all internal default [AVAudioSession](https://developer.apple.com/documentation/avfaudio/avaudiosession) configuration and management done by the `react-native-audio-api` package. After calling this method, the user is responsible for managing the audio session entirely on their own.
-The typical use case for this method is when the user wants to fully control the audio session outside of the `react-native-audio-api` package,
-commonly when using another audio library alongside `react-native-audio-api`. The method has to be called before an `AudioContext` is created, for example in app initialization code.
-Any later call to `setAudioSessionOptions` or `setAudioSessionActivity` will re-enable internal audio session management.
-
-### `getDevicePreferredSampleRate`
-
-#### Returns `number`.
-
-### `observeAudioInterruptions`
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `param` | [`AudioFocusType`](audio-manager#audiofocustype) \| `boolean` \| `null` | It is used to enable/disable observing audio interruptions. Passing `false` or `null` disables the observation, otherwise it is enabled. |
-
-> **Info**
->
-> On Android, passing an audio focus type sets the native [audio focus](https://developer.android.com/media/optimize/audio-focus) accordingly.
-> It is recommended that apps respect the audio focus rules for a good user experience.
-> On iOS it only enables/disables event emission and has no additional effects.
-
-#### Returns `undefined`
-
-### `activelyReclaimSession`
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `enabled` | `boolean` | It is used to enable/disable session spoofing |
-
-#### Returns `undefined`
-
-More aggressively tries to reactivate the audio session during interruptions.
-
-In some cases (depending on the app's session settings and on other apps using audio) the system may never
-send the `interruption ended` event. This method checks whether any other audio is playing
-and tries to reactivate the audio session as soon as there is "silence",
-although this might change the expected behavior.
-
-Internally, the method uses `AVAudioSessionSilenceSecondaryAudioHintNotification` as well as
-interval polling to check if other audio is playing.
-
-### `observeVolumeChanges`
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `enabled` | `boolean` | It is used to enable/disable observing volume changes |
-
-#### Returns `undefined`
-
-### `addSystemEventListener`
-
-Adds a callback to be invoked when the given system event fires.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `name` | [`SystemEventName`](audio-manager#systemeventname) | Name of the event to listen for |
-| `callback` | [`SystemEventCallback`](audio-manager#systemeventname) | Callback that will be invoked when the event fires |
-
-#### Returns an [`AudioEventSubscription`](/docs/system/audio-manager#audioeventsubscription).
-
-### `requestRecordingPermissions`
-
-Brings up the system microphone permissions pop-up on demand. The pop-up automatically shows if microphone data
-is directly requested, but sometimes it is better to ask beforehand.
-
-#### Throws an `error` if there is no NSMicrophoneUsageDescription entry in `Info.plist`
-
-#### Returns a promise of [`PermissionStatus`](/docs/system/audio-manager#permissionstatus) type, which is resolved after receiving an answer from the system.
-
-### `checkRecordingPermissions`
-
-Checks if permissions were previously granted.
-
-#### Throws an `error` if there is no NSMicrophoneUsageDescription entry in `Info.plist`
-
-#### Returns a promise of [`PermissionStatus`](/docs/system/audio-manager#permissionstatus) type, which is resolved after receiving an answer from the system.
-
-### `requestNotificationPermissions`
-
-Brings up the system notification permissions pop-up on demand. The pop-up automatically shows if notification data
-is directly requested, but sometimes it is better to ask beforehand.
-
-#### Returns a promise of [`PermissionStatus`](/docs/system/audio-manager#permissionstatus) type, which is resolved after receiving an answer from the system.
-
-### `checkNotificationPermissions`
-
-Checks if permissions were previously granted.
-
-#### Returns a promise of [`PermissionStatus`](/docs/system/audio-manager#permissionstatus) type, which is resolved after receiving an answer from the system.
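One possible way to combine the check and request calls above is to check first and only bring up the system pop-up when the status is still undetermined (`ensurePermission` is a hypothetical helper, shown with injected check/request functions so the pattern is self-contained):

```typescript
type PermissionStatus = 'Undetermined' | 'Denied' | 'Granted';

// Check first; only trigger the system pop-up when the status is
// still undetermined, so already-answered users are not prompted.
async function ensurePermission(
  check: () => Promise<PermissionStatus>,
  request: () => Promise<PermissionStatus>
): Promise<PermissionStatus> {
  const current = await check();
  return current === 'Undetermined' ? request() : current;
}
```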
-
-### `getDevicesInfo`
-
-Checks currently used and available devices.
-
-#### Returns a promise of [`AudioDevicesInfo`](/docs/system/audio-manager#audiodevicesinfo) type, which is resolved after receiving an answer from the system.
-
-## Remarks
-
-### `AudioFocusType`
-
-Type definitions
-
-```typescript
-type AudioFocusType =
- | 'gain'
- | 'gainTransient'
- | 'gainTransientExclusive'
- | 'gainTransientMayDuck';
-```
-
-### `SessionOptions`
-
-Type definitions
-
-```typescript
-type IOSCategory =
- | 'record'
- | 'ambient'
- | 'playback'
- | 'multiRoute'
- | 'soloAmbient'
- | 'playAndRecord';
-
-type IOSMode =
- | 'default'
- | 'gameChat'
- | 'videoChat'
- | 'voiceChat'
- | 'measurement'
- | 'voicePrompt'
- | 'spokenAudio'
- | 'moviePlayback'
- | 'videoRecording';
-
-type IOSOption =
- | 'duckOthers'
- | 'allowAirPlay'
- | 'mixWithOthers'
- | 'allowBluetoothHFP'
- | 'defaultToSpeaker'
- | 'allowBluetoothA2DP'
- | 'overrideMutedMicrophoneInterruption'
- | 'interruptSpokenAudioAndMixWithOthers';
-
-interface SessionOptions {
- iosMode?: IOSMode;
- iosOptions?: IOSOption[];
- iosCategory?: IOSCategory;
- iosAllowHaptics?: boolean;
- // Has no effect when using PlaybackNotificationManager as it takes over the "Now playing" controls
- iosNotifyOthersOnDeactivation?: boolean;
-}
-```
-
-### `SystemEventName`
-
-Type definitions
-
-```typescript
-interface EventEmptyType {}
-
-interface EventTypeWithValue {
- value: number;
-}
-
-interface OnInterruptionEventType {
- type: 'ended' | 'began'; // if the interruption event has started or ended
- shouldResume: boolean; // if the interruption was temporary and we can resume the playback/recording
-}
-
-interface OnRouteChangeEventType {
- reason:
- | 'Unknown'
- | 'Override'
- | 'CategoryChange'
- | 'WakeFromSleep'
- | 'NewDeviceAvailable'
- | 'OldDeviceUnavailable'
- | 'ConfigurationChange'
- | 'NoSuitableRouteForCategory';
-}
-
-type SystemEvents = {
- volumeChange: EventTypeWithValue;
- interruption: OnInterruptionEventType;
- duck: EventEmptyType;
- routeChange: OnRouteChangeEventType;
-};
-
-type SystemEventName = keyof SystemEvents;
-type SystemEventCallback<Name extends SystemEventName> = (
-  event: SystemEvents[Name]
-) => void;
-```
-
-### `AudioEventSubscription`
-
-Type definitions
-
-```typescript
-interface AudioEventSubscription {
- /** @internal */
-  readonly subscriptionId: string;
-
-  remove(): void; // used to remove the subscription
-}
-```
-
-### `PermissionStatus`
-
-Type definitions
-
-```typescript
-type PermissionStatus = 'Undetermined' | 'Denied' | 'Granted';
-```
-
-### `AudioDevicesInfo`
-
-Type definitions
-
-```typescript
-export interface AudioDeviceInfo {
- name: string;
- category: string;
-}
-
-export type AudioDeviceList = AudioDeviceInfo[];
-
-export interface AudioDevicesInfo {
- availableInputs: AudioDeviceList;
- availableOutputs: AudioDeviceList;
- currentInputs: AudioDeviceList; // iOS
- currentOutputs: AudioDeviceList; // iOS
-}
-```
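The device lists above can be filtered by category, for example to find a particular output route (`pickOutputsByCategory` is a hypothetical helper and the category strings are illustrative, not values guaranteed by the library):

```typescript
interface AudioDeviceInfo {
  name: string;
  category: string;
}

// Return only the devices whose category matches, e.g. to check
// whether a Bluetooth output is currently available.
function pickOutputsByCategory(outputs: AudioDeviceInfo[], category: string): AudioDeviceInfo[] {
  return outputs.filter((d) => d.category === category);
}
```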
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/system/playback-notification-manager
-# Title: playback-notification-manager
-
-# PlaybackNotificationManager
-
-The `PlaybackNotificationManager` provides media session integration and playback controls for your audio application. It manages system-level media notifications with controls like play, pause, next, previous, and seek functionality.
-
-:::info Platform Differences
-
-**iOS Requirements:**
-
-* Notification controls only appear when an active `AudioContext` is running
-* `show()` or `hide()` only update metadata - they don't control notification visibility
-* The notification automatically appears/disappears based on audio session state
-* To show: create and resume an AudioContext
-* To hide: suspend or close the AudioContext
-
-**Android:**
-
-* Notification visibility is directly controlled by `show()` and `hide()` methods
-* Works independently of AudioContext state
-
-:::
-
-## Example
-
-```tsx
-// show notification
-await PlaybackNotificationManager.show({
-  title: 'My Song',
-  artist: 'My Artist',
-  duration: 180,
-  state: 'paused',
-});
-
-// Listen for notification controls
-const playListener = PlaybackNotificationManager.addEventListener(
-  'playbackNotificationPlay',
-  () => {
-    // Handle play action
-    PlaybackNotificationManager.show({ state: 'playing' });
-  }
-);
-
-const pauseListener = PlaybackNotificationManager.addEventListener(
-  'playbackNotificationPause',
-  () => {
-    // Handle pause action
-    PlaybackNotificationManager.show({ state: 'paused' });
-  }
-);
-
-const seekToListener = PlaybackNotificationManager.addEventListener(
-  'playbackNotificationSeekTo',
-  (event) => {
-    // Handle seek to position (event.value is in seconds)
-    PlaybackNotificationManager.show({ elapsedTime: event.value });
-  }
-);
-
-// Update progress
-PlaybackNotificationManager.show({ elapsedTime: 60 });
-
-// Cleanup
-playListener.remove();
-pauseListener.remove();
-seekToListener.remove();
-PlaybackNotificationManager.hide();
-```
-
-## Methods
-
-### `show`
-
-Display the notification with initial metadata.
-
-:::note iOS Behavior
-On iOS, this method only sets the metadata. The notification controls will only appear when an `AudioContext` is actively running. Make sure to create and resume an AudioContext before calling `show()`.
-:::
-
-:::info
-Metadata is remembered between calls, so after initially passing the metadata to the show function, you can call it with only the elements that are supposed to change.
-:::
-
-| Parameter | Type | Description |
-| :-------: | :----------: | :----- |
-| `info` | [`PlaybackNotificationInfo`](playback-notification-manager#playbacknotificationinfo) | Initial notification metadata |
-
-#### Returns `Promise`.
-
-### `hide`
-
-Hide the notification. Can be shown again later by calling `show()`.
-
-:::note iOS Behavior
-On iOS, this method clears the metadata but does not hide the notification controls. To completely hide the controls on iOS, you must suspend or close the AudioContext.
-:::
-
-#### Returns `Promise`.
-
-### `enableControl`
-
-Enable or disable specific playback controls.
-
-| Parameter | Type | Description |
-| :-------: | :-----: | :------ |
-| `control` | [`PlaybackControlName`](playback-notification-manager#playbackcontrolname) | The control to enable/disable |
-| `enabled` | `boolean` | Whether the control should be enabled |
-
-#### Returns `Promise`.
-
-### `isActive`
-
-Check if the notification is currently active and visible.
-
-#### Returns `Promise`.
-
-### `addEventListener`
-
-Add an event listener for notification actions.
-
-| Parameter | Type | Description |
-| :---------: | :------: | :------- |
-| `eventName` | [`PlaybackNotificationEventName`](playback-notification-manager#playbacknotificationeventname) | The event to listen for |
-| `callback` | [`SystemEventCallback`](/docs/system/audio-manager#systemeventname--remotecommandeventname) | Callback function |
-
-#### Returns [`AudioEventSubscription`](/docs/system/audio-manager#audioeventsubscription).
-
-## Remarks
-
-### `PlaybackNotificationInfo`
-
-Type definitions
-
-```typescript
-interface PlaybackNotificationInfo {
- title?: string;
- artist?: string;
- album?: string;
-
- // Can be a URL or a local file path relative to drawable resources (Android) or bundle resources (iOS)
- artwork?: string | { uri: string };
- // ANDROID: small icon shown in the status bar
- androidSmallIcon?: string | { uri: string };
- duration?: number;
-
- // IOS: elapsed time does not update automatically, must be set manually on each state change
- elapsedTime?: number;
- speed?: number;
- state?: 'playing' | 'paused';
-}
-```
-
-### `PlaybackControlName`
-
-Type definitions
-
-```typescript
-type PlaybackControlName =
- | 'play'
- | 'pause'
- | 'stop'
- | 'nextTrack'
- | 'previousTrack'
- | 'skipForward'
- | 'skipBackward'
- | 'seekTo';
-```
-
-### `PlaybackNotificationEventName`
-
-Type definitions
-
-```typescript
-interface EventEmptyType {}
-
-interface EventTypeWithValue {
-  value: number;
-}
-
-interface PlaybackNotificationEvent {
- playbackNotificationPlay: EventEmptyType;
- playbackNotificationPause: EventEmptyType;
- playbackNotificationStop: EventEmptyType;
- playbackNotificationNextTrack: EventEmptyType;
- playbackNotificationPreviousTrack: EventEmptyType;
- playbackNotificationSkipForward: EventTypeWithValue;
- playbackNotificationSkipBackward: EventTypeWithValue;
- playbackNotificationSeekTo: EventTypeWithValue;
- playbackNotificationDismissed: EventEmptyType;
-}
-
-type PlaybackNotificationEventName = keyof PlaybackNotificationEvent;
-```
-
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/system/recording-notification-manager
-# Title: recording-notification-manager
-
-# RecordingNotificationManager
-
-The `RecordingNotificationManager` provides system integration with [`Recorder`](/docs/inputs/audio-recorder).
-It can send events about pausing and resuming to your application.
-
-## Example
-
-```typescript
-RecordingNotificationManager.show({
- title: 'Recording app',
- contentText: 'Recording...',
- paused: false,
- smallIconResourceName: 'icon_to_display',
- pauseIconResourceName: 'pause_icon',
- resumeIconResourceName: 'resume_icon',
- color: 0xff6200,
-});
-
-const pauseEventListener = RecordingNotificationManager.addEventListener('recordingNotificationPause', () => {
- console.log('Notification pause action received');
-});
-const resumeEventListener = RecordingNotificationManager.addEventListener('recordingNotificationResume', () => {
- console.log('Notification resume action received');
-});
-
-pauseEventListener.remove();
-resumeEventListener.remove();
-RecordingNotificationManager.hide();
-```
-
-## Methods
-
-### `show`
-
-Shows the recording notification with the given parameters.
-
-> **Info**
->
-> Metadata is saved between calls, so after the initial call to `show`, you only need to pass the fields that should change.
-
-| Parameter |Type| Description|
-| :-------: | :--: | :----|
-| `info` | [`RecordingNotificationInfo`](recording-notification-manager#recordingnotificationinfo) | Initial notification metadata |
-
-#### Returns `Promise`.
-
-> **Info**
->
-> For more details, go to [android developer page](https://developer.android.com/develop/ui/views/notifications#Templates).
-> A resource name is the name of a resource placed in the res/drawable folder. It has to be either a .png or an .xml file, and the name is given without the file extension (photo.png -> photo).
-
-> **Caution**
->
-> If nothing is displayed even though the name is correct, try decreasing the size of your resource.
-> The notification can look vastly different across Android devices.
-
-### `hide`
-
-Hides the recording notification.
-
-#### Returns `Promise`.
-
-### `isActive`
-
-Checks if the notification is displayed.
-
-#### Returns `Promise`.
-
-### `addEventListener`
-
-Add an event listener for notification actions.
-
-| Parameter | Type | Description |
-| :---------: | :----: | :---------------------- |
-| `eventName` | [`RecordingNotificationEvent`](recording-notification-manager#recordingnotificationevent) | The event to listen for |
-| `callback` | ([\`RecordingNotificationEvent\`](recording-notification-manager#recordingnotificationevent)) => void | Callback function |
-
-#### Returns [`AudioEventSubscription`](/docs/system/audio-manager#audioeventsubscription).
-
-## Remarks
-
-### `RecordingNotificationInfo`
-
-Type definitions
-
-```typescript
-interface RecordingNotificationInfo {
- title?: string;
- contentText?: string;
- paused?: boolean; // flag indicating whether to display pauseIcon or resumeIcon
- smallIconResourceName?: string;
- largeIconResourceName?: string;
- pauseIconResourceName?: string;
- resumeIconResourceName?: string;
- color?: number; // accent color as a hex number, e.g. 0xff6200
-}
-```
-
-### `RecordingNotificationEvent`
-
-Type definitions
-
-```typescript
-interface RecordingNotificationEvent {
- recordingNotificationPause: EventEmptyType;
- recordingNotificationResume: EventEmptyType;
-}
-```
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/types/channel-count-mode
-# Title: channel-count-mode
-
-# ChannelCountMode
-
-`ChannelCountMode` type determines how the number of input channels affects the number of output channels in an audio node.
-
-**Acceptable values:**
-
-* `max`
-
- The number of channels is equal to the maximum number of channels of all connections. In this case, `channelCount` is ignored and only up-mixing happens.
-
-* `clamped-max`
-
-  The number of channels is equal to the maximum number of channels of all connections, clamped to the value of `channelCount` (which serves as the maximum permissible value).
-
-* `explicit`
-
- The number of channels is defined by the value of `channelCount`.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/types/channel-interpretation
-# Title: channel-interpretation
-
-# ChannelInterpretation
-
-`ChannelInterpretation` type specifies how input channels are mapped to output channels when their numbers differ.
-
-**Acceptable values:**
-
-* `speakers`
-
-Use a set of standard mapping rules for all combinations of common input and output setups.
-
-* `discrete`
-
-Covers all other cases. The mapping depends on the relationship between the number of input channels and the number of output channels.
-
-## Channel mapping tables
-
-### `speakers`
-
-| Number of input channels | Number of output channels | Mixing rules |
-| :------------------------: | :------------------------- | :------------ |
-| 1 (Mono) | 2 (Stereo) | output.L = input.M output.R = input.M |
-| 1 (Mono) | 4 (Quad) | output.L = input.M output.R = input.M output.SL = 0 output.SR = 0 |
-| 1 (Mono) | 6 (5.1) | output.L = 0 output.R = 0 output.C = input.M output.LFE = 0 output.SL = 0 output.SR = 0 |
-| 2 (Stereo) | 1 (Mono) | output.M = 0.5 \* (input.L + input.R) |
-| 2 (Stereo) | 4 (Quad) | output.L = input.L output.R = input.R output.SL = 0 output.SR = 0 |
-| 2 (Stereo) | 6 (5.1) | output.L = input.L output.R = input.R output.C = 0 output.LFE = 0 output.SL = 0 output.SR = 0 |
-| 4 (Quad) | 1 (Mono) | output.M = 0.25 \* (input.L + input.R + input.SL + input.SR) |
-| 4 (Quad) | 2 (Stereo) | output.L = 0.5 \* (input.L + input.SL) output.R = 0.5 \* (input.R + input.SR) |
-| 4 (Quad) | 6 (5.1) | output.L = input.L output.R = input.R output.C = 0 output.LFE = 0 output.SL = input.SL output.SR = input.SR |
-| 6 (5.1) | 1 (Mono) | output.M = 0.7071 \* (input.L + input.R) + input.C + 0.5 \* (input.SL + input.SR) |
-| 6 (5.1) | 2 (Stereo) | output.L = input.L + 0.7071 \* (input.C + input.SL) output.R = input.R + 0.7071 \* (input.C + input.SR) |
-| 6 (5.1) | 4 (Quad) | output.L = input.L + 0.7071 \* input.C output.R = input.R + 0.7071 \* input.C output.SL = input.SL output.SR = input.SR |
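Two rows of the table above, written out as code. This is a sketch only, assuming plain `Float32Array` channel buffers; `upMixMonoToStereo` and `downMixStereoToMono` are illustrative helpers, not library APIs:

```typescript
// Mono -> Stereo: output.L = input.M, output.R = input.M
function upMixMonoToStereo(mono: Float32Array): [Float32Array, Float32Array] {
  return [Float32Array.from(mono), Float32Array.from(mono)];
}

// Stereo -> Mono: output.M = 0.5 * (input.L + input.R)
function downMixStereoToMono(
  left: Float32Array,
  right: Float32Array
): Float32Array {
  const mono = new Float32Array(left.length);
  for (let i = 0; i < left.length; i++) {
    mono[i] = 0.5 * (left[i] + right[i]);
  }
  return mono;
}
```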
-
-### `discrete`
-
-| Number of input channels | Number of output channels | Mixing rules |
-| :------------------------: | :------------------------- | :------------ |
-| x | y where y > x | Fill each output channel with its counterpart (the channel with the same number); the remaining output channels are silent |
-| x | y where y \< x | Fill each output channel with its counterpart (the channel with the same number); the remaining input channels are skipped |
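The `discrete` rules above amount to copying channels by index, silencing any extra outputs and dropping any extra inputs. A sketch under the same assumptions as before; `discreteMap` is an illustrative helper, not a library API:

```typescript
// Copy matching channel indices; outputs past the input count stay silent,
// inputs past the output count are skipped.
function discreteMap(
  input: Float32Array[],
  outputCount: number,
  frameCount: number
): Float32Array[] {
  const output: Float32Array[] = [];
  for (let ch = 0; ch < outputCount; ch++) {
    output.push(
      ch < input.length
        ? Float32Array.from(input[ch]) // counterpart channel exists
        : new Float32Array(frameCount) // silent channel (all zeros)
    );
  }
  return output;
}
```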
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/types/oscillator-type
-# Title: oscillator-type
-
-# OscillatorType
-
-`OscillatorType` is a string that specifies the shape of an oscillator wave.
-
-```jsx
-type OscillatorType =
- | 'sine'
- | 'square'
- | 'sawtooth'
- | 'triangle'
- | 'custom';
-```
-
-Below you can see the possible names with their corresponding shapes.
-
-
-## `custom`
-
-This value can't be set explicitly, but it allows the user to set any custom shape. See [`setPeriodicWave`](/docs/sources/oscillator-node#setperiodicwave) for reference.
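The four basic shapes can be sketched as functions of a normalized phase `t` in `[0, 1)`. These are the common textbook formulas, not necessarily the library's internal implementation:

```typescript
// One sample of each basic oscillator shape at normalized phase t in [0, 1).
function oscillatorSample(
  type: 'sine' | 'square' | 'sawtooth' | 'triangle',
  t: number
): number {
  switch (type) {
    case 'sine':
      return Math.sin(2 * Math.PI * t);
    case 'square':
      // +1 for the first half of the period, -1 for the second
      return t < 0.5 ? 1 : -1;
    case 'sawtooth':
      // linear ramp from -1 up to +1 over one period
      return 2 * t - 1;
    case 'triangle':
      // 0 at t = 0, +1 at t = 0.25, 0 at t = 0.5, -1 at t = 0.75
      return 1 - 4 * Math.abs(((t + 0.25) % 1) - 0.5);
    default:
      throw new Error('unreachable');
  }
}
```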
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/utils/decoding
-# Title: decoding
-
-# Decoding
-
-You can decode audio data independently, without creating an AudioContext, using the exported functions [`decodeAudioData`](/docs/utils/decoding#decodeaudiodata) and
-[`decodePCMInBase64`](/docs/utils/decoding#decodepcminbase64).
-
-> **Warning**
->
-> Decoding on the web has to be done via `AudioContext` only.
-
-If you already have an audio context, you can decode audio data directly using its [`decodeAudioData`](/docs/core/base-audio-context#decodeaudiodata) function;
-the decoded audio will then be automatically resampled to match the context's `sampleRate`.
-
-> **Caution**
->
-> Supported file formats:
->
-> * flac
-> * mp3
-> * ogg
-> * opus
-> * wav
-> * aac
-> * m4a
-> * mp4
->
-> The last three formats are decoded with FFmpeg on mobile, [see here for more info](/docs/other/ffmpeg-info).
-
-### `decodeAudioData`
-
-Decodes audio data from either a file path or an ArrayBuffer. The optional `sampleRate` parameter lets you resample the decoded audio;
-if not provided, the original sample rate from the file is used.
-
-| Parameter | Type | Description |
-| :-------: | :--: | :---- |
-| `input` | `ArrayBuffer` | ArrayBuffer with audio data. |
-| | `string` | Path to a remote or local audio file. |
-| | `number` | Asset module id. |
-| `sampleRate` | `number` | Target sample rate for the decoded audio. |
-| `fetchOptions` | [`RequestInit`](https://github.com/facebook/react-native/blob/ac06f3bdc76a9fd7c65ab899e82bff5cad9b94b6/packages/react-native/src/types/globals.d.ts#L265) | Additional request parameters (e.g. headers) when passing a URL to fetch. |
-#### Returns `Promise`.
-
-> **Caution**
->
-> If you are passing a number to the decode function, bear in mind that it internally uses the Image component provided
-> by React Native. By default it only supports the .mp3, .wav, .mp4, .m4a and .aac audio file formats.
-> If you want to use other types, refer to [this section](https://reactnative.dev/docs/images#static-non-image-resources) for more info.
-
-Example decoding remote URL
-
-```tsx
-import { decodeAudioData } from 'react-native-audio-api';
-
-const url = ... // url to an audio
-
-const buffer = await decodeAudioData(url);
-```
-
-### `decodePCMInBase64`
-
-Decodes base64-encoded PCM audio data.
-
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `base64String` | `string` | Base64-encoded PCM audio data. |
-| `inputSampleRate` | `number` | Sample rate of the input PCM data. |
-| `inputChannelCount` | `number` | Number of channels in the input PCM data. |
-| `isInterleaved` | `boolean` | Whether the PCM data is interleaved. Default is `true`. |
-
-#### Returns `Promise`.
-
-Example decoding with data in base64 format
-
-```tsx
-const data = ... // data encoded in base64 string
-// data is interleaved (Channel1, Channel2, Channel1, Channel2, ...)
-const buffer = await decodePCMInBase64(data, 48000, 2, true);
-```
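The core of the operation can be sketched as the deinterleaving step that follows base64 decoding, assuming 16-bit PCM samples. Interleaved data (Ch1, Ch2, Ch1, Ch2, ...) is split into per-channel `Float32Array` buffers scaled to `[-1, 1]`. Illustrative only; the library's actual sample-format handling may differ:

```typescript
// Split raw interleaved 16-bit PCM bytes into per-channel float buffers.
function deinterleavePCM16(
  bytes: Uint8Array,
  channelCount: number
): Float32Array[] {
  const samples = new Int16Array(bytes.buffer, bytes.byteOffset, bytes.byteLength / 2);
  const frames = samples.length / channelCount;
  const channels = Array.from({ length: channelCount }, () => new Float32Array(frames));
  for (let frame = 0; frame < frames; frame++) {
    for (let ch = 0; ch < channelCount; ch++) {
      // Scale int16 range [-32768, 32767] to roughly [-1, 1].
      channels[ch][frame] = samples[frame * channelCount + ch] / 32768;
    }
  }
  return channels;
}
```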
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/utils/time-stretching
-# Title: time-stretching
-
-# Time stretching
-
-You can change the playback speed of an audio buffer independently, without creating an AudioContext, using the exported function [`changePlaybackSpeed`](/docs/utils/time-stretching#changeplaybackspeed).
-
-### `changePlaybackSpeed`
-
-Changes the playback speed of an audio buffer.
-
-| Parameter | Type | Description |
-| :----: | :----: | :-------- |
-| `input` | `AudioBuffer` | The audio buffer whose playback speed you want to change. |
-| `playbackSpeed` | `number` | The factor by which to change the playback speed. Values between \[1.0, 2.0] speed up playback, values between \[0.5, 1.0] slow it down. |
-
-#### Returns `Promise`.
-
-Example usage
-
-```tsx
-import { decodeAudioData, changePlaybackSpeed } from 'react-native-audio-api';
-
-const url = ... // url to an audio
-const sampleRate = 48000;
-
-const buffer = await decodeAudioData(url, sampleRate)
-  .then((audioBuffer) => changePlaybackSpeed(audioBuffer, 1.25))
-  .catch((error) => {
-    console.error('Error decoding audio data source:', error);
-    return null;
-  });
-```
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/worklets/introduction
-# Title: introduction
-
-import { MobileOnly } from '@site/src/components/Badges';
-
-# RNWorklets Support
-
-The `RNWorklets` library was originally part of Reanimated until version 4.0.0; since then, it has become a separate library.
-
-To use the worklet features provided by `react-native-audio-api`, you need to install this library:
-
-```bash
-npm install react-native-worklets
-```
-> **Note**: Supported versions of `react-native-worklets` are [0.6.x, 0.7.x]. They are checked and updated manually with each release. Nightly versions are not blocked by this check, but your build may fail with them.
-
-If the library is not installed, you will encounter runtime errors when trying to use features that depend on worklets and do not have documented fallback implementations.
-
-## What is a worklet?
-
-You can read more about worklets in the [RNWorklets documentation](https://docs.swmansion.com/react-native-worklets/).
-
-Simply put, a worklet is a piece of code that can be executed on a runtime different from the main JavaScript runtime (or more formally, the runtime on which the code was created).
-
-## What kind of worklets are used in react-native-audio-api?
-
-We support two types of worklet runtimes, each optimized for different use cases:
-
-### UIRuntime
-Worklets executed on the UI runtime provided by the `RNWorklets` library. This allows the use of Reanimated utilities and features inside the worklets. The main goal is to enable seamless integration with the UI - for example, creating animations from audio data.
-
-**Use UIRuntime when:**
-- You need to update UI elements from audio data
-- Creating visualizations or animations based on audio
-- Integrating with Reanimated shared values
-- Performance is less critical than UI responsiveness
-
-### AudioRuntime
-Worklets executed on the audio rendering thread for maximum performance and minimal latency. This runtime is optimized for real-time audio processing where timing is critical.
-
-**Use AudioRuntime when:**
-- Performance and low latency are crucial
-- Processing audio in real-time without dropouts
-- Generating audio with precise timing
-- Audio processing doesn't need to interact with UI
-
-You can specify the runtime type when creating worklet nodes using the `workletRuntime` parameter.
-
-## How to use worklets in react-native-audio-api mindfully?
-
-Our API is specifically designed to support high throughput to enable audio playback at 44.1 kHz, which is the default sample rate on most modern devices.
-
-However, this introduces several limitations on what can be done inside a worklet. Since a worklet must be executed on the JavaScript runtime, each execution introduces latency.
-
-$$ 44.1\text{ kHz} \equiv 44100\text{ samples} \equiv 1\text{ s} $$
-
-This means the sample rate indicates how many frames are processed in one second. Most features that allow using worklets as callbacks should also allow setting `bufferLength` for worklet input.
-
-If you set `bufferLength` to 128 (which is the default internal buffer size of our API used to process the graph), the audio thread invokes your worklet about
-
-$$ \frac{44100}{128} \approx 344 \text{ times per second} $$
-
-which leaves a per-call budget of
-
-$$ \frac{1000\text{ ms}}{344} \approx 2.9\text{ ms} $$
-
-This means that if your worklet, plus the rest of the processing, takes more than 2.9 ms, you may start to experience audio dropouts or other playback issues.
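The budget arithmetic above can be wrapped in a small helper (hypothetical, not a library API): the audio thread fires `sampleRate / bufferLength` callbacks per second, and the per-call budget is the inverse of that, in milliseconds.

```typescript
// Per-callback time budget in milliseconds for a given buffer length.
function workletBudgetMs(sampleRate: number, bufferLength: number): number {
  const callbacksPerSecond = sampleRate / bufferLength; // ~344 for 44100/128
  return 1000 / callbacksPerSecond;
}
```

Quadrupling `bufferLength` from 128 to 512 raises the budget from roughly 2.9 ms to roughly 11.6 ms per call, which is why larger buffers are the first recommendation below.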
-
-### Recommendations
-
-- Use a larger `bufferLength`, like 256, 512 or even 1024 if you don't need more than 40fps.
-- Avoid blocking operations in the worklet (e.g., calling APIs - use JS callbacks for these instead).
-- Do not overuse worklets. Before creating 5 or 6, consider whether the same work can be done with a single one. Chaining nodes that invoke worklets increases latency linearly.
-- Measure performance and memory usage, and check logs to ensure you are not dropping frames.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/worklets/worklet-node
-# Title: worklet-node
-
-# WorkletNode
-
-> **Warning**
->
-> This node depends on `react-native-worklets`, which you need to install in order to use it. Refer to the [getting-started page](/docs/fundamentals/getting-started#possible-additional-dependencies) for more info.
-
-The `WorkletNode` interface represents a node in the audio processing graph that can execute a worklet.
-
-Worklets are a way to run JavaScript code on the audio rendering thread, allowing for low-latency audio processing. For more information, see our [Introduction to worklets](/docs/worklets/worklets-introduction).
-This node lets you execute a worklet on the runtime you choose. `bufferLength` specifies the size of the buffer passed to the worklet on each call, and `inputChannelCount` specifies the number of channels passed to the worklet.
-
-## Constructor
-
-```tsx
-constructor(
- context: BaseAudioContext,
- runtime: AudioWorkletRuntime,
- callback: (audioData: Array<Float32Array>, channelCount: number) => void,
- bufferLength: number,
- inputChannelCount: number)
-```
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createWorkletNode(worklet, bufferLength, inputChannelCount, workletRuntime)`](/docs/core/base-audio-context#createworkletnode-)
-
-## Example
-
-```tsx
-import { AudioContext, AudioRecorder, AudioManager } from 'react-native-audio-api';
-
-AudioManager.setAudioSessionOptions({
- iosCategory: "playAndRecord",
- iosMode: "measurement",
- iosOptions: ["mixWithOthers"],
-})
-
-// This example shows how we can use a WorkletNode to process microphone audio data in real-time.
-async function App() {
- const recorder = new AudioRecorder();
-
- const audioContext = new AudioContext({ sampleRate: 16000 });
- const worklet = (audioData: Array<Float32Array>, inputChannelCount: number) => {
- 'worklet';
- // here you have access to the number of input channels and the audio data
- // audioData is a two-dimensional array: the first index is the channel number and the second is a buffer of exactly bufferLength samples
- // !IMPORTANT: you can only read the audio data here; any modifications will not be reflected in the audio output of this node
- // !VERY IMPORTANT: please read the Known Issue section below
- };
- const workletNode = audioContext.createWorkletNode(worklet, 1024, 2, 'UIRuntime');
- const adapterNode = audioContext.createRecorderAdapter();
-
- const canSetAudioSessionActivity = await AudioManager.setAudioSessionActivity(true);
- if (!canSetAudioSessionActivity) {
- throw new Error("Could not activate the audio session");
- }
- adapterNode.connect(workletNode);
- workletNode.connect(audioContext.destination);
- recorder.connect(adapterNode);
- recorder.start();
- audioContext.resume();
-}
-```
-
-## Properties
-
-It has no own properties but inherits from [`AudioNode`](/docs/core/audio-node).
-
-## Methods
-
-It has no own methods but inherits from [`AudioNode`](/docs/core/audio-node).
-
-## Known Issue
-
-It might happen that a worklet's side effects are not visible in the UI (when you are using the UIRuntime kind). For example, you may have an animated style which depends on a shared value modified in the worklet.
-This happens because the microtask queue is not always flushed properly after the worklet runs.
-
-To workaround this issue just add this line at the end of your worklet callback function:
-
-```ts
-requestAnimationFrame(() => {});
-```
-
-This will ensure that the microtask queue is flushed and your UI is updated properly. Be aware that this might have performance implications, which is why it is not included by default.
-Use it only after confirming that your worklet's side effects are not visible in the UI.
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/worklets/worklet-processing-node
-# Title: worklet-processing-node
-
-# WorkletProcessingNode
-
-> **Warning**
->
-> This node depends on `react-native-worklets`, which you need to install in order to use it. Refer to the [getting-started page](/docs/fundamentals/getting-started#possible-additional-dependencies) for more info.
-
-The `WorkletProcessingNode` interface represents a node in the audio processing graph that can process audio using a worklet function. Unlike [`WorkletNode`](/docs/worklets/worklet-node) which only provides read-only access to audio data, `WorkletProcessingNode` allows you to modify the audio signal by providing both input and output buffers.
-
-This node lets you execute a worklet that receives input audio data and produces output audio data, making it perfect for creating custom audio effects, filters, and processors. The worklet processes the exact number of frames provided by the audio system in each call.
-
-For more information about worklets, see our [Introduction to worklets](/docs/worklets/worklets-introduction).
-
-## Constructor
-
-```tsx
-constructor(
- context: BaseAudioContext,
- runtime: AudioWorkletRuntime,
- callback: (
- inputData: Array<Float32Array>,
- outputData: Array<Float32Array>,
- framesToProcess: number,
- currentTime: number
- ) => void)
-```
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createWorkletProcessingNode(worklet, workletRuntime)`](/docs/core/base-audio-context#createworkletprocessingnode-)
-
-## Example
-
-```tsx
-import { AudioContext, AudioRecorder } from 'react-native-audio-api';
-
-// This example shows how to create a simple gain effect using WorkletProcessingNode
-function App() {
- const recorder = new AudioRecorder({
- sampleRate: 16000,
- bufferLengthInSamples: 16000,
- });
-
- const audioContext = new AudioContext({ sampleRate: 16000 });
-
- // Create a simple gain worklet that multiplies the input by a gain value
- const gainWorklet = (
- inputData: Array<Float32Array>,
- outputData: Array<Float32Array>,
- framesToProcess: number,
- currentTime: number
- ) => {
- 'worklet';
- const gain = 0.5; // 50% volume
-
- for (let ch = 0; ch < inputData.length; ch++) {
- const input = inputData[ch];
- const output = outputData[ch];
-
- for (let i = 0; i < framesToProcess; i++) {
- output[i] = input[i] * gain;
- }
- }
- };
-
- const workletProcessingNode = audioContext.createWorkletProcessingNode(
- gainWorklet,
- 'AudioRuntime'
- );
- const adapterNode = audioContext.createRecorderAdapter();
-
- adapterNode.connect(workletProcessingNode);
- workletProcessingNode.connect(audioContext.destination);
- recorder.connect(adapterNode);
- recorder.start();
-}
-```
-
-## Worklet Parameters Explanation
-
-The worklet function receives four parameters:
-
-### `inputData: Array<Float32Array>`
-
-A two-dimensional array where:
-
-* First dimension represents the audio channel (0 = left, 1 = right for stereo)
-* Second dimension contains the input audio samples for that channel
-* You should **read** from these buffers to get the input audio data
-* The length of each `Float32Array` equals the `framesToProcess` parameter
-
-### `outputData: Array<Float32Array>`
-
-A two-dimensional array where:
-
-* First dimension represents the audio channel (0 = left, 1 = right for stereo)
-* Second dimension contains the output audio samples for that channel
-* You must **write** to these buffers to produce the processed audio output
-* The length of each `Float32Array` equals the `framesToProcess` parameter
-
-### `framesToProcess: number`
-
-The number of audio samples to process in this call. This determines how many samples you need to process in each channel's buffer. This value will be at most 128.
-
-### `currentTime: number`
-
-The current audio context time in seconds when this worklet call begins. This represents the absolute time since the audio context was created.
-
-## Audio Processing Pattern
-
-A typical WorkletProcessingNode worklet follows this pattern:
-
-```tsx
-const audioProcessor = (
- inputData: Array<Float32Array>,
- outputData: Array<Float32Array>,
- framesToProcess: number,
- currentTime: number
-) => {
- 'worklet';
-
- for (let channel = 0; channel < inputData.length; channel++) {
- const input = inputData[channel];
- const output = outputData[channel];
-
- for (let sample = 0; sample < framesToProcess; sample++) {
- // Process each sample
- // Read from: input[sample]
- // Write to: output[sample]
- output[sample] = processAudioSample(input[sample]);
- }
- }
-};
-```
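As a concrete stand-in for the `processAudioSample` placeholder above, here is a stateful one-pole low-pass filter. This is a hypothetical example of a per-sample processor, not something the library provides:

```typescript
// Returns a per-sample processor implementing y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
function makeOnePoleLowpass(alpha: number): (sample: number) => number {
  let previous = 0; // filter state carried across samples
  return (sample: number): number => {
    previous = previous + alpha * (sample - previous);
    return previous;
  };
}
```

Note that state such as `previous` must persist across worklet calls for the filter to behave correctly, so the processor should be created once, not on every callback.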
-
-## Properties
-
-It has no own properties but inherits from [`AudioNode`](/docs/core/audio-node).
-
-## Methods
-
-It has no own methods but inherits from [`AudioNode`](/docs/core/audio-node).
-
-## Performance Considerations
-
-Since `WorkletProcessingNode` processes audio in real-time, performance is critical:
-
-* Keep worklet functions lightweight and efficient
-* Avoid complex calculations that could cause audio dropouts
-* Process samples in-place when possible
-* Consider using lookup tables for expensive operations
-* Use `AudioRuntime` for better performance, `UIRuntime` for UI integration
-* Test on target devices to ensure smooth audio processing
-
-## Use Cases
-
-* **Audio Effects**: Reverb, delay, distortion, filters
-* **Audio Processing**: Compression, limiting, normalization
-* **Real-time Filters**: EQ, high-pass, low-pass, band-pass filters
-* **Custom Algorithms**: Noise reduction, pitch shifting, spectral processing
-* **Signal Analysis**: Feature extraction while passing audio through
-
-
----
-# URL: https://docs.swmansion.com/react-native-audio-api/docs/worklets/worklet-source-node
-# Title: worklet-source-node
-
-# WorkletSourceNode
-
-> **Warning**
->
-> This node depends on `react-native-worklets`, which you need to install in order to use it. Refer to the [getting-started page](/docs/fundamentals/getting-started#possible-additional-dependencies) for more info.
-
-The `WorkletSourceNode` interface represents a scheduled source node in the audio processing graph that generates audio using a worklet function. It extends [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node), providing the ability to start and stop audio generation at specific times.
-
-This node allows you to generate audio procedurally using JavaScript worklets, making it perfect for creating custom synthesizers, audio generators, or real-time audio effects that produce sound rather than just process it.
-
-For more information about worklets, see our [Introduction to worklets](/docs/worklets/worklets-introduction).
-
-## Constructor
-
-```tsx
-constructor(
- context: BaseAudioContext,
- runtime: AudioWorkletRuntime,
- callback: (
- audioData: Array<Float32Array>,
- framesToProcess: number,
- currentTime: number,
- startOffset: number
- ) => void)
-```
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createWorkletSourceNode(worklet, workletRuntime)`](/docs/core/base-audio-context#createworkletsourcenode-)
-
-## Example
-
-```tsx
-import { AudioContext } from 'react-native-audio-api';
-
-function App() {
- const audioContext = new AudioContext({ sampleRate: 44100 });
-
- // Create a simple sine wave generator worklet
- const sineWaveWorklet = (
- audioData: Array<Float32Array>,
- framesToProcess: number,
- currentTime: number,
- startOffset: number
- ) => {
- 'worklet';
-
- const frequency = 440; // A4 note
- const sampleRate = 44100;
-
- // Generate audio for each channel
- for (let channel = 0; channel < audioData.length; channel++) {
- for (let i = 0; i < framesToProcess; i++) {
- // Calculate the absolute time for this sample
- const sampleTime = currentTime + (startOffset + i) / sampleRate;
-
- // Generate sine wave
- const phase = 2 * Math.PI * frequency * sampleTime;
- audioData[channel][i] = Math.sin(phase) * 0.5; // 50% volume
- }
- }
- };
-
- const workletSourceNode = audioContext.createWorkletSourceNode(
- sineWaveWorklet,
- 'AudioRuntime'
- );
-
- // Connect to output and start playback
- workletSourceNode.connect(audioContext.destination);
- workletSourceNode.start(); // Start immediately
-
- // Stop after 2 seconds
- setTimeout(() => {
- workletSourceNode.stop();
- }, 2000);
-}
-```
-
-## Worklet Parameters Explanation
-
-The worklet function receives four parameters:
-
-### `audioData: Array<Float32Array>`
-
-A two-dimensional array where:
-
-* First dimension represents the audio channel (0 = left, 1 = right for stereo)
-* Second dimension contains the audio samples for that channel
-* You must **write** audio data to these buffers to generate sound
-* The length of each `Float32Array` equals `framesToProcess`
-
-### `framesToProcess: number`
-
-The number of audio samples to generate in this call. This determines how many samples you need to fill in each channel's buffer.
-
-### `currentTime: number`
-
-The current audio context time in seconds when this worklet call begins. This represents the absolute time since the audio context was created.
-
-### `startOffset: number`
-
-The sample offset within the current processing block where your generated audio should begin. This is particularly important for precise timing when the node starts or stops mid-block.
-
-## Understanding `startOffset` and `currentTime`
-
-The relationship between `currentTime` and `startOffset` is crucial for generating continuous audio:
-
-```tsx
-const worklet = (audioData, framesToProcess, currentTime, startOffset) => {
- 'worklet';
-
- const sampleRate = 44100;
-
- for (let i = 0; i < framesToProcess; i++) {
- // Calculate the exact time for this sample
- const sampleTime = currentTime + (startOffset + i) / sampleRate;
-
- // Use sampleTime for phase calculations, LFOs, envelopes, etc.
- const phase = 2 * Math.PI * frequency * sampleTime;
- audioData[0][i] = Math.sin(phase);
- }
-};
-```
-
-**Key points:**
-
-* `currentTime` represents the audio context time at the start of the processing block
-* `startOffset` tells you which sample within the block to start generating audio
-* The absolute time for sample `i` is: `currentTime + (startOffset + i) / sampleRate`
-* This ensures phase continuity and precise timing across processing blocks
-
-## Properties
-
-It has no own properties but inherits from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node).
-
-## Methods
-
-It has no own methods but inherits from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node).
-
-## Performance Considerations
-
-Since `WorkletSourceNode` generates audio in real-time, performance is critical:
-
-* Keep worklet functions lightweight and efficient
-* Avoid complex calculations that could cause audio dropouts
-* Consider using lookup tables for expensive operations like trigonometric functions
-* Test on target devices to ensure smooth audio generation
-* Use `AudioRuntime` for better performance, `UIRuntime` for UI integration
-
-## Use Cases
-
-* **Custom Synthesizers**: Generate waveforms, apply modulation, create complex timbres
-* **Audio Generators**: White noise, pink noise, test tones, sweeps
-* **Procedural Audio**: Dynamic soundscapes, generative music
-* **Real-time Effects**: Audio that responds to user input or external data
-* **Educational Tools**: Demonstrate audio synthesis concepts interactively
-
-## See Also
-
-* [WorkletNode](/docs/worklets/worklet-node) - For processing existing audio with worklets
-* [Introduction to worklets](/docs/worklets/worklets-introduction) - Understanding worklet fundamentals
-* [AudioScheduledSourceNode](/docs/sources/audio-scheduled-source-node) - Base class for scheduled sources
-
diff --git a/packages/audiodocs/static/llms.txt b/packages/audiodocs/static/llms.txt
deleted file mode 100644
index afecc0389..000000000
--- a/packages/audiodocs/static/llms.txt
+++ /dev/null
@@ -1,97 +0,0 @@
-# Documentation
-
-## analysis
-
-- [analyser-node](https://docs.swmansion.com/react-native-audio-api/docs/analysis/analyser-node)
-
-## core
-
-- [audio-context](https://docs.swmansion.com/react-native-audio-api/docs/core/audio-context)
-- [audio-node](https://docs.swmansion.com/react-native-audio-api/docs/core/audio-node)
-- [audio-param](https://docs.swmansion.com/react-native-audio-api/docs/core/audio-param)
-- [base-audio-context](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context)
-- [offline-audio-context](https://docs.swmansion.com/react-native-audio-api/docs/core/offline-audio-context)
-
-## destinations
-
-- [audio-destination-node](https://docs.swmansion.com/react-native-audio-api/docs/destinations/audio-destination-node)
-
-## effects
-
-- [biquad-filter-node](https://docs.swmansion.com/react-native-audio-api/docs/effects/biquad-filter-node)
-- [convolver-node](https://docs.swmansion.com/react-native-audio-api/docs/effects/convolver-node)
-- [delay-node](https://docs.swmansion.com/react-native-audio-api/docs/effects/delay-node)
-- [gain-node](https://docs.swmansion.com/react-native-audio-api/docs/effects/gain-node)
-- [iir-filter-node](https://docs.swmansion.com/react-native-audio-api/docs/effects/iir-filter-node)
-- [periodic-wave](https://docs.swmansion.com/react-native-audio-api/docs/effects/periodic-wave)
-- [stereo-panner-node](https://docs.swmansion.com/react-native-audio-api/docs/effects/stereo-panner-node)
-- [wave-shaper-node](https://docs.swmansion.com/react-native-audio-api/docs/effects/wave-shaper-node)
-
-## fundamentals
-
-- [best-practices](https://docs.swmansion.com/react-native-audio-api/docs/fundamentals/best-practices)
-- [getting-started](https://docs.swmansion.com/react-native-audio-api/docs/fundamentals/getting-started)
-- [introduction](https://docs.swmansion.com/react-native-audio-api/docs/fundamentals/introduction)
-
-## guides
-
-- [create-your-own-effect](https://docs.swmansion.com/react-native-audio-api/docs/guides/create-your-own-effect)
-- [lets-make-some-noise](https://docs.swmansion.com/react-native-audio-api/docs/guides/lets-make-some-noise)
-- [making-a-piano-keyboard](https://docs.swmansion.com/react-native-audio-api/docs/guides/making-a-piano-keyboard)
-- [noise-generation](https://docs.swmansion.com/react-native-audio-api/docs/guides/noise-generation)
-- [see-your-sound](https://docs.swmansion.com/react-native-audio-api/docs/guides/see-your-sound)
-
-## inputs
-
-- [audio-recorder](https://docs.swmansion.com/react-native-audio-api/docs/inputs/audio-recorder)
-
-## other
-
-- [audio-api-plugin](https://docs.swmansion.com/react-native-audio-api/docs/other/audio-api-plugin)
-- [compatibility](https://docs.swmansion.com/react-native-audio-api/docs/other/compatibility)
-- [ffmpeg-info](https://docs.swmansion.com/react-native-audio-api/docs/other/ffmpeg-info)
-- [non-expo-permissions](https://docs.swmansion.com/react-native-audio-api/docs/other/non-expo-permissions)
-- [running_with_mac_catalyst](https://docs.swmansion.com/react-native-audio-api/docs/other/running_with_mac_catalyst)
-- [testing](https://docs.swmansion.com/react-native-audio-api/docs/other/testing)
-- [web-audio-api-coverage](https://docs.swmansion.com/react-native-audio-api/docs/other/web-audio-api-coverage)
-
-## react
-
-- [select-input](https://docs.swmansion.com/react-native-audio-api/docs/react/select-input)
-
-## sources
-
-- [audio-buffer-base-source-node](https://docs.swmansion.com/react-native-audio-api/docs/sources/audio-buffer-base-source-node)
-- [audio-buffer-queue-source-node](https://docs.swmansion.com/react-native-audio-api/docs/sources/audio-buffer-queue-source-node)
-- [audio-buffer-source-node](https://docs.swmansion.com/react-native-audio-api/docs/sources/audio-buffer-source-node)
-- [audio-buffer](https://docs.swmansion.com/react-native-audio-api/docs/sources/audio-buffer)
-- [audio-scheduled-source-node](https://docs.swmansion.com/react-native-audio-api/docs/sources/audio-scheduled-source-node)
-- [constant-source-node](https://docs.swmansion.com/react-native-audio-api/docs/sources/constant-source-node)
-- [oscillator-node](https://docs.swmansion.com/react-native-audio-api/docs/sources/oscillator-node)
-- [recorder-adapter-node](https://docs.swmansion.com/react-native-audio-api/docs/sources/recorder-adapter-node)
-- [streamer-node](https://docs.swmansion.com/react-native-audio-api/docs/sources/streamer-node)
-
-## system
-
-- [audio-manager](https://docs.swmansion.com/react-native-audio-api/docs/system/audio-manager)
-- [playback-notification-manager](https://docs.swmansion.com/react-native-audio-api/docs/system/playback-notification-manager)
-- [recording-notification-manager](https://docs.swmansion.com/react-native-audio-api/docs/system/recording-notification-manager)
-
-## types
-
-- [channel-count-mode](https://docs.swmansion.com/react-native-audio-api/docs/types/channel-count-mode)
-- [channel-interpretation](https://docs.swmansion.com/react-native-audio-api/docs/types/channel-interpretation)
-- [oscillator-type](https://docs.swmansion.com/react-native-audio-api/docs/types/oscillator-type)
-
-## utils
-
-- [decoding](https://docs.swmansion.com/react-native-audio-api/docs/utils/decoding)
-- [time-stretching](https://docs.swmansion.com/react-native-audio-api/docs/utils/time-stretching)
-
-## worklets
-
-- [introduction](https://docs.swmansion.com/react-native-audio-api/docs/worklets/introduction)
-- [worklet-node](https://docs.swmansion.com/react-native-audio-api/docs/worklets/worklet-node)
-- [worklet-processing-node](https://docs.swmansion.com/react-native-audio-api/docs/worklets/worklet-processing-node)
-- [worklet-source-node](https://docs.swmansion.com/react-native-audio-api/docs/worklets/worklet-source-node)
-
diff --git a/packages/audiodocs/static/raw/analysis/analyser-node.md b/packages/audiodocs/static/raw/analysis/analyser-node.md
deleted file mode 100644
index f95d3532b..000000000
--- a/packages/audiodocs/static/raw/analysis/analyser-node.md
+++ /dev/null
@@ -1,118 +0,0 @@
-# AnalyserNode
-
-The `AnalyserNode` interface represents a node providing two core functionalities: extracting time-domain data and frequency-domain data from audio signals.
-It is an [`AudioNode`](/docs/core/audio-node) that passes the audio data unchanged from input to output, while letting you capture and process that data.
-
-#### Time domain vs Frequency domain
-
-
-
-A time-domain graph illustrates how a signal evolves over time, displaying changes in amplitude or intensity as time progresses.
-In contrast, a frequency-domain graph reveals how the signal's energy or power is distributed across different frequency bands, highlighting the presence and strength of various frequency components over a specified range.
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: AnalyserOptions)
-```
-
-### `AnalyserOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `fftSize` | `number` | 2048 | Size of the Fast Fourier Transform used for frequency-domain analysis |
-| `minDecibels` | `number` | -100 | Initial minimum power in dB for FFT analysis |
-| `maxDecibels` | `number` | -30 | Initial maximum power in dB for FFT analysis |
-| `smoothingTimeConstant` | `number` | 0.8 | Initial smoothing constant for the FFT analysis |
-
-Alternatively, the node can be created with default values using the `BaseAudioContext` factory method
-[`BaseAudioContext.createAnalyser()`](/docs/core/base-audio-context#createanalyser).
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description | |
-| :----: | :----: | :-------- | :-: |
-| `fftSize` | `number` | Integer value representing the size of the [Fast Fourier Transform](https://en.wikipedia.org/wiki/Fast_Fourier_transform) used to determine the frequency domain. It also equals the size of the returned time-domain data. |
-| `minDecibels` | `number` | Float value representing the minimum value for the range of results from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata). |
-| `maxDecibels` | `number` | Float value representing the maximum value for the range of results from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata). |
-| `smoothingTimeConstant` | `number` | Float value representing the averaging constant applied to the previous analysis frame. In general, the higher the value, the smoother the transition between values over time. |
-| `frequencyBinCount` | `number` | Integer value representing the amount of data obtained in the frequency domain; equal to half of the `fftSize` property. |
-
-## Methods
-
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-### `getFloatFrequencyData`
-
-Copies the current frequency data into the given array.
-Each value in the array represents the decibel value for a specific frequency.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `array` | `Float32Array` | The array to which frequency data will be copied. |
-
-#### Returns `undefined`.
-
-### `getByteFrequencyData`
-
-Copies the current frequency data into the given array.
-Each value in the array is within the range 0 to 255.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `array` | `Uint8Array` | The array to which frequency data will be copied. |
-
-#### Returns `undefined`.
-
-### `getFloatTimeDomainData`
-
-Copies the current time-domain data into the given array.
-Each value in the array is the magnitude of the signal at a particular time.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `array` | `Float32Array` | The array to which time-domain data will be copied. |
-
-#### Returns `undefined`.
-
-### `getByteTimeDomainData`
-
-Copies the current time-domain data into the given array.
-Each value in the array is within the range 0 to 255, where a value of 127 indicates silence.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `array` | `Uint8Array` | The array to which time-domain data will be copied. |
-
-#### Returns `undefined`.
-
-## Remarks
-
-#### `fftSize`
-
-* Must be a power of 2 between 32 and 32768.
-* Throws `IndexSizeError` if the set value is not a power of 2 or is outside the allowed range.
-
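The power-of-2 constraint above can be checked with a standard bit trick; the following is a sketch (the `isValidFftSize` helper is illustrative, not part of the library):

```tsx
// Hypothetical helper mirroring the documented fftSize constraints.
// A positive integer n is a power of 2 exactly when (n & (n - 1)) === 0.
function isValidFftSize(n: number): boolean {
  return Number.isInteger(n) && n >= 32 && n <= 32768 && (n & (n - 1)) === 0;
}
```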
-#### `minDecibels`
-
-* 0 dB ([decibel](https://en.wikipedia.org/wiki/Decibel)) is the loudest possible sound; -10 dB is a tenth of that.
-* When getting data from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata), any frequency with an amplitude lower than `minDecibels` will be returned as 0.
-* Throws `IndexSizeError` if the set value is greater than or equal to `maxDecibels`.
-
-#### `maxDecibels`
-
-* 0 dB ([decibel](https://en.wikipedia.org/wiki/Decibel)) is the loudest possible sound; -10 dB is a tenth of that.
-* When getting data from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata), any frequency with an amplitude higher than `maxDecibels` will be returned as 255.
-* Throws `IndexSizeError` if the set value is less than or equal to `minDecibels`.
-
-#### `smoothingTimeConstant`
-
-* Nominal range is 0 to 1.
-* 0 means no averaging; 1 means the previous and current buffers overlap heavily when computing the value.
-* Throws `IndexSizeError` if the set value is outside the allowed range.
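To make the `minDecibels`, `maxDecibels`, and `smoothingTimeConstant` behavior concrete, here is a sketch of how the Web Audio specification smooths analysis frames over time and maps dB values into the bytes returned by `getByteFrequencyData()` (the helper names are illustrative, not part of the library):

```tsx
// Smoothing of the current frame against the previous one:
// smoothed = tau * previous + (1 - tau) * current
function smooth(previous: number, current: number, tau: number): number {
  return tau * previous + (1 - tau) * current;
}

// Map a dB value into the 0-255 byte range used by getByteFrequencyData().
// Values below minDecibels clamp to 0, values above maxDecibels clamp to 255.
function dbToByte(db: number, minDecibels = -100, maxDecibels = -30): number {
  const scaled = (255 * (db - minDecibels)) / (maxDecibels - minDecibels);
  return Math.max(0, Math.min(255, Math.floor(scaled)));
}
```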
diff --git a/packages/audiodocs/static/raw/core/audio-context.md b/packages/audiodocs/static/raw/core/audio-context.md
deleted file mode 100644
index 73a39874f..000000000
--- a/packages/audiodocs/static/raw/core/audio-context.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# AudioContext
-
-The `AudioContext` interface inherits from [`BaseAudioContext`](/docs/core/base-audio-context).
-It is responsible for supervising and managing an audio-processing graph.
-
-## Constructor
-
-`new AudioContext(options: AudioContextOptions)`
-
-```jsx
-interface AudioContextOptions {
- sampleRate: number;
-}
-```
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `NotSupportedError` | `sampleRate` is outside the nominal range \[8000, 96000]. |
-
-## Properties
-
-`AudioContext` does not define any additional properties.
-It inherits all properties from [`BaseAudioContext`](/docs/core/base-audio-context#properties).
-
-## Methods
-
-It inherits all methods from [`BaseAudioContext`](/docs/core/base-audio-context#methods).
-
-### `close`
-
-Closes the audio context, releasing any system audio resources that it uses.
-
-#### Returns `Promise`.
-
-### `suspend`
-
-Suspends time progression in the audio context.
-It is useful when your application will not use audio for a while.
-
-#### Returns `Promise`.
-
-### `resume`
-
-Resumes a previously suspended audio context.
-
-#### Returns `Promise`.
diff --git a/packages/audiodocs/static/raw/core/audio-node.md b/packages/audiodocs/static/raw/core/audio-node.md
deleted file mode 100644
index c3a556b2b..000000000
--- a/packages/audiodocs/static/raw/core/audio-node.md
+++ /dev/null
@@ -1,127 +0,0 @@
-# AudioNode
-
-The `AudioNode` interface serves as a versatile interface for constructing an audio processing graph, representing individual units of audio processing functionality.
-Each `AudioNode` is associated with a certain number of audio channels that facilitate the transfer of audio data through the processing graph.
-
-We usually represent the channels with the standard abbreviations detailed in the table below:
-
-| Name | Number of channels | Channels |
-| :----: | :------: | :-------- |
-| Mono | 1 | 0: M - mono |
-| Stereo | 2 | 0: L - left 1: R - right |
-| Quad | 4 | 0: L - left 1: R - right 2: SL - surround left 3: SR - surround right |
-| 5.1 | 6 | 0: L - left 1: R - right 2: C - center 3: LFE - subwoofer 4: SL - surround left 5: SR - surround right |
-
-#### Mixing
-
-When a node has more than one input, or when the number of input channels differs from the number of output channels, up-mixing or down-mixing must be performed.
-Three properties are involved in the mixing process: `channelCount`, [`ChannelCountMode`](/docs/types/channel-count-mode), and [`ChannelInterpretation`](/docs/types/channel-interpretation).
-Based on them, the output's number of channels and the mixing strategy can be determined.
-
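As a sketch of how these properties interact, the computed channel count can be derived as follows (this follows the Web Audio specification's rules; the helper itself is illustrative, not a library API):

```tsx
type ChannelCountMode = 'max' | 'clamped-max' | 'explicit';

// Compute how many channels a node mixes to, given its channelCount,
// its channelCountMode, and the largest channel count among its inputs.
function computedNumberOfChannels(
  mode: ChannelCountMode,
  channelCount: number,
  maxInputChannels: number
): number {
  switch (mode) {
    case 'max':
      return maxInputChannels; // channelCount is ignored
    case 'clamped-max':
      return Math.min(maxInputChannels, channelCount);
    default: // 'explicit'
      return channelCount; // inputs are always mixed to exactly channelCount
  }
}
```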
-## Properties
-
-| Name | Type | Description | |
-| :----: | :----: | :-------- | :-: |
-| `context` | [`BaseAudioContext`](/docs/core/base-audio-context) | Associated context. | |
-| `numberOfInputs` | `number` | Integer value representing the number of input connections for the node. | |
-| `numberOfOutputs` | `number` | Integer value representing the number of output connections for the node. | |
-| `channelCount` | `number` | Integer used to determine how many channels are used when up-mixing or down-mixing node's inputs. | |
-| `channelCountMode` | [`ChannelCountMode`](/docs/types/channel-count-mode) | Enumerated value that specifies the method by which channels are mixed between the node's inputs and outputs. | |
-| `channelInterpretation` | [`ChannelInterpretation`](/docs/types/channel-interpretation) | Enumerated value that specifies how input channels are mapped to output channels when number of them is different. | |
-
-## Examples
-
-### Connecting node to node
-
-```tsx
-import { OscillatorNode, GainNode, AudioContext } from 'react-native-audio-api';
-
-function App() {
- const audioContext = new AudioContext();
- const oscillatorNode = audioContext.createOscillator();
- const gainNode = audioContext.createGain();
-
-  gainNode.gain.value = 0.5; // lower volume to 0.5
- oscillatorNode.connect(gainNode);
- gainNode.connect(audioContext.destination);
- oscillatorNode.start(audioContext.currentTime);
-}
-```
-
-### Connecting node to audio param (LFO-controlled parameter)
-
-```tsx
-import { OscillatorNode, GainNode, AudioContext } from 'react-native-audio-api';
-
-function App() {
- const audioContext = new AudioContext();
- const oscillatorNode = audioContext.createOscillator();
- const lfo = audioContext.createOscillator();
- const gainNode = audioContext.createGain();
-
-  gainNode.gain.value = 0.5; // lower volume to 0.5
-  lfo.frequency.value = 2; // low-frequency oscillator at 2 Hz
-
-  // By default, oscillator wave values range from -1 to 1.
-  // Connecting the lfo to the gain param makes the gain oscillate at 2 Hz,
-  // with its value ranging from 0.5 - 1 to 0.5 + 1.
-  // To modulate the amplitude, route the lfo through another gain node that scales its output.
-  lfo.connect(gainNode.gain);
-
- oscillatorNode.connect(gainNode);
- gainNode.connect(audioContext.destination);
- oscillatorNode.start(audioContext.currentTime);
- lfo.start(audioContext.currentTime);
-}
-```
-
-## Methods
-
-### `connect`
-
-Connects one of the node's outputs to a destination.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `destination` | [`AudioNode`](/docs/core/audio-node) or [`AudioParam`](/docs/core/audio-param) | `AudioNode` or `AudioParam` to which to connect. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `InvalidAccessError` | If `destination` is not part of the same audio context as the node. |
-
-#### Returns `undefined`.
-
-### `disconnect`
-
-Disconnects one or more nodes from the node.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `destination` | [`AudioNode`](/docs/core/audio-node) or [`AudioParam`](/docs/core/audio-param) | `AudioNode` or `AudioParam` from which to disconnect. |
-
-If no arguments are provided, the node disconnects from all outgoing connections.
-
-#### Returns `undefined`.
-
-### `AudioNodeOptions`
-
-It is used to construct the majority of `AudioNode`s.
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `channelCount` | `number` | 2 | Indicates the number of channels used when up-mixing or down-mixing the node's inputs. |
-| `channelCountMode` | [`ChannelCountMode`](/docs/types/channel-count-mode) | `max` | Determines how the number of input channels affects the number of output channels in an audio node. |
-| `channelInterpretation` | [`ChannelInterpretation`](/docs/types/channel-interpretation) | `speakers` | Specifies how input channels are mapped to output channels when their numbers differ. |
-
-If any of these values are not provided, default values are used.
-
-## Remarks
-
-#### `numberOfInputs`
-
-* Source nodes are characterized by having a `numberOfInputs` value of 0.
-
-#### `numberOfOutputs`
-
-* Destination nodes are characterized by having a `numberOfOutputs` value of 0.
diff --git a/packages/audiodocs/static/raw/core/audio-param.md b/packages/audiodocs/static/raw/core/audio-param.md
deleted file mode 100644
index 0a28703a0..000000000
--- a/packages/audiodocs/static/raw/core/audio-param.md
+++ /dev/null
@@ -1,153 +0,0 @@
-# AudioParam
-
-The `AudioParam` interface represents an audio-related parameter (such as the `gain` property of [`GainNode`](/docs/effects/gain-node)).
-It can be set to a specific value, or a value change can be scheduled to happen at a specific time, following a specific pattern.
-
-#### a-rate vs k-rate
-
-* `a-rate` - takes the current audio parameter value for each sample frame of the audio signal.
-* `k-rate` - uses the same initial audio parameter value for the whole block processed.
-
-## Properties
-
-| Name | Type | Description | |
-| :----: | :----: | :-------- | :-: |
-| `defaultValue` | `number` | Initial value of the parameter. | |
-| `minValue` | `number` | Minimum possible value of the parameter. | |
-| `maxValue` | `number` | Maximum possible value of the parameter. | |
-| `value` | `number` | Current value of the parameter. Initially set to `defaultValue`. |
-
-## Methods
-
-### `setValueAtTime`
-
-Schedules an instant change to the `value` at the given `startTime`.
-
-> **Caution**
->
-> If you need to call this function many times (especially more than 31 times), it is recommended to use the methods described below
-> (such as [`linearRampToValueAtTime`](/docs/core/audio-param#linearramptovalueattime) or [`exponentialRampToValueAtTime`](/docs/core/audio-param#exponentialramptovalueattime)),
-> as they are more efficient for continuous changes. For more specific use cases, you can schedule multiple value changes using [`setValueCurveAtTime`](/docs/core/audio-param#setvaluecurveattime).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `value` | `number` | A float representing the value the `AudioParam` will be set to at the given time |
-| `startTime` | `number` | The time, in seconds, at which the change in value is going to happen. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties). |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `startTime` is a negative number. |
-
-#### Returns `AudioParam`.
-
-### `linearRampToValueAtTime`
-
-Schedules a gradual linear change to the new value.
-The change begins at the time designated for the previous event. It follows a linear ramp to the `value`, achieving it by the specified `endTime`.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `value` | `number` | A float representing the value the `AudioParam` will ramp to by the given time. |
-| `endTime` | `number` | The time, in seconds, at which the value ramp will end. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties). |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `endTime` is a negative number. |
-
-#### Returns `AudioParam`.
-
-### `exponentialRampToValueAtTime`
-
-Schedules a gradual exponential change to the new value.
-The change begins at the time designated for the previous event. It follows an exponential ramp to the `value`, achieving it by the specified `endTime`.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `value` | `number` | A float representing the value the `AudioParam` will ramp to by given time. |
-| `endTime` | `number` | The time, in seconds, at which the value ramp will end. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties).|
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `endTime` is a negative number. |
-
-#### Returns `AudioParam`.
-
-### `setTargetAtTime`
-
-Schedules a gradual exponential approach to the target value, beginning at the given start time.
-This method is useful for the decay or release portions of [ADSR envelopes](/docs/effects/gain-node#envelope---adsr).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `target` | `number` | A float representing the value to which the `AudioParam` will start transitioning. |
-| `startTime` | `number` | The time, in seconds, at which exponential transition will begin. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties). |
-| `timeConstant` | `number` | A double representing the time-constant value of an exponential approach to the `target`. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `startTime` is a negative number. |
-| `RangeError` | `timeConstant` is a negative number. |
-
-#### Returns `AudioParam`.
-
-### `setValueCurveAtTime`
-
-Schedules the parameter's value to change following a curve defined by the given array.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `values` | `Float32Array` | The array of values defining the curve the change will follow. |
-| `startTime` | `number` | The time, in seconds, at which change will begin. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties). |
-| `duration` | `number` | A double representing total time over which the change will happen. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `startTime` is a negative number. |
-
-#### Returns `AudioParam`.
-
-### `cancelScheduledValues`
-
-Cancels all scheduled changes after given cancel time.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `cancelTime` | `number` | The time, in seconds, after which all scheduled changes will be cancelled. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties). |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `cancelTime` is a negative number. |
-
-#### Returns `AudioParam`.
-
-### `cancelAndHoldAtTime`
-
-Cancels all scheduled changes after the given cancel time, but holds the value it had at that time until further changes occur.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `cancelTime` | `number` | The time, in seconds, after which all scheduled changes will be cancelled. If it's smaller than [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties), it will be clamped to [`currentTime`](https://docs.swmansion.com/react-native-audio-api/docs/core/base-audio-context#properties).|
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `cancelTime` is a negative number. |
-
-#### Returns `AudioParam`.
-
-## Remarks
-
-All time parameters should be in the same time coordinate system as [`BaseAudioContext.currentTime`](/docs/core/base-audio-context).
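The automation methods above follow the standard Web Audio curve formulas. The sketch below evaluates a linear ramp, an exponential ramp, and a `setTargetAtTime` approach at a time `t`; the helper names are illustrative, not part of the library:

```tsx
// Value of a linear ramp from v0 (scheduled at t0) to v1 (reached at t1).
function linearRampValue(v0: number, v1: number, t0: number, t1: number, t: number): number {
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0));
}

// Value of an exponential ramp; v0 and v1 must be non-zero and share a sign.
function exponentialRampValue(v0: number, v1: number, t0: number, t1: number, t: number): number {
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}

// Value during a setTargetAtTime approach that started at t0 from value v0.
function setTargetValue(v0: number, target: number, t0: number, timeConstant: number, t: number): number {
  return target + (v0 - target) * Math.exp(-(t - t0) / timeConstant);
}
```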
diff --git a/packages/audiodocs/static/raw/core/base-audio-context.md b/packages/audiodocs/static/raw/core/base-audio-context.md
deleted file mode 100644
index bf6c9cd2b..000000000
--- a/packages/audiodocs/static/raw/core/base-audio-context.md
+++ /dev/null
@@ -1,315 +0,0 @@
-# BaseAudioContext
-
-The `BaseAudioContext` interface acts as a supervisor of audio-processing graphs. It provides key processing parameters such as current time, output destination or sample rate.
-Additionally, it is responsible for nodes creation and audio-processing graph's lifecycle management.
-However, `BaseAudioContext` itself cannot be directly utilized; instead, its functionalities must be accessed through one of its derived interfaces: [`AudioContext`](/docs/core/audio-context) or [`OfflineAudioContext`](/docs/core/offline-audio-context).
-
-#### Audio graph
-
-An audio graph is a structured representation of audio processing elements and their connections within an audio context.
-The graph consists of various types of nodes, each performing specific audio operations, connected in a network that defines the audio signal flow.
-In general we can distinguish four types of nodes:
-
-* Source nodes (e.g [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node), [`OscillatorNode`](/docs/sources/oscillator-node))
-* Effect nodes (e.g [`GainNode`](/docs/effects/gain-node), [`BiquadFilterNode`](/docs/effects/biquad-filter-node))
-* Analysis nodes (e.g [`AnalyserNode`](/docs/analysis/analyser-node))
-* Destination nodes (e.g [`AudioDestinationNode`](/docs/destinations/audio-destination-node))
-
-
-
-#### Rendering audio graph
-
-Audio graph rendering is done in blocks of sample-frames. The number of sample-frames in a block is called the render quantum size, and the block itself is called a render quantum.
-By default, the render quantum size is 128 and it is constant.
-
-The [`AudioContext`](/docs/core/audio-context) rendering thread is driven by a system-level audio callback.
-Each callback has a system-level audio callback buffer size: a varying number of sample-frames that must be computed in time, before the next system-level audio callback arrives.
-The render quantum size does not have to be a divisor of the system-level audio callback buffer size.
-
-> **Info**
->
-> The concept of a system-level audio callback does not apply to [`OfflineAudioContext`](/docs/core/offline-audio-context).
-
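Because the render quantum size (128) need not divide the callback buffer size, the number of render quanta produced per callback rounds up, with any leftover frames carried into the next callback. A minimal sketch:

```tsx
const RENDER_QUANTUM_SIZE = 128;

// Number of render quanta that must be produced to cover one
// system-level audio callback buffer of the given size (in sample-frames).
function quantaPerCallback(callbackBufferSize: number): number {
  return Math.ceil(callbackBufferSize / RENDER_QUANTUM_SIZE);
}
```

For instance, a 480-frame callback requires four render quanta, and part of the fourth quantum is kept for the next callback.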
-## Properties
-
-| Name | Type | Description | |
-| :----: | :----: | :-------- | :-: |
-| `currentTime` | `number` | Double value representing an ever-increasing hardware time in seconds, starting from 0. | |
-| `destination` | [`AudioDestinationNode`](/docs/destinations/audio-destination-node) | Final output destination associated with the context. | |
-| `sampleRate` | `number` | Float value representing the sample rate (in samples per second) used by all nodes in this context. | |
-| `state` | [`ContextState`](/docs/core/base-audio-context#contextstate) | Enumerated value represents the current state of the context. | |
-
-## Methods
-
-### `createAnalyser`
-
-Creates [`AnalyserNode`](/docs/analysis/analyser-node).
-
-#### Returns `AnalyserNode`.
-
-### `createBiquadFilter`
-
-Creates [`BiquadFilterNode`](/docs/effects/biquad-filter-node).
-
-#### Returns `BiquadFilterNode`.
-
-### `createBuffer`
-
-Creates [`AudioBuffer`](/docs/sources/audio-buffer).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `numOfChannels` | `number` | An integer representing the number of channels of the buffer. |
-| `length` | `number` | An integer representing the length of the buffer in sample-frames. A two-second buffer has a length equal to `2 * sampleRate`. |
-| `sampleRate` | `number` | A float representing the sample rate of the buffer. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `NotSupportedError` | `numOfChannels` is outside the nominal range \[1, 32]. |
-| `NotSupportedError` | `sampleRate` is outside the nominal range \[8000, 96000]. |
-| `NotSupportedError` | `length` is less than 1. |
-
-#### Returns `AudioBuffer`.
-
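The duration-to-`length` relationship described above can be computed directly; a minimal sketch (the `bufferLength` helper is hypothetical, not a library API):

```tsx
// Length in sample-frames for a buffer of the given duration, in seconds.
function bufferLength(durationSeconds: number, sampleRate: number): number {
  return Math.round(durationSeconds * sampleRate);
}
```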
-### `createBufferSource`
-
-Creates [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `options` | `{ pitchCorrection: boolean }` | Specifies whether pitch correction should be available. |
-
-#### Returns `AudioBufferSourceNode`.
-
-### `createBufferQueueSource`
-
-Creates [`AudioBufferQueueSourceNode`](/docs/sources/audio-buffer-queue-source-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `options` | `{ pitchCorrection: boolean }` | Specifies whether pitch correction should be available. |
-
-#### Returns `AudioBufferQueueSourceNode`.
-
-### `createConstantSource`
-
-Creates [`ConstantSourceNode`](/docs/sources/constant-source-node).
-
-#### Returns `ConstantSourceNode`.
-
-### `createConvolver`
-
-Creates [`ConvolverNode`](/docs/effects/convolver-node).
-
-#### Returns `ConvolverNode`.
-
-### `createDelay`
-
-Creates [`DelayNode`](/docs/effects/delay-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `maxDelayTime` | `number` | Maximum amount of time, in seconds, to buffer delayed values. |
-
-#### Returns `DelayNode`.
-
-### `createGain`
-
-Creates [`GainNode`](/docs/effects/gain-node).
-
-#### Returns `GainNode`.
-
-### `createIIRFilter`
-
-Creates [`IIRFilterNode`](/docs/effects/iir-filter-node).
-
-#### Returns `IIRFilterNode`.
-
-### `createOscillator`
-
-Creates [`OscillatorNode`](/docs/sources/oscillator-node).
-
-#### Returns `OscillatorNode`.
-
-### `createPeriodicWave`
-
-Creates [`PeriodicWave`](/docs/effects/periodic-wave). This waveform specifies a repeating pattern that an [`OscillatorNode`](/docs/sources/oscillator-node) can use to generate its output sound.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `real` | `Float32Array` | An array of cosine terms. |
-| `imag` | `Float32Array` | An array of sine terms. |
-| `constraints` | [`PeriodicWaveConstraints`](/docs/core/base-audio-context#periodicwaveconstraints) | An object that specifies whether normalization is disabled. When normalization is enabled, the resulting wave is scaled so its maximum peak value is 1 and its minimum peak value is -1. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `InvalidAccessError` | `real` and `imag` arrays do not have the same length. |
-
-#### Returns `PeriodicWave`.
-
-### `createRecorderAdapter`
-
-Creates [`RecorderAdapterNode`](/docs/sources/recorder-adapter-node).
-
-#### Returns `RecorderAdapterNode`
-
-### `createStereoPanner`
-
-Creates [`StereoPannerNode`](/docs/effects/stereo-panner-node).
-
-#### Returns `StereoPannerNode`.
-
-### `createStreamer`
-
-Creates [`StreamerNode`](/docs/sources/streamer-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `options` | [`StreamerOptions`](/docs/sources/streamer-node#streameroptions) | Streamer options to initialize. |
-
-#### Returns `StreamerNode`.
-
-### `createWaveShaper`
-
-Creates [`WaveShaperNode`](/docs/effects/wave-shaper-node).
-
-#### Returns `WaveShaperNode`.
-
-### `createWorkletNode`
-
-Creates [`WorkletNode`](/docs/worklets/worklet-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `worklet` | `(Array, number) => void` | The worklet to be executed. |
-| `bufferLength` | `number` | The size of the buffer that will be passed to the worklet on each call. |
-| `inputChannelCount` | `number` | The number of channels that the node expects as input (it will get min(expected, provided)). |
-| `workletRuntime` | `AudioWorkletRuntime` | The kind of runtime to use for the worklet. See [worklet runtimes](/docs/worklets/worklets-introduction#what-kind-of-worklets-are-used-in-react-native-audio-api) for details. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `Error` | `react-native-worklet` is not found as dependency. |
-| `NotSupportedError` | `bufferLength` \< 1. |
-| `NotSupportedError` | `inputChannelCount` is not in range \[1, 32]. |
-
-#### Returns `WorkletNode`.
-
-### `createWorkletSourceNode`
-
-Creates [`WorkletSourceNode`](/docs/worklets/worklet-source-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `worklet` | `(Array, number, number, number) => void` | The worklet to be executed. |
-| `workletRuntime` | `AudioWorkletRuntime` | The kind of runtime to use for the worklet. See [worklet runtimes](/docs/worklets/worklets-introduction#what-kind-of-worklets-are-used-in-react-native-audio-api) for details. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `Error` | `react-native-worklet` is not found as dependency. |
-
-#### Returns `WorkletSourceNode`.
-
-### `createWorkletProcessingNode`
-
-Creates [`WorkletProcessingNode`](/docs/worklets/worklet-processing-node).
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `worklet` | `(Array, Array, number, number) => void` | The worklet to be executed. |
-| `workletRuntime` | `AudioWorkletRuntime` | The kind of runtime to use for the worklet. See [worklet runtimes](/docs/worklets/worklets-introduction#what-kind-of-worklets-are-used-in-react-native-audio-api) for details. |
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `Error` | `react-native-worklet` is not found as dependency. |
-
-#### Returns `WorkletProcessingNode`.
-
-### `decodeAudioData`
-
-Decodes audio data from either a file path or an ArrayBuffer. The optional `sampleRate` parameter lets you resample the decoded audio.
-If not provided, the audio will be automatically resampled to match the audio context's `sampleRate`.
-
-**For the list of supported formats visit [this page](/docs/utils/decoding).**
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `input` | `ArrayBuffer \| string \| number` | `ArrayBuffer` with audio data, a path to a remote or local audio file, or an asset module id. |
-| `fetchOptions` | [`RequestInit`](https://github.com/facebook/react-native/blob/ac06f3bdc76a9fd7c65ab899e82bff5cad9b94b6/packages/react-native/src/types/globals.d.ts#L265) | Additional request parameters (e.g. headers) used when passing a url to fetch. |
-
-#### Returns `Promise<AudioBuffer>`.
-
-Example decoding
-
-```tsx
-const url = ... // url to an audio file
-
-const buffer = await audioContext.decodeAudioData(url);
-```
-
-### `decodePCMInBase64`
-
-Decodes base64-encoded PCM audio data.
-
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `base64String` | `string` | Base64-encoded PCM audio data. |
-| `inputSampleRate` | `number` | Sample rate of the input PCM data. |
-| `inputChannelCount` | `number` | Number of channels in the input PCM data. |
-| `isInterleaved` | `boolean` | Whether the PCM data is interleaved. Default is `true`. |
-
-#### Returns `Promise<AudioBuffer>`.
-
-Example decoding with data in base64 format
-
-```tsx
-const data = ... // PCM audio data encoded in a base64 string
-// data is not interleaved (Channel1, Channel1, ..., Channel2, Channel2, ...)
-const buffer = await audioContext.decodePCMInBase64(data, 48000, 2, false);
-```
-
-## Remarks
-
-#### `currentTime`
-
-* Timer starts when context is created, stops when context is suspended.
-
-### `ContextState`
-
-**Acceptable values:**
-
-* `suspended`
-
-The audio context has been suspended (using [`suspend`](/docs/core/audio-context#suspend) or [`OfflineAudioContext.suspend`](/docs/core/offline-audio-context#suspend)).
-
-* `running`
-
-The audio context is running normally.
-
-* `closed`
-
-The audio context has been closed (with [`close`](/docs/core/audio-context#close) method).
diff --git a/packages/audiodocs/static/raw/core/offline-audio-context.md b/packages/audiodocs/static/raw/core/offline-audio-context.md
deleted file mode 100644
index a5ffec496..000000000
--- a/packages/audiodocs/static/raw/core/offline-audio-context.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# OfflineAudioContext
-
-The `OfflineAudioContext` interface inherits from [`BaseAudioContext`](/docs/core/base-audio-context).
-In contrast with a standard [`AudioContext`](/docs/core/audio-context), it doesn't render audio to the device hardware.
-Instead, it processes the audio as quickly as possible and outputs the result to an [`AudioBuffer`](/docs/sources/audio-buffer).
-
-## Constructor
-
-`OfflineAudioContext(options: OfflineAudioContextOptions)`
-
-```typescript
-interface OfflineAudioContextOptions {
- numberOfChannels: number;
- length: number; // The length of the rendered AudioBuffer, in sample-frames
- sampleRate: number;
-}
-```
-
-## Properties
-
-`OfflineAudioContext` does not define any additional properties.
-It inherits all properties from [`BaseAudioContext`](/docs/core/base-audio-context#properties).
-
-## Methods
-
-It inherits all methods from [`BaseAudioContext`](/docs/core/base-audio-context#methods).
-
-### `suspend`
-
-Schedules a suspension of the time progression in the audio context at the specified time.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `suspendTime` | `number` | A floating-point number specifying the suspend time, in seconds. |
-
-#### Returns `Promise<void>`.
-
-### `resume`
-
-Resumes time progression in the audio context after it has been suspended.
-
-#### Returns `Promise<void>`.
-
-### `startRendering`
-
-Starts rendering the audio, taking into account the current connections and the current scheduled changes.
-
-#### Returns `Promise<AudioBuffer>`.
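-
-A minimal rendering sketch (channel count, sample rate, and duration are assumed values):
-
-```tsx
-const offlineContext = new OfflineAudioContext({
-  numberOfChannels: 2,
-  sampleRate: 44100,
-  length: 44100 * 5, // 5 seconds of audio
-});
-
-const oscillator = offlineContext.createOscillator();
-oscillator.connect(offlineContext.destination);
-oscillator.start(0);
-
-const renderedBuffer = await offlineContext.startRendering();
-// renderedBuffer is an AudioBuffer that can be played back in a regular AudioContext
-```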
diff --git a/packages/audiodocs/static/raw/destinations/audio-destination-node.md b/packages/audiodocs/static/raw/destinations/audio-destination-node.md
deleted file mode 100644
index 5b46f9718..000000000
--- a/packages/audiodocs/static/raw/destinations/audio-destination-node.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# AudioDestinationNode
-
-The `AudioDestinationNode` interface represents the final destination of an audio graph, where all processed audio is ultimately directed.
-
-In most cases, this means the sound is sent to the system’s default output device, such as speakers or headphones.
-When used with an [`OfflineAudioContext`](/docs/core/offline-audio-context), the rendered audio isn't played back immediately; instead,
-it is stored in an [`AudioBuffer`](/docs/sources/audio-buffer).
-
-Each `AudioContext` has exactly one AudioDestinationNode, which can be accessed through its
-[`AudioContext.destination`](/docs/core/base-audio-context/#properties) property.
-
-#### [`AudioNode`](/docs/core/audio-node#read-only-properties) properties
-
-## Properties
-
-`AudioDestinationNode` does not define any additional properties.
-It inherits all properties from [`AudioNode`](/docs/core/audio-node), listed above.
-
-## Methods
-
-`AudioDestinationNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node).
diff --git a/packages/audiodocs/static/raw/effects/biquad-filter-node.md b/packages/audiodocs/static/raw/effects/biquad-filter-node.md
deleted file mode 100644
index 17fcd6ba6..000000000
--- a/packages/audiodocs/static/raw/effects/biquad-filter-node.md
+++ /dev/null
@@ -1,86 +0,0 @@
-# BiquadFilterNode
-
-The `BiquadFilterNode` interface represents a low-order filter. It is an [`AudioNode`](/docs/core/audio-node) used for tone controls, graphic equalizers, and other audio effects.
-Multiple `BiquadFilterNode` instances can be combined to create more complex filtering chains.
-
-#### [`AudioNode`](/docs/core/audio-node#read-only-properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: BiquadFilterOptions)
-```
-
-### `BiquadFilterOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `Q` | `number` | 1 | Initial value for [`Q`](/docs/effects/biquad-filter-node#properties) |
-| `detune` | `number` | 0 | Initial value for [`detune`](/docs/effects/biquad-filter-node#properties) |
-| `frequency` | `number` | 350 | Initial value for [`frequency`](/docs/effects/biquad-filter-node#properties) |
-| `gain` | `number` | 0 | Initial value for [`gain`](/docs/effects/biquad-filter-node#properties) |
-| `type` | `BiquadFilterType` | `lowpass` | Initial value for [`type`](/docs/effects/biquad-filter-node#properties) |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createBiquadFilter()`](/docs/core/base-audio-context#createbiquadfilter) that creates the node with default values.
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Rate | Description |
-| :--: | :--: | :----------: | :-- |
-| `frequency` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) | The filter’s cutoff or center frequency in hertz (Hz). |
-| `detune` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) | Amount by which the frequency is detuned in cents. |
-| `Q` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) | The filter’s Q factor (quality factor). |
-| `gain` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) | Gain applied by specific filter types, in decibels (dB). |
-| `type` | [`BiquadFilterType`](#biquadfiltertype-enumeration-description) | — | Defines the kind of filtering algorithm the node applies (e.g. `"lowpass"`, `"highpass"`). |
-
-#### BiquadFilterType enumeration description
-
-Note: The `detune` parameter behaves the same way for all filter types, so it is not repeated below.
-
-| `type` | Description | `frequency` | `Q` | `gain` |
-|:------:|:-----------:|:-----------:|:---:|:------:|
-| `lowpass` | Second-order resonant lowpass filter with 12dB/octave rolloff. Frequencies below the cutoff pass through; higher frequencies are attenuated. | The cutoff frequency. | Determines how peaked the frequency is around the cutoff. Higher values result in a sharper peak. | Not used |
-| `highpass` | Second-order resonant highpass filter with 12dB/octave rolloff. Frequencies above the cutoff pass through; lower frequencies are attenuated. | The cutoff frequency. | Determines how peaked the frequency is around the cutoff. Higher values result in a sharper peak. | Not used |
-| `bandpass` | Second-order bandpass filter. Frequencies within a given range pass through; others are attenuated. | The center of the frequency band. | Controls the bandwidth. Higher values result in a narrower band. | Not used |
-| `lowshelf` | Second-order lowshelf filter. Frequencies below the cutoff are boosted or attenuated; others remain unchanged. | The upper limit of the frequencies where the boost (or attenuation) is applied. | Not used | The boost (in dB) to be applied. Negative values attenuate the frequencies.|
-| `highshelf` | Second-order highshelf filter. Frequencies above the cutoff are boosted or attenuated; others remain unchanged. | The lower limit of the frequencies where the boost (or attenuation) is applied. | Not used | The boost (in dB) to be applied. Negative values attenuate the frequencies. |
-| `peaking` | Frequencies around a center frequency are boosted or attenuated; others remain unchanged. | The center of the frequency range where the boost (or an attenuation) is applied. | Controls the bandwidth. Higher values result in a narrower band. | The boost (in dB) to be applied. Negative values attenuate the frequencies. |
-| `notch` | Notch (band-stop) filter. Opposite of a bandpass filter: frequencies around the center are attenuated; others remain unchanged. | The center of the frequency range where the notch is applied. | Controls the bandwidth. Higher values result in a narrower band. | Not used |
-| `allpass` | Second-order allpass filter. All frequencies pass through, but changes the phase relationship between the various frequencies. | The frequency where the center of the phase transition occurs (maximum group delay). | Controls how sharp the phase transition is at the center frequency. Higher values result in a sharper transition and a larger group delay. | Not used |
-
-## Methods
-
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-### `getFrequencyResponse`
-
-| Parameter | Type | Description |
-| :--------: | :--: | :---------- |
-| `frequencyArray` | `Float32Array` | Array of frequencies (in Hz) at which the response values will be computed. |
-| `magResponseOutput` | `Float32Array` | Output array to store the computed linear magnitude values for each frequency. For frequencies outside the range \[0, $\frac{sampleRate}{2}$], the corresponding results are NaN. |
-| `phaseResponseOutput` | `Float32Array` | Output array to store the computed phase response values (in radians) for each frequency. For frequencies outside the range \[0, $\frac{sampleRate}{2}$], the corresponding results are NaN. |
-
-#### Returns `undefined`.
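-
-A sketch of inspecting a filter's response at a few frequencies (node setup and values are illustrative):
-
-```tsx
-const filter = audioContext.createBiquadFilter();
-filter.type = 'lowpass';
-filter.frequency.value = 1000;
-
-const frequencies = new Float32Array([250, 500, 1000, 2000, 4000]);
-const magnitudes = new Float32Array(frequencies.length);
-const phases = new Float32Array(frequencies.length);
-
-filter.getFrequencyResponse(frequencies, magnitudes, phases);
-// magnitudes[i] holds the linear gain and phases[i] the phase shift (in radians) at frequencies[i]
-```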
-
-## Remarks
-
-#### `frequency`
-
-* Range: \[10, $\frac{sampleRate}{2}$].
-
-#### `Q`
-
-* Range:
-  * For `lowpass` and `highpass`, the range is \[-Q, Q], where Q is the largest value for which $10^{Q/20}$ does not overflow the single-precision floating-point representation.
-  Numerically: Q ≈ 770.63678.
- * For `bandpass`, `notch`, `allpass`, and `peaking`: Q is related to the filter’s bandwidth and should be positive.
- * Not used for `lowshelf` and `highshelf`.
-
-#### `gain`
-
-* Range: \[-40, 40].
-* Positive values correspond to amplification; negative to attenuation.
diff --git a/packages/audiodocs/static/raw/effects/convolver-node.md b/packages/audiodocs/static/raw/effects/convolver-node.md
deleted file mode 100644
index 836d6eefd..000000000
--- a/packages/audiodocs/static/raw/effects/convolver-node.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# ConvolverNode
-
-The `ConvolverNode` interface represents a linear convolution effect that can be applied to a signal, given an impulse response.
-This is the easiest way to achieve an `echo` or [`reverb`](https://en.wikipedia.org/wiki/Reverb_effect) effect.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-> **Info**
->
-> Convolver is a node with tail-time, which means that it continues to output non-silent audio with zero input for the length of the buffer.
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: ConvolverOptions)
-```
-
-### `ConvolverOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `buffer` | [`AudioBuffer`](/docs/sources/audio-buffer) | | Initial value for [`buffer`](/docs/effects/convolver-node#properties). |
-| `normalize` | `boolean` | true | Initial value for [`normalize`](/docs/effects/convolver-node#properties). |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createConvolver()`](/docs/core/base-audio-context#createconvolver)
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :-------- |
-| `buffer` | [`AudioBuffer`](/docs/sources/audio-buffer) | Associated AudioBuffer. |
-| `normalize` | `boolean` | Whether the impulse response from the buffer will be scaled by an equal-power normalization when the buffer attribute is set. |
-
-> **Caution**
->
-> Linear convolution is a computationally heavy process, so if your audio has artefacts that should not be there, try decreasing the duration of the impulse response buffer.
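-
-A minimal reverb sketch; `impulseResponseBuffer` and `source` are assumed to be a previously decoded [`AudioBuffer`](/docs/sources/audio-buffer) and a source node:
-
-```tsx
-const convolver = audioContext.createConvolver();
-convolver.buffer = impulseResponseBuffer;
-
-source.connect(convolver);
-convolver.connect(audioContext.destination);
-```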
diff --git a/packages/audiodocs/static/raw/effects/delay-node.md b/packages/audiodocs/static/raw/effects/delay-node.md
deleted file mode 100644
index 9e376e6e9..000000000
--- a/packages/audiodocs/static/raw/effects/delay-node.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# DelayNode
-
-The `DelayNode` interface delays the incoming audio signal by a given amount of time. It is an [`AudioNode`](/docs/core/audio-node) that applies a time shift to the incoming signal, e.g.
-if the `delayTime` value is 0.5, the audio will be played back 0.5 seconds later.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-> **Info**
->
-> Delay is a node with tail-time, which means that it continues to output non-silent audio with zero input for the duration of `delayTime`.
-
-## Constructor
-
-[`BaseAudioContext.createDelay(maxDelayTime?: number)`](/docs/core/base-audio-context#createdelay)
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :-------- |
-| `delayTime`| [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing value of time shift to apply. |
-
-> **Warning**
->
-> In the Web Audio API specification, `delayTime` is an `a-rate` param.
-
-## Methods
-
-`DelayNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-## Remarks
-
-#### `maxDelayTime`
-
-* Default value is 1.0.
-* Nominal range is 0 - 180.
-
-#### `delayTime`
-
-* Default value is 0.
-* Nominal range is 0 - `maxDelayTime`.
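-
-A feedback-echo sketch (`source` is an assumed source node); the delayed signal is routed back through a gain below 1 so each repeat gets quieter:
-
-```tsx
-const delay = audioContext.createDelay(2); // maxDelayTime of 2 seconds
-delay.delayTime.value = 0.4;
-
-const feedback = audioContext.createGain();
-feedback.gain.value = 0.5;
-
-source.connect(delay);
-delay.connect(feedback);
-feedback.connect(delay); // feedback loop
-delay.connect(audioContext.destination);
-source.connect(audioContext.destination); // dry signal
-```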
diff --git a/packages/audiodocs/static/raw/effects/gain-node.md b/packages/audiodocs/static/raw/effects/gain-node.md
deleted file mode 100644
index 72d850a7e..000000000
--- a/packages/audiodocs/static/raw/effects/gain-node.md
+++ /dev/null
@@ -1,73 +0,0 @@
-# GainNode
-
-The `GainNode` interface represents a change in volume (amplitude) of the audio signal. It is an [`AudioNode`](/docs/core/audio-node) with a single `gain` [`AudioParam`](/docs/core/audio-param) that multiplies every sample passing through it.
-
-> **Tip**
->
-> Direct, immediate gain changes often cause audible clicks. Use the scheduling methods of [`AudioParam`](/docs/core/audio-param) (e.g. `linearRampToValueAtTime`, `exponentialRampToValueAtTime`) to smoothly interpolate volume transitions.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: GainOptions)
-```
-
-### `GainOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `gain` | `number` | `1.0` | Initial value for [`gain`](/docs/effects/gain-node#properties) |
-
-You can also create a `GainNode` via the [`BaseAudioContext.createGain()`](/docs/core/base-audio-context#creategain) factory method, which uses default values.
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description | |
-| :----: | :----: | :-------- | :-: |
-| `gain` | [`AudioParam`](/docs/core/audio-param) | [`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing the gain value to apply. | |
-
-## Methods
-
-`GainNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-## Usage
-
-A common use case is controlling the master volume of an audio graph:
-
-```tsx
-const audioContext = new AudioContext();
-const gainNode = audioContext.createGain();
-
-// Set volume to 50%
-gainNode.gain.setValueAtTime(0.5, audioContext.currentTime);
-
-// Connect source → gain → output
-source.connect(gainNode);
-gainNode.connect(audioContext.destination);
-```
-
-To fade in a sound over 2 seconds:
-
-```tsx
-gainNode.gain.setValueAtTime(0, audioContext.currentTime);
-gainNode.gain.linearRampToValueAtTime(1, audioContext.currentTime + 2);
-```
-
-## Remarks
-
-#### `gain`
-
-* Nominal range is -∞ to ∞.
-* Values greater than `1.0` amplify the signal; values between `0` and `1.0` attenuate it.
-* A value of `0` silences the signal. Negative values invert the signal phase.
-
-## Advanced usage — Envelope (ADSR)
-
-`GainNode` is the key building block for implementing sound envelopes. For a practical, step-by-step walkthrough of ADSR envelopes and how to apply them in a real app, see the [Making a piano keyboard](/docs/guides/making-a-piano-keyboard#envelopes-) guide.
diff --git a/packages/audiodocs/static/raw/effects/iir-filter-node.md b/packages/audiodocs/static/raw/effects/iir-filter-node.md
deleted file mode 100644
index a24f72841..000000000
--- a/packages/audiodocs/static/raw/effects/iir-filter-node.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# IIRFilterNode
-
-The `IIRFilterNode` interface represents a general infinite impulse response (IIR) filter.
-It is an [`AudioNode`](/docs/core/audio-node) used for tone controls, graphic equalizers, and other audio effects.
-`IIRFilterNode` lets the parameters of the filter response be specified, so that it can be tuned as needed.
-
-In general, it is recommended to use [`BiquadFilterNode`](/docs/effects/biquad-filter-node) for implementing higher-order filters,
-as it is less sensitive to numeric issues and its parameters can be automated. You can create all even-order IIR filters with `BiquadFilterNode`,
-but if odd-ordered filters are needed or automation is not needed, then `IIRFilterNode` may be appropriate.
-
-## Constructor
-
-[`BaseAudioContext.createIIRFilter(options: IIRFilterNodeOptions)`](/docs/core/base-audio-context#createiirfilter)
-
-```tsx
-interface IIRFilterNodeOptions {
- feedforward: number[]; // array of floating-point values specifying the feedforward (numerator) coefficients
- feedback: number[]; // array of floating-point values specifying the feedback (denominator) coefficients
-}
-```
-
-#### Errors
-
-| Error type | Description |
-| :---: | :---- |
-| `NotSupportedError` | One or both of the input arrays exceeds 20 members. |
-| `InvalidStateError` | All of the feedforward coefficients are 0, or the first feedback coefficient is 0. |
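-
-A minimal sketch of creating the node; the coefficients below form an assumed, illustrative lowpass-like filter, and `source` is an assumed source node:
-
-```tsx
-const iirFilter = audioContext.createIIRFilter({
-  feedforward: [0.1, 0.1],
-  feedback: [1.0, -0.8],
-});
-
-source.connect(iirFilter);
-iirFilter.connect(audioContext.destination);
-```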
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-## Methods
-
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-### `getFrequencyResponse`
-
-| Parameter | Type | Description |
-| :--------: | :--: | :---------- |
-| `frequencyArray` | `Float32Array` | Array of frequencies (in Hz) at which the response values will be computed. |
-| `magResponseOutput` | `Float32Array` | Output array to store the computed linear magnitude values for each frequency. For frequencies outside the range \[0, $\frac{sampleRate}{2}$], the corresponding results are NaN. |
-| `phaseResponseOutput` | `Float32Array` | Output array to store the computed phase response values (in radians) for each frequency. For frequencies outside the range \[0, $\frac{sampleRate}{2}$], the corresponding results are NaN. |
-
-#### Returns `undefined`.
diff --git a/packages/audiodocs/static/raw/effects/periodic-wave.md b/packages/audiodocs/static/raw/effects/periodic-wave.md
deleted file mode 100644
index 4579f136e..000000000
--- a/packages/audiodocs/static/raw/effects/periodic-wave.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# PeriodicWave
-
-The `PeriodicWave` interface defines a periodic waveform that can be used to shape the output of an OscillatorNode.
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options: PeriodicWaveOptions)
-```
-
-### `PeriodicWaveOptions`
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `real` | `Float32Array` | - | [Cosine terms](/docs/core/base-audio-context#createperiodicwave) |
-| `imag` | `Float32Array` | - | [Sine terms](/docs/core/base-audio-context#createperiodicwave) |
-| `disableNormalization` | `boolean` | false | Whether the periodic wave is [normalized](/docs/core/base-audio-context#createperiodicwave) or not. |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createPeriodicWave(real, imag, constraints?: PeriodicWaveConstraints)`](/docs/core/base-audio-context#createperiodicwave)
-
-## Properties
-
-None. `PeriodicWave` has no own or inherited properties.
-
-## Methods
-
-None. `PeriodicWave` has no own or inherited methods.
-
-## Remarks
-
-#### `real` and `imag`
-
-* If only one is specified, the other is treated as an array of zeros of the same length.
-* If neither is given, the result is equivalent to a sine wave.
-* If both are given, they must have the same length.
-* To see how the values correspond to the output waveform, see the [Web Audio API specification](https://webaudio.github.io/web-audio-api/#waveform-generation) for more information.
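-
-A sketch of shaping an oscillator's output with a periodic wave (the harmonic values are illustrative):
-
-```tsx
-const real = new Float32Array([0, 0, 0]); // cosine terms
-const imag = new Float32Array([0, 1, 0.5]); // sine terms
-const wave = audioContext.createPeriodicWave(real, imag);
-
-const oscillator = audioContext.createOscillator();
-oscillator.setPeriodicWave(wave);
-oscillator.connect(audioContext.destination);
-oscillator.start();
-```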
diff --git a/packages/audiodocs/static/raw/effects/stereo-panner-node.md b/packages/audiodocs/static/raw/effects/stereo-panner-node.md
deleted file mode 100644
index 1d2e97b14..000000000
--- a/packages/audiodocs/static/raw/effects/stereo-panner-node.md
+++ /dev/null
@@ -1,42 +0,0 @@
-# StereoPannerNode
-
-The `StereoPannerNode` interface represents the change in ratio between two output channels (e.g. the left and right speakers).
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, stereoPannerOptions?: StereoPannerOptions)
-```
-
-### `StereoPannerOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `pan` | `number` | 0 | Number representing the initial pan value |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createStereoPanner()`](/docs/core/base-audio-context#createstereopanner)
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description |
-| :--: | :--: | :---------- |
-| `pan` | [`AudioParam`](/docs/core/audio-param) | [`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing how the audio signal is distributed between the left and right channels. |
-
-## Methods
-
-`StereoPannerNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-## Remarks
-
-#### `pan`
-
-* Default value is 0.
-* Nominal range is -1 (only left channel) to 1 (only right channel).
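-
-A sketch of a pan sweep from the left channel to the right over two seconds (`source` is an assumed source node):
-
-```tsx
-const panner = audioContext.createStereoPanner();
-panner.pan.setValueAtTime(-1, audioContext.currentTime);
-panner.pan.linearRampToValueAtTime(1, audioContext.currentTime + 2);
-
-source.connect(panner);
-panner.connect(audioContext.destination);
-```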
diff --git a/packages/audiodocs/static/raw/effects/wave-shaper-node.md b/packages/audiodocs/static/raw/effects/wave-shaper-node.md
deleted file mode 100644
index 0e6b39596..000000000
--- a/packages/audiodocs/static/raw/effects/wave-shaper-node.md
+++ /dev/null
@@ -1,60 +0,0 @@
-# WaveShaperNode
-
-The `WaveShaperNode` interface represents a non-linear signal distortion effect.
-Non-linear distortion is commonly used for anything from subtle warming of the signal to more obvious distortion effects.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, waveShaperOptions?: WaveShaperOptions)
-```
-
-### `WaveShaperOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `curve` | `Float32Array` | - | Array representing curve values |
-| `oversample` | [`OverSampleType`](/docs/effects/wave-shaper-node#oversampletype) | - | Value representing oversample property |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createWaveShaper()`](/docs/core/base-audio-context#createwaveshaper)
-
-## Properties
-
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-| Name | Type | Description |
-| :--: | :--: | :---------- |
-| `curve` | `Float32Array \| null` | The shaping curve used for waveshaping effect. |
-| `oversample` | [`OverSampleType`](/docs/effects/wave-shaper-node#oversampletype) | Specifies what type of oversampling should be used when applying shaping curve. |
-
-## Methods
-
-`WaveShaperNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-## Remarks
-
-#### `curve`
-
-* Default value is null.
-* Must contain at least two values.
-* Subsequent modifications of the curve array have no effect. To change the curve, assign a new Float32Array object to this property.
-
-#### `oversample`
-
-* Default value is `none`.
-* A value of `2x` or `4x` can increase the quality of the effect, but in some cases, for a very accurate shaping curve, it is better not to use oversampling.
-
-### `OverSampleType`
-
-```typescript
-// Do not oversample | Oversample two times | Oversample four times
-type OverSampleType = 'none' | '2x' | '4x';
-```
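-
-A common soft-clipping curve sketch (the formula and constant are illustrative, not part of the API):
-
-```tsx
-function makeDistortionCurve(amount = 50, samples = 1024): Float32Array {
-  const curve = new Float32Array(samples);
-  for (let i = 0; i < samples; i++) {
-    const x = (i * 2) / (samples - 1) - 1; // map index to [-1, 1]
-    curve[i] = ((3 + amount) * x * 20 * (Math.PI / 180)) / (Math.PI + amount * Math.abs(x));
-  }
-  return curve;
-}
-
-const waveShaper = audioContext.createWaveShaper();
-waveShaper.curve = makeDistortionCurve();
-waveShaper.oversample = '4x';
-```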
diff --git a/packages/audiodocs/static/raw/fundamentals/best-practices.md b/packages/audiodocs/static/raw/fundamentals/best-practices.md
deleted file mode 100644
index 6113a05e2..000000000
--- a/packages/audiodocs/static/raw/fundamentals/best-practices.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# Best Practices
-
-When working with audio in a web or mobile application, following best practices ensures optimal performance,
-user experience, and maintainability. Here are some key best practices to consider when using the React Native Audio API:
-
-## [**AudioContext**](/docs/core/audio-context) Management
-
-* **Single Audio Context**: Create one instance of `AudioContext` in order to easily and efficiently manage the audio layer's state in your application.
-  Creating many instances could lead to undefined behavior. Some of them could still be in the [`running`](/docs/core/base-audio-context#contextstate) state while others could be
-  [`suspended`](/docs/core/base-audio-context#contextstate) or [`closed`](/docs/core/base-audio-context#contextstate), unless you manage each of them yourself.
-
-* **Clean up**: Always close the `AudioContext` using the [`close()`](/docs/core/audio-context#close) method when it is no longer needed.
- This releases system audio resources and prevents memory leaks.
-
-* **Suspend when not in use**: Suspend the `AudioContext` when audio is not needed to save system resources and battery life, especially on mobile devices.
-  A running `AudioContext` still plays silence even if no playing source node is connected to the [`destination`](/docs/core/base-audio-context#properties).
-  Additionally, on iOS devices, the state of the `AudioContext` is directly related to the state of the lock screen. If a running `AudioContext` exists, it is impossible to set the lock screen state to [`state_paused`](/docs/system/audio-manager#lockscreeninfo).
-
-## React hooks vs React Native Audio API
-
-* **Create singleton class to manage audio layer**: Instead of storing `AudioContext` or nodes directly in your React components using `useState` or `useRef`,
- consider creating a singleton class that encapsulates the audio layer logic using React Native Audio API.
- This class can manage the lifecycle of the `AudioContext`, handle audio nodes, and provide methods for playing, pausing, and stopping audio.
- This approach promotes separation of concerns and makes it easier to manage audio state across your application.
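-
-A sketch of such a singleton (the class and method names are illustrative, not part of the API):
-
-```tsx
-import { AudioContext } from 'react-native-audio-api';
-
-class AudioEngine {
-  private static instance: AudioEngine;
-  private context = new AudioContext();
-
-  static get shared(): AudioEngine {
-    if (!AudioEngine.instance) {
-      AudioEngine.instance = new AudioEngine();
-    }
-    return AudioEngine.instance;
-  }
-
-  async suspend(): Promise<void> {
-    await this.context.suspend();
-  }
-
-  async close(): Promise<void> {
-    await this.context.close();
-  }
-}
-```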
diff --git a/packages/audiodocs/static/raw/fundamentals/getting-started.md b/packages/audiodocs/static/raw/fundamentals/getting-started.md
deleted file mode 100644
index 4c003d305..000000000
--- a/packages/audiodocs/static/raw/fundamentals/getting-started.md
+++ /dev/null
@@ -1,157 +0,0 @@
-# Getting started
-
-The goal of *Fundamentals* is to guide you through the setup process of the Audio API, as well as to show the basic concepts behind audio programming using a web audio framework, giving you the confidence to explore more advanced use cases on your own. This section is packed with interactive examples, code snippets, and explanations. Are you ready? Let's make some noise!
-
-## Installation
-
-It takes only a few steps to add Audio API to your project:
-
-### Step 1: Install the package
-
-Install the `react-native-audio-api` package from npm:
-
-```sh
-npx expo install react-native-audio-api
-```
-
-```sh
-npm install react-native-audio-api
-```
-
-```sh
-yarn add react-native-audio-api
-```
-
-### Step 2: Add Audio API expo plugin (optional)
-
-Add `react-native-audio-api` expo plugin to your `app.json` or `app.config.js`.
-
-app.json
-
-```json
-{
- "plugins": [
- [
- "react-native-audio-api",
- {
- "iosBackgroundMode": true,
- "iosMicrophonePermission": "This app requires access to the microphone to record audio.",
- "androidPermissions" : [
- "android.permission.MODIFY_AUDIO_SETTINGS",
- "android.permission.FOREGROUND_SERVICE",
- "android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK"
- ],
- "androidForegroundService": true,
- "androidFSTypes": [
- "mediaPlayback"
- ]
- }
- ]
- ]
-}
-```
-
-app.config.js
-
-```javascript
-export default {
- ...
- "plugins": [
- [
- "react-native-audio-api",
- {
- "iosBackgroundMode": true,
- "iosMicrophonePermission": "This app requires access to the microphone to record audio.",
- "androidPermissions" : [
- "android.permission.MODIFY_AUDIO_SETTINGS",
- "android.permission.FOREGROUND_SERVICE",
- "android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK"
- ],
- "androidForegroundService": true,
- "androidFSTypes": [
- "mediaPlayback"
- ]
- }
- ]
- ]
-};
-```
-
-#### Special permissions
-
-If you plan to use [`AudioRecorder`](/docs/inputs/audio-recorder), the `iosMicrophonePermission` entry and `android.permission.RECORD_AUDIO` in the `androidPermissions` section are **MANDATORY**.
-
-> **Info**
->
-> If your app is not managed by Expo, see the [non-expo-permissions page](/docs/other/non-expo-permissions) to learn how to handle permissions.
-
-Read more about the plugin [here](/docs/other/audio-api-plugin)!
-
-### Step 3: Install system-wide bash (only Windows OS)
-
-There are many ways to do this, e.g. by using Git Bash. To verify your setup, test whether a Unix command works:
-
-```bash
-bash -c 'echo Hello World!'
-```
-
-### Possible additional dependencies
-
-If you plan to use any of [`WorkletNode`](/docs/worklets/worklet-node), [`WorkletSourceNode`](/docs/worklets/worklet-source-node) or [`WorkletProcessingNode`](/docs/worklets/worklet-processing-node), you must have the
-`react-native-worklets` library set up in version 0.6.0 or higher. See the [worklets getting-started page](https://docs.swmansion.com/react-native-worklets/docs/) for instructions.
-
-> **Info**
->
-> If you are not planning to use any of the mentioned nodes, the `react-native-worklets` dependency is **OPTIONAL** and your app will build successfully without it.
-
-### Usage with expo
-
-`react-native-audio-api` contains custom native code and isn't part of the Expo Go application. To make it available in Expo managed builds, you have to use an Expo development build. The simplest way to start a local Expo dev build is:
-
-```sh
-npx expo run:ios
-```
-
-```sh
-npx expo run:android
-```
-
-To learn more about expo development builds, please check out [Development Builds Documentation](https://docs.expo.dev/develop/development-builds/introduction/).
-
-#### Android
-
-No further steps are necessary.
-
-#### iOS
-
-While developing for iOS, make sure to install [pods](https://cocoapods.org) first before running the app:
-
-```sh
-cd ios && pod install && cd ..
-```
-
-#### Web
-
-No further steps are necessary.
-
-> **Caution**
->
-> `react-native-audio-api` on the web exposes the browser's built-in Web Audio API, but for compatibility between platforms, it limits the available interfaces to APIs that are implemented on iOS and Android.
-
-### Clear Metro bundler cache (recommended)
-
-```sh
-npx expo start -c
-```
-
-```sh
-npm start -- --reset-cache
-```
-
-```sh
-yarn start --reset-cache
-```
-
-## What's next?
-
-In [the next section](/docs/guides/lets-make-some-noise), we will learn how to prepare the Audio API and play some sound!
diff --git a/packages/audiodocs/static/raw/fundamentals/introduction.md b/packages/audiodocs/static/raw/fundamentals/introduction.md
deleted file mode 100644
index 1b0ee3012..000000000
--- a/packages/audiodocs/static/raw/fundamentals/introduction.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# Introduction
-
-React Native Audio API is an imperative, high-level API for processing and synthesizing audio in React Native Applications. React Native Audio API follows the [Web Audio Specification](https://www.w3.org/TR/webaudio-1.1/) making it easier to write audio-heavy applications for iOS, Android and Web with just one codebase.
-
-## Highlights
-
-* Supports react-native, react-native-web or any web react based project
-* API strictly follows the Web Audio API standard
-* Blazingly fast, all of the Audio API core is written in C++ to deliver the best performance possible
-* Truly native, we use most up-to-date native apis such as AVFoundation, CoreAudio or Oboe
-* Modular routing architecture to fit simple (and complex) use-cases
-* Sample-accurate scheduled sound playback with low-latency for musical applications requiring the highest degree of rhythmic precision.
-* Efficient real-time time-domain and frequency-domain analysis / visualization
-* Efficient biQuad filters for most common filtering methods.
-* Support for computational audio synthesis
-
-## Motivation
-
-By aligning with the Web Audio specification, we're creating a single API that works seamlessly across native iOS, Android, browsers, and even standalone desktop applications. The React Native ecosystem currently lacks a high-performance API for creating audio, adding effects, or controlling basic parameters like volume for each audio separately - and we're here to bridge that gap!
-
-## Alternatives
-
-### Expo Audio
-
-[Expo Audio](https://docs.expo.dev/versions/latest/sdk/audio/) might be a better fit for you, if you are looking for simple playback functionality, as its simple and well documented API makes it easy to use.
diff --git a/packages/audiodocs/static/raw/guides/create-your-own-effect.md b/packages/audiodocs/static/raw/guides/create-your-own-effect.md
deleted file mode 100644
index be0e45552..000000000
--- a/packages/audiodocs/static/raw/guides/create-your-own-effect.md
+++ /dev/null
@@ -1,249 +0,0 @@
-# Create your own effect
-
-In this section, we will create our own [`pure C++ turbo-module`](https://reactnative.dev/docs/the-new-architecture/pure-cxx-modules) and use it to build a custom processing node that can alter the sound however you want.
-
-### Prerequisites
-
-We highly encourage you to get familiar with [this guide](https://reactnative.dev/docs/the-new-architecture/pure-cxx-modules), since we will be using many of the concepts explained there.
-
-## Generate files
-
-We prepared a script that generates all of the boilerplate code for you.
-The only parts you need to handle yourself are:
-
-* customizing processor to your tasks
-* configuring [`codegen`](https://reactnative.dev/docs/the-new-architecture/what-is-codegen) with your project
-* writing native specific code to compile those files
-
-```bash
-npx rn-audioapi-custom-node-generator create -o # path where you want files to be generated, usually same level as android/ and ios/
-```
-
-## Analyzing generated files
-
-You should see two directories:
-
-* `shared/` - contains C++ files (the source code of the custom effect and the JSI layer - Host Objects needed to communicate with JavaScript)
-* `specs/` - defines the TypeScript interface that invokes the C++ code from JavaScript
-
-> **Caution**
->
-> The name of the file in `specs/` has to start with `Native` to be picked up by codegen.
-
-The most important file is `MyProcessorNode.cpp`; it contains the main processing code that directly manipulates the raw audio data.
-
-In this guide, we will edit files in order to achieve [`GainNode`](/docs/effects/gain-node) functionality.
-For the sake of simplicity, we will use the value as a raw `double`, not wrapped in an [`AudioParam`](/docs/core/audio-param).
-
-MyProcessorNode.h
-
-```cpp
-#pragma once
-// illustrative include path; adjust to your project setup
-#include <audioapi/core/AudioNode.h>
-
-namespace audioapi {
-class AudioBus;
-class BaseAudioContext;
-
-class MyProcessorNode : public AudioNode {
-public:
-  explicit MyProcessorNode(const std::shared_ptr<BaseAudioContext> &context);
-
-protected:
-  std::shared_ptr<AudioBus>
-  processNode(const std::shared_ptr<AudioBus> &bus,
-              int framesToProcess) override;
-
-// highlight-start
-private:
- double gain; // value responsible for gain value
-// highlight-end
-};
-} // namespace audioapi
-```
-
-MyProcessorNode.cpp
-
-```cpp
-#include "MyProcessorNode.h"
-// illustrative include paths; they depend on your react-native-audio-api version
-#include <audioapi/core/utils/AudioBus.h>
-#include <audioapi/core/utils/AudioArray.h>
-
-namespace audioapi {
-  MyProcessorNode::MyProcessorNode(const std::shared_ptr<BaseAudioContext> &context)
-    //highlight-next-line
-    : AudioNode(context), gain(0.5) {
-    isInitialized_.store(true, std::memory_order_release);
-  }
-
-  std::shared_ptr<AudioBus> MyProcessorNode::processNode(const std::shared_ptr<AudioBus> &bus,
-                                                         int framesToProcess) {
-    // highlight-start
-    for (int channel = 0; channel < bus->getNumberOfChannels(); ++channel) {
-      auto *audioArray = bus->getChannel(channel);
-      for (int i = 0; i < framesToProcess; ++i) {
-        // Apply gain to each sample in the audio array
-        (*audioArray)[i] *= gain;
-      }
-    }
-    // highlight-end
-    return bus;
-  }
-} // namespace audioapi
-```
-
-MyProcessorNodeHostObject.h
-
-```cpp
-#pragma once
-
-#include "MyProcessorNode.h"
-// illustrative include path; adjust to your project setup
-#include <audioapi/HostObjects/AudioNodeHostObject.h>
-
-#include <memory>
-
-namespace audioapi {
-using namespace facebook;
-
-class MyProcessorNodeHostObject : public AudioNodeHostObject {
-public:
- explicit MyProcessorNodeHostObject(
-      const std::shared_ptr<MyProcessorNode> &node)
- : AudioNodeHostObject(node) {
- // highlight-start
- addGetters(JSI_EXPORT_PROPERTY_GETTER(MyProcessorNodeHostObject, getter));
- addSetters(JSI_EXPORT_PROPERTY_SETTER(MyProcessorNodeHostObject, setter));
- // highlight-end
- }
-
- // highlight-start
- JSI_PROPERTY_GETTER(getter) {
-    auto processorNode = std::static_pointer_cast<MyProcessorNode>(node_);
- return {processorNode->someGetter()};
- }
- // highlight-end
-
- // highlight-start
- JSI_PROPERTY_SETTER(setter) {
-    auto processorNode = std::static_pointer_cast<MyProcessorNode>(node_);
- processorNode->someSetter(value.getNumber());
- }
- // highlight-end
-};
-} // namespace audioapi
-```
-
-## Codegen
-
-Configuring codegen requires nothing special beyond the basic [react-native tutorial](https://reactnative.dev/docs/the-new-architecture/pure-cxx-modules#2-configure-codegen).
-
-## Native files
-
-### iOS
-
-For iOS there is also nothing more to do than follow the [react-native tutorial](https://reactnative.dev/docs/the-new-architecture/pure-cxx-modules#ios).
-
-### Android
-
-The Android case is quite different: because of how Android builds native code, we need to compile our library together with the whole turbo-module.
-First, follow [the guide](https://reactnative.dev/docs/the-new-architecture/pure-cxx-modules#android), but replace `CMakeLists.txt` with this content:
-
-```cmake
-cmake_minimum_required(VERSION 3.13)
-
-project(appmodules)
-
-set(ROOT ${CMAKE_SOURCE_DIR}/../../../../..)
-set(AUDIO_API_DIR ${ROOT}/node_modules/react-native-audio-api)
-
-include(${REACT_ANDROID_DIR}/cmake-utils/ReactNative-application.cmake)
-
-target_sources(${CMAKE_PROJECT_NAME} PRIVATE
- ${ROOT}/shared/NativeAudioProcessingModule.cpp
- ${ROOT}/shared/MyProcessorNode.cpp
- ${ROOT}/shared/MyProcessorNodeHostObject.cpp
-)
-
-target_include_directories(${CMAKE_PROJECT_NAME} PUBLIC
- ${ROOT}/shared
- ${AUDIO_API_DIR}/common/cpp
-)
-
-add_library(react-native-audio-api SHARED IMPORTED)
-string(TOLOWER ${CMAKE_BUILD_TYPE} BUILD_TYPE_LOWER)
-# we need to import built library from android directory
-set_target_properties(react-native-audio-api PROPERTIES IMPORTED_LOCATION
- ${AUDIO_API_DIR}/android/build/intermediates/merged_native_libs/${BUILD_TYPE_LOWER}/merge${CMAKE_BUILD_TYPE}NativeLibs/out/lib/${CMAKE_ANDROID_ARCH_ABI}/libreact-native-audio-api.so
-)
-target_link_libraries(${CMAKE_PROJECT_NAME} react-native-audio-api android log)
-```
-
-The last required step is to add the following lines to the `build.gradle` file located in the `android/app` directory.
-
-```groovy
-evaluationDependsOn(":react-native-audio-api")
-
-afterEvaluate {
- tasks.getByName("buildCMakeDebug").dependsOn(findProject(":react-native-audio-api").tasks.getByName("mergeDebugNativeLibs"))
- tasks.getByName("buildCMakeRelWithDebInfo").dependsOn(findProject(":react-native-audio-api").tasks.getByName("mergeReleaseNativeLibs"))
-}
-```
-
-Since `CMakeLists.txt` depends on `libreact-native-audio-api.so`, we need to make sure the app build is invoked only after the library has been built.
-
-## Final touches
-
-The last part is to onboard your custom module into your app by creating a TypeScript interface that maps the C++ layer.
-
-```typescript
-// types.ts
-import { AudioNode, BaseAudioContext } from "react-native-audio-api";
-import { IAudioNode, IBaseAudioContext } from "react-native-audio-api/lib/typescript/interfaces";
-
-export interface IMyProcessorNode extends IAudioNode {
- gain: number;
-}
-
-export class MyProcessorNode extends AudioNode {
- constructor(context: BaseAudioContext, node: IMyProcessorNode) {
- super(context, node);
- }
-
- public set gain(value: number) {
- (this.node as IMyProcessorNode).gain = value;
- }
-
- public get gain(): number {
- return (this.node as IMyProcessorNode).gain;
- }
-}
-
-declare global {
- var createCustomProcessorNode: (context: IBaseAudioContext) => IMyProcessorNode;
-}
-```
-
-## Example
-
-```tsx
-import {
- AudioContext,
- OscillatorNode,
-} from 'react-native-audio-api';
-import { MyProcessorNode } from './types';
-
-function App() {
- const audioContext = new AudioContext();
- const oscillator = audioContext.createOscillator();
- // constructor is put in global scope
- const processor = new MyProcessorNode(audioContext, global.createCustomProcessorNode(audioContext.context));
- oscillator.connect(processor);
- processor.connect(audioContext.destination);
- oscillator.start(audioContext.currentTime);
-}
-```
-
-**Check out fully working [demo app](https://github.com/software-mansion-labs/custom-processor-node-example)**
-
-## What's next?
-
-I’m not sure, but give yourself a pat on the back – you’ve earned it! More guides are on the way, so stay tuned! 🎼
diff --git a/packages/audiodocs/static/raw/guides/lets-make-some-noise.md b/packages/audiodocs/static/raw/guides/lets-make-some-noise.md
deleted file mode 100644
index b257e7b80..000000000
--- a/packages/audiodocs/static/raw/guides/lets-make-some-noise.md
+++ /dev/null
@@ -1,102 +0,0 @@
-# Let's make some noise!
-
-In this section, we will guide you through the basic concepts of Audio API. We are going to use core audio components such as [`AudioContext`](/docs/core/audio-context) and [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) to simply play sound from a file, which will help you develop a basic understanding of the library.
-
-## Using audio context
-
-Let's start by bootstrapping a simple application with a play button and creating our first instance of `AudioContext` object.
-
-```jsx
-import React from 'react';
-import { View, Button } from 'react-native';
-// highlight-next-line
-import { AudioContext } from 'react-native-audio-api';
-
-export default function App() {
- const handlePlay = async () => {
- // highlight-next-line
- const audioContext = new AudioContext();
- };
-
- return (
-    <View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
-      <Button onPress={handlePlay} title="Play sound!" />
-    </View>
- );
-}
-```
-
-`AudioContext` is an object that controls both the creation of the nodes and the execution of the audio processing or decoding.
-
-## Loading an audio file
-
-Before we can play anything, we need to gain access to some audio data. For the purpose of this guide, we will first download it from a remote source using `fetch`.
-
-```jsx
-import React from 'react';
-import { View, Button } from 'react-native';
-import { AudioContext } from 'react-native-audio-api';
-
-export default function App() {
- const handlePlay = async () => {
- const audioContext = new AudioContext();
- // highlight-start
-    const response = await fetch('https://software-mansion.github.io/react-native-audio-api/audio/sounds/C4.mp3');
-    const arrayBuffer = await response.arrayBuffer();
-    const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
- // highlight-end
- };
-
- return (
-    <View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
-      <Button onPress={handlePlay} title="Play sound!" />
-    </View>
- );
-}
-```
-
-We have used the [`decodeAudioData`](/docs/core/base-audio-context#decodeaudiodata) method of the [`BaseAudioContext`](/docs/core/base-audio-context), which takes an `ArrayBuffer` of encoded audio data and decodes it into raw audio data that can be used within our system.
-
-## Play the audio
-
-The last and final step is to create an [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node), connect it to the `AudioContext's` destination, and start playing the sound. For the purpose of this guide, we will play the sound for just 10 seconds.
-
-```jsx {10-11,13-15}
-import React from 'react';
-import { View, Button } from 'react-native';
-import { AudioContext } from 'react-native-audio-api';
-
-export default function App() {
- const handlePlay = async () => {
- const audioContext = new AudioContext();
-    const response = await fetch('https://software-mansion.github.io/react-native-audio-api/audio/sounds/C4.mp3');
-    const arrayBuffer = await response.arrayBuffer();
-    const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
-
- const playerNode = audioContext.createBufferSource();
- playerNode.buffer = audioBuffer;
-
- playerNode.connect(audioContext.destination);
- playerNode.start(audioContext.currentTime);
- playerNode.stop(audioContext.currentTime + 10);
- };
-
- return (
-    <View style={{ flex: 1, alignItems: 'center', justifyContent: 'center' }}>
-      <Button onPress={handlePlay} title="Play sound!" />
-    </View>
- );
-}
-```
-
-And that's it! You have just played your first sound using react-native-audio-api. You can hear how it works in the live example below:
-
-## Summary
-
-In this guide, we have learned how to create a simple audio player using [`AudioContext`](/docs/core/audio-context) and [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) as well as how we can load audio data from a remote source. To sum up:
-
-* `AudioContext` is the main object that controls the audio graph.
-* the [`decodeAudioData`](/docs/core/base-audio-context#decodeaudiodata) method can be used to load audio data from a remote resource in the form of an [`AudioBuffer`](/docs/sources/audio-buffer).
-* `AudioBufferSourceNode` can be used with any `AudioBuffer`.
-* In order to hear the sounds, we need to connect the source node to the destination node exposed by `AudioContext`.
-* We can control the playback of the sound using [`start`](/docs/sources/audio-buffer-source-node#start) and [`stop`](/docs/sources/audio-scheduled-source-node#stop) methods of the `AudioBufferSourceNode` (and other source nodes, which we will show later).
-
-## What's next?
-
-In [the next section](/docs/guides/making-a-piano-keyboard), we will learn more about how the audio graph works, what audio parameters are, and how we can use them to create a simple piano keyboard.
diff --git a/packages/audiodocs/static/raw/guides/making-a-piano-keyboard.md b/packages/audiodocs/static/raw/guides/making-a-piano-keyboard.md
deleted file mode 100644
index dbfebc1a5..000000000
--- a/packages/audiodocs/static/raw/guides/making-a-piano-keyboard.md
+++ /dev/null
@@ -1,359 +0,0 @@
-# Making a piano keyboard
-
-In this section, we will use some of the core Audio API interfaces to create a simple piano keyboard. We will learn what an [`AudioParam`](/docs/core/audio-param) is and how to use it to change the pitch of the sound.
-
-## Base application
-
-Like in the previous example, we will start with a simple app with a couple of buttons so we don't need to worry about the UI later.
-You can just copy and paste the code below to your project.
-
-```tsx
-import React from 'react';
-import { View, Text, Pressable } from 'react-native';
-
-type KeyName = 'A' | 'B' | 'C' | 'D' | 'E';
-
-const Keys: KeyName[] = ['A', 'B', 'C', 'D', 'E'];
-
-interface ButtonProps {
-  keyName: KeyName;
-  onPressIn: (key: KeyName) => void;
-  onPressOut: (key: KeyName) => void;
-}
-
-const Button = ({ onPressIn, onPressOut, keyName }: ButtonProps) => (
-  <Pressable
-    onPressIn={() => onPressIn(keyName)}
-    onPressOut={() => onPressOut(keyName)}
-    style={({ pressed }) => ({
-      margin: 4,
-      padding: 12,
-      borderRadius: 2,
-      backgroundColor: pressed ? '#d2e6ff' : '#abcdef',
-    })}
-  >
-    <Text>{`${keyName}`}</Text>
-  </Pressable>
-);
-
-export default function SimplePiano() {
-  const onKeyPressIn = (which: KeyName) => {};
-  const onKeyPressOut = (which: KeyName) => {};
-
-  return (
-    <View style={{ flex: 1, flexDirection: 'row', alignItems: 'center', justifyContent: 'center' }}>
-      {Keys.map((key) => (
-        <Button
-          key={key}
-          keyName={key}
-          onPressIn={onKeyPressIn}
-          onPressOut={onKeyPressOut}
-        />
-      ))}
-    </View>
-  );
-}
-```
-
-## Create audio context and preload the data
-
-Like previously, we will need to preload the audio files in order to be able to play them. Using the interfaces we already know, we will download them and store them in memory using the good old `useRef` hook.
-
-First, we have the import section and the list of sources we will be using. Let’s also make things easier by using type shorthand for the partial record:
-
-```tsx
-import { AudioBuffer, AudioContext } from 'react-native-audio-api';
-
-/* ... */
-
-type PR = Partial<Record<KeyName, string>>;
-
-const sourceList: PR = {
- A: 'https://software-mansion.github.io/react-native-audio-api/audio/sounds/C4.mp3',
- C: 'https://software-mansion.github.io/react-native-audio-api/audio/sounds/Ds4.mp3',
- E: 'https://software-mansion.github.io/react-native-audio-api/audio/sounds/Fs4.mp3',
-};
-```
-
-Then, we will want to fetch the audio files and store them. We want the audio data to be available to play as soon as possible, so we will use the `useEffect` hook to download them and store them in the `useRef` hook for simplicity.
-
-```tsx
-export default function SimplePiano() {
-  const audioContextRef = useRef<AudioContext | null>(null);
-  const bufferMapRef = useRef<Partial<Record<KeyName, AudioBuffer>>>({});
-
- useEffect(() => {
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
-
- Object.entries(sourceList).forEach(async ([key, url]) => {
-      bufferMapRef.current[key as KeyName] = await audioContextRef.current!.decodeAudioDataSource(url);
- });
- }, []);
-}
-```
-
-## Playing the sounds
-
-Now it is finally time to play the sounds. We will use the [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) and simply play the buffers.
-
-```tsx
-export default function SimplePiano() {
- const onKeyPressIn = (which: KeyName) => {
- const audioContext = audioContextRef.current;
- const buffer = bufferMapRef.current[which];
-
- if (!audioContext || !buffer) {
- return;
- }
-
- const source = new AudioBufferSourceNode(audioContext, {
- buffer,
- });
-
- source.connect(audioContext.destination);
- source.start();
- };
-}
-```
-
-When we put everything together, we will get something like this:
-
-Great! But there are a few things off here:
-
-* We are not stopping the sound when the button is released, which is how a piano should work, right? 🙃
-* As you have probably noticed in the previous section, we are missing sounds for the 'B' and 'D' keys.
-
-Let’s see how we can address these issues using the Audio API. We will go through them one by one. Ready?
-
-## Key release
-
-To stop the sound when keys are released, we need to store the source nodes somewhere, so that we can call [`stop`](/docs/sources/audio-scheduled-source-node#stop) on them later. Just like with the audio context, let's use the `useRef` hook for this.
-
-```tsx
-const playingNotesRef = useRef<Partial<Record<KeyName, AudioBufferSourceNode>>>({});
-```
-
-Now we need to modify the `onKeyPressIn` function a bit
-
-```tsx
-const onKeyPressIn = (which: KeyName) => {
- const audioContext = audioContextRef.current!;
- const buffer = bufferMapRef.current[which];
-
- const source = new AudioBufferSourceNode(audioContext, {
- buffer,
- });
-
- source.connect(audioContext.destination);
- source.start();
-
- playingNotesRef.current[which] = source;
-};
-```
-
-And finally, we can implement the `onKeyPressOut` function
-
-```tsx
-const onKeyPressOut = (which: KeyName) => {
- const source = playingNotesRef.current[which];
-
- if (source) {
- source.stop();
- }
-};
-```
-
-Putting it all together again, we get:
-
-And they stop on release, just as we wanted. But if we hold the keys for a short time, it sounds a bit strange. Also, have you noticed that the sound is simply cut off when we release the key? 🤔
-It leaves a bit of an unpleasant feeling, right? So let’s try to make it a bit smoother.
-
-## Envelopes ✉️
-
-We will start from the end this time, and finally we will use a new type of audio node - [`GainNode`](/docs/effects/gain-node) :tada:
-`GainNode` is a simple node that can change the volume of any node (or nodes) connected to it. It has a single [`AudioParam`](/docs/core/audio-param), named `gain`.
-
-## What is an AudioParam?
-
-An `AudioParam` is an interface that controls various aspects of most audio nodes, like volume (in the `GainNode` described above), pan or frequency. It allows us to control these aspects over time, enabling smooth transitions and complex audio effects.
-For our use case, we are interested in two methods of an AudioParam:
-
-* [`setValueAtTime`](/docs/core/audio-param/#setvalueattime)
-* [`exponentialRampToValueAtTime`](/docs/core/audio-param/#exponentialramptovalueattime).
-
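-
For reference, the value an exponential ramp produces between two scheduled points follows the Web Audio formula v(t) = v0 · (v1/v0)^((t−t0)/(t1−t0)) — which is also why such ramps must start from a non-zero value (we use `0.001` later in this guide). A small sketch of that formula (the function name is ours, not part of the API):

```typescript
// Value of an exponential ramp at time t, per the Web Audio formula
// v(t) = v0 * (v1 / v0)^((t - t0) / (t1 - t0)).
// Both v0 and v1 must be non-zero and share the same sign.
function exponentialRampValue(
  v0: number, // value at the start of the ramp
  v1: number, // target value at the end of the ramp
  t0: number, // ramp start time
  t1: number, // ramp end time
  t: number   // time at which to evaluate the ramp
): number {
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}
```

Note how the curve moves multiplicatively: halfway through a ramp from 1 to 4 the value is 2, not 2.5 — which is exactly why exponential ramps sound natural for volume, since our hearing is roughly logarithmic.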
-## What is an Envelope?
-
-An envelope describes how a sound's amplitude changes over time. The most widely used model is **ADSR**, which stands for **Attack**, **Decay**, **Sustain**, and **Release**:
-
-* **Attack** — time to ramp from silence to peak volume.
-* **Decay** — time to fall from peak down to the sustain level.
-* **Sustain** — volume level held while the note is active.
-* **Release** — time to fade out after the note ends.
-
-You can read more about envelopes and ADSR on [Wikipedia](https://en.wikipedia.org/wiki/Envelope_\(music\)).
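The model above can be expressed as a plain gain-over-time function. This is only an illustrative sketch (a linear ADSR with made-up parameter names, not an Audio API interface): `heldFor` is the moment the key was released, all times are in seconds.

```typescript
interface ADSR {
  attack: number;  // seconds to ramp from 0 to peak (1.0)
  decay: number;   // seconds to fall from peak to sustain
  sustain: number; // level in [0, 1] held while the note is active
  release: number; // seconds to fade out after the note ends
}

function adsrGain(t: number, heldFor: number, env: ADSR): number {
  // gain while the key is still held down
  const gainWhileHeld = (time: number): number => {
    if (time < env.attack) return time / env.attack;
    if (time < env.attack + env.decay) {
      const d = (time - env.attack) / env.decay;
      return 1 + (env.sustain - 1) * d;
    }
    return env.sustain;
  };

  if (t <= heldFor) return gainWhileHeld(t);

  // after release: fade linearly from wherever we were down to silence
  const from = gainWhileHeld(heldFor);
  return Math.max(0, from * (1 - (t - heldFor) / env.release));
}
```

In the actual implementation below, only the attack and release phases are scheduled with `AudioParam` ramps; decay and sustain are baked into the samples themselves.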
-
-## Implementing the envelope
-
-With all the knowledge we have gathered, let's get back to the code. In our `onKeyPressIn` function, besides creating the source node, we will create a [`GainNode`](/docs/effects/gain-node) which will stand in the middle between the source and destination nodes, acting as our envelope.
-We want to implement the **attack** in `onKeyPressIn` function, and **release** in `onKeyPressOut`. In order to be able to access the envelope in both functions we will have to store it somewhere, so let's modify the `playingNotesRef` introduced earlier.
-Also, let’s not forget about the issue with short key presses. We will address it by enforcing a minimal sound duration of one second (as it works nicely with the samples we have 😉).
-
-Let’s start with the types:
-
-```tsx
-interface PlayingNote {
- source: AudioBufferSourceNode;
- envelope: GainNode;
- startedAt: number;
-}
-```
-
-and the `useRef` hook:
-
-```tsx
-const playingNotesRef = useRef<Partial<Record<KeyName, PlayingNote>>>({});
-```
-
-Now we can modify the `onKeyPressIn` function:
-
-```tsx
-const onKeyPressIn = (which: KeyName) => {
-  const audioContext = audioContextRef.current;
-  const buffer = bufferMapRef.current[which];
-
-  if (!audioContext || !buffer) {
-    return;
-  }
-
-  const tNow = audioContext.currentTime;
-
- const source = new AudioBufferSourceNode(audioContext, {
- buffer,
- });
-
- const envelope = audioContext.createGain();
-
- source.connect(envelope);
- envelope.connect(audioContext.destination);
-
- envelope.gain.setValueAtTime(0.001, tNow);
- envelope.gain.exponentialRampToValueAtTime(1, tNow + 0.1);
-
- source.start(tNow);
- playingNotesRef.current[which] = { source, envelope, startedAt: tNow };
-};
-```
-
-and the `onKeyPressOut` function:
-
-```tsx
-const onKeyPressOut = (which: KeyName) => {
- const audioContext = audioContextRef.current!;
- const playingNote = playingNotesRef.current[which];
-
- if (!playingNote || !audioContext) {
- return;
- }
-
- const { source, envelope, startedAt } = playingNote;
-
-  // enforce the one-second minimal duration mentioned above
-  const tStop = Math.max(audioContext.currentTime, startedAt + 1);
-
- envelope.gain.exponentialRampToValueAtTime(0.0001, tStop + 0.08);
- envelope.gain.setValueAtTime(0, tStop + 0.09);
- source.stop(tStop + 0.1);
-
- playingNotesRef.current[which] = undefined;
-};
-```
-
-As a result, we can hear something like this:
-
-And it finally sounds smooth and nice. But what about the decay and sustain phases? Both are handled by the audio samples themselves, so we do not need to worry about them. To be honest, the same goes for the attack phase, but we have implemented it for the sake of this guide. 🙂
-So, the only piece left is addressing the missing sample files for the 'B' and 'D' keys. What can we do about that?
-
-## Tampering with the playback rate
-
-The [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) also has its own [`AudioParam`](/docs/core/audio-param), called `playbackRate` as the title suggests. It allows us to change the speed of the playback of the audio buffer.
-Yay! Nice. But how can we use that to make the missing keys sound? I will keep this short, as this guide is already quite long, so let’s wrap up!
-
-When we change the speed of a sound, it will also change its pitch (frequency). So, we can use that to make the missing keys sound.
-Each piano key has its own dominant frequency (e.g., the frequency of the `A4` key is `440Hz`). We can check the frequency of each key, calculate the ratio between them, and use that ratio to adjust the playback rate of the buffers we have.
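The ratio itself is just a division of the two dominant frequencies. A tiny helper (the name is ours) makes the direction explicit:

```typescript
// playbackRate needed so that a sample whose dominant pitch is `sourceHz`
// sounds like `targetHz`: play faster to go up in pitch, slower to go down
function playbackRateFor(targetHz: number, sourceHz: number): number {
  return targetHz / sourceHz;
}
```

For instance, re-pitching a 440 Hz (`A4`) sample up an octave to 880 Hz requires a playback rate of 2, while going down an octave requires 0.5.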
-
-
-
-For our example, let's use these frequencies as the base for our calculations:
-
-```tsx
-const noteToFrequency = {
- A: 261.626, // real piano middle C
- B: 277.193, // Db
- C: 311.127, // Eb
- D: 329.628, // E
- E: 369.994, // Gb
-};
-```
-
-First, we need to find the closest sourced key to the missing one. We can do this with a simple for loop:
-
-```tsx
-function getClosest(key: KeyName) {
-  let closestKey: KeyName = 'A';
-  let minDiff = noteToFrequency.A - noteToFrequency[key];
-
-  for (const sourcedKey of Object.keys(sourceList) as KeyName[]) {
-    const diff = noteToFrequency[sourcedKey] - noteToFrequency[key];
-
-    if (Math.abs(diff) < Math.abs(minDiff)) {
-      minDiff = diff;
-      closestKey = sourcedKey;
-    }
-  }
-
-  return closestKey;
-}
-```
-
-Now, we simply use the function in `onKeyPressIn` when the buffer is not found and adjust the playback rate for the source node accordingly:
-
-```tsx
-const onKeyPressIn = (which: KeyName) => {
-  let buffer = bufferMapRef.current[which];
-  const aCtx = audioContextRef.current;
-  let playbackRate = 1;
-
-  if (!buffer) {
-    const closestKey = getClosest(which);
-    buffer = bufferMapRef.current[closestKey];
-    // play the closest sample faster or slower to match the target pitch
-    playbackRate = noteToFrequency[which] / noteToFrequency[closestKey];
-  }
-
-  const source = aCtx.createBufferSource();
-  const envelope = aCtx.createGain();
-  source.buffer = buffer;
-  source.playbackRate.value = playbackRate;
-};
-```
-
-## Final effects
-
-As before, you can see the final results in the live example below, along with the full source code.
-
-## Summary
-
-In this guide, we have learned how to create a simple piano keyboard with the help of the GainNode and AudioParams. To sum up:
-
-* [`AudioParam`](/docs/core/audio-param) is an interface that provides ways to control various aspects of audio nodes over time.
-* [`GainNode`](/docs/effects/gain-node) is a simple node that can change the volume of any node connected to it.
-* [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) has a parameter called `playbackRate` that allows us to change the speed of the audio buffer's playback, thereby altering the pitch of the sound.
-* We can use `GainNode` to create envelopes, making the sound transitions smoother and more pleasant.
-* We have learned how to use the Audio API in the React environment, simulating a more production-like scenario.
-
-## What's next?
-
-In [the next section](/docs/guides/noise-generation), we will learn how we can generate noise using the audio buffer source node.
diff --git a/packages/audiodocs/static/raw/guides/noise-generation.md b/packages/audiodocs/static/raw/guides/noise-generation.md
deleted file mode 100644
index d35b5c2fb..000000000
--- a/packages/audiodocs/static/raw/guides/noise-generation.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-import InteractiveExample from '@site/src/components/InteractiveExample';
-
-# Noise generation
-
-Noise is one of the most basic and common tools in digital audio processing. In this guide, we will go through the most common noise types and how to implement them using the Web Audio API.
-
-## White noise
-
-The most commonly used type of noise. White noise is a random signal having equal intensity at different frequencies, giving it a constant [power spectral density (Wikipedia)](https://en.wikipedia.org/wiki/Spectral_density#Power_spectral_density).
-
-To produce white noise, we simply create an [`AudioBuffer`](/docs/sources/audio-buffer) containing random samples in the range `[-1; 1]` (in which the Audio API operates),
-which can be used by an [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node) for playback, further filtering or modification.
-
-```tsx
-function createWhiteNoise() {
-  const aCtx = new AudioContext();
-  const bufferSize = aCtx.sampleRate * 2;
-  const output = new Float32Array(bufferSize);
-
-  for (let i = 0; i < bufferSize; i += 1) {
-    output[i] = Math.random() * 2 - 1;
-  }
-
-  const noiseBuffer = aCtx.createBuffer(1, bufferSize, aCtx.sampleRate);
-  noiseBuffer.copyToChannel(output, 0, 0);
-
-  return noiseBuffer;
-}
-```
-
-Usually we want the noise to play continuously. To achieve this, we generate 2 seconds of noise, which we later loop using the `AudioBufferSourceNode` properties. In audio processing, `sampleRate` is the number of samples played during one second, so we simply multiply it by `2` to get the desired buffer length.
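To sanity-check the arithmetic, the sample-generation step can be run on its own, without creating any audio objects. A minimal sketch (the `whiteNoiseSamples` helper and the 44100 Hz rate are ours for illustration; in the guide the rate comes from `aCtx.sampleRate`):

```typescript
// Generate `seconds` of white-noise samples at a given sample rate,
// without touching any audio API.
function whiteNoiseSamples(sampleRate: number, seconds: number): Float32Array {
  const output = new Float32Array(sampleRate * seconds);

  for (let i = 0; i < output.length; i += 1) {
    output[i] = Math.random() * 2 - 1; // uniformly distributed in [-1, 1)
  }

  return output;
}

// Two seconds of mono audio at 44.1 kHz is 88200 samples.
const samples = whiteNoiseSamples(44100, 2);
```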
-
-import WhiteNoise from '@site/src/examples/NoiseGeneration/WhiteNoiseComponent';
-import WhiteNoiseSrc from '!!raw-loader!@site/src/examples/NoiseGeneration/WhiteNoiseSource';
-
-
-
-## Pink noise
-
-Pink noise, also known as 1/f noise (where "f" stands for frequency), is a type of signal or sound that has equal energy per octave. This means that the power spectral density (PSD) decreases inversely with frequency. In simpler terms, pink noise has more energy at lower frequencies and less energy at higher frequencies, which makes it sound softer and more balanced to the human ear than white noise.
-
-To generate pink noise, we will approximate a $\frac{-3dB}{octave}$ filter using [Paul Kellet's refined method](https://www.musicdsp.org/en/latest/Filters/76-pink-noise-filter.html):
-
-```tsx
-const createPinkNoise = () => {
-  const aCtx = new AudioContext();
-
-  const bufferSize = 2 * aCtx.sampleRate;
-  const output = new Float32Array(bufferSize);
-
-  let b0, b1, b2, b3, b4, b5, b6;
-  b0 = b1 = b2 = b3 = b4 = b5 = b6 = 0.0;
-
-  for (let i = 0; i < bufferSize; i += 1) {
-    const white = Math.random() * 2 - 1;
-
-    b0 = 0.99886 * b0 + white * 0.0555179;
-    b1 = 0.99332 * b1 + white * 0.0750759;
-    b2 = 0.969 * b2 + white * 0.153852;
-    b3 = 0.8665 * b3 + white * 0.3104856;
-    b4 = 0.55 * b4 + white * 0.5329522;
-    b5 = -0.7616 * b5 - white * 0.016898;
-
-    output[i] = 0.11 * (b0 + b1 + b2 + b3 + b4 + b5 + b6 + white * 0.5362);
-    b6 = white * 0.115926;
-  }
-
-  const noiseBuffer = aCtx.createBuffer(1, bufferSize, aCtx.sampleRate);
-  noiseBuffer.copyToChannel(output, 0, 0);
-
-  return noiseBuffer;
-};
-```
-
-You can find more information about pink noise generation here: [https://www.firstpr.com.au/dsp/pink-noise/](https://www.firstpr.com.au/dsp/pink-noise/)
-
-import PinkNoise from '@site/src/examples/NoiseGeneration/PinkNoiseComponent';
-import PinkNoiseSrc from '!!raw-loader!@site/src/examples/NoiseGeneration/PinkNoiseSource';
-
-
-
-## Brownian noise
-
-The last noise type I would like to describe is Brownian noise (also known as Brown or red noise). Brownian noise is named after the Brownian motion phenomenon, where particles inside a fluid move randomly due to collisions with other particles. Its sonic counterpart is characterized by a significant presence of low frequencies, with energy decreasing as the frequency increases, which makes it sound somewhat like a waterfall.
-
-Brownian noise power falls off at $\frac{-6dB}{octave}$, twice as steeply as pink noise. The implementation is taken from an article by Zach Denton, [How to Generate Noise with the Web Audio API](https://noisehack.com/generate-noise-web-audio-api/):
-
-
-```tsx
-const createBrownianNoise = () => {
-  const aCtx = new AudioContext();
-
-  const bufferSize = 2 * aCtx.sampleRate;
-  const output = new Float32Array(bufferSize);
-  let lastOut = 0.0;
-
-  for (let i = 0; i < bufferSize; i += 1) {
-    const white = Math.random() * 2 - 1;
-    output[i] = (lastOut + 0.02 * white) / 1.02;
-    lastOut = output[i];
-    output[i] *= 3.5; // roughly compensate for the filter's volume loss
-  }
-
-  const noiseBuffer = aCtx.createBuffer(1, bufferSize, aCtx.sampleRate);
-  noiseBuffer.copyToChannel(output, 0, 0);
-
-  return noiseBuffer;
-};
-```
-
-import BrownianNoise from '@site/src/examples/NoiseGeneration/BrownianNoiseComponent';
-import BrownianNoiseSrc from '!!raw-loader!@site/src/examples/NoiseGeneration/BrownianNoiseSource';
-
-
-
-## What's next?
-
-In [the next section](/docs/guides/see-your-sound), we will explore how to capture audio data, visualize this data effectively, and utilize it to create basic animations.
diff --git a/packages/audiodocs/static/raw/guides/see-your-sound.md b/packages/audiodocs/static/raw/guides/see-your-sound.md
deleted file mode 100644
index 624c2174e..000000000
--- a/packages/audiodocs/static/raw/guides/see-your-sound.md
+++ /dev/null
@@ -1,255 +0,0 @@
-# See your sound
-
-In this section, we will get familiar with capabilities of the [`AnalyserNode`](/docs/analysis/analyser-node) interface,
-focusing on how to extract audio data in order to create a simple real-time visualization of the sounds.
-
-## Base application
-
-To kick-start things a bit, let's use code based on the previous tutorials.
-It is a simple application that can load and play a sound from a file.
-As before, if you would like to code along with the tutorial, copy and paste the code provided below into your project.
-
-```tsx
-import React, {
- useState,
- useEffect,
- useRef,
- useMemo,
-} from 'react';
-import {
- AudioContext,
- AudioBuffer,
- AudioBufferSourceNode,
-} from 'react-native-audio-api';
-import { ActivityIndicator, View, Button, LayoutChangeEvent } from 'react-native';
-
-const AudioVisualizer: React.FC = () => {
- const [isPlaying, setIsPlaying] = useState(false);
- const [isLoading, setIsLoading] = useState(false);
-
- const audioContextRef = useRef<AudioContext | null>(null);
- const bufferSourceRef = useRef<AudioBufferSourceNode | null>(null);
- const audioBufferRef = useRef<AudioBuffer | null>(null);
-
- const handlePlayPause = () => {
- if (isPlaying) {
- bufferSourceRef.current?.stop();
- } else {
- if (!audioContextRef.current) {
- return;
- }
-
- bufferSourceRef.current = audioContextRef.current.createBufferSource();
- bufferSourceRef.current.buffer = audioBufferRef.current;
- bufferSourceRef.current.connect(audioContextRef.current.destination);
-
- bufferSourceRef.current.start();
- }
-
- setIsPlaying((prev) => !prev);
- };
-
- useEffect(() => {
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
-
- const fetchBuffer = async () => {
- setIsLoading(true);
- const url = 'https://software-mansion.github.io/react-native-audio-api/audio/music/example-music-02.mp3';
- audioBufferRef.current = await audioContextRef.current!.decodeAudioData(url);
- setIsLoading(false);
- };
-
- fetchBuffer();
-
- return () => {
- audioContextRef.current?.close();
- };
- }, []);
-
- return (
-   <View>
-     <Button
-       title={isPlaying ? 'Stop' : 'Play'}
-       onPress={handlePlayPause}
-       disabled={isLoading}
-     />
-     {isLoading && <ActivityIndicator />}
-   </View>
- );
-};
-
-export default AudioVisualizer;
-```
-
-## Create an analyzer to capture and process audio data
-
-To obtain frequency and time-domain data, we need to utilize the [`AnalyserNode`](/docs/analysis/analyser-node).
-It is an [`AudioNode`](/docs/core/audio-node) that passes data unchanged from input to output while enabling the extraction of this data in two domains: time and frequency.
-
-We will use two specific `AnalyserNode` methods:
-
-* [`getByteTimeDomainData`](/docs/analysis/analyser-node#getbytetimedomaindata)
-* [`getByteFrequencyData`](/docs/analysis/analyser-node#getbytefrequencydata)
-
-These methods will allow us to acquire the necessary data for our analysis.
-
-```jsx {7,12,17-22,27,33,39,43,49-66,73-79}
-/* ... */
-
-import {
- AudioContext,
- AudioBuffer,
- AudioBufferSourceNode,
- AnalyserNode,
-} from 'react-native-audio-api';
-
-/* ... */
-
-const FFT_SIZE = 512;
-
-const AudioVisualizer: React.FC = () => {
- const [isPlaying, setIsPlaying] = useState(false);
- const [isLoading, setIsLoading] = useState(false);
- const [times, setTimes] = useState<Uint8Array>(
-   new Uint8Array(FFT_SIZE).fill(127)
- );
- const [freqs, setFreqs] = useState<Uint8Array>(
-   new Uint8Array(FFT_SIZE / 2).fill(0)
- );
-
- const audioContextRef = useRef<AudioContext | null>(null);
- const bufferSourceRef = useRef<AudioBufferSourceNode | null>(null);
- const audioBufferRef = useRef<AudioBuffer | null>(null);
- const analyserRef = useRef<AnalyserNode | null>(null);
-
- const handlePlayPause = () => {
- if (isPlaying) {
- bufferSourceRef.current?.stop();
- } else {
- if (!audioContextRef.current || !analyserRef.current) {
- return;
- }
-
- bufferSourceRef.current = audioContextRef.current.createBufferSource();
- bufferSourceRef.current.buffer = audioBufferRef.current;
- bufferSourceRef.current.connect(analyserRef.current);
-
- bufferSourceRef.current.start();
-
- requestAnimationFrame(draw);
- }
-
- setIsPlaying((prev) => !prev);
- };
-
- const draw = () => {
- if (!analyserRef.current) {
- return;
- }
-
- const timesArrayLength = analyserRef.current.fftSize;
- const frequencyArrayLength = analyserRef.current.frequencyBinCount;
-
- const timesArray = new Uint8Array(timesArrayLength);
- analyserRef.current.getByteTimeDomainData(timesArray);
- setTimes(timesArray);
-
- const freqsArray = new Uint8Array(frequencyArrayLength);
- analyserRef.current.getByteFrequencyData(freqsArray);
- setFreqs(freqsArray);
-
- requestAnimationFrame(draw);
- };
-
- useEffect(() => {
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
-
- if (!analyserRef.current) {
- analyserRef.current = audioContextRef.current.createAnalyser();
- analyserRef.current.fftSize = FFT_SIZE;
- analyserRef.current.smoothingTimeConstant = 0.8;
-
- analyserRef.current.connect(audioContextRef.current.destination);
- }
-
- const fetchBuffer = async () => {
- setIsLoading(true);
- const url = 'https://software-mansion.github.io/react-native-audio-api/audio/music/example-music-02.mp3';
- audioBufferRef.current = await audioContextRef.current!.decodeAudioData(url);
- setIsLoading(false);
- };
-
- fetchBuffer();
-
- return () => {
- audioContextRef.current?.close();
- };
- }, []);
-
- return (
-   <View>
-     <Button
-       title={isPlaying ? 'Stop' : 'Play'}
-       onPress={handlePlayPause}
-       disabled={isLoading}
-     />
-     {isLoading && <ActivityIndicator />}
-     {/* `times` and `freqs` are handed to the visualizations built in the next section */}
-   </View>
- );
-};
-
-export default AudioVisualizer;
-```
-
-We utilize the [`requestAnimationFrame`](https://reactnative.dev/docs/timers) method to continuously fetch and update real-time audio visualization data.
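A note on the raw values: `getByteTimeDomainData` fills the array with unsigned bytes where silence sits around `128`, so for waveform-style drawing you usually map them back to the `[-1; 1]` range first. A library-free sketch of that conversion (the helper name is ours, not part of the library):

```typescript
// Convert unsigned time-domain bytes (0-255, silence around 128)
// into floats in the [-1, 1] range, ready for waveform drawing.
function bytesToWaveform(times: Uint8Array): Float32Array {
  const out = new Float32Array(times.length);

  for (let i = 0; i < times.length; i += 1) {
    out[i] = (times[i] - 128) / 128;
  }

  return out;
}

// A silent signal maps to a flat line at zero.
const flatLine = bytesToWaveform(new Uint8Array(4).fill(128));
```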
-
-## Visualize time-domain and frequency data
-
-To render both the time- and frequency-domain visualizations, we will use our beloved graphics library - [`react-native-skia`](https://shopify.github.io/react-native-skia/).
-
-If you would like to know more about what the time and frequency domains are, have a look at the [Time domain vs Frequency domain](/docs/analysis/analyser-node#time-domain-vs-frequency-domain) section of the AnalyserNode documentation,
-which explains those terms in detail. Otherwise, here is the code:
-
-**Time domain**
-
-**Frequency domain**
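Whichever drawing library you pick, the heart of the frequency view is just scaling bytes (`0`–`255`) to bar heights. A sketch of that step, independent of `react-native-skia` (the helper name is ours):

```typescript
// Scale frequency-domain bytes (0-255) to pixel heights for a bar chart.
function frequencyBarHeights(freqs: Uint8Array, maxBarHeight: number): number[] {
  return Array.from(freqs, (value) => (value / 255) * maxBarHeight);
}

// Three bins mapped onto a 100 px tall canvas.
const heights = frequencyBarHeights(new Uint8Array([0, 51, 255]), 100);
```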
-
-## Summary
-
-In this guide, we have learned how to extract audio data using [`AnalyserNode`](/docs/analysis/analyser-node), what types of data we can obtain and how to visualize them. To sum up:
-
-* `AnalyserNode` is a sniffer node that extracts audio data without modifying it.
-* There are two domains of audio data: `frequency` and `time`.
-* We have learned how to use this data to create simple animations.
-
-## What's next?
-
-In [the next section](/docs/guides/create-your-own-effect), we will learn how to create our own processing node, utilizing react native turbo-modules.
diff --git a/packages/audiodocs/static/raw/inputs/audio-recorder.md b/packages/audiodocs/static/raw/inputs/audio-recorder.md
deleted file mode 100644
index d1b8aad73..000000000
--- a/packages/audiodocs/static/raw/inputs/audio-recorder.md
+++ /dev/null
@@ -1,726 +0,0 @@
-# AudioRecorder
-
-AudioRecorder is the primary interface for capturing audio. It supports three main modes of operation:
-
-* **File recording:** Writing audio data directly to the filesystem.
-* **Data callback:** Emitting raw audio buffers that can be used for further processing or streaming.
-* **Graph processing:** Connecting the recorder to an `AudioContext` or `OfflineAudioContext` for more advanced and/or realtime processing.
-
-## Configuration
-
-To access the microphone you need to make sure your app has the required permission configuration - check the [getting started permission section](/docs/fundamentals/getting-started#special-permissions) for more information.
-
-Additionally, to be able to record audio while the application is in the background, you need to enable background mode on iOS and configure a foreground service on Android.
-
-In an Expo application you can do so through the `react-native-audio-api` Expo plugin, e.g.
-
-```json
-{
- "plugins": [
- [
- "react-native-audio-api",
- {
- "iosBackgroundMode": true,
- "iosMicrophonePermission": "[YOUR_APP_NAME] requires access to the microphone to record audio.",
- "androidPermissions" : [
- "android.permission.RECORD_AUDIO",
- "android.permission.FOREGROUND_SERVICE",
- "android.permission.FOREGROUND_SERVICE_MICROPHONE",
- ],
- "androidForegroundService": true,
- "androidFSTypes": ["microphone"]
- }
- ]
- ]
-}
-```
-
-For more configuration options, check out the [Expo plugin section](/docs/other/audio-api-plugin).
-
-For bare React Native applications, background mode is configurable through the `Signing & Capabilities` section of your app target config using Xcode.
-
-
-
-Microphone permission can be created or modified through the `Info.plist` file
-
-
-
-Alternatively, you can modify the `Info.plist` file directly in your editor of choice by adding these lines:
-
-```xml
-<key>NSMicrophoneUsageDescription</key>
-<string>$(PRODUCT_NAME) wants to access your microphone in order to use voice memo recording</string>
-<key>UIBackgroundModes</key>
-<array>
-  <string>audio</string>
-</array>
-```
-
-To enable the required permissions or the foreground service you have to manually edit the `AndroidManifest.xml` file:
-
-```xml
-<uses-permission android:name="android.permission.RECORD_AUDIO" />
-<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
-<uses-permission android:name="android.permission.FOREGROUND_SERVICE_MICROPHONE" />
-
-<!-- When using the foreground service, declare it inside the <application> tag
-     with android:foregroundServiceType="microphone" -->
-```
-
-## Usage
-
-```tsx
-import React, { useState } from 'react';
-import { View, Pressable, Text } from 'react-native';
-import { AudioRecorder, AudioManager } from 'react-native-audio-api';
-
-AudioManager.setAudioSessionOptions({
- iosCategory: 'record',
- iosMode: 'default',
- iosOptions: [],
-});
-
-const audioRecorder = new AudioRecorder();
-
-// Enables recording to file with default configuration
-audioRecorder.enableFileOutput();
-
-const MyRecorder: React.FC = () => {
- const [isRecording, setIsRecording] = useState(false);
-
- const onStart = async () => {
- if (isRecording) {
- return;
- }
-
- // Make sure the permissions are granted
- const permissions = await AudioManager.requestRecordingPermissions();
-
- if (permissions !== 'Granted') {
- console.warn('Permissions are not granted');
- return;
- }
-
- // Activate audio session
- const success = await AudioManager.setAudioSessionActivity(true);
-
- if (!success) {
- console.warn('Could not activate the audio session');
- return;
- }
-
- const result = audioRecorder.start();
- if (result.status === 'error') {
- console.warn(result.message);
- return;
- }
-
- console.log('Recording started to file:', result.path);
- setIsRecording(true);
- };
-
- const onStop = () => {
- if (!isRecording) {
- return;
- }
-
- const result = audioRecorder.stop();
- console.log(result);
- setIsRecording(false);
- AudioManager.setAudioSessionActivity(false);
- };
-
- return (
-   <View>
-     <Pressable onPress={isRecording ? onStop : onStart}>
-       <Text>{isRecording ? 'Stop' : 'Record'}</Text>
-     </Pressable>
-   </View>
- );
-};
-
-export default MyRecorder;
-```
-
-```tsx
-import React, { useState, useEffect } from 'react';
-import { View, Pressable, Text } from 'react-native';
-import { AudioRecorder, AudioManager } from 'react-native-audio-api';
-
-AudioManager.setAudioSessionOptions({
- iosCategory: 'record',
- iosMode: 'default',
- iosOptions: [],
-});
-
-const audioRecorder = new AudioRecorder();
-const sampleRate = 16000;
-
-const MyRecorder: React.FC = () => {
- const [isRecording, setIsRecording] = useState(false);
-
- useEffect(() => {
- audioRecorder.onAudioReady(
- {
- sampleRate,
- bufferLength: sampleRate * 0.1, // 0.1s of audio each batch
- channelCount: 1,
- },
- ({ buffer, numFrames, when }) => {
- // do something with the data, i.e. stream it
- }
- );
-
- return () => {
- audioRecorder.clearOnAudioReady();
- };
- }, []);
-
- const onStart = async () => {
- if (isRecording) {
- return;
- }
-
- // Make sure the permissions are granted
- const permissions = await AudioManager.requestRecordingPermissions();
-
- if (permissions !== 'Granted') {
- console.warn('Permissions are not granted');
- return;
- }
-
- // Activate audio session
- const success = await AudioManager.setAudioSessionActivity(true);
-
- if (!success) {
- console.warn('Could not activate the audio session');
- return;
- }
-
- const result = audioRecorder.start();
-
- if (result.status === 'error') {
- console.warn(result.message);
- return;
- }
-
- setIsRecording(true);
- };
-
- const onStop = () => {
- if (!isRecording) {
- return;
- }
-
- audioRecorder.stop();
- setIsRecording(false);
- AudioManager.setAudioSessionActivity(false);
- };
-
- return (
-   <View>
-     <Pressable onPress={isRecording ? onStop : onStart}>
-       <Text>{isRecording ? 'Stop' : 'Record'}</Text>
-     </Pressable>
-   </View>
- );
-};
-
-export default MyRecorder;
-```
-
-```tsx
-import React, { useState } from 'react';
-import { View, Pressable, Text } from 'react-native';
-import {
- AudioRecorder,
- AudioContext,
- AudioManager,
-} from 'react-native-audio-api';
-
-AudioManager.setAudioSessionOptions({
- iosCategory: 'playAndRecord',
- iosMode: 'default',
- iosOptions: [],
-});
-
-const audioRecorder = new AudioRecorder();
-const audioContext = new AudioContext();
-
-const MyRecorder: React.FC = () => {
- const [isRecording, setIsRecording] = useState(false);
-
- const onStart = async () => {
- if (isRecording) {
- return;
- }
-
- // Make sure the permissions are granted
- const permissions = await AudioManager.requestRecordingPermissions();
-
- if (permissions !== 'Granted') {
- console.warn('Permissions are not granted');
- return;
- }
-
- // Activate audio session
- const success = await AudioManager.setAudioSessionActivity(true);
-
- if (!success) {
- console.warn('Could not activate the audio session');
- return;
- }
-
- const adapter = audioContext.createRecorderAdapter();
- adapter.connect(audioContext.destination);
- audioRecorder.connect(adapter);
-
- if (audioContext.state === 'suspended') {
- await audioContext.resume();
- }
-
- const result = audioRecorder.start();
-
- if (result.status === 'error') {
- console.warn(result.message);
- return;
- }
-
- setIsRecording(true);
- };
-
- const onStop = () => {
- if (!isRecording) {
- return;
- }
-
- audioRecorder.stop();
- audioContext.suspend();
- setIsRecording(false);
- AudioManager.setAudioSessionActivity(false);
- };
-
- return (
-   <View>
-     <Pressable onPress={isRecording ? onStop : onStart}>
-       <Text>{isRecording ? 'Stop' : 'Record'}</Text>
-     </Pressable>
-   </View>
- );
-};
-
-export default MyRecorder;
-```
-
-## API
-
-
-##### Constructor
-
-Creates a new instance of AudioRecorder. It is preferred to create only a single instance of the AudioRecorder class for performance, memory and battery reasons. While an idle recorder has minimal impact on any of these, switching between separate recorder instances might have a noticeable impact on the device.
-
-```tsx
-import { AudioRecorder } from 'react-native-audio-api';
-
-const audioRecorder = new AudioRecorder();
-```
-
-##### start
-
-Starts the stream from the system audio input device.
-You can pass an optional object with a `fileNameOverride` string to provide your own file name.
-
-```tsx
-const result = audioRecorder.start({
- fileNameOverride: `my_audio_${mySessionId}`
-});
-
-if (result.status === 'success') {
- const openedFilePath = result.path;
-} else if (result.status === 'error') {
- console.error(result.message);
-}
-```
-
-##### stop
-
-Stops the input stream and cleans up each input access method.
-
-```tsx
-const result = audioRecorder.stop();
-
-if (result.status === 'success') {
- const { path, duration, size } = result;
-} else if (result.status === 'error') {
- console.error(result.message);
-}
-```
-
-##### pause
-
-Pauses the recording. This is useful when recording to file is active, but you don't want to finalize the file.
-
-```tsx
- audioRecorder.pause();
-```
-
-##### resume
-
-Resumes the recording if it was previously paused, otherwise does nothing.
-
-```tsx
- audioRecorder.resume();
-```
-
-##### isRecording
-
-Returns `true` if the recorder is in the active/recording state.
-
-```tsx
- const isRecording = audioRecorder.isRecording();
-```
-
-##### isPaused
-
-Returns `true` if the recorder is in paused state.
-
-```tsx
- const isPaused = audioRecorder.isPaused();
-```
-
-##### onError
-
-Sets an error callback for any possible internal error that might happen during file writing, callback invocation or adapter access.
-
-For details check: [OnRecorderErrorEventType](#onrecordererroreventtype)
-
-```tsx
- audioRecorder.onError((error: OnRecorderErrorEventType) => {
- console.log(error);
- });
-```
-
-##### clearOnError
-
-Removes the error callback.
-
-```tsx
- audioRecorder.clearOnError();
-```
-
-### Recording to file
-
-
-##### enableFileOutput
-
-Configures and enables the file output with the defined options and stream properties. The options property allows for configuration of the output file structure and quality. By default the recorder writes a high-quality `M4A` file to the cache directory.
-
-For further information check: [AudioRecorderFileOptions](#audiorecorderfileoptions)
-
-```tsx
- audioRecorder.enableFileOutput();
-```
-
-##### disableFileOutput
-
-Disables the file output and finalizes the currently recorded file if the recorder is active.
-
-```tsx
- audioRecorder.disableFileOutput();
-```
-
-##### getCurrentDuration
-
-Returns current recording duration if recording to file is enabled.
-
-```tsx
- const duration = audioRecorder.getCurrentDuration();
-```
-
-### Data callback
-
-
-##### onAudioReady
-
-The callback is periodically invoked with audio buffers that match the preferred configuration provided in `options`. These parameters (sample rate, buffer length, and channel count) guide how audio data is chunked and delivered, though the exact values may vary depending on device capabilities.
-
-For further information check:
-
-* [AudioRecorderCallbackOptions](#audiorecordercallbackoptions)
-* [OnAudioReadyEventType](#onaudioreadyeventtype)
-
-```tsx
- const sampleRate = 16000;
-
- audioRecorder.onAudioReady(
- {
- sampleRate,
- bufferLength: 0.1 * sampleRate, // 0.1s of data
- channelCount: 1,
- },
- ({ buffer, numFrames, when }) => {
- // do something with the data
- });
-```
-
-##### clearOnAudioReady
-
-Disables the `onAudioReady` callback, flushing the remaining audio data through it first.
-
-```tsx
- audioRecorder.clearOnAudioReady();
-```
-
-### Graph processing
-
-
-##### connect
-
-Connects the AudioRecorder to a [RecorderAdapterNode](/docs/sources/recorder-adapter-node) instance that can be used for further audio processing.
-
-```tsx
- const adapter = audioContext.createRecorderAdapter();
- audioRecorder.connect(adapter);
-```
-
-##### disconnect
-
-Disconnects AudioRecorder from the audio graph.
-
-```tsx
- audioRecorder.disconnect();
-```
-
-## Types
-
-#### AudioRecorderCallbackOptions
-
-```tsx
-interface AudioRecorderCallbackOptions {
- sampleRate: number;
- bufferLength: number;
- channelCount: number;
-}
-```
-
-* `sampleRate` - The desired sample rate (in Hz) for audio buffers delivered to the
- recording callback. Common values include 44100 or 48000 Hz. The actual
- sample rate may differ depending on hardware and system capabilities.
-
-* `bufferLength` - The preferred size of each audio buffer, expressed as the number of samples per channel. Smaller buffers reduce latency but increase CPU load, while larger buffers improve efficiency at the cost of higher latency.
-
-* `channelCount` - The desired number of audio channels per buffer. Typically 1 for mono or 2 for stereo recordings.
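The latency trade-off described above is easy to quantify: one callback buffer spans `bufferLength / sampleRate` seconds. A quick sketch with illustrative values (the helper is ours):

```typescript
// Duration of a single callback buffer, in milliseconds.
function bufferDurationMs(bufferLength: number, sampleRate: number): number {
  return (bufferLength / sampleRate) * 1000;
}

// 1600 samples at 16 kHz deliver one buffer every 100 ms,
// while 256 samples at 48 kHz arrive roughly every 5.3 ms.
const voiceNoteLatency = bufferDurationMs(1600, 16000);
const lowLatency = bufferDurationMs(256, 48000);
```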
-
-#### OnRecorderErrorEventType
-
-```tsx
-interface OnRecorderErrorEventType {
- message: string;
-}
-```
-
-#### OnAudioReadyEventType
-
-Represents the data payload received by the audio recorder callback each time a new audio buffer becomes available during recording.
-
-```tsx
-interface OnAudioReadyEventType {
- buffer: AudioBuffer;
- numFrames: number;
- when: number;
-}
-```
-
-* `buffer` - The audio buffer containing the recorded PCM data. This buffer includes one or more channels of floating-point samples in the range of -1.0 to 1.0.
-* `numFrames` - The number of audio frames contained in this buffer. A frame represents a single sample across all channels.
-* `when` - The timestamp (in seconds) indicating when this buffer was captured, relative to the start of the recording session.
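These three fields are enough to place every chunk on a timeline: a buffer covers `numFrames / sampleRate` seconds starting at `when`. A sketch (the helper and the 16 kHz rate are illustrative, not part of the library):

```typescript
// End timestamp (in seconds) of a callback buffer: its start time
// plus the duration of the frames it carries.
function bufferEndTime(when: number, numFrames: number, sampleRate: number): number {
  return when + numFrames / sampleRate;
}

// A 1600-frame buffer at 16 kHz that started at t = 0.3 s ends at t = 0.4 s,
// which is where the next buffer should begin for gapless stitching.
const endsAt = bufferEndTime(0.3, 1600, 16000);
```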
-
-### File handling
-
-#### AudioRecorderFileOptions
-
-```tsx
-interface AudioRecorderFileOptions {
- channelCount?: number;
-
- format?: FileFormat;
- preset?: FilePresetType;
-
- directory?: FileDirectory;
- subDirectory?: string;
- fileNamePrefix?: string;
- androidFlushIntervalMs?: number;
-}
-```
-
-* `channelCount` - The desired channel count in the resulting file. Not all file formats support all possible channel counts.
-* `format` - The desired extension and file format of the recorder file. Check: [FileFormat](#fileformat) below.
-* `preset` - The desired recorder file properties, you can use either one of built-in properties or tweak low-level parameters yourself. Check [FilePresetType](#filepresettype) for more details.
-* `directory` - Either `FileDirectory.Cache` or `FileDirectory.Document` (default: `FileDirectory.Cache`). Determines the system directory that the file will be saved to.
-* `subDirectory` - If configured, the recording will be created inside the requested subdirectory (default: `undefined`).
-* `fileNamePrefix` - Prefix of the recording files without the unique ID (default: `recording`).
-* `androidFlushIntervalMs` - How often the recorder should force the system to write data to the device storage (default: `500`).
- * Lower values are good for crash-resilience and are more memory friendly.
- * Higher values are more battery- and storage-efficient.
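To pick a flush interval it helps to estimate how much raw PCM accumulates between flushes. A rough sketch (illustrative values; real sizes depend on the chosen format and encoder):

```typescript
// Approximate bytes of raw PCM that accumulate between two flushes.
function bytesPerFlush(
  sampleRate: number,
  channelCount: number,
  bytesPerSample: number,
  flushIntervalMs: number
): number {
  return sampleRate * channelCount * bytesPerSample * (flushIntervalMs / 1000);
}

// 44.1 kHz mono 16-bit PCM with the default 500 ms interval buffers ~44 kB per flush.
const buffered = bytesPerFlush(44100, 1, 2, 500);
```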
-
-#### FileFormat
-
-Describes the desired file extension as well as the codecs, containers (and muxers!) used to encode the file.
-
-```tsx
-enum FileFormat {
- Wav,
- Caf,
- M4A,
- Flac,
-}
-```
-
-#### FilePresetType
-
-Describes the audio format that is used when writing to the file, as well as the encoded final file properties. You can use one of the predefined presets or fully customize the resulting file, but be aware that the properties aren't limited to only valid configurations - you may find property pairs that result in an error when the recording starts (or when enabling the file output during an active input session)!
-
-##### Built-in file presets
-
-For convenience we have provided a set of basic file configurations that should cover most cases (or at least we hope they will - please raise an issue if you find something lacking or misconfigured!).
-
-###### Usage
-
-```tsx
-import { AudioRecorder, FileFormat, FilePreset } from 'react-native-audio-api';
-
-const audioRecorder = new AudioRecorder();
-
-audioRecorder.enableFileOutput({
- format: FileFormat.M4A,
- preset: FilePreset.High,
-});
-```
-
-
-##### Lossless
-
-Writes audio data directly to file without encoding, preserving the maximum audio quality supported by the device. This results in large file sizes, particularly for longer recordings. Available only when using WAV or CAF file formats.
-
-```tsx
-audioRecorder.enableFileOutput({
- format: FileFormat.Caf,
- preset: FilePreset.Lossless,
-});
-```
-
-##### High Quality
-
-Uses high-fidelity audio parameters with efficient encoding to deliver near-lossless perceptual quality while producing smaller files than fully uncompressed recordings. Suitable for music and high-quality voice capture.
-
-```tsx
-audioRecorder.enableFileOutput({
- format: FileFormat.Flac,
- preset: FilePreset.High,
-});
-```
-
-##### Medium Quality
-
-Uses balanced audio parameters that provide good perceptual quality while keeping file sizes moderate. Intended for everyday recording scenarios such as voice notes, podcasts, and general in-app audio, where efficiency and compatibility outweigh maximum fidelity.
-
-```tsx
-audioRecorder.enableFileOutput({
- format: FileFormat.M4A,
- preset: FilePreset.Medium,
-});
-```
-
-##### Low Quality
-
-Uses reduced audio parameters to minimize file size and processing overhead. Designed for cases where speech intelligibility is sufficient and audio fidelity is not critical, such as quick voice notes, background recording, or diagnostic capture.
-
-```tsx
-audioRecorder.enableFileOutput({
- format: FileFormat.M4A,
- preset: FilePreset.Low,
-});
-```
-
-#### Preset customization
-
-In addition to the predefined presets, you may supply a custom [`FilePresetType`](#filepresettype) to fine-tune how audio data is written and encoded. This allows you to optimize for specific use cases such as speech-only recording, reduced storage footprint, or faster encoding.
-
-```tsx
-export interface FilePresetType {
- bitRate: number;
- sampleRate: number;
- bitDepth: BitDepth;
- iosQuality: IOSAudioQuality;
- flacCompressionLevel: FlacCompressionLevel;
-}
-```
-
-
-##### bitRate
-
-Defines the target bitrate for lossy encoders (for example AAC or M4A). Higher values generally improve perceptual quality at the cost of larger file sizes. This value may be ignored when using lossless formats.
-
-| Use case | Bitrate (bps) | Notes |
-| :- | - | :- |
-| Very low quality / telemetry | 32000 | Bare minimum for speech intelligibility |
-| Low quality voice notes | 48000 | Optimized for small files and fast encoding |
-| Standard speech / podcasts | 64000 – 96000 | Good balance of clarity and size |
-| Medium quality general audio | 128000 | Common default for consumer audio |
-| High quality music / voice | 160000 – 192000 | Near-transparent for most listeners |
-| Very high quality | 256000 – 320000 | Large files, minimal perceptual loss |
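Bitrate maps directly to file size: a lossy recording takes roughly `bitRate / 8` bytes per second, plus container overhead. A back-of-the-envelope sketch (the helper is ours):

```typescript
// Rough size (in megabytes) of a lossy recording, ignoring container overhead.
function approxFileSizeMB(bitRate: number, durationSeconds: number): number {
  return ((bitRate / 8) * durationSeconds) / (1024 * 1024);
}

// One hour of 64 kbps speech lands around 27.5 MB.
const podcastHour = approxFileSizeMB(64000, 3600);
```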
-
-##### sampleRate
-
-Specifies the sampling frequency used during recording. Higher sample rates capture a wider frequency range but increase processing and storage requirements.
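The "wider frequency range" follows from the Nyquist theorem: a recording can only represent frequencies up to half the sample rate. A one-line sketch:

```typescript
// Highest representable frequency (the Nyquist limit) for a given sample rate.
const nyquist = (sampleRate: number): number => sampleRate / 2;

// 16 kHz recordings top out at 8 kHz (plenty for speech),
// while 44.1 kHz reaches 22.05 kHz, beyond the range of human hearing.
const speechLimit = nyquist(16000);
const cdLimit = nyquist(44100);
```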
-
-##### bitDepth
-
-Controls the PCM bit depth of the recorded audio. Higher bit depths increase dynamic range and precision, primarily affecting uncompressed or lossless output formats.
-
-##### iosQuality
-
-Maps the preset to the closest matching quality level provided by iOS native audio APIs, ensuring consistent behavior across Apple devices.
-
-```tsx
-enum IOSAudioQuality {
- Min,
- Low,
- Medium,
- High,
- Max,
-}
-```
-
-##### flacCompressionLevel
-
-Determines the compression level used when encoding FLAC files. Higher levels reduce file size at the cost of increased CPU usage, without affecting audio quality.
-
-```tsx
-enum FlacCompressionLevel {
- L0,
- L1,
- L2,
- L3,
- L4,
- L5,
- L6,
- L7,
- L8,
-}
-```
-
-## Remarks & known issues
diff --git a/packages/audiodocs/static/raw/other/audio-api-plugin.md b/packages/audiodocs/static/raw/other/audio-api-plugin.md
deleted file mode 100644
index c8e0089ef..000000000
--- a/packages/audiodocs/static/raw/other/audio-api-plugin.md
+++ /dev/null
@@ -1,137 +0,0 @@
-# Audio API Expo plugin
-
-## What is Audio API Expo plugin
-
-The Audio API Expo plugin allows you to set certain permissions and
-background-audio-related settings in a developer-friendly way.
-
-Type definitions
-
-```typescript
-interface Options {
- iosMicrophonePermission?: string;
- iosBackgroundMode: boolean;
- androidPermissions: string[];
- androidForegroundService: boolean;
- androidFSTypes: string[];
-}
-```
-
-## How to use it?
-
-Add the `react-native-audio-api` Expo plugin to your `app.json` or `app.config.js`.
-
-app.json
-
-```javascript
-{
- "plugins": [
- [
- "react-native-audio-api",
- {
- "iosBackgroundMode": true,
- "iosMicrophonePermission": "This app requires access to the microphone to record audio.",
- "androidPermissions" : [
- "android.permission.MODIFY_AUDIO_SETTINGS",
- "android.permission.FOREGROUND_SERVICE",
- "android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK"
- ],
- "androidForegroundService": true,
- "androidFSTypes": [
- "mediaPlayback"
- ]
- }
- ]
- ]
-}
-```
-
-app.config.js
-
-```javascript
-export default {
- ...
- "plugins": [
- [
- "react-native-audio-api",
- {
- "iosBackgroundMode": true,
- "iosMicrophonePermission": "This app requires access to the microphone to record audio.",
- "androidPermissions" : [
- "android.permission.MODIFY_AUDIO_SETTINGS",
- "android.permission.FOREGROUND_SERVICE",
- "android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK"
- ],
- "androidForegroundService": true,
- "androidFSTypes": [
- "mediaPlayback"
- ]
- }
- ]
- ]
-};
-```
-
-## Options
-
-### iosBackgroundMode
-
-Defaults to `true`.
-
-Allows the app to play audio in the background on iOS.
-
-### iosMicrophonePermission
-
-Defaults to `undefined`.
-
-Allows you to specify a custom microphone permission message for iOS. If not specified, the entry is omitted from `Info.plist`.
-
-### androidPermissions
-
-Defaults to
-
-```
-[
- 'android.permission.FOREGROUND_SERVICE',
- 'android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK'
-]
-```
-
-Allows you to specify which Android app permissions to apply.
-
-##### Permissions:
-
-* `android.permission.POST_NOTIFICATIONS` - Required by Foreground Services on Android 13+ to post notifications.
-
-* `android.permission.FOREGROUND_SERVICE` - Allows an app to run a Foreground Service
-
-* `android.permission.FOREGROUND_SERVICE_MEDIA_PLAYBACK` - Allows an app to run a Foreground Service specifically for continued audio or video playback.
-
-* `android.permission.FOREGROUND_SERVICE_MICROPHONE` - Allows an app to run a Foreground Service specifically for continued microphone capture from the background.
-
-* `android.permission.MODIFY_AUDIO_SETTINGS` - Allows an app to modify global audio settings.
-
-* `android.permission.INTERNET` - Allows applications to open network sockets.
-
-* `android.permission.RECORD_AUDIO` - Allows an application to record audio.
-
-### androidForegroundService
-
-Defaults to `true`.
-
-Allows the app to run the Foreground Service types specified by the user,
-which permits the app to play audio in the background on Android.
-
-### androidFSTypes
-
-Allows you to specify the appropriate Foreground Service types.
-
-##### Types description
-
-* `mediaPlayback` - Continue audio or video playback from the background.
-
-* `microphone` - Continue microphone capture from the background, such as voice recorders or communication apps.
-
- Runtime prerequisites:
-
- * Request and be granted the RECORD\_AUDIO runtime permission.
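-
-For a recording-focused app, the same options can be combined to request microphone access and background capture. A minimal sketch (the permission message is illustrative; adjust it to your use case):
-
-```javascript
-{
-  "plugins": [
-    [
-      "react-native-audio-api",
-      {
-        "iosMicrophonePermission": "This app records audio with the microphone.",
-        "androidPermissions": [
-          "android.permission.RECORD_AUDIO",
-          "android.permission.FOREGROUND_SERVICE",
-          "android.permission.FOREGROUND_SERVICE_MICROPHONE"
-        ],
-        "androidForegroundService": true,
-        "androidFSTypes": [
-          "microphone"
-        ]
-      }
-    ]
-  ]
-}
-```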
diff --git a/packages/audiodocs/static/raw/other/compatibility.md b/packages/audiodocs/static/raw/other/compatibility.md
deleted file mode 100644
index 01dad6713..000000000
--- a/packages/audiodocs/static/raw/other/compatibility.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# React Native compatibility table
-
-### Supported React Native versions on [the New Architecture](https://reactnative.dev/docs/the-new-architecture/landing-page) (Fabric)
-
-| | 0.74 | 0.75 | 0.76 | 0.77 | 0.78 | 0.79 | 0.80 | 0.81 | 0.82 | 0.83 | 0.84 |
-| ----------------------------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
-
-### Supported React Native versions on the Old Architecture (Paper)
-
-| | 0.74 | 0.75 | 0.76 | 0.77 | 0.78 | 0.79 | 0.80 | 0.81 |
-| ----------------------------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
diff --git a/packages/audiodocs/static/raw/other/ffmpeg-info.md b/packages/audiodocs/static/raw/other/ffmpeg-info.md
deleted file mode 100644
index 9c933e427..000000000
--- a/packages/audiodocs/static/raw/other/ffmpeg-info.md
+++ /dev/null
@@ -1,33 +0,0 @@
-# FFmpeg additional information
-
-We use [`ffmpeg`](https://github.com/FFmpeg/FFmpeg) for a few components:
-
-* [`StreamerNode`](/docs/sources/streamer-node)
-* decoding `aac`, `mp4`, `m4a` files
-
-## Disabling FFmpeg
-
-FFmpeg usage is enabled by default. However, if you would prefer not to use it (e.g. because of name clashes with other FFmpeg
-binaries in your project), you can easily disable it by adding a single flag to the corresponding file.
-
-> **Info**
->
-> FFmpeg is enabled by default
-
-Add an entry in the [expo plugin](/docs/fundamentals/getting-started#step-2-add-audio-api-expo-plugin-optional) configuration.
-
-```
-"disableFFmpeg": true
-```
-
-Podfile
-
-```
-ENV['DISABLE_AUDIOAPI_FFMPEG'] = '1'
-```
-
-gradle.properties
-
-```
-disableAudioapiFFmpeg=true
-```
diff --git a/packages/audiodocs/static/raw/other/non-expo-permissions.md b/packages/audiodocs/static/raw/other/non-expo-permissions.md
deleted file mode 100644
index ab1ab1aaa..000000000
--- a/packages/audiodocs/static/raw/other/non-expo-permissions.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# Non-expo app permissions
-
-If your app needs access to non-trivial resources such as the microphone, or needs to run in the background, these capabilities have to be declared explicitly in dedicated configuration files.
-
-## iOS
-
-On iOS the file that handles special permissions is named [`Info.plist`](https://developer.apple.com/documentation/bundleresources/information-property-list?language=objc).
-This file is placed in `ios/YourAppName` directory.
-For example, to tell the system that our app wants to use the microphone, we would add this entry to the file.
-
-```xml
-<key>NSMicrophoneUsageDescription</key>
-<string>App wants to access your microphone in order to use voice memo recording</string>
-```
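-
-Similarly, if the app should keep playing audio in the background, `Info.plist` also needs the `audio` entry in `UIBackgroundModes` (for Expo apps, this is what the plugin's `iosBackgroundMode` option configures automatically):
-
-```xml
-<key>UIBackgroundModes</key>
-<array>
-  <string>audio</string>
-</array>
-```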
-
-## Android
-
-On Android the file that handles special permissions is named [`AndroidManifest.xml`](https://developer.android.com/guide/topics/manifest/manifest-intro).
-This file is placed in `android/app/src/main` directory.
-For example, to tell the system that our app wants to use the microphone, we would add this entry to the file.
-
-```xml
-<uses-permission android:name="android.permission.RECORD_AUDIO" />
-```
diff --git a/packages/audiodocs/static/raw/other/running_with_mac_catalyst.md b/packages/audiodocs/static/raw/other/running_with_mac_catalyst.md
deleted file mode 100644
index 5f1a18be2..000000000
--- a/packages/audiodocs/static/raw/other/running_with_mac_catalyst.md
+++ /dev/null
@@ -1,165 +0,0 @@
-# Running with Mac Catalyst
-
-Mac Catalyst allows you to run your iOS apps natively on macOS. This guide covers the necessary changes to your Podfile to enable Mac Catalyst support for your React Native app with `react-native-audio-api`.
-
-## Podfile Configuration
-
-To build your app for Mac Catalyst, you need to make several changes to your `ios/Podfile`:
-
-### 1. Enable building React Native from source
-
-Add this environment variable at the top of your Podfile:
-
-```ruby
-ENV['RCT_BUILD_FROM_SOURCE'] = '1'
-```
-
-### 2. Enable static frameworks
-
-Add `use_frameworks!` with static linkage inside your target block:
-
-```ruby
-target 'YourApp' do
- config = use_native_modules!
- use_frameworks! :linkage => :static
-
- # ... rest of your configuration
-end
-```
-
-### 3. Update post\_install with Mac Catalyst support
-
-Replace your existing `post_install` block with one that enables Mac Catalyst:
-
-```ruby
-post_install do |installer|
- react_native_post_install(
- installer,
- config[:reactNativePath],
- :mac_catalyst_enabled => true,
- )
-end
-```
-
-### 4. Hermes Framework Fix (RN 0.83.x only)
-
-> **Note**
->
-> This step is only required for React Native 0.83.x. There's a [known issue](https://github.com/facebook/react-native/issues/55540) where the Hermes framework bundle structure is ambiguous on Mac Catalyst. If you're on a different version, you can skip this step.
-
-If you're using React Native 0.83.x, extend your `post_install` block with the following fix that restructures the Hermes framework to follow the correct macOS bundle layout:
-
-```ruby
-post_install do |installer|
- react_native_post_install(
- installer,
- config[:reactNativePath],
- :mac_catalyst_enabled => true,
- )
-
- # Hermes Mac Catalyst framework layout fix (RN 0.83.x)
- require 'fileutils'
-
- hermes_fw = File.join(__dir__,
- 'Pods/hermes-engine/destroot/Library/Frameworks/universal/hermesvm.xcframework',
- 'ios-arm64_x86_64-maccatalyst/hermesvm.framework'
- )
-
- if File.directory?(hermes_fw)
- Dir.chdir(hermes_fw) do
- FileUtils.mkdir_p('Versions/A')
- File.symlink('A', 'Versions/Current') unless File.exist?('Versions/Current')
-
- if File.exist?('hermesvm') && !File.symlink?('hermesvm')
- FileUtils.mkdir_p('Versions/Current')
- FileUtils.mv('hermesvm', 'Versions/Current/hermesvm')
- File.symlink('Versions/Current/hermesvm', 'hermesvm')
- end
-
- FileUtils.mkdir_p('Versions/Current/Resources')
- if File.exist?('Resources') && !File.symlink?('Resources')
- FileUtils.rm_rf('Resources')
- end
- File.symlink('Versions/Current/Resources', 'Resources') unless File.exist?('Resources')
- end
- end
- # ⬆️ End of Hermes fix ⬆️
- end
-end
-```
-
-## Complete Example
-
-Here's a complete Podfile configured for Mac Catalyst (includes Hermes fix for RN 0.83.x — remove the Hermes section if you're on a different version):
-
-```ruby
-ENV['RCT_NEW_ARCH_ENABLED'] = '1'
-ENV['RCT_BUILD_FROM_SOURCE'] = '1'
-
-require Pod::Executable.execute_command('node', ['-p',
- 'require.resolve(
- "react-native/scripts/react_native_pods.rb",
- {paths: [process.argv[1]]},
- )', __dir__]).strip
-
-platform :ios, min_ios_version_supported
-prepare_react_native_project!
-
-target 'YourApp' do
- config = use_native_modules!
- use_frameworks! :linkage => :static
-
- use_react_native!(
- :path => config[:reactNativePath],
- :hermes_enabled => true,
- :app_path => "#{Pod::Config.instance.installation_root}/..",
- :privacy_file_aggregation_enabled => true
- )
-
- post_install do |installer|
- react_native_post_install(
- installer,
- config[:reactNativePath],
- :mac_catalyst_enabled => true,
- )
-
- # ⬇️ Hermes fix for RN 0.83.x only - remove if using different version ⬇️
- require 'fileutils'
-
- hermes_fw = File.join(__dir__,
- 'Pods/hermes-engine/destroot/Library/Frameworks/universal/hermesvm.xcframework',
- 'ios-arm64_x86_64-maccatalyst/hermesvm.framework'
- )
-
- if File.directory?(hermes_fw)
- Dir.chdir(hermes_fw) do
- FileUtils.mkdir_p('Versions/A')
- File.symlink('A', 'Versions/Current') unless File.exist?('Versions/Current')
-
- if File.exist?('hermesvm') && !File.symlink?('hermesvm')
- FileUtils.mkdir_p('Versions/Current')
- FileUtils.mv('hermesvm', 'Versions/Current/hermesvm')
- File.symlink('Versions/Current/hermesvm', 'hermesvm')
- end
-
- FileUtils.mkdir_p('Versions/Current/Resources')
- if File.exist?('Resources') && !File.symlink?('Resources')
- FileUtils.rm_rf('Resources')
- end
- File.symlink('Versions/Current/Resources', 'Resources') unless File.exist?('Resources')
- end
- end
- # ⬆️ End of Hermes fix ⬆️
- end
-end
-```
-
-## Building for Mac Catalyst
-
-After updating your Podfile:
-
-1. Run `pod install` to regenerate the Pods project
-2. Open your `.xcworkspace` in Xcode
-3. Select your target and go to **General** → **Deployment Info**
-4. Check **Mac (Designed for iPad)** or **Mac Catalyst** depending on your Xcode version
-5. Build and run targeting "My Mac"
diff --git a/packages/audiodocs/static/raw/other/testing.md b/packages/audiodocs/static/raw/other/testing.md
deleted file mode 100644
index d343f8826..000000000
--- a/packages/audiodocs/static/raw/other/testing.md
+++ /dev/null
@@ -1,361 +0,0 @@
-# Testing
-
-React Native Audio API provides a comprehensive mock implementation to help you test your audio-related code without requiring actual audio hardware or platform-specific implementations.
-
-## Mock Implementation
-
-The mock implementation provides the same API surface as the real library but with no-op or simplified implementations that are perfect for unit testing.
-
-### Importing Mocks
-
-```typescript
-import * as MockAudioAPI from 'react-native-audio-api/mock';
-
-// Or import specific components
-import { AudioContext, AudioRecorder } from 'react-native-audio-api/mock';
-```
-
-Alternatively, you can redirect the real module to the mock globally in your Jest setup:
-
-```typescript
-// In your test setup file
-jest.mock('react-native-audio-api', () =>
- require('react-native-audio-api/mock')
-);
-
-// Then in your tests
-import { AudioContext, AudioRecorder } from 'react-native-audio-api';
-```
-
-## Basic Usage
-
-### Audio Context Testing
-
-```typescript
-import { AudioContext } from 'react-native-audio-api/mock';
-
-describe('Audio Graph Tests', () => {
- it('should create and connect audio nodes', () => {
- const context = new AudioContext();
-
- // Create nodes
- const oscillator = context.createOscillator();
- const gainNode = context.createGain();
-
- // Configure properties
- oscillator.frequency.value = 440; // A4 note
- gainNode.gain.value = 0.5; // 50% volume
-
- // Connect the audio graph
- oscillator.connect(gainNode);
- gainNode.connect(context.destination);
-
- // Test the configuration
- expect(oscillator.frequency.value).toBe(440);
- expect(gainNode.gain.value).toBe(0.5);
- });
-
- it('should support context state management', async () => {
- const context = new AudioContext();
- expect(context.state).toBe('running');
-
- await context.suspend();
- expect(context.state).toBe('suspended');
-
- await context.resume();
- expect(context.state).toBe('running');
- });
-});
-```
-
-### Audio Recording Testing
-
-```typescript
-import { AudioContext, AudioRecorder, FileFormat, FileDirectory } from 'react-native-audio-api/mock';
-
-describe('Audio Recording Tests', () => {
- it('should configure and control recording', () => {
- const context = new AudioContext();
- const recorder = new AudioRecorder();
-
- // Configure file output
- const result = recorder.enableFileOutput({
- format: FileFormat.M4A,
- channelCount: 2,
- directory: FileDirectory.Document,
- });
-
- expect(result.status).toBe('success');
-
- // Set up recording chain
- const oscillator = context.createOscillator();
- const recorderAdapter = context.createRecorderAdapter();
-
- oscillator.connect(recorderAdapter);
- recorder.connect(recorderAdapter);
-
- // Test recording workflow
- const startResult = recorder.start();
- expect(startResult.status).toBe('success');
- expect(recorder.isRecording()).toBe(true);
-
- const stopResult = recorder.stop();
- expect(stopResult.status).toBe('success');
- expect(recorder.isRecording()).toBe(false);
- });
-});
-```
-
-### Offline Audio Processing
-
-```typescript
-import { OfflineAudioContext } from 'react-native-audio-api/mock';
-
-describe('Offline Processing Tests', () => {
- it('should render offline audio', async () => {
- const offlineContext = new OfflineAudioContext({
- numberOfChannels: 2,
- length: 44100, // 1 second at 44.1kHz
- sampleRate: 44100,
- });
-
- // Create a simple tone
- const oscillator = offlineContext.createOscillator();
- oscillator.frequency.value = 440;
- oscillator.connect(offlineContext.destination);
-
- // Render the audio
- const renderedBuffer = await offlineContext.startRendering();
-
- expect(renderedBuffer.numberOfChannels).toBe(2);
- expect(renderedBuffer.length).toBe(44100);
- expect(renderedBuffer.sampleRate).toBe(44100);
- });
-});
-```
-
-## Advanced Testing Scenarios
-
-### Custom Worklet Testing
-
-```typescript
-import { AudioContext, WorkletProcessingNode } from 'react-native-audio-api/mock';
-
-describe('Worklet Tests', () => {
- it('should create custom audio processing', () => {
- const context = new AudioContext();
-
- const processingCallback = jest.fn((inputData, outputData, framesToProcess) => {
- // Mock audio processing logic
- for (let channel = 0; channel < outputData.length; channel++) {
- for (let i = 0; i < framesToProcess; i++) {
- outputData[channel][i] = inputData[channel][i] * 0.5; // Simple gain
- }
- }
- });
-
- const workletNode = new WorkletProcessingNode(
- context,
- 'AudioRuntime',
- processingCallback
- );
-
- expect(workletNode.context).toBe(context);
- });
-});
-```
-
-### Audio Streaming Testing
-
-```typescript
-import { AudioContext } from 'react-native-audio-api/mock';
-
-describe('Streaming Tests', () => {
- it('should handle audio streaming', () => {
- const context = new AudioContext();
-
- const streamer = context.createStreamer({
- streamPath: 'https://example.com/audio-stream',
- });
-
- expect(streamer.streamPath).toBe('https://example.com/audio-stream');
-
- // Test streaming controls
- streamer.start();
- streamer.pause();
- streamer.resume();
- streamer.stop();
- });
-});
-```
-
-### Error Handling Testing
-
-```typescript
-import {
- AudioRecorder,
- NotSupportedError,
- InvalidStateError
-} from 'react-native-audio-api/mock';
-
-describe('Error Handling Tests', () => {
- it('should handle various error conditions', () => {
- // Test error creation
- expect(() => {
- throw new NotSupportedError('Feature not supported');
- }).toThrow('Feature not supported');
-
- // Test recorder connection errors
- const recorder = new AudioRecorder();
- const context = new AudioContext();
- const adapter = context.createRecorderAdapter();
-
- // First connection should work
- recorder.connect(adapter);
-
- // Second connection should throw
- expect(() => recorder.connect(adapter)).toThrow();
- });
-});
-```
-
-## Mock Configuration
-
-### System Volume Testing
-
-```typescript
-import { useSystemVolume, setMockSystemVolume, AudioManager } from 'react-native-audio-api/mock';
-
-describe('System Integration Tests', () => {
- it('should mock system audio management', () => {
- // Test system sample rate
- const preferredRate = AudioManager.getDevicePreferredSampleRate();
- expect(preferredRate).toBe(44100);
-
- // Test volume management
- setMockSystemVolume(0.7);
- const currentVolume = useSystemVolume();
- expect(currentVolume).toBe(0.7);
-
- // Test event listeners
- const volumeCallback = jest.fn();
- const listener = AudioManager.addSystemEventListener(
- 'volumeChange',
- volumeCallback
- );
-
- expect(listener.remove).toBeDefined();
- listener.remove();
- });
-});
-```
-
-### Audio Callback Testing
-
-```typescript
-import { AudioRecorder } from 'react-native-audio-api/mock';
-
-describe('Callback Tests', () => {
- it('should handle audio data callbacks', () => {
- const recorder = new AudioRecorder();
- const audioDataCallback = jest.fn();
-
- const result = recorder.onAudioReady(
- {
- sampleRate: 44100,
- bufferLength: 1024,
- channelCount: 2,
- },
- audioDataCallback
- );
-
- expect(result.status).toBe('success');
-
- // Test callback cleanup
- recorder.clearOnAudioReady();
- expect(() => recorder.clearOnAudioReady()).not.toThrow();
- });
-});
-```
-
-## Type Safety
-
-The mock implementation provides full TypeScript support with the same types as the real library:
-
-```typescript
-import type { AudioContext, AudioParam, GainNode } from 'react-native-audio-api/mock';
-
-// All types are available and identical to the real implementation
-function processAudioNode(node: GainNode): void {
- node.gain.value = 0.5;
-}
-```
-
-## Testing Best Practices
-
-1. **Isolate Audio Logic**: Test audio processing logic separately from UI components
-2. **Mock External Dependencies**: Use mocks for file system, network, and platform-specific operations
-3. **Test Error Scenarios**: Verify your code handles various error conditions gracefully
-4. **Validate Audio Graph Structure**: Ensure nodes are connected correctly
-5. **Test Async Operations**: Use proper async/await patterns for operations like rendering
-
-## Example Test Suite
-
-```typescript
-import {
- AudioContext,
- AudioRecorder,
- FileFormat,
- decodeAudioData
-} from 'react-native-audio-api/mock';
-
-describe('Audio Application Tests', () => {
- let context: AudioContext;
-
- beforeEach(() => {
- context = new AudioContext();
- });
-
- afterEach(() => {
- // Clean up if needed
- context.close();
- });
-
- describe('Audio Graph', () => {
- it('should create complex audio processing chain', () => {
- const oscillator = context.createOscillator();
- const filter = context.createBiquadFilter();
- const delay = context.createDelay();
- const gainNode = context.createGain();
-
- // Configure effects chain
- filter.type = 'lowpass';
- filter.frequency.value = 2000;
- delay.delayTime.value = 0.3;
- gainNode.gain.value = 0.8;
-
- // Connect the chain
- oscillator.connect(filter);
- filter.connect(delay);
- delay.connect(gainNode);
- gainNode.connect(context.destination);
-
- // Verify configuration
- expect(filter.type).toBe('lowpass');
- expect(delay.delayTime.value).toBe(0.3);
- expect(gainNode.gain.value).toBe(0.8);
- });
- });
-
- describe('File Operations', () => {
- it('should handle audio file processing', async () => {
- const mockAudioData = new ArrayBuffer(1024);
-
- // Test audio decoding
- const decodedBuffer = await decodeAudioData(mockAudioData);
- expect(decodedBuffer.numberOfChannels).toBe(2);
- expect(decodedBuffer.sampleRate).toBe(44100);
- });
- });
-});
-```
-
-The mock implementation provides a complete testing environment that allows you to thoroughly test your audio applications without requiring real audio hardware or complex setup.
diff --git a/packages/audiodocs/static/raw/other/web-audio-api-coverage.md b/packages/audiodocs/static/raw/other/web-audio-api-coverage.md
deleted file mode 100644
index 24d2c79be..000000000
--- a/packages/audiodocs/static/raw/other/web-audio-api-coverage.md
+++ /dev/null
@@ -1,53 +0,0 @@
-# [Web Audio API coverage](https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API)
-
-### Coverage table
-
-| Interface | Status | Remarks |
-| :-------: | :----: | :------ |
-| AnalyserNode | ✅ |
-| AudioBuffer | ✅ |
-| AudioBufferSourceNode | ✅ |
-| AudioDestinationNode | ✅ |
-| AudioNode | ✅ |
-| AudioParam | ✅ |
-| AudioScheduledSourceNode | ✅ |
-| BiquadFilterNode | ✅ |
-| ConstantSourceNode | ✅ |
-| ConvolverNode | ✅ |
-| DelayNode | ✅ |
-| GainNode | ✅ |
-| IIRFilterNode | ✅ |
-| OfflineAudioContext | ✅ |
-| OscillatorNode | ✅ |
-| PeriodicWave | ✅ |
-| StereoPannerNode | ✅ |
-| WaveShaperNode | ✅ |
-| AudioContext | 🚧 | Available props and methods: `close`, `suspend`, `resume` |
-| BaseAudioContext | 🚧 | Available props and methods: `currentTime`, `destination`, `sampleRate`, `state`, `decodeAudioData`, all create methods for available or partially implemented nodes |
-| AudioListener | ❌ |
-| AudioSinkInfo | ❌ |
-| AudioWorklet | ❌ |
-| AudioWorkletGlobalScope | ❌ |
-| AudioWorkletNode | ❌ |
-| AudioWorkletProcessor | ❌ |
-| ChannelMergerNode | ❌ |
-| ChannelSplitterNode | ❌ |
-| DynamicsCompressorNode | ❌ |
-| MediaElementAudioSourceNode | ❌ |
-| MediaStreamAudioDestinationNode | ❌ |
-| MediaStreamAudioSourceNode | ❌ |
-| PannerNode | ❌ |
-
-### Description
-
-✅ - Completed
-
-🚧 - Partially implemented
-
-❌ - Not yet available
-
-> **Info**
->
-> If you have a use case for any of the not yet available interfaces,
-> contact us or [create an issue](https://github.com/software-mansion/react-native-audio-api).
-> We will do our best to ship it as soon as possible!
diff --git a/packages/audiodocs/static/raw/react/select-input.md b/packages/audiodocs/static/raw/react/select-input.md
deleted file mode 100644
index de762a5eb..000000000
--- a/packages/audiodocs/static/raw/react/select-input.md
+++ /dev/null
@@ -1,62 +0,0 @@
-# useAudioInput
-
-React hook for managing audio input device selection and monitoring the available audio input devices. The current input becomes available after the first activation of the audio session. Not all connected devices may be listed as available inputs; some may be filtered out as incompatible with the current session configuration.
-
-The `useAudioInput` hook provides an interface for:
-
-* Retrieving all available audio input devices
-* Getting the currently active input device
-* Switching between different input devices
-
- **Platform support:** Input device selection is currently only supported on iOS. On Android, `useAudioInput` is implemented as a no-op: the hook will not list or switch input devices, and any selection calls will effectively be ignored.
-
-## Usage
-
-```tsx
-import React from 'react';
-import { View, Text, Button } from 'react-native';
-import { useAudioInput } from 'react-native-audio-api';
-
-function AudioInputSelector() {
-  const { availableInputs, currentInput, onSelectInput } = useAudioInput();
-
-  return (
-    <View>
-      <Text>Current Input: {currentInput?.name || 'None'}</Text>
-      {availableInputs.map((input) => (
-        <Button
-          key={input.id}
-          title={input.name}
-          onPress={() => onSelectInput(input)}
-        />
-      ))}
-    </View>
-  );
-}
-```
-
-## Return Value
-
-The hook returns an object with the following properties:
-
-### `availableInputs: AudioDeviceInfo[]`
-
-An array of all available audio input devices. Each device contains:
-
-* `id: string` - Unique device identifier
-* `name: string` - Human-readable device name
-* `category: string` - Device category (e.g., "Built-In Microphone", "Bluetooth")
-
-### `currentInput: AudioDeviceInfo | null`
-
-The currently active audio input device, or `null` if no device is selected.
-
-### `onSelectInput: (device: AudioDeviceInfo) => Promise<void>`
-
-Function to programmatically select an audio input device. Takes an `AudioDeviceInfo` object and attempts to set it as the active input device.
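-
-Because `AudioDeviceInfo` is a plain data shape, selection logic can be written as a pure function and unit-tested without any audio hardware. A sketch (the device list and the `pickPreferredInput` helper are hypothetical):
-
-```tsx
-interface AudioDeviceInfo {
-  id: string;
-  name: string;
-  category: string;
-}
-
-// Hypothetical list, shaped like `availableInputs`
-const inputs: AudioDeviceInfo[] = [
-  { id: '1', name: 'iPhone Microphone', category: 'Built-In Microphone' },
-  { id: '2', name: 'AirPods Pro', category: 'Bluetooth' },
-];
-
-// Prefer a Bluetooth input when present, otherwise fall back to the first device
-function pickPreferredInput(devices: AudioDeviceInfo[]): AudioDeviceInfo | null {
-  return devices.find((d) => d.category === 'Bluetooth') ?? devices[0] ?? null;
-}
-```
-
-Inside a component, the chosen device would then be passed to `onSelectInput`.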
-
-## Related
-
-* [AudioManager](/docs/system/audio-manager) - For managing audio sessions and permissions
-* [AudioRecorder](/docs/inputs/audio-recorder) - For capturing audio from the selected input device
diff --git a/packages/audiodocs/static/raw/sources/audio-buffer-base-source-node.md b/packages/audiodocs/static/raw/sources/audio-buffer-base-source-node.md
deleted file mode 100644
index 2d5e48734..000000000
--- a/packages/audiodocs/static/raw/sources/audio-buffer-base-source-node.md
+++ /dev/null
@@ -1,90 +0,0 @@
-# AudioBufferBaseSourceNode
-
-The `AudioBufferBaseSourceNode` interface is an [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node) which aggregates the behavior of nodes that require an [`AudioBuffer`](/docs/sources/audio-buffer).
-
-Child classes:
-
-* [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node)
-* [`AudioBufferQueueSourceNode`](/docs/sources/audio-buffer-queue-source-node)
-
-## Properties
-
-It inherits all properties from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :-------- |
-| `detune` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing detuning of oscillation in cents. |
-| `playbackRate` | [`AudioParam`](/docs/core/audio-param) | [`k-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` defining speed factor at which the audio will be played. |
-
-## Methods
-
-It inherits all methods from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#methods).
-
-### `getLatency`
-
-Returns the playback latency introduced by the pitch correction algorithm, in seconds.
-When scheduling precise playback times, start input samples this many seconds earlier to compensate for processing delay.
-Typically around `0.06s` when pitch correction is enabled, and `0` otherwise.
-
-#### Returns `number`.
-
-Example usage
-
-```tsx
-const source = audioContext.createBufferSource({ pitchCorrection: true });
-source.buffer = buffer;
-source.connect(audioContext.destination);
-
-const latency = source.getLatency();
-
-// Schedule playback slightly earlier to compensate for latency
-const startTime = audioContext.currentTime + 1.0; // play in 1 second
-source.start(startTime - latency);
-```
-
-## Events
-
-### `onPositionChanged`
-
-Allows you to set (or remove) a callback that is fired after a given portion of the audio has been processed.
-The frequency is defined by `onPositionChangedInterval`. By tracking the playback position with this callback you can implement pause functionality.
-You can remove the callback by passing `null`.
-
-### `onPositionChangedInterval`
-
-Allows you to set the interval, in milliseconds, at which the `onPositionChanged` event fires. A value of `x` yields a callback frequency of roughly `1000/x` Hz.
-
-```ts
-import { AudioContext } from 'react-native-audio-api';
-
-function App() {
-  const ctx = new AudioContext();
-  const sourceNode = ctx.createBufferSource();
-  sourceNode.buffer = null; // set your buffer here
-  let offset = 0;
-
-  // set the callback to track the playback position
-  sourceNode.onPositionChanged = (event) => {
-    offset = event.value;
-  };
-
-  sourceNode.onPositionChangedInterval = 100; // fire roughly every 100 ms (~10 Hz)
-
-  sourceNode.start();
-}
-```
-
-## Remarks
-
-#### `detune`
-
-* Default value is 0.0.
-* Nominal range is -∞ to ∞.
-* For example, a value of 100 detunes the source up by one semitone, whereas a value of -1200 detunes it down by one octave.
-* When the node is created with `createBufferSource({ pitchCorrection: true })`, the value is clamped to the range -1200 to 1200.
-
-#### `playbackRate`
-
-* Default value is 1.0.
-* Nominal range is -∞ to ∞.
-* For example, a value of 1.0 plays the audio at normal speed, whereas a value of 2.0 plays it twice as fast.
-* When the node is created with `createBufferSource({ pitchCorrection: true })`, the value is clamped to the range 0 to 3 and a pitch correction algorithm is applied.
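-
-Both remarks use the standard musical-cents convention: 100 cents is one semitone, 1200 cents is one octave, and the resulting frequency ratio is `2 ** (cents / 1200)`. A quick illustration:
-
-```tsx
-// Frequency ratio produced by a given detune value, in cents
-function detuneRatio(cents: number): number {
-  return Math.pow(2, cents / 1200);
-}
-
-const octaveUp = detuneRatio(1200); // 2: double the frequency
-const octaveDown = detuneRatio(-1200); // 0.5: half the frequency
-const semitoneUp = detuneRatio(100); // ~1.0595: one semitone up
-```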
diff --git a/packages/audiodocs/static/raw/sources/audio-buffer-queue-source-node.md b/packages/audiodocs/static/raw/sources/audio-buffer-queue-source-node.md
deleted file mode 100644
index 7e3710409..000000000
--- a/packages/audiodocs/static/raw/sources/audio-buffer-queue-source-node.md
+++ /dev/null
@@ -1,130 +0,0 @@
-
-import { Optional, Experimental, Overridden, MobileOnly } from '@site/src/components/Badges';
-
-# AudioBufferQueueSourceNode
-
-The `AudioBufferQueueSourceNode` is an [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node) which represents a player composed of many short buffers played back in sequence.
-
-## Constructor
-
-[`BaseAudioContext.createBufferQueueSource(options: AudioBufferBaseSourceNodeOptions)`](/docs/core/base-audio-context#createbufferqueuesource)
-
-```jsx
-interface AudioBufferBaseSourceNodeOptions {
- pitchCorrection: boolean; // specifies whether the pitch correction algorithm should be available
-}
-```
-
-:::caution
-The pitch correction algorithm introduces processing latency.
-As a result, when scheduling precise playback times, you should start input samples slightly ahead of the intended playback time.
-For more details, see [getLatency()](/docs/sources/audio-buffer-base-source-node#getlatency).
-:::
-
-## Example
-
-```tsx
-import React, { useRef } from 'react';
-import {
- AudioContext,
- AudioBufferQueueSourceNode,
-} from 'react-native-audio-api';
-
-function App() {
- const audioContextRef = useRef<AudioContext | null>(null);
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
- const audioBufferQueue = audioContextRef.current.createBufferQueueSource();
- const buffer1 = ...; // Load your audio buffer here
- const buffer2 = ...; // Load another audio buffer if needed
- audioBufferQueue.enqueueBuffer(buffer1);
- audioBufferQueue.enqueueBuffer(buffer2);
- audioBufferQueue.connect(audioContextRef.current.destination);
- audioBufferQueue.start(audioContextRef.current.currentTime);
-}
-```
-
-## Properties
-
-`AudioBufferQueueSourceNode` does not define any additional properties.
-It inherits all properties from [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node#properties).
-
-## Methods
-
-It inherits all methods from [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node#methods).
-
-### `enqueueBuffer`
-
-Adds another buffer to queue. Returns `bufferId` that can be used to identify the buffer in [`onBufferEnded`](audio-buffer-queue-source-node#onbufferended) event.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `buffer` | [`AudioBuffer`](/docs/sources/audio-buffer) | Buffer with next data. |
-
-#### Returns `string`.
-
-### `dequeueBuffer`
-
-Removes a buffer from the queue. Note that the [`onBufferEnded`](audio-buffer-queue-source-node#onbufferended) event will not be fired for buffers that were removed.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `bufferId` | `string` | ID of the buffer to remove from the queue. It should be a valid ID provided by the `enqueueBuffer` method. |
-
-#### Returns `undefined`.
-
-### `clearBuffers`
-
-Removes all buffers from the queue. Note that the [`onBufferEnded`](audio-buffer-queue-source-node#onbufferended) event will not be fired for buffers that were removed.
-
-#### Returns `undefined`.
-
-### `start` {#start}
-
-Schedules the `AudioBufferQueueSourceNode` to start playback of enqueued [`AudioBuffers`](/docs/sources/audio-buffer), or starts to play immediately.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `when` | `number` | The time, in seconds, at which playback is scheduled to start. If `when` is less than [`AudioContext.currentTime`](/docs/core/base-audio-context#properties) or set to 0, the node starts playing immediately. Default: `0`. |
-| `offset` | `number` | The position, in seconds, within the first enqueued audio buffer where playback begins. The default value is `0`, which starts playback from the beginning of the first enqueued buffer. If the offset exceeds the buffer’s [`duration`](/docs/sources/audio-buffer#properties), it’s automatically clamped to the valid range. |
-
-
-### `pause`
-
-Stops audio immediately. Unlike [`stop()`](/docs/sources/audio-scheduled-source-node#stop), which fully stops playback and clears the queued buffers,
-`pause()` halts the audio while keeping the current playback position, allowing you to resume from the same point later.
-
-#### Returns `undefined`.
-
-## Events
-
-### `onBufferEnded`
-
-Sets (or removes) a callback that will be fired when a specific buffer has ended, with a payload of type [`OnBufferEndEventType`](audio-buffer-queue-source-node#onbufferendeventtype).
-
-You can remove the callback by passing `null`.
-
-```ts
-audioBufferQueueSourceNode.onBufferEnded = (event) => { // setting callback
-  console.log(`buffer with id ${event.bufferId} ended`);
-
- if (event.isLastBufferInQueue) {
- console.log('That was the last buffer in the queue');
- }
-};
-```
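A common pattern is to use this event to keep the queue topped up with fresh buffers. Below is a minimal, library-free sketch of the bookkeeping; the event shape matches `OnBufferEndEventType`, while `shouldEnqueueMore` is a hypothetical helper, not part of the library:

```typescript
interface BufferEndEvent {
  bufferId: string;            // ID returned earlier by enqueueBuffer
  isLastBufferInQueue: boolean;
}

// Track IDs of buffers still pending and decide when to enqueue more data.
function shouldEnqueueMore(pending: Set<string>, event: BufferEndEvent): boolean {
  pending.delete(event.bufferId); // this buffer has finished playing
  // Refill when the queue has just drained (or is about to).
  return event.isLastBufferInQueue || pending.size === 0;
}

const pending = new Set(['a', 'b']);
const refill = shouldEnqueueMore(pending, { bufferId: 'a', isLastBufferInQueue: false });
```

In an actual callback, a `true` result would trigger loading and `enqueueBuffer`-ing the next chunk of audio.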
-
-## Remarks
-
-### `OnBufferEndEventType`
-
-
-Type definitions
-```typescript
-interface OnBufferEndEventType {
- bufferId: string; // the ID of the buffer that has ended
- isLastBufferInQueue: boolean; // a boolean indicating whether it was the last buffer in the queue
-}
-```
-
diff --git a/packages/audiodocs/static/raw/sources/audio-buffer-source-node.md b/packages/audiodocs/static/raw/sources/audio-buffer-source-node.md
deleted file mode 100644
index b2850592b..000000000
--- a/packages/audiodocs/static/raw/sources/audio-buffer-source-node.md
+++ /dev/null
@@ -1,141 +0,0 @@
-
-import AudioNodePropsTable from "@site/src/components/AudioNodePropsTable"
-import { Optional, Overridden } from '@site/src/components/Badges';
-import AudioApiExample from '@site/src/components/AudioApiExample'
-import InteractivePlayground from '@site/src/components/InteractivePlayground';
-import { useAudioBufferSourcePlayground } from '@site/src/components/InteractivePlayground/AudioBufferSourceExample/useAudioBufferSourcePlayground';
-import { useGainAdsrPlayground } from '@site/src/components/InteractivePlayground/GainAdsrExample/useGainAdsrPlayground';
-
-
-# AudioBufferSourceNode
-
-The `AudioBufferSourceNode` is an [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node) which represents an audio source backed by in-memory audio data stored in an
-[`AudioBuffer`](/docs/sources/audio-buffer). You can use it for audio playback, including standard pause and resume functionality.
-
-An `AudioBufferSourceNode` can be started only once, so if you want to play the same sound again you have to create a new one.
-However, this node is very inexpensive to create, and, crucially, you can reuse the same [`AudioBuffer`](/docs/sources/audio-buffer).
-
-
-AudioBufferSourceNode interactive playground
-
-
-
-
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-
-
-## Constructor
-
-[`BaseAudioContext.createBufferSource(options: AudioBufferBaseSourceNodeOptions)`](/docs/core/base-audio-context#createbuffersource)
-
-```jsx
-interface AudioBufferBaseSourceNodeOptions {
- pitchCorrection: boolean // specifies whether the pitch correction algorithm should be available
-}
-```
-
-:::caution
-The pitch correction algorithm introduces processing latency.
-As a result, when scheduling precise playback times, you should start input samples slightly ahead of the intended playback time.
-For more details, see [getLatency()](/docs/sources/audio-buffer-base-source-node#getlatency).
-
-If you plan to play multiple buffers one after another, consider using [AudioBufferQueueSourceNode](/docs/sources/audio-buffer-queue-source-node)
-:::
-
-## Example
-
-```tsx
-import React, { useRef } from 'react';
-import {
- AudioContext,
- AudioBufferSourceNode,
-} from 'react-native-audio-api';
-
-function App() {
- const audioContextRef = useRef(null);
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
- const audioBufferSource = audioContextRef.current.createBufferSource();
- const buffer = ...; // Load your audio buffer here
- audioBufferSource.buffer = buffer;
- audioBufferSource.connect(audioContextRef.current.destination);
- audioBufferSource.start(audioContextRef.current.currentTime);
-}
-```
-
-## Properties
-
-It inherits all properties from [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :-------- |
-| `buffer` | [`AudioBuffer`](/docs/sources/audio-buffer) | Associated `AudioBuffer`. |
-| `loop` | `boolean` | Boolean indicating whether the audio data must be replayed when the end of the associated `AudioBuffer` is reached. |
-| `loopSkip` | `boolean` | Boolean indicating whether, upon setting `loopStart`, playback should skip immediately to the loop start. |
-| `loopStart` | `number` | Float value indicating the time, in seconds, at which playback of the audio must begin, if loop is true. |
-| `loopEnd` | `number` | Float value indicating the time, in seconds, at which playback of the audio must end and loop back to `loopStart`, if loop is true. |
-
-## Methods
-
-It inherits all methods from [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node#methods).
-
-### `start` {#start}
-
-Schedules the `AudioBufferSourceNode` to start playback of audio data contained in the associated [`AudioBuffer`](/docs/sources/audio-buffer), or starts to play immediately.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `when` | `number` | The time, in seconds, at which playback is scheduled to start. If `when` is less than [`AudioContext.currentTime`](/docs/core/base-audio-context#properties) or set to 0, the node starts playing immediately. Default: `0`. |
-| `offset` | `number` | The position, in seconds, within the audio buffer where playback begins. The default value is `0`, which starts playback from the beginning of the buffer. If the offset exceeds the buffer’s [`duration`](/docs/sources/audio-buffer#properties) (or the defined [`loopEnd`](/docs/sources/audio-buffer-source-node#properties) value), it’s automatically clamped to the valid range. Offsets are calculated using the buffer’s natural sample rate rather than the current playback rate — so even if the sound is played at double speed, halfway through a 10-second buffer is still 5 seconds. |
-| `duration` | `number` | The playback duration, in seconds. If not provided, playback continues until the sound ends naturally or is manually stopped with [`stop() method`](/docs/sources/audio-scheduled-source-node#stop). Equivalent to calling `start(when, offset)` followed by `stop(when + duration)`. |
-
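The offset-vs-playback-rate remark above can be made concrete: offsets map to buffer frames using the buffer's natural sample rate, regardless of how fast the buffer is later played. A small sketch of that mapping (`offsetToFrame` is a hypothetical helper, not a library API):

```typescript
// Map a start offset in seconds to a frame index in the buffer.
// The mapping uses the buffer's natural sample rate, so playbackRate
// does not change which sample a given offset refers to.
function offsetToFrame(
  offsetSeconds: number,
  sampleRate: number,
  lengthInFrames: number
): number {
  const frame = Math.floor(offsetSeconds * sampleRate);
  return Math.min(Math.max(frame, 0), lengthInFrames); // clamp to valid range
}

// Halfway through a 10-second, 44.1 kHz buffer is frame 220500,
// whether the sound is played at normal or double speed.
const frame = offsetToFrame(5, 44100, 441000);
```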
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `when` is a negative number. |
-| `RangeError` | `offset` is a negative number. |
-| `RangeError` | `duration` is a negative number. |
-| `InvalidStateError` | The node has already been started once. |
-
-#### Returns `undefined`.
-
-
-## Events
-
-### `onLoopEnded`
-
-Sets (or removes) a callback that will be fired when the buffer source node reaches the end of the loop and loops back to `loopStart`.
-You can remove the callback either by passing `null` or by calling `remove` on the returned subscription.
-
-```ts
-const subscription = audioBufferSourceNode.onLoopEnded = () => { // setting the callback
- console.log("loop ended");
-};
-
-subscription.remove(); // removal of the subscription
-```
-
-## Remarks
-
-#### `buffer`
-- If it is null, the node outputs a single channel of silence (all samples are equal to 0).
-
-#### `loop`
-- Default value is false.
-
-#### `loopStart`
-- Default value is 0.
-
-#### `loopEnd`
-- Default value is `buffer.duration`.
-
-#### `playbackRate`
-- Default value is 1.0.
-- Nominal range is -∞ to ∞.
-- For example, a value of 1.0 plays audio at normal speed, whereas a value of 2.0 plays it twice as fast.
-- When the node is created with pitch correction enabled (for example `createBufferSource({ pitchCorrection: true })`), the value is clamped to the range 0 to 3 and the pitch correction algorithm is used.
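To illustrate how `loopStart` and `loopEnd` shape the playback position, here is a simplified model of the wrap-around arithmetic (a sketch only, not the library's actual implementation):

```typescript
// Compute the effective position within a looping buffer.
// Before reaching loopEnd the position advances normally; afterwards it
// wraps back into the [loopStart, loopEnd) region.
function loopedPosition(elapsed: number, loopStart: number, loopEnd: number): number {
  if (elapsed < loopEnd) {
    return elapsed;
  }
  const loopLength = loopEnd - loopStart;
  return loopStart + ((elapsed - loopStart) % loopLength);
}

// With loopStart = 2 s and loopEnd = 4 s, 5 s of elapsed time lands at 3 s.
const pos = loopedPosition(5, 2, 4);
```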
diff --git a/packages/audiodocs/static/raw/sources/audio-buffer.md b/packages/audiodocs/static/raw/sources/audio-buffer.md
deleted file mode 100644
index 1f1d996a9..000000000
--- a/packages/audiodocs/static/raw/sources/audio-buffer.md
+++ /dev/null
@@ -1,96 +0,0 @@
-# AudioBuffer
-
-The `AudioBuffer` interface represents a short audio asset, commonly shorter than one minute.
-It can consist of one or more channels, each containing 32-bit floating-point linear [PCM](https://en.wikipedia.org/wiki/Pulse-code_modulation) samples with a nominal range of \[−1, 1] (but not limited to that range).
-The buffer also has a sample rate, which is the number of frames played back in one second, and a length.
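These quantities are related in a simple way: the duration in seconds is the length in frames divided by the sample rate. A quick sketch:

```typescript
// duration (s) = length (frames) / sampleRate (frames per second)
function bufferDuration(lengthInFrames: number, sampleRate: number): number {
  return lengthInFrames / sampleRate;
}

// A 441000-frame buffer at 44.1 kHz lasts 10 seconds.
const seconds = bufferDuration(441000, 44100);
```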
-
-
-
-It can be created from an audio file using [`decodeAudioData`](/docs/utils/decoding#decodeaudiodata) or from raw data using the `constructor`.
-Once you have data in an `AudioBuffer`, the audio can be played by passing it to an [`AudioBufferSourceNode`](audio-buffer-source-node).
-
-## Constructor
-
-```tsx
-constructor(options: AudioBufferOptions)
-```
-
-### `AudioBufferOptions`
-
-| Parameter | Type | Default | Description |
-| :---: | :---: | :----: | :---- |
-| `length` | `number` | - | [`Length`](/docs/sources/audio-buffer#properties) of the buffer |
-| `numberOfChannels` | `number` | 1 | Number of [`channels`](/docs/sources/audio-buffer#properties) in buffer |
-| `sampleRate` | `number` | - | [`Sample rate`](/docs/sources/audio-buffer#properties) of the buffer in Hz |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createBuffer(numChannels, length, sampleRate)`](/docs/core/base-audio-context#createbuffer) that creates buffer with default values.
-
-## Decoding
-
-See example implementations in [`BaseAudioContext`](/docs/core/base-audio-context#decodeaudiodata) on how to decode data in various ways.
-
-## Properties
-
-| Name | Type | Description | |
-| :----: | :----: | :-------- | :-: |
-| `sampleRate` | `number` | Float value representing sample rate of the PCM data stored in the buffer. | |
-| `length` | `number` | Integer value representing length of the PCM data stored in the buffer. | |
-| `duration` | `number` | Double value representing duration, in seconds, of the PCM data stored in the buffer. | |
-| `numberOfChannels` | `number` | Integer value representing the number of audio channels of the PCM data stored in the buffer. | |
-
-## Methods
-
-### `getChannelData`
-
-Gets a modifiable array with the PCM data from the given channel.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `channel` | `number` | Index of the `AudioBuffer's` channel, from which data will be returned. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `IndexSizeError` | `channel` specifies a non-existent audio channel. |
-
-#### Returns `Float32Array`.
-
-### `copyFromChannel`
-
-Copies data from the given channel of the `AudioBuffer` to an array.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `destination` | `Float32Array` | The array to which data will be copied. |
-| `channelNumber` | `number` | Index of the `AudioBuffer's` channel, from which data will be copied. |
-| `startInChannel` | `number` | Offset within the channel from which to start copying data. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `IndexSizeError` | `channelNumber` specifies a non-existent audio channel. |
-| `IndexSizeError` | `startInChannel` is greater than the `AudioBuffer` length. |
-
-#### Returns `undefined`.
-
-### `copyToChannel`
-
-Copies data from the given array to the specified channel of the `AudioBuffer`.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `source` | `Float32Array` | The array from which data will be copied. |
-| `channelNumber` | `number` | Index of the `AudioBuffer's` channel to which data will be copied. |
-| `startInChannel` | `number` | Offset within the channel at which to start writing the copied data. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `IndexSizeError` | `channelNumber` specifies a non-existent audio channel. |
-| `IndexSizeError` | `startInChannel` is greater than the `AudioBuffer` length. |
-
-#### Returns `undefined`.
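In terms of plain typed arrays, `copyToChannel` behaves roughly like `Float32Array.prototype.set` with an offset, and `copyFromChannel` like reading a window of the channel. A library-free sketch of these semantics (a simplified model, not the actual implementation):

```typescript
// Model a single channel as a Float32Array and mimic the copy methods.
const channelData = new Float32Array(8); // 8 frames of silence

// copyToChannel(source, channelNumber, startInChannel) ~ set() at an offset
const source = Float32Array.from([0.25, -0.5, 1.0]);
channelData.set(source, 2); // frames 2..4 now hold the source samples

// copyFromChannel(destination, channelNumber, startInChannel) ~ reading a window
const destination = new Float32Array(3);
destination.set(channelData.subarray(2, 2 + destination.length));
```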
diff --git a/packages/audiodocs/static/raw/sources/audio-scheduled-source-node.md b/packages/audiodocs/static/raw/sources/audio-scheduled-source-node.md
deleted file mode 100644
index 3f2b9c9d4..000000000
--- a/packages/audiodocs/static/raw/sources/audio-scheduled-source-node.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# AudioScheduledSourceNode
-
-The `AudioScheduledSourceNode` interface is an [`AudioNode`](/docs/core/audio-node) which serves as a parent interface for several types of audio source nodes.
-It provides the ability to start and stop audio playback.
-
-Child classes:
-
-* [`AudioBufferBaseSourceNode`](/docs/sources/audio-buffer-base-source-node)
-* [`OscillatorNode`](/docs/sources/oscillator-node)
-* [`StreamerNode`](/docs/sources/streamer-node)
-
-## Properties
-
-`AudioScheduledSourceNode` does not define any additional properties.
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-## Methods
-
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-### `start`
-
-Schedules the node to start audio playback at a specified time. If no time is given, it starts immediately.
-You can invoke this method only once during the node's lifetime.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `when` | `number` | The time, in seconds, at which the node will start to play. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `when` is a negative number. |
-| `InvalidStateError` | The node has already been started once. |
-
-#### Returns `undefined`.
-
-### `stop`
-
-Schedules the node to stop audio playback at a specified time. If no time is given, it stops immediately.
-If you invoke this method multiple times on the same node before the designated stop time, the most recent call overwrites the previous one.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `when` | `number` | The time, in seconds, at which the node will stop playing. |
-
-#### Errors:
-
-| Error type | Description |
-| :---: | :---- |
-| `RangeError` | `when` is a negative number. |
-| `InvalidStateError` | The node has not been started yet. |
-
-#### Returns `undefined`.
-
-## Events
-
-### `onEnded`
-
-Sets (or removes) a callback that will be fired when the source node has stopped playing,
-either because it reached a predetermined stop time, the requested duration has elapsed, or the entire buffer has been played.
-You can remove the callback either by passing `null` or by calling `remove` on the returned subscription.
-
-```ts
-const subscription = audioBufferSourceNode.onEnded = () => { // setting the callback
- console.log("audio ended");
-};
-
-subscription.remove(); // removal of the subscription
-```
diff --git a/packages/audiodocs/static/raw/sources/constant-source-node.md b/packages/audiodocs/static/raw/sources/constant-source-node.md
deleted file mode 100644
index 49e49659a..000000000
--- a/packages/audiodocs/static/raw/sources/constant-source-node.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# ConstantSourceNode
-
-The `ConstantSourceNode` is an [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node) which represents an audio source that outputs a single constant value.
-The `offset` parameter controls this value. Although the node is called "constant", its `offset` value can be automated to change over time, which makes it a powerful tool
-for controlling multiple other [`AudioParam`](/docs/core/audio-param) values in an audio graph.
-Just like any `AudioScheduledSourceNode`, it can be started only once.
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: ConstantSourceOptions)
-```
-
-### `ConstantSourceOptions`
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `offset` | `number` | 1 | Initial value for [`offset`](/docs/sources/constant-source-node#properties) |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createConstantSource()`](/docs/core/base-audio-context#createconstantsource) that creates node with default values.
-
-## Example
-
-```tsx
-import React, { useRef } from 'react';
-import { Text } from 'react-native';
-import {
- AudioContext,
- OscillatorNode,
- GainNode,
- ConstantSourceNode
-} from 'react-native-audio-api';
-
-function App() {
- const audioContextRef = useRef(null);
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
- const audioContext = audioContextRef.current;
-
- const oscillator1 = audioContext.createOscillator();
- const oscillator2 = audioContext.createOscillator();
- const gainNode1 = audioContext.createGain();
- const gainNode2 = audioContext.createGain();
- const constantSource = audioContext.createConstantSource();
-
- oscillator1.frequency.value = 440;
- oscillator2.frequency.value = 392;
- constantSource.offset.value = 0.5;
-
- oscillator1.connect(gainNode1);
- gainNode1.connect(audioContext.destination);
-
- oscillator2.connect(gainNode2);
- gainNode2.connect(audioContext.destination);
-
- // We connect the constant source to the gain nodes gain AudioParams
- // to control both of them at the same time
- constantSource.connect(gainNode1.gain);
- constantSource.connect(gainNode2.gain);
-
- oscillator1.start(audioContext.currentTime);
- oscillator2.start(audioContext.currentTime);
- constantSource.start(audioContext.currentTime);
-}
-```
-
-## Properties
-
-It inherits all properties from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#properties).
-
-| Name | Type | Default value | Description |
-| :----: | :----: | :--------: | :------- |
-| `offset` | [`AudioParam`](/docs/core/audio-param) | 1.0 |[`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing the value which the node constantly outputs. |
-
-## Methods
-
-It inherits all methods from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#methods).
diff --git a/packages/audiodocs/static/raw/sources/oscillator-node.md b/packages/audiodocs/static/raw/sources/oscillator-node.md
deleted file mode 100644
index 8a567e609..000000000
--- a/packages/audiodocs/static/raw/sources/oscillator-node.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-import AudioNodePropsTable from "@site/src/components/AudioNodePropsTable"
-import { Optional, ReadOnly } from '@site/src/components/Badges';
-import InteractivePlayground from '@site/src/components/InteractivePlayground';
-import { useOscillatorPlayground } from '@site/src/components/InteractivePlayground/OscillatorExample/useOscilatorPlayground';
-
-# OscillatorNode
-
-The `OscillatorNode` is an [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node) which represents a simple periodic wave signal.
-Like all `AudioScheduledSourceNode`s, it can be started only once. If you want to play the same sound again, you have to create a new one.
-
-
-OscillatorNode interactive playground
-
-
-
-
-
-#### [`AudioNode`](/docs/core/audio-node#properties) properties
-
-
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options?: OscillatorOptions)
-```
-
-### `OscillatorOptions`
-
-Inherits all properties from [`AudioNodeOptions`](/docs/core/audio-node#audionodeoptions)
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `type` | [`OscillatorType`](/docs/types/oscillator-type) | `sine` | Initial value for [`type`](/docs/sources/oscillator-node#properties). |
-| `frequency` | `number` | 440 | Initial value for [`frequency`](/docs/sources/oscillator-node#properties). |
-| `detune` | `number` | 0 | Initial value for [`detune`](/docs/sources/oscillator-node#properties). |
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createOscillator()`](/docs/core/base-audio-context#createoscillator)
-
-## Example
-
-```tsx
-import React, { useRef } from 'react';
-import {
- AudioContext,
- OscillatorNode,
-} from 'react-native-audio-api';
-
-function App() {
- const audioContextRef = useRef(null);
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
- const oscillator = audioContextRef.current.createOscillator();
- oscillator.connect(audioContextRef.current.destination);
- oscillator.start(audioContextRef.current.currentTime);
-}
-```
-
-## Properties
-
-It inherits all properties from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#properties).
-
-| Name | Type | Default value | Description |
-| :----: | :----: | :-------- | :------- |
-| `detune` | [`AudioParam`](/docs/core/audio-param) | 0 |[`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing detuning of oscillation in cents. |
-| `frequency` | [`AudioParam`](/docs/core/audio-param) | 440 | [`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing the frequency of the wave in hertz. |
-| `type` | [`OscillatorType`](/docs/types/oscillator-type)| `sine` | String value representing the type of wave. |
-
-## Methods
-
-It inherits all methods from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#methods).
-
-### `setPeriodicWave`
-
-Sets an arbitrary periodic wave.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `wave` | [`PeriodicWave`](/docs/effects/periodic-wave) | Data representing custom wave. [`See for reference`](/docs/core/base-audio-context#createperiodicwave) |
-
-#### Returns `undefined`.
-
-## Remarks
-
-#### `detune`
-- Nominal range is: -∞ to ∞.
-- For example, a value of 100 detunes the source up by one semitone, whereas -1200 detunes it down by one octave.
-
-#### `frequency`
-- 440 Hz is equivalent to piano note A4.
-- Nominal range is: $-\frac{\text{sampleRate}}{2}$ to $\frac{\text{sampleRate}}{2}$
-(`sampleRate` value is taken from [`AudioContext`](/docs/core/base-audio-context#properties))
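The combined effect of `frequency` and `detune` follows the standard Web Audio formula, with detune shifting the frequency exponentially in cents (1200 cents per octave):

```typescript
// effective frequency = frequency * 2^(detune / 1200)
function effectiveFrequency(frequency: number, detuneCents: number): number {
  return frequency * Math.pow(2, detuneCents / 1200);
}

// Detuning A4 (440 Hz) up by 1200 cents yields one octave higher: 880 Hz.
const detuned = effectiveFrequency(440, 1200);
```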
diff --git a/packages/audiodocs/static/raw/sources/recorder-adapter-node.md b/packages/audiodocs/static/raw/sources/recorder-adapter-node.md
deleted file mode 100644
index bd052e2aa..000000000
--- a/packages/audiodocs/static/raw/sources/recorder-adapter-node.md
+++ /dev/null
@@ -1,43 +0,0 @@
-# RecorderAdapterNode
-
-The `RecorderAdapterNode` is an [`AudioNode`](/docs/core/audio-node) which is an adapter for [`AudioRecorder`](/docs/inputs/audio-recorder).
-It lets you compose audio input from the recorder into an audio graph.
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext)
-```
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createRecorderAdapter()`](/docs/core/base-audio-context#createrecorderadapter)
-
-## Example
-
-```tsx
-const recorder = new AudioRecorder({
- sampleRate: 48000,
- bufferLengthInSamples: 48000,
-});
-const audioContext = new AudioContext({ sampleRate: 48000 });
-const recorderAdapterNode = audioContext.createRecorderAdapter();
-
-recorder.connect(recorderAdapterNode);
-recorderAdapterNode.connect(audioContext.destination);
-```
-
-## Properties
-
-`RecorderAdapterNode` does not define any additional properties.
-It inherits all properties from [`AudioNode`](/docs/core/audio-node#properties).
-
-## Methods
-
-`RecorderAdapterNode` does not define any additional methods.
-It inherits all methods from [`AudioNode`](/docs/core/audio-node#methods).
-
-## Remarks
-
-* An adapter without a connected recorder will produce silence.
-* An adapter connected only to a recorder will function correctly and keep a small buffer of recorded data.
-* An adapter will not be garbage collected as long as it remains connected to either a destination or a recorder.
diff --git a/packages/audiodocs/static/raw/sources/streamer-node.md b/packages/audiodocs/static/raw/sources/streamer-node.md
deleted file mode 100644
index dfd243df0..000000000
--- a/packages/audiodocs/static/raw/sources/streamer-node.md
+++ /dev/null
@@ -1,59 +0,0 @@
-# StreamerNode
-
-> **Caution**
->
-> Mobile only.
-
-The `StreamerNode` is an [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node) which represents a node that can decode and play [Http Live Streaming](https://developer.apple.com/streaming/) data.
-Like all `AudioScheduledSourceNode`s, it can be started only once. If you want to play the same sound again, you have to create a new one.
-
-#### [`AudioNode`](/docs/core/audio-node#read-only-properties) properties
-
-## Constructor
-
-```tsx
-constructor(context: BaseAudioContext, options: StreamerOptions)
-```
-
-### `StreamerOptions`
-
-| Parameter | Type | Default | |
-| :---: | :---: | :----: | :---- |
-| `streamPath` | `string` | - | Value for [`streamPath`](/docs/sources/streamer-node#properties) |
-
-Or by using `BaseAudioContext` factory method:
-
-[`BaseAudioContext.createStreamer()`](/docs/core/base-audio-context#createstreamer).
-
-## Example
-
-```tsx
-import React, { useRef } from 'react';
-import {
- AudioContext,
- StreamerNode,
-} from 'react-native-audio-api';
-
-function App() {
- const audioContextRef = useRef(null);
- if (!audioContextRef.current) {
- audioContextRef.current = new AudioContext();
- }
- const streamer = audioContextRef.current.createStreamer();
- streamer.initialize('link/to/your/hls/source');
- streamer.connect(audioContextRef.current.destination);
- streamer.start(audioContextRef.current.currentTime);
-}
-```
-
-## Properties
-
-It inherits all properties from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#properties).
-
-| Name | Type | Description |
-| :----: | :----: | :------- |
-| `streamPath` | `string` | String value representing the URL of the stream. |
-
-## Methods
-
-It inherits all methods from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node#methods).
diff --git a/packages/audiodocs/static/raw/system/audio-manager.md b/packages/audiodocs/static/raw/system/audio-manager.md
deleted file mode 100644
index 0f3bbc9ff..000000000
--- a/packages/audiodocs/static/raw/system/audio-manager.md
+++ /dev/null
@@ -1,295 +0,0 @@
-# AudioManager
-
-The `AudioManager` is a layer of abstraction between the user and the system.
-It provides a set of system-specific functions that are invoked directly in native code by the relevant system.
-
-## Example
-
-```tsx
-import { AudioManager } from 'react-native-audio-api';
-import { useEffect } from 'react';
-
-function App() {
- // set AVAudioSession example options (iOS)
- AudioManager.setAudioSessionOptions({
- iosCategory: 'playback',
- iosMode: 'default',
- iosOptions: ['defaultToSpeaker', 'allowBluetoothA2DP'],
- })
- // enabling emission of events
- AudioManager.observeAudioInterruptions(true);
- AudioManager.getDevicesInfo().then(console.log);
-
- useEffect(() => {
- // callback to be invoked on 'interruption' event
- const interruptionSubscription = AudioManager.addSystemEventListener(
- 'interruption',
- (event) => {
- console.log('Interruption event:', event);
- }
- );
-
- return () => {
- interruptionSubscription?.remove();
- };
- }, []);
-}
-```
-
-## Methods
-
-### `setAudioSessionOptions`
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| options | [`SessionOptions`](/docs/system/audio-manager#sessionoptions) | Options to be set for [AVAudioSession](https://developer.apple.com/documentation/avfaudio/avaudiosession?language=objc#Configuring-standard-audio-behaviors) |
-
-#### Returns `undefined`
-
-### `setAudioSessionActivity`
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| enabled | `boolean` | It is used to set/unset [AVAudioSession](https://developer.apple.com/documentation/avfaudio/avaudiosession?language=objc#Activating-the-audio-configuration) activity |
-
-#### Returns a promise of `boolean` type, which resolves to `true` if the invocation succeeded, `false` otherwise.
-
-### `disableSessionManagement`
-
-#### Returns `undefined`.
-
-Disables all internal default [AVAudioSession](https://developer.apple.com/documentation/avfaudio/avaudiosession) configuration and management done by the `react-native-audio-api` package. After calling this method, the user is responsible for managing the audio session entirely on their own.
-A typical use case for this method is when the user wants to fully control the audio session outside of the `react-native-audio-api` package,
-commonly when using another audio library alongside `react-native-audio-api`. The method has to be called before an `AudioContext` is created, for example in app initialization code.
-Any later call to `setAudioSessionOptions` or `setAudioSessionActivity` will re-enable internal audio session management.
-
-### `getDevicePreferredSampleRate`
-
-#### Returns `number`.
-
-### `observeAudioInterruptions`
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `param` | [`AudioFocusType`](audio-manager#audiofocustype) \| `boolean` \| `null` | It is used to enable/disable observing audio interruptions. Passing `false` or `null` disables the observation; otherwise it is enabled. |
-
-> **Info**
->
-> On Android, passing an audio focus type sets the native [audio focus](https://developer.android.com/media/optimize/audio-focus) accordingly.
-> It is recommended that apps respect these rules for a good user experience.
-> On iOS it just enables/disables event emission and has no additional effects.
-
-#### Returns `undefined`
-
-### `activelyReclaimSession`
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `enabled` | `boolean` | It is used to enable/disable active session reclaiming |
-
-#### Returns `undefined`
-
-More aggressively tries to reactivate the audio session during interruptions.
-
-In some cases (depending on the app's session settings and on other apps using audio) the system may never
-send the `interruption ended` event. This method will check if any other audio is playing
-and try to reactivate the audio session as soon as there is "silence".
-Note that this might change the expected behavior.
-
-Internally the method uses `AVAudioSessionSilenceSecondaryAudioHintNotification` as well as
-interval polling to check if other audio is playing.
-
-### `observeVolumeChanges`
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `enabled` | `boolean` | It is used to enable/disable observing volume changes |
-
-#### Returns `undefined`
-
-### `addSystemEventListener`
-
-Adds a callback to be invoked when the given event occurs.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `name` | [`SystemEventName`](audio-manager#systemeventname) | Name of the event to listen for |
-| `callback` | [`SystemEventCallback`](audio-manager#systemeventname) | Callback that will be invoked when the event occurs |
-
-#### Returns [`AudioEventSubscription`](/docs/system/audio-manager#audioeventsubscription).
-
-### `requestRecordingPermissions`
-
-Brings up the system microphone permissions pop-up on demand. The pop-up automatically shows if microphone data
-is directly requested, but sometimes it is better to ask beforehand.
-
-#### Throws an `error` if there is no NSMicrophoneUsageDescription entry in `Info.plist`
-
-#### Returns promise of [`PermissionStatus`](/docs/system/audio-manager#permissionstatus) type, which is resolved after receiving answer from the system.
-
-### `checkRecordingPermissions`
-
-Checks if permissions were previously granted.
-
-#### Throws an `error` if there is no NSMicrophoneUsageDescription entry in `Info.plist`
-
-#### Returns promise of [`PermissionStatus`](/docs/system/audio-manager#permissionstatus) type, which is resolved after receiving answer from the system.
-
-### `requestNotificationPermissions`
-
-Brings up the system notification permissions pop-up on demand. The pop-up automatically shows if notification data
-is directly requested, but sometimes it is better to ask beforehand.
-
-#### Returns promise of [`PermissionStatus`](/docs/system/audio-manager#permissionstatus) type, which is resolved after receiving answer from the system.
-
-### `checkNotificationPermissions`
-
-Checks if permissions were previously granted.
-
-#### Returns promise of [`PermissionStatus`](/docs/system/audio-manager#permissionstatus) type, which is resolved after receiving answer from the system.
-
-### `getDevicesInfo`
-
-Checks currently used and available devices.
-
-#### Returns promise of [`AudioDevicesInfo`](/docs/system/audio-manager#audiodevicesinfo) type, which is resolved after receiving answer from the system.
-
-## Remarks
-
-### `AudioFocusType`
-
-Type definitions
-
-```typescript
-type AudioFocusType =
- | 'gain'
- | 'gainTransient'
- | 'gainTransientExclusive'
- | 'gainTransientMayDuck';
-```
-
-### `SessionOptions`
-
-Type definitions
-
-```typescript
-type IOSCategory =
- | 'record'
- | 'ambient'
- | 'playback'
- | 'multiRoute'
- | 'soloAmbient'
- | 'playAndRecord';
-
-type IOSMode =
- | 'default'
- | 'gameChat'
- | 'videoChat'
- | 'voiceChat'
- | 'measurement'
- | 'voicePrompt'
- | 'spokenAudio'
- | 'moviePlayback'
- | 'videoRecording';
-
-type IOSOption =
- | 'duckOthers'
- | 'allowAirPlay'
- | 'mixWithOthers'
- | 'allowBluetoothHFP'
- | 'defaultToSpeaker'
- | 'allowBluetoothA2DP'
- | 'overrideMutedMicrophoneInterruption'
- | 'interruptSpokenAudioAndMixWithOthers';
-
-interface SessionOptions {
- iosMode?: IOSMode;
- iosOptions?: IOSOption[];
- iosCategory?: IOSCategory;
- iosAllowHaptics?: boolean;
- // Has no effect when using PlaybackNotificationManager as it takes over the "Now playing" controls
- iosNotifyOthersOnDeactivation?: boolean;
-}
-```
-
-### `SystemEventName`
-
-Type definitions
-
-```typescript
-interface EventEmptyType {}
-
-interface EventTypeWithValue {
- value: number;
-}
-
-interface OnInterruptionEventType {
- type: 'ended' | 'began'; // if the interruption event has started or ended
- shouldResume: boolean; // if the interruption was temporary and we can resume the playback/recording
-}
-
-interface OnRouteChangeEventType {
- reason:
- | 'Unknown'
- | 'Override'
- | 'CategoryChange'
- | 'WakeFromSleep'
- | 'NewDeviceAvailable'
- | 'OldDeviceUnavailable'
- | 'ConfigurationChange'
- | 'NoSuitableRouteForCategory';
-}
-
-type SystemEvents = {
- volumeChange: EventTypeWithValue;
- interruption: OnInterruptionEventType;
- duck: EventEmptyType;
- routeChange: OnRouteChangeEventType;
-};
-
-type SystemEventName = keyof SystemEvents;
-type SystemEventCallback<Name extends SystemEventName> = (
-  event: SystemEvents[Name]
-) => void;
-```
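-
-A standalone sketch of how this generic callback type keeps event payloads typed (the names mirror the definitions above; this is illustrative code, not the library implementation):
-
-```typescript
-// Minimal, self-contained mirror of the definitions above.
-type SystemEvents = {
-  volumeChange: { value: number };
-  interruption: { type: 'ended' | 'began'; shouldResume: boolean };
-};
-type SystemEventName = keyof SystemEvents;
-type SystemEventCallback<Name extends SystemEventName> = (
-  event: SystemEvents[Name]
-) => void;
-
-// `event` is narrowed to { value: number } for 'volumeChange'.
-let lastVolume = 0;
-const onVolume: SystemEventCallback<'volumeChange'> = (event) => {
-  lastVolume = event.value;
-};
-onVolume({ value: 0.5 });
-```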
-
-### `AudioEventSubscription`
-
-Type definitions
-
-```typescript
-interface AudioEventSubscription {
-  /** @internal */
-  readonly subscriptionId: string;
-
-  remove(): void; // used to remove the subscription
-}
-```
-
-### `PermissionStatus`
-
-Type definitions
-
-```typescript
-type PermissionStatus = 'Undetermined' | 'Denied' | 'Granted';
-```
-
-### `AudioDevicesInfo`
-
-Type definitions
-
-```typescript
-export interface AudioDeviceInfo {
- name: string;
- category: string;
-}
-
-export type AudioDeviceList = AudioDeviceInfo[];
-
-export interface AudioDevicesInfo {
- availableInputs: AudioDeviceList;
- availableOutputs: AudioDeviceList;
- currentInputs: AudioDeviceList; // iOS
- currentOutputs: AudioDeviceList; // iOS
-}
-```
diff --git a/packages/audiodocs/static/raw/system/playback-notification-manager.md b/packages/audiodocs/static/raw/system/playback-notification-manager.md
deleted file mode 100644
index 1d31e2c00..000000000
--- a/packages/audiodocs/static/raw/system/playback-notification-manager.md
+++ /dev/null
@@ -1,182 +0,0 @@
-# PlaybackNotificationManager
-
-The `PlaybackNotificationManager` provides media session integration and playback controls for your audio application. It manages system-level media notifications with controls like play, pause, next, previous, and seek functionality.
-
-:::info Platform Differences
-
-**iOS Requirements:**
-
-* Notification controls only appear when an active `AudioContext` is running
-* `show()` or `hide()` only update metadata - they don't control notification visibility
-* The notification automatically appears/disappears based on audio session state
-* To show: create and resume an AudioContext
-* To hide: suspend or close the AudioContext
-
-**Android:**
-
-* Notification visibility is directly controlled by `show()` and `hide()` methods
-* Works independently of AudioContext state
-
-:::
-
-## Example
-
-```tsx
-// show notification
-await PlaybackNotificationManager.show({
-  title: 'My Song',
-  artist: 'My Artist',
-  duration: 180,
-  state: 'paused',
-});
-
-// Listen for notification controls
-const playListener = PlaybackNotificationManager.addEventListener(
-  'playbackNotificationPlay',
-  () => {
-    // Handle play action
-    PlaybackNotificationManager.show({ state: 'playing' });
-  }
-);
-
-const pauseListener = PlaybackNotificationManager.addEventListener(
-  'playbackNotificationPause',
-  () => {
-    // Handle pause action
-    PlaybackNotificationManager.show({ state: 'paused' });
-  }
-);
-
-const seekToListener = PlaybackNotificationManager.addEventListener(
-  'playbackNotificationSeekTo',
-  (event) => {
-    // Handle seek to position (event.value is in seconds)
-    PlaybackNotificationManager.show({ elapsedTime: event.value });
-  }
-);
-
-// Update progress
-PlaybackNotificationManager.show({ elapsedTime: 60 });
-
-// Cleanup
-playListener.remove();
-pauseListener.remove();
-seekToListener.remove();
-PlaybackNotificationManager.hide();
-```
-
-## Methods
-
-### `show`
-
-Display the notification with initial metadata.
-
-:::note iOS Behavior
-On iOS, this method only sets the metadata. The notification controls will only appear when an `AudioContext` is actively running. Make sure to create and resume an AudioContext before calling `show()`.
-:::
-
-:::info
-Metadata is remembered between calls, so after initially passing the metadata to `show`, you can call it with only the fields that are supposed to change.
-:::
-
-| Parameter | Type | Description |
-| :-------: | :----------: | :----- |
-| `info` | [`PlaybackNotificationInfo`](playback-notification-manager#playbacknotificationinfo) | Initial notification metadata |
-
-#### Returns `Promise`.
-
-### `hide`
-
-Hide the notification. It can be shown again later by calling `show()`.
-
-:::note iOS Behavior
-On iOS, this method clears the metadata but does not hide the notification controls. To completely hide the controls on iOS, you must suspend or close the AudioContext.
-:::
-
-#### Returns `Promise`.
-
-### `enableControl`
-
-Enable or disable specific playback controls.
-
-| Parameter | Type | Description |
-| :-------: | :-----: | :------ |
-| `control` | [`PlaybackControlName`](playback-notification-manager#playbackcontrolname) | The control to enable/disable |
-| `enabled` | `boolean` | Whether the control should be enabled |
-
-#### Returns `Promise`.
-
-### `isActive`
-
-Check if the notification is currently active and visible.
-
-#### Returns `Promise`.
-
-### `addEventListener`
-
-Add an event listener for notification actions.
-
-| Parameter | Type | Description |
-| :---------: | :------: | :------- |
-| `eventName` | [`PlaybackNotificationEventName`](playback-notification-manager#playbacknotificationeventname) | The event to listen for |
-| `callback` | [`SystemEventCallback`](/docs/system/audio-manager#systemeventname--remotecommandeventname) | Callback function |
-
-#### Returns [`AudioEventSubscription`](/docs/system/audio-manager#audioeventsubscription).
-
-## Remarks
-
-### `PlaybackNotificationInfo`
-
-Type definitions
-
-```typescript
-interface PlaybackNotificationInfo {
- title?: string;
- artist?: string;
- album?: string;
-
- // Can be a URL or a local file path relative to drawable resources (Android) or bundle resources (iOS)
- artwork?: string | { uri: string };
- // ANDROID: small icon shown in the status bar
- androidSmallIcon?: string | { uri: string };
- duration?: number;
-
- // IOS: elapsed time does not update automatically, must be set manually on each state change
- elapsedTime?: number;
- speed?: number;
- state?: 'playing' | 'paused';
-}
-```
-
-### `PlaybackControlName`
-
-Type definitions
-
-```typescript
-type PlaybackControlName =
- | 'play'
- | 'pause'
- | 'stop'
- | 'nextTrack'
- | 'previousTrack'
- | 'skipForward'
- | 'skipBackward'
- | 'seekTo';
-```
-
-### `PlaybackNotificationEventName`
-
-Type definitions
-
-```typescript
-interface EventTypeWithValue {
- value: number;
-}
-
-interface PlaybackNotificationEvent {
- playbackNotificationPlay: EventEmptyType;
- playbackNotificationPause: EventEmptyType;
- playbackNotificationStop: EventEmptyType;
- playbackNotificationNextTrack: EventEmptyType;
- playbackNotificationPreviousTrack: EventEmptyType;
- playbackNotificationSkipForward: EventTypeWithValue;
- playbackNotificationSkipBackward: EventTypeWithValue;
- playbackNotificationSeekTo: EventTypeWithValue;
- playbackNotificationDismissed: EventEmptyType;
-}
-
-type PlaybackNotificationEventName = keyof PlaybackNotificationEvent;
-```
-
diff --git a/packages/audiodocs/static/raw/system/recording-notification-manager.md b/packages/audiodocs/static/raw/system/recording-notification-manager.md
deleted file mode 100644
index 0292a78fc..000000000
--- a/packages/audiodocs/static/raw/system/recording-notification-manager.md
+++ /dev/null
@@ -1,108 +0,0 @@
-# RecordingNotificationManager
-
-The `RecordingNotificationManager` provides system integration with [`Recorder`](/docs/inputs/audio-recorder).
-It can send events about pausing and resuming to your application.
-
-## Example
-
-```typescript
-RecordingNotificationManager.show({
- title: 'Recording app',
- contentText: 'Recording...',
- paused: false,
- smallIconResourceName: 'icon_to_display',
- pauseIconResourceName: 'pause_icon',
- resumeIconResourceName: 'resume_icon',
- color: 0xff6200,
-});
-
-const pauseEventListener = RecordingNotificationManager.addEventListener('recordingNotificationPause', () => {
- console.log('Notification pause action received');
-});
-const resumeEventListener = RecordingNotificationManager.addEventListener('recordingNotificationResume', () => {
- console.log('Notification resume action received');
-});
-
-pauseEventListener.remove();
-resumeEventListener.remove();
-RecordingNotificationManager.hide();
-```
-
-## Methods
-
-### `show`
-
-Shows the recording notification with the parameters.
-
-> **Info**
->
-> Metadata is saved between calls, so after the initial call to `show`, you only need to pass the fields that are supposed to change.
-
-| Parameter | Type | Description |
-| :-------: | :--: | :---- |
-| `info` | [`RecordingNotificationInfo`](recording-notification-manager#recordingnotificationinfo) | Initial notification metadata |
-
-#### Returns `Promise`.
-
-> **Info**
->
-> For more details, go to [android developer page](https://developer.android.com/develop/ui/views/notifications#Templates).
-> A resource name is a path to a resource placed in the `res/drawable` folder. It has to be either a .png or .xml file; the name is given without the file extension (photo.png -> photo).
-
-> **Caution**
->
-> If nothing is displayed even though the name is correct, try decreasing the size of your resource.
-> The notification can look vastly different on different Android devices.
-
-### `hide`
-
-Hides the recording notification.
-
-#### Returns `Promise`.
-
-### `isActive`
-
-Checks if the notification is displayed.
-
-#### Returns `Promise`.
-
-### `addEventListener`
-
-Add an event listener for notification actions.
-
-| Parameter | Type | Description |
-| :---------: | :----: | :---------------------- |
-| `eventName` | [`RecordingNotificationEvent`](recording-notification-manager#recordingnotificationevent) | The event to listen for |
-| `callback` | ([`RecordingNotificationEvent`](recording-notification-manager#recordingnotificationevent)) => void | Callback function |
-
-#### Returns [`AudioEventSubscription`](/docs/system/audio-manager#audioeventsubscription).
-
-## Remarks
-
-### `RecordingNotificationInfo`
-
-Type definitions
-
-```typescript
-interface RecordingNotificationInfo {
- title?: string;
- contentText?: string;
- paused?: boolean; // flag indicating whether to display pauseIcon or resumeIcon
- smallIconResourceName?: string;
- largeIconResourceName?: string;
- pauseIconResourceName?: string;
- resumeIconResourceName?: string;
-  color?: number; // notification accent color (e.g. 0xff6200)
-}
-```
-
-### `RecordingNotificationEvent`
-
-Type definitions
-
-```typescript
-interface RecordingNotificationEvent {
- recordingNotificationPause: EventEmptyType;
- recordingNotificationResume: EventEmptyType;
-}
-```
diff --git a/packages/audiodocs/static/raw/types/channel-count-mode.md b/packages/audiodocs/static/raw/types/channel-count-mode.md
deleted file mode 100644
index 72343d04e..000000000
--- a/packages/audiodocs/static/raw/types/channel-count-mode.md
+++ /dev/null
@@ -1,17 +0,0 @@
-# ChannelCountMode
-
-`ChannelCountMode` type determines how the number of input channels affects the number of output channels in an audio node.
-
-**Acceptable values:**
-
-* `max`
-
- The number of channels is equal to the maximum number of channels of all connections. In this case, `channelCount` is ignored and only up-mixing happens.
-
-* `clamped-max`
-
- The number of channels is equal to the maximum number of channels of all connections, clamped to the value of `channelCount` (which serves as the maximum permissible value).
-
-* `explicit`
-
- The number of channels is defined by the value of `channelCount`.
diff --git a/packages/audiodocs/static/raw/types/channel-interpretation.md b/packages/audiodocs/static/raw/types/channel-interpretation.md
deleted file mode 100644
index 1929271d6..000000000
--- a/packages/audiodocs/static/raw/types/channel-interpretation.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# ChannelInterpretation
-
-`ChannelInterpretation` type specifies how input channels are mapped to output channels when their numbers differ.
-
-**Acceptable values:**
-
-* `speakers`
-
-Uses a set of standard mapping rules for all combinations of common input and output setups.
-
-* `discrete`
-
-Covers all other cases. The mapping depends on the relationship between the number of input channels and the number of output channels.
-
-## Channels mapping table
-
-### `speakers`
-
-| Number of input channels | Number of output channels | Mixing rules |
-| :------------------------: | :------------------------- | :------------ |
-| 1 (Mono) | 2 (Stereo) | output.L = input.M, output.R = input.M |
-| 1 (Mono) | 4 (Quad) | output.L = input.M, output.R = input.M, output.SL = 0, output.SR = 0 |
-| 1 (Mono) | 6 (5.1) | output.L = 0, output.R = 0, output.C = input.M, output.LFE = 0, output.SL = 0, output.SR = 0 |
-| 2 (Stereo) | 1 (Mono) | output.M = 0.5 \* (input.L + input.R) |
-| 2 (Stereo) | 4 (Quad) | output.L = input.L, output.R = input.R, output.SL = 0, output.SR = 0 |
-| 2 (Stereo) | 6 (5.1) | output.L = input.L, output.R = input.R, output.C = 0, output.LFE = 0, output.SL = 0, output.SR = 0 |
-| 4 (Quad) | 1 (Mono) | output.M = 0.25 \* (input.L + input.R + input.SL + input.SR) |
-| 4 (Quad) | 2 (Stereo) | output.L = 0.5 \* (input.L + input.SL), output.R = 0.5 \* (input.R + input.SR) |
-| 4 (Quad) | 6 (5.1) | output.L = input.L, output.R = input.R, output.C = 0, output.LFE = 0, output.SL = input.SL, output.SR = input.SR |
-| 6 (5.1) | 1 (Mono) | output.M = 0.7071 \* (input.L + input.R) + input.C + 0.5 \* (input.SL + input.SR) |
-| 6 (5.1) | 2 (Stereo) | output.L = input.L + 0.7071 \* (input.C + input.SL), output.R = input.R + 0.7071 \* (input.C + input.SR) |
-| 6 (5.1) | 4 (Quad) | output.L = input.L + 0.7071 \* input.C, output.R = input.R + 0.7071 \* input.C, output.SL = input.SL, output.SR = input.SR |
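-
-The mono/stereo rows above can be sketched in plain TypeScript (an illustration of the table's mixing rules, not the library's internal implementation):
-
-```typescript
-// `speakers` mixing for the mono <-> stereo rows of the table above.
-function mixSpeakers(input: Float32Array[], outputChannels: number): Float32Array[] {
-  const frames = input[0].length;
-  const out = Array.from({ length: outputChannels }, () => new Float32Array(frames));
-  if (input.length === 1 && outputChannels === 2) {
-    // 1 -> 2: output.L = input.M, output.R = input.M
-    out[0].set(input[0]);
-    out[1].set(input[0]);
-  } else if (input.length === 2 && outputChannels === 1) {
-    // 2 -> 1: output.M = 0.5 * (input.L + input.R)
-    for (let i = 0; i < frames; i++) {
-      out[0][i] = 0.5 * (input[0][i] + input[1][i]);
-    }
-  } else {
-    throw new Error('only the mono <-> stereo rows are sketched here');
-  }
-  return out;
-}
-```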
-
-### `discrete`
-
-| Number of input channels | Number of output channels | Mixing rules |
-| :------------------------: | :------------------------- | :------------ |
-| x | y where y > x | Each output channel is filled from its counterpart (the channel with the same number); the remaining output channels are silent |
-| x | y where y \< x | Each output channel is filled from its counterpart (the channel with the same number); the remaining input channels are skipped |
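-
-The `discrete` rules reduce to a simple copy loop; an illustrative sketch (not the library's implementation):
-
-```typescript
-// `discrete` mapping: channel i -> channel i; extra outputs stay silent,
-// extra inputs are dropped.
-function mapDiscrete(input: Float32Array[], outputChannels: number): Float32Array[] {
-  const frames = input[0].length;
-  // New Float32Arrays are zero-filled, i.e. silent by default.
-  const out = Array.from({ length: outputChannels }, () => new Float32Array(frames));
-  for (let ch = 0; ch < Math.min(input.length, outputChannels); ch++) {
-    out[ch].set(input[ch]);
-  }
-  return out;
-}
-```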
diff --git a/packages/audiodocs/static/raw/types/oscillator-type.md b/packages/audiodocs/static/raw/types/oscillator-type.md
deleted file mode 100644
index 24d5ed0dd..000000000
--- a/packages/audiodocs/static/raw/types/oscillator-type.md
+++ /dev/null
@@ -1,19 +0,0 @@
-# OscillatorType
-
-`OscillatorType` is a string that specifies the shape of an oscillator's wave.
-
-```typescript
-type OscillatorType =
- | 'sine'
- | 'square'
- | 'sawtooth'
- | 'triangle'
- | 'custom';
-```
-
-Below you can see the possible values and the wave shapes corresponding to them.
-
-
-## `custom`
-
-This value can't be set explicitly, but it allows the user to define an arbitrary wave shape. See [`setPeriodicWave`](/docs/sources/oscillator-node#setperiodicwave) for reference.
diff --git a/packages/audiodocs/static/raw/utils/decoding.md b/packages/audiodocs/static/raw/utils/decoding.md
deleted file mode 100644
index a12458452..000000000
--- a/packages/audiodocs/static/raw/utils/decoding.md
+++ /dev/null
@@ -1,92 +0,0 @@
-# Decoding
-
-You can decode audio data independently, without creating an AudioContext, using the exported functions [`decodeAudioData`](/docs/utils/decoding#decodeaudiodata) and
-[`decodePCMInBase64`](/docs/utils/decoding#decodepcminbase64).
-
-> **Warning**
->
-> Decoding on the web has to be done via `AudioContext` only.
-
-If you already have an audio context, you can decode audio data directly using its [`decodeAudioData`](/docs/core/base-audio-context#decodeaudiodata) function;
-the decoded audio will then be automatically resampled to match the context's `sampleRate`.
-
-> **Caution**
->
-> Supported file formats:
->
-> * flac
-> * mp3
-> * ogg
-> * opus
-> * wav
-> * aac
-> * m4a
-> * mp4
->
-> The last three formats are decoded with FFmpeg on mobile; [see here for more info](/docs/other/ffmpeg-info).
-
-### `decodeAudioData`
-
-Decodes audio data from either a file path or an ArrayBuffer. The optional `sampleRate` parameter lets you resample the decoded audio;
-if not provided, the original sample rate from the file is used.
-
-| Parameter | Type | Description |
-| :---: | :---: | :---- |
-| `input` | `ArrayBuffer` | ArrayBuffer with audio data. |
-| | `string` | Path to remote or local audio file. |
-| | `number` | Asset module id. |
-| `sampleRate` | `number` | Target sample rate for the decoded audio. |
-| `fetchOptions` | [`RequestInit`](https://github.com/facebook/react-native/blob/ac06f3bdc76a9fd7c65ab899e82bff5cad9b94b6/packages/react-native/src/types/globals.d.ts#L265) | Additional fetch options (e.g. headers) used when passing a URL. |
-
-#### Returns `Promise`.
-
-> **Caution**
->
-> If you are passing a number to the decode function, bear in mind that it internally uses the Image component provided
-> by React Native. By default, only the .mp3, .wav, .mp4, .m4a and .aac audio file formats are supported.
-> If you want to use other types, refer to [this section](https://reactnative.dev/docs/images#static-non-image-resources) for more info.
-
-Example decoding remote URL
-
-```tsx
-import { decodeAudioData } from 'react-native-audio-api';
-
-const url = ... // url to an audio
-
-const buffer = await decodeAudioData(url);
-```
-
-### `decodePCMInBase64`
-
-Decodes base64-encoded PCM audio data.
-
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `base64String` | `string` | Base64-encoded PCM audio data. |
-| `inputSampleRate` | `number` | Sample rate of the input PCM data. |
-| `inputChannelCount` | `number` | Number of channels in the input PCM data. |
-| `isInterleaved` | `boolean` | Whether the PCM data is interleaved. Default is `true`. |
-
-#### Returns `Promise`
-
-Example decoding with data in base64 format
-
-```tsx
-const data = ... // data encoded in base64 string
-// data is interleaved (Channel1, Channel2, Channel1, Channel2, ...)
-const buffer = await decodePCMInBase64(data, 48000, 2, true);
-```
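-
-To make the `isInterleaved` flag concrete, here is a small self-contained sketch of how interleaved float32 samples are split into per-channel buffers (an illustration of the data layout, not the library's decoder):
-
-```typescript
-// Interleaved layout: [ch1, ch2, ch1, ch2, ...] -> one buffer per channel.
-function deinterleave(samples: Float32Array, channelCount: number): Float32Array[] {
-  const frames = samples.length / channelCount;
-  const channels = Array.from({ length: channelCount }, () => new Float32Array(frames));
-  for (let i = 0; i < frames; i++) {
-    for (let ch = 0; ch < channelCount; ch++) {
-      channels[ch][i] = samples[i * channelCount + ch];
-    }
-  }
-  return channels;
-}
-```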
diff --git a/packages/audiodocs/static/raw/utils/time-stretching.md b/packages/audiodocs/static/raw/utils/time-stretching.md
deleted file mode 100644
index 8a698fe3c..000000000
--- a/packages/audiodocs/static/raw/utils/time-stretching.md
+++ /dev/null
@@ -1,28 +0,0 @@
-# Time stretching
-
-You can change the playback speed of an audio buffer independently, without creating an AudioContext, using the exported function [`changePlaybackSpeed`](/docs/utils/time-stretching#changeplaybackspeed).
-
-### `changePlaybackSpeed`
-
-Changes the playback speed of an audio buffer.
-
-| Parameter | Type | Description |
-| :----: | :----: | :-------- |
-| `input` | `AudioBuffer` | The audio buffer whose playback speed you want to change. |
-| `playbackSpeed` | `number` | The factor by which to change the playback speed. Values in \[1.0, 2.0] speed playback up; values in \[0.5, 1.0] slow it down. |
-
-#### Returns `Promise`.
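-
-Assuming the factor semantics above, speeding playback up shortens the audio proportionally; a hypothetical helper (not part of the library) to estimate the resulting duration:
-
-```typescript
-// Estimated duration after time stretching by `playbackSpeed`.
-function stretchedDurationSec(durationSec: number, playbackSpeed: number): number {
-  return durationSec / playbackSpeed;
-}
-```
-
-For example, a 180-second buffer stretched with a factor of 1.25 plays back in 144 seconds.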
-
-Example usage
-
-```tsx
-const url = ... // url to an audio
-const sampleRate = 48000;
-
-const buffer = await decodeAudioData(url, sampleRate)
- .then((audioBuffer) => changePlaybackSpeed(audioBuffer, 1.25))
- .catch((error) => {
- console.error('Error decoding audio data source:', error);
- return null;
- });
-```
diff --git a/packages/audiodocs/static/raw/worklets/introduction.md b/packages/audiodocs/static/raw/worklets/introduction.md
deleted file mode 100644
index c3104bbbd..000000000
--- a/packages/audiodocs/static/raw/worklets/introduction.md
+++ /dev/null
@@ -1,71 +0,0 @@
-import { MobileOnly } from '@site/src/components/Badges';
-
-# RNWorklets Support
-
-The `RNWorklets` library was originally part of Reanimated until version 4.0.0; since then, it has become a separate library.
-
-To use the worklet features provided by `react-native-audio-api`, you need to install this library:
-
-```bash
-npm install react-native-worklets
-```
-> **Note**: Supported versions of `react-native-worklets` are [0.6.x, 0.7.x]. They are checked and updated manually with each release. Nightly versions are always supported but your build may fail.
-
-If the library is not installed, you will encounter runtime errors when trying to use features that depend on worklets and do not have documented fallback implementations.
-
-## What is a worklet?
-
-You can read more about worklets in the [RNWorklets documentation](https://docs.swmansion.com/react-native-worklets/).
-
-Simply put, a worklet is a piece of code that can be executed on a runtime different from the main JavaScript runtime (or more formally, the runtime on which the code was created).
-
-## What kind of worklets are used in react-native-audio-api?
-
-We support two types of worklet runtimes, each optimized for different use cases:
-
-### UIRuntime
-Worklets executed on the UI runtime provided by the `RNWorklets` library. This allows the use of Reanimated utilities and features inside the worklets. The main goal is to enable seamless integration with the UI - for example, creating animations from audio data.
-
-**Use UIRuntime when:**
-- You need to update UI elements from audio data
-- Creating visualizations or animations based on audio
-- Integrating with Reanimated shared values
-- Performance is less critical than UI responsiveness
-
-### AudioRuntime
-Worklets executed on the audio rendering thread for maximum performance and minimal latency. This runtime is optimized for real-time audio processing where timing is critical.
-
-**Use AudioRuntime when:**
-- Performance and low latency are crucial
-- Processing audio in real-time without dropouts
-- Generating audio with precise timing
-- Audio processing doesn't need to interact with UI
-
-You can specify the runtime type when creating worklet nodes using the `workletRuntime` parameter.
-
-## How to use worklets in react-native-audio-api mindfully?
-
-Our API is specifically designed to support high throughput to enable audio playback at 44.1 kHz, which is the default sample rate for most modern devices.
-
-However, this introduces several limitations on what can be done inside a worklet. Since a worklet must be executed on the JavaScript runtime, each execution introduces latency.
-
-$$ 44.1\text{ kHz} \equiv 44100\text{ samples} \equiv 1\text{ s} $$
-
-This means the sample rate indicates how many frames are processed in one second. Most features that allow using worklets as callbacks should also allow setting `bufferLength` for worklet input.
-
-If you set `bufferLength` to 128 (the default internal buffer size our API uses to process the graph), the audio thread invokes your worklet about
-
-$$ \frac{44100\text{ samples/s}}{128\text{ samples/callback}} \approx 344\text{ callbacks/s} $$
-
-times, so each callback gets a budget of roughly
-
-$$ \frac{1000\text{ ms}}{344} \approx 2.9\text{ ms} $$
-
-This means that if your worklet, plus the rest of the processing, takes more than 2.9 ms, you may start to experience audio dropouts or other playback issues.
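-
-The budget arithmetic above can be wrapped in a tiny helper (illustrative only, not part of the library):
-
-```typescript
-// Per-callback time budget: 1000 ms divided by callbacks per second.
-function workletBudgetMs(sampleRate: number, bufferLength: number): number {
-  const callbacksPerSecond = sampleRate / bufferLength;
-  return 1000 / callbacksPerSecond;
-}
-```
-
-For example, `workletBudgetMs(44100, 128)` is roughly 2.9 ms, while a `bufferLength` of 1024 relaxes the budget to about 23 ms per callback.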
-
-### Recommendations
-
-- Use a larger `bufferLength`, like 256, 512 or even 1024 if you don't need more than 40fps.
-- Avoid blocking operations in the worklet (e.g., calling APIs - use JS callbacks for these instead).
-- Do not overuse worklets. Before creating 5 or 6, consider if it can be done with a single one. Creating chained nodes that invoke worklets increases latency linearly.
-- Measure performance and memory usage, and check logs to ensure you are not dropping frames.
diff --git a/packages/audiodocs/static/raw/worklets/worklet-node.md b/packages/audiodocs/static/raw/worklets/worklet-node.md
deleted file mode 100644
index 072a0f00e..000000000
--- a/packages/audiodocs/static/raw/worklets/worklet-node.md
+++ /dev/null
@@ -1,84 +0,0 @@
-# WorkletNode
-
-> **Warning**
->
-> This node is dependent on `react-native-worklets` and you need to install them in order to use this node. Refer to [getting-started page](/docs/fundamentals/getting-started#possible-additional-dependencies) for more info.
-
-The `WorkletNode` interface represents a node in the audio processing graph that can execute a worklet.
-
-Worklets are a way to run JavaScript code in the audio rendering thread, allowing for low-latency audio processing. For more information, see our [Introduction to worklets](/docs/worklets/worklets-introduction).
-This node lets you execute a worklet on the chosen runtime. `bufferLength` specifies the size of the buffer that will be passed to the worklet on each call, and `inputChannelCount` specifies the number of channels that will be passed to the worklet.
-
-## Constructor
-
-```tsx
-constructor(
-  context: BaseAudioContext,
-  runtime: AudioWorkletRuntime,
-  callback: (audioData: Array<Float32Array>, channelCount: number) => void,
-  bufferLength: number,
-  inputChannelCount: number)
-```
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createWorkletNode(worklet, bufferLength, inputChannelCount, workletRuntime)`](/docs/core/base-audio-context#createworkletnode-)
-
-## Example
-
-```tsx
-import { AudioContext, AudioRecorder, AudioManager } from 'react-native-audio-api';
-
-AudioManager.setAudioSessionOptions({
- iosCategory: "playAndRecord",
- iosMode: "measurement",
- iosOptions: ["mixWithOthers"],
-})
-
-// This example shows how we can use a WorkletNode to process microphone audio data in real-time.
-async function App() {
- const recorder = new AudioRecorder();
-
- const audioContext = new AudioContext({ sampleRate: 16000 });
-  const worklet = (audioData: Array<Float32Array>, inputChannelCount: number) => {
-    'worklet';
-    // Here you have access to the number of input channels and the audio data.
-    // audioData is a two-dimensional array: the first index is the channel number,
-    // the second indexes a buffer of exactly bufferLength samples.
-    // !IMPORTANT: you can only read the audio data here; any modifications will not be reflected in the audio output of this node.
-    // !VERY IMPORTANT: please read the Known Issue section below.
-  };
- const workletNode = audioContext.createWorkletNode(worklet, 1024, 2, 'UIRuntime');
- const adapterNode = audioContext.createRecorderAdapter();
-
- const canSetAudioSessionActivity = await AudioManager.setAudioSessionActivity(true);
- if (!canSetAudioSessionActivity) {
- throw new Error("Could not activate the audio session");
- }
- adapterNode.connect(workletNode);
- workletNode.connect(audioContext.destination);
- recorder.connect(adapterNode);
- recorder.start();
- audioContext.resume();
-}
-```
-
-## Properties
-
-It has no own properties but inherits from [`AudioNode`](/docs/core/audio-node).
-
-## Methods
-
-It has no own methods but inherits from [`AudioNode`](/docs/core/audio-node).
-
-## Known Issue
-
-It might happen that the worklet's side effect is not visible on the UI (when using the UIRuntime kind). For example, you may have an animated style that depends on a shared value modified in the worklet.
-This happens because the microtask queue is not always flushed properly after the worklet runs.
-
-To work around this issue, add this line at the end of your worklet callback function:
-
-```ts
-requestAnimationFrame(() => {});
-```
-
-This ensures that the microtask queue is flushed and your UI is updated properly. Be aware that this might have performance implications, which is why it is not included by default.
-Use it only after confirming that your worklet's side effects are not visible on the UI.
diff --git a/packages/audiodocs/static/raw/worklets/worklet-processing-node.md b/packages/audiodocs/static/raw/worklets/worklet-processing-node.md
deleted file mode 100644
index 6b36e139f..000000000
--- a/packages/audiodocs/static/raw/worklets/worklet-processing-node.md
+++ /dev/null
@@ -1,160 +0,0 @@
-# WorkletProcessingNode
-
-> **Warning**
->
-> This node is dependent on `react-native-worklets` and you need to install them in order to use this node. Refer to [getting-started page](/docs/fundamentals/getting-started#possible-additional-dependencies) for more info.
-
-The `WorkletProcessingNode` interface represents a node in the audio processing graph that can process audio using a worklet function. Unlike [`WorkletNode`](/docs/worklets/worklet-node) which only provides read-only access to audio data, `WorkletProcessingNode` allows you to modify the audio signal by providing both input and output buffers.
-
-This node lets you execute a worklet that receives input audio data and produces output audio data, making it perfect for creating custom audio effects, filters, and processors. The worklet processes the exact number of frames provided by the audio system in each call.
-
-For more information about worklets, see our [Introduction to worklets](/docs/worklets/worklets-introduction).
-
-## Constructor
-
-```tsx
-constructor(
- context: BaseAudioContext,
- runtime: AudioWorkletRuntime,
- callback: (
-    inputData: Array<Float32Array>,
-    outputData: Array<Float32Array>,
- framesToProcess: number,
- currentTime: number
- ) => void)
-```
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createWorkletProcessingNode(worklet, workletRuntime)`](/docs/core/base-audio-context#createworkletprocessingnode-)
-
-## Example
-
-```tsx
-import { AudioContext, AudioRecorder } from 'react-native-audio-api';
-
-// This example shows how to create a simple gain effect using WorkletProcessingNode
-function App() {
- const recorder = new AudioRecorder({
- sampleRate: 16000,
- bufferLengthInSamples: 16000,
- });
-
- const audioContext = new AudioContext({ sampleRate: 16000 });
-
- // Create a simple gain worklet that multiplies the input by a gain value
- const gainWorklet = (
-    inputData: Array<Float32Array>,
-    outputData: Array<Float32Array>,
- framesToProcess: number,
- currentTime: number
- ) => {
- 'worklet';
- const gain = 0.5; // 50% volume
-
- for (let ch = 0; ch < inputData.length; ch++) {
- const input = inputData[ch];
- const output = outputData[ch];
-
- for (let i = 0; i < framesToProcess; i++) {
- output[i] = input[i] * gain;
- }
- }
- };
-
- const workletProcessingNode = audioContext.createWorkletProcessingNode(
- gainWorklet,
- 'AudioRuntime'
- );
- const adapterNode = audioContext.createRecorderAdapter();
-
- adapterNode.connect(workletProcessingNode);
- workletProcessingNode.connect(audioContext.destination);
- recorder.connect(adapterNode);
- recorder.start();
-}
-```
-
-## Worklet Parameters Explanation
-
-The worklet function receives four parameters:
-
-### `inputData: Array<Float32Array>`
-
-A two-dimensional array where:
-
-* First dimension represents the audio channel (0 = left, 1 = right for stereo)
-* Second dimension contains the input audio samples for that channel
-* You should **read** from these buffers to get the input audio data
-* The length of each `Float32Array` equals the `framesToProcess` parameter
-
-### `outputData: Array<Float32Array>`
-
-A two-dimensional array where:
-
-* First dimension represents the audio channel (0 = left, 1 = right for stereo)
-* Second dimension contains the output audio samples for that channel
-* You must **write** to these buffers to produce the processed audio output
-* The length of each `Float32Array` equals the `framesToProcess` parameter
-
-### `framesToProcess: number`
-
-The number of audio samples to process in this call. This determines how many samples you need to process in each channel's buffer. This value will be at most 128.
-
-### `currentTime: number`
-
-The current audio context time in seconds when this worklet call begins. This represents the absolute time since the audio context was created.
-
-## Audio Processing Pattern
-
-A typical WorkletProcessingNode worklet follows this pattern:
-
-```tsx
-const audioProcessor = (
-  inputData: Array<Float32Array>,
-  outputData: Array<Float32Array>,
- framesToProcess: number,
- currentTime: number
-) => {
- 'worklet';
-
- for (let channel = 0; channel < inputData.length; channel++) {
- const input = inputData[channel];
- const output = outputData[channel];
-
- for (let sample = 0; sample < framesToProcess; sample++) {
- // Process each sample
- // Read from: input[sample]
- // Write to: output[sample]
- output[sample] = processAudioSample(input[sample]);
- }
- }
-};
-```
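-
-The same loop shape can be exercised outside the audio graph with plain typed arrays, which is handy for unit-testing a processor before wiring it into a node. In this sketch, `processAudioSample` is a made-up soft clipper used purely for illustration; it is not part of the library:
-
-```tsx
-// Hypothetical per-sample processor: a soft clipper (illustrative only).
-const processAudioSample = (x: number): number => Math.tanh(x);
-
-const framesToProcess = 128;
-const inputData: Array<Float32Array> = [new Float32Array(framesToProcess)];
-const outputData: Array<Float32Array> = [new Float32Array(framesToProcess)];
-
-// Fill the input with a loud ramp so the clipping is visible.
-for (let i = 0; i < framesToProcess; i++) {
-  inputData[0][i] = (i / framesToProcess) * 2; // 0 .. ~2, well above full scale
-}
-
-// Same pattern as the worklet above, run synchronously on the JS thread.
-for (let channel = 0; channel < inputData.length; channel++) {
-  for (let sample = 0; sample < framesToProcess; sample++) {
-    outputData[channel][sample] = processAudioSample(inputData[channel][sample]);
-  }
-}
-
-console.log(outputData[0][127] < 1); // tanh keeps every sample inside (-1, 1)
-```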
-
-## Properties
-
-It has no properties of its own, but inherits all properties from [`AudioNode`](/docs/core/audio-node).
-
-## Methods
-
-It has no methods of its own, but inherits all methods from [`AudioNode`](/docs/core/audio-node).
-
-## Performance Considerations
-
-Since `WorkletProcessingNode` processes audio in real-time, performance is critical:
-
-* Keep worklet functions lightweight and efficient
-* Avoid complex calculations that could cause audio dropouts
-* Process samples in-place when possible
-* Consider using lookup tables for expensive operations
-* Use `AudioRuntime` for better performance, `UIRuntime` for UI integration
-* Test on target devices to ensure smooth audio processing
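-
-The lookup-table suggestion above can be sketched in plain TypeScript. The table size and the use of linear interpolation are illustrative choices, not library API:
-
-```tsx
-// Precompute one cycle of a sine wave (illustrative table size).
-const TABLE_SIZE = 1024;
-const sineTable = new Float32Array(TABLE_SIZE + 1); // extra entry simplifies interpolation
-for (let i = 0; i <= TABLE_SIZE; i++) {
-  sineTable[i] = Math.sin((2 * Math.PI * i) / TABLE_SIZE);
-}
-
-// Approximate sin(2π * phase) for any phase, with linear interpolation.
-const fastSin = (phase: number): number => {
-  const pos = (phase - Math.floor(phase)) * TABLE_SIZE; // wrap phase into [0, 1)
-  const idx = Math.floor(pos);
-  const frac = pos - idx;
-  return sineTable[idx] + frac * (sineTable[idx + 1] - sineTable[idx]);
-};
-
-// The approximation stays close to Math.sin while avoiding a
-// trigonometric call per sample in the hot loop.
-console.log(Math.abs(fastSin(0.123) - Math.sin(2 * Math.PI * 0.123)) < 1e-4);
-```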
-
-## Use Cases
-
-* **Audio Effects**: Reverb, delay, distortion, filters
-* **Audio Processing**: Compression, limiting, normalization
-* **Real-time Filters**: EQ, high-pass, low-pass, band-pass filters
-* **Custom Algorithms**: Noise reduction, pitch shifting, spectral processing
-* **Signal Analysis**: Feature extraction while passing audio through
diff --git a/packages/audiodocs/static/raw/worklets/worklet-source-node.md b/packages/audiodocs/static/raw/worklets/worklet-source-node.md
deleted file mode 100644
index 49efe2f04..000000000
--- a/packages/audiodocs/static/raw/worklets/worklet-source-node.md
+++ /dev/null
@@ -1,162 +0,0 @@
-# WorkletSourceNode
-
-> **Warning**
->
-> This node depends on `react-native-worklets`, which you need to install in order to use it. Refer to the [getting-started page](/docs/fundamentals/getting-started#possible-additional-dependencies) for more info.
-
-The `WorkletSourceNode` interface represents a scheduled source node in the audio processing graph that generates audio using a worklet function. It extends [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node), providing the ability to start and stop audio generation at specific times.
-
-This node allows you to generate audio procedurally using JavaScript worklets, making it perfect for creating custom synthesizers, audio generators, or real-time audio effects that produce sound rather than just process it.
-
-For more information about worklets, see our [Introduction to worklets](/docs/worklets/worklets-introduction).
-
-## Constructor
-
-```tsx
-constructor(
- context: BaseAudioContext,
- runtime: AudioWorkletRuntime,
- callback: (
-    audioData: Array<Float32Array>,
- framesToProcess: number,
- currentTime: number,
- startOffset: number
- ) => void)
-```
-
-Or by using `BaseAudioContext` factory method:
-[`BaseAudioContext.createWorkletSourceNode(worklet, workletRuntime)`](/docs/core/base-audio-context#createworkletsourcenode-)
-
-## Example
-
-```tsx
-import { AudioContext } from 'react-native-audio-api';
-
-function App() {
- const audioContext = new AudioContext({ sampleRate: 44100 });
-
- // Create a simple sine wave generator worklet
- const sineWaveWorklet = (
-    audioData: Array<Float32Array>,
- framesToProcess: number,
- currentTime: number,
- startOffset: number
- ) => {
- 'worklet';
-
- const frequency = 440; // A4 note
- const sampleRate = 44100;
-
- // Generate audio for each channel
- for (let channel = 0; channel < audioData.length; channel++) {
- for (let i = 0; i < framesToProcess; i++) {
- // Calculate the absolute time for this sample
- const sampleTime = currentTime + (startOffset + i) / sampleRate;
-
- // Generate sine wave
- const phase = 2 * Math.PI * frequency * sampleTime;
- audioData[channel][i] = Math.sin(phase) * 0.5; // 50% volume
- }
- }
- };
-
- const workletSourceNode = audioContext.createWorkletSourceNode(
- sineWaveWorklet,
- 'AudioRuntime'
- );
-
- // Connect to output and start playback
- workletSourceNode.connect(audioContext.destination);
- workletSourceNode.start(); // Start immediately
-
- // Stop after 2 seconds
- setTimeout(() => {
- workletSourceNode.stop();
- }, 2000);
-}
-```
-
-## Worklet Parameters Explanation
-
-The worklet function receives four parameters:
-
-### `audioData: Array<Float32Array>`
-
-A two-dimensional array where:
-
-* First dimension represents the audio channel (0 = left, 1 = right for stereo)
-* Second dimension contains the audio samples for that channel
-* You must **write** audio data to these buffers to generate sound
-* The length of each `Float32Array` equals `framesToProcess`
-
-### `framesToProcess: number`
-
-The number of audio samples to generate in this call. This determines how many samples you need to fill in each channel's buffer.
-
-### `currentTime: number`
-
-The current audio context time in seconds when this worklet call begins. This represents the absolute time since the audio context was created.
-
-### `startOffset: number`
-
-The sample offset within the current processing block where your generated audio should begin. This is particularly important for precise timing when the node starts or stops mid-block.
-
-## Understanding `startOffset` and `currentTime`
-
-The relationship between `currentTime` and `startOffset` is crucial for generating continuous audio:
-
-```tsx
-const worklet = (audioData, framesToProcess, currentTime, startOffset) => {
- 'worklet';
-
-  const sampleRate = 44100;
-  const frequency = 440; // A4 note
-
- for (let i = 0; i < framesToProcess; i++) {
- // Calculate the exact time for this sample
- const sampleTime = currentTime + (startOffset + i) / sampleRate;
-
- // Use sampleTime for phase calculations, LFOs, envelopes, etc.
- const phase = 2 * Math.PI * frequency * sampleTime;
- audioData[0][i] = Math.sin(phase);
- }
-};
-```
-
-**Key points:**
-
-* `currentTime` represents the audio context time at the start of the processing block
-* `startOffset` tells you which sample within the block to start generating audio
-* The absolute time for sample `i` is: `currentTime + (startOffset + i) / sampleRate`
-* This ensures phase continuity and precise timing across processing blocks
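-
-The timing formula above can be checked outside the audio graph by simulating two consecutive 128-frame blocks. The numbers (440 Hz, 44100 Hz) mirror the examples on this page; the `generateBlock` helper is a stand-in for what the audio system does when it invokes your worklet:
-
-```tsx
-const sampleRate = 44100;
-const frequency = 440;
-const framesToProcess = 128;
-
-// Simulate the audio system calling the generator for one block,
-// advancing currentTime by one block duration between calls.
-const generateBlock = (currentTime: number, startOffset: number): Float32Array => {
-  const out = new Float32Array(framesToProcess);
-  for (let i = 0; i < framesToProcess; i++) {
-    const sampleTime = currentTime + (startOffset + i) / sampleRate;
-    out[i] = Math.sin(2 * Math.PI * frequency * sampleTime);
-  }
-  return out;
-};
-
-const blockA = generateBlock(0, 0);
-const blockB = generateBlock(framesToProcess / sampleRate, 0);
-
-// The first sample of block B continues exactly where block A left off,
-// so there is no audible click at the block boundary.
-const expectedNext = Math.sin((2 * Math.PI * frequency * framesToProcess) / sampleRate);
-console.log(Math.abs(blockB[0] - expectedNext) < 1e-9);
-```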
-
-## Properties
-
-It has no properties of its own, but inherits all properties from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node).
-
-## Methods
-
-It has no methods of its own, but inherits all methods from [`AudioScheduledSourceNode`](/docs/sources/audio-scheduled-source-node).
-
-## Performance Considerations
-
-Since `WorkletSourceNode` generates audio in real-time, performance is critical:
-
-* Keep worklet functions lightweight and efficient
-* Avoid complex calculations that could cause audio dropouts
-* Consider using lookup tables for expensive operations like trigonometric functions
-* Test on target devices to ensure smooth audio generation
-* Use `AudioRuntime` for better performance, `UIRuntime` for UI integration
-
-## Use Cases
-
-* **Custom Synthesizers**: Generate waveforms, apply modulation, create complex timbres
-* **Audio Generators**: White noise, pink noise, test tones, sweeps
-* **Procedural Audio**: Dynamic soundscapes, generative music
-* **Real-time Effects**: Audio that responds to user input or external data
-* **Educational Tools**: Demonstrate audio synthesis concepts interactively
-
-## See Also
-
-* [WorkletNode](/docs/worklets/worklet-node) - For processing existing audio with worklets
-* [Introduction to worklets](/docs/worklets/worklets-introduction) - Understanding worklet fundamentals
-* [AudioScheduledSourceNode](/docs/sources/audio-scheduled-source-node) - Base class for scheduled sources
diff --git a/packages/audiodocs/yarn.lock b/packages/audiodocs/yarn.lock
index 8fb8fa8da..c01835b04 100644
--- a/packages/audiodocs/yarn.lock
+++ b/packages/audiodocs/yarn.lock
@@ -3424,10 +3424,10 @@
"@svgr/plugin-jsx" "8.1.0"
"@svgr/plugin-svgo" "8.1.0"
-"@swmansion/t-rex-ui@1.3.0":
- version "1.3.0"
- resolved "https://registry.yarnpkg.com/@swmansion/t-rex-ui/-/t-rex-ui-1.3.0.tgz#6d2a95b3eab58fd738ad885bb509f05bed59a9fc"
- integrity sha512-xh3DYPUYekM+0rmBQtTPyDsy+QaapjRf+7zknqssdc9UsK9yg4MFa/nMEVprNwH179ysufAhVdq9dgp68KvrIw==
+"@swmansion/t-rex-ui@1.3.1":
+ version "1.3.1"
+ resolved "https://registry.yarnpkg.com/@swmansion/t-rex-ui/-/t-rex-ui-1.3.1.tgz#b16e29f15f863f1e340217592138317829583949"
+ integrity sha512-F4MoFk5Mc9vaAtl4zUFPAwMjOlYTBmxsIk+imeZHcx9m1gx3uCzFYR+svOXMxWcp9ghh1Rj/D8Zf6rFu/Ocg7A==
dependencies:
"@docusaurus/core" "3.9.2"
"@docusaurus/module-type-aliases" "3.9.2"