
reflection - AudioCraft API Documentation


reflection is a project built on the AudioCraft framework that provides powerful tools for audio processing, synthesis, and analysis. This documentation covers the API provided by the reflection project, allowing developers to work with audio data and perform various transformations and effects.


Installation

To install the reflection project, first ensure you have the necessary dependencies, including AudioCraft, installed. Then install reflection with the following command:

pip install reflection-audiocraft

Basic Concepts

Before using the API, it is essential to understand some basic concepts:

  • AudioContext: Manages the global state for audio operations.

  • AudioModule: Represents a collection of audio nodes and processors.

  • AudioBuffer: Holds audio data in memory.

  • AudioNode: Represents an audio source, processor, or destination.

  • AudioProcessor: Performs custom audio processing.
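To make these concepts concrete, the sketch below shows the kind of data an AudioBuffer channel holds: a plain sequence of float samples in the range [-1.0, 1.0]. It uses only the standard library (no reflection imports) and generates one second of a 440 Hz sine tone at 44100 Hz — the sort of list you might later pass to set_channel_data.

```python
import math

SAMPLE_RATE = 44100  # samples per second
FREQUENCY = 440.0    # A4 pitch, in Hz

# One second of a sine tone: each sample is a float in [-1.0, 1.0]
samples = [math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE)
           for n in range(SAMPLE_RATE)]

print(len(samples))  # one second of audio at 44100 Hz -> 44100 samples
```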

API Reference


class reflection.AudioContext

The AudioContext class encapsulates the global state used by reflection.

  • Methods:

    • __init__(): Initialize a new audio context.

    • create_buffer(num_channels: int, length: int, sample_rate: float) -> AudioBuffer: Create a new audio buffer.

    • decode_audio_data(data: bytes) -> AudioBuffer: Decode audio data from a byte array into an audio buffer.

    • dispose(): Dispose of the context and free associated resources.
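To illustrate what decode_audio_data does conceptually — turning raw bytes into float samples — here is a minimal, self-contained sketch that decodes 16-bit little-endian PCM into floats in [-1.0, 1.0]. This is not reflection's actual decoder (which handles encoded formats); the helper name and layout are assumptions for illustration only.

```python
import struct

def pcm16_to_floats(data: bytes) -> list:
    """Decode 16-bit little-endian PCM bytes into floats in [-1.0, 1.0]."""
    count = len(data) // 2
    ints = struct.unpack("<%dh" % count, data[:count * 2])
    return [i / 32768.0 for i in ints]

# Two samples: the maximum negative value (-32768) and a positive one (16384)
raw = struct.pack("<2h", -32768, 16384)
print(pcm16_to_floats(raw))  # [-1.0, 0.5]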


class reflection.AudioModule

The AudioModule class represents a collection of audio nodes and processors.

  • Methods:

    • __init__(context: AudioContext): Create a new audio module within the given context.

    • add_node(node: AudioNode): Add an audio node to the module.

    • remove_node(node: AudioNode): Remove an audio node from the module.

    • connect_nodes(source: AudioNode, destination: AudioNode): Connect two audio nodes.

    • disconnect_nodes(source: AudioNode, destination: AudioNode): Disconnect two audio nodes.


class reflection.AudioBuffer

The AudioBuffer class holds audio data in memory.

  • Methods:

    • __init__(num_channels: int, length: int, sample_rate: float): Create a new audio buffer.

    • get_channel_data(channel: int) -> List[float]: Get the audio data for a specific channel.

    • set_channel_data(channel: int, data: List[float]): Set the audio data for a specific channel.

    • get_sample_rate() -> float: Get the sample rate of the audio buffer.
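Since get_channel_data and set_channel_data work with one channel at a time, while audio on disk is often interleaved ([L0, R0, L1, R1, ...]), a common task is converting between the two layouts. The pure-Python helpers below are illustrative assumptions, not part of the reflection API:

```python
def deinterleave(samples, num_channels):
    """Split interleaved samples [L0, R0, L1, R1, ...] into per-channel lists."""
    return [samples[c::num_channels] for c in range(num_channels)]

def interleave(channels):
    """Merge per-channel lists back into a single interleaved list."""
    return [s for frame in zip(*channels) for s in frame]

interleaved = [0.1, -0.1, 0.2, -0.2, 0.3, -0.3]
left, right = deinterleave(interleaved, 2)
print(left)   # [0.1, 0.2, 0.3]
print(right)  # [-0.1, -0.2, -0.3]
```

Each per-channel list could then be passed to set_channel_data on a buffer with the matching channel count and length.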


class reflection.AudioNode

The AudioNode class represents an audio source, processor, or destination.

  • Methods:

    • __init__(context: AudioContext): Create a new audio node within the given context.

    • connect(destination: AudioNode): Connect this node to another audio node.

    • disconnect(destination: AudioNode): Disconnect this node from another audio node.

    • start(): Start processing or generating audio.

    • stop(): Stop processing or generating audio.
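Connections between audio nodes form a directed graph. The toy class below is not reflection's implementation — just a sketch of the bookkeeping that connect and disconnect imply, where each node tracks the set of destinations it feeds:

```python
class Node:
    """Minimal illustration of connect/disconnect bookkeeping (not reflection's AudioNode)."""
    def __init__(self, name):
        self.name = name
        self.outputs = set()  # nodes this node sends audio to

    def connect(self, destination):
        self.outputs.add(destination)

    def disconnect(self, destination):
        self.outputs.discard(destination)

source = Node("source")
dest = Node("destination")
source.connect(dest)
print(dest in source.outputs)  # True
source.disconnect(dest)
print(dest in source.outputs)  # False
```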


class reflection.AudioProcessor

The AudioProcessor class performs custom audio processing.

  • Methods:

    • __init__(context: AudioContext): Create a new audio processor within the given context.

    • process(input_buffer: AudioBuffer, output_buffer: AudioBuffer): Process the input buffer and store the result in the output buffer.

    • set_parameter(name: str, value: float): Set a parameter for the audio processor.

    • get_parameter(name: str) -> float: Get the value of a parameter for the audio processor.
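The process method reads samples from an input buffer and writes transformed samples to an output buffer. To show the shape of that operation without depending on reflection, here is a pure-Python gain stage with clamping — the function name and clamping behavior are assumptions for illustration, not the library's semantics:

```python
def apply_gain(input_samples, gain):
    """Multiply every sample by a gain factor, clamping to [-1.0, 1.0]."""
    return [max(-1.0, min(1.0, s * gain)) for s in input_samples]

quiet = apply_gain([0.5, -0.25, 1.0], 0.5)
print(quiet)  # [0.25, -0.125, 0.5]

loud = apply_gain([0.5, -0.25, 1.0], 4.0)
print(loud)   # clipped to [1.0, -1.0, 1.0]
```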


Creating an Audio Context and Buffer

from reflection import AudioContext, AudioBuffer

# Create a new audio context
context = AudioContext()

# Create a new audio buffer with 2 channels, length of 44100 samples, and a sample rate of 44100 Hz
buffer = context.create_buffer(2, 44100, 44100.0)

# Get and set channel data
left_channel_data = buffer.get_channel_data(0)
right_channel_data = buffer.get_channel_data(1)
buffer.set_channel_data(0, [0.0] * 44100)
buffer.set_channel_data(1, [0.0] * 44100)

Connecting Audio Nodes

from reflection import AudioContext, AudioModule, AudioNode

# Create a new audio context
context = AudioContext()

# Create an audio module
module = AudioModule(context)

# Create audio nodes
source_node = AudioNode(context)
destination_node = AudioNode(context)

# Add nodes to the module
module.add_node(source_node)
module.add_node(destination_node)

# Connect the source node to the destination node
module.connect_nodes(source_node, destination_node)

# Start processing
source_node.start()

Custom Audio Processing

from reflection import AudioContext, AudioBuffer, AudioProcessor

# Create a new audio context
context = AudioContext()

# Create an audio processor
processor = AudioProcessor(context)

# Define input and output buffers
input_buffer = context.create_buffer(2, 44100, 44100.0)
output_buffer = context.create_buffer(2, 44100, 44100.0)

# Process the audio data
processor.process(input_buffer, output_buffer)

# Set and get parameters
processor.set_parameter("gain", 1.0)
gain = processor.get_parameter("gain")


Contributing

Contributions to the reflection project are welcome. Please follow the standard GitHub workflow for contributing:

  1. Fork the repository.

  2. Create a new branch for your feature or bugfix.

  3. Commit your changes and push them to your branch.

  4. Create a pull request.

Ensure your code follows the project's coding standards and includes appropriate tests.


License

The reflection project is licensed under the MIT License. See the LICENSE file for more details.

This documentation provides an overview of the reflection API for audio processing. For more detailed information and advanced usage, please refer to the source code and additional documentation in the project's repository.
