reflection - StreamingT2V API Documentation
Introduction
reflection is built on the StreamingT2V framework and provides tools for converting text input into video output in a streaming fashion. This documentation covers the reflection API, which lets developers convert text to video, manipulate video segments, and stream the resulting video.
Installation
To install the reflection project, you can use the following command:
pip install reflection-streamingt2v
Basic Concepts
Before using the API, it is essential to understand some basic concepts:
T2VContext: Manages the global state and configuration for text-to-video operations.
T2VStream: Represents a streaming session for text-to-video conversion.
TextToVideoProcessor: Processes text input and generates video segments.
VideoSegment: Represents a segment of video generated from a portion of text.
VideoOutput: Manages the output stream of the video.
API Reference
T2VContext
class reflection.T2VContext
The T2VContext class encapsulates the global state and configuration for the reflection project.
Methods:
__init__(config: dict): Initialize a new context with the given configuration.
dispose(): Dispose of the context and free its associated resources.
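Because dispose() should run even when an error interrupts processing, it can be convenient to wrap a context in Python's with statement. The helper below is a hypothetical sketch (managed_context is not part of the reflection API); it works with any object that exposes a dispose() method:

```python
from contextlib import contextmanager

@contextmanager
def managed_context(ctx):
    """Yield the context object and guarantee dispose() is called,
    even if the enclosed block raises an exception."""
    try:
        yield ctx
    finally:
        ctx.dispose()

# Intended usage (assumes the reflection package is installed):
# with managed_context(T2VContext({"resolution": "1080p"})) as context:
#     ...  # streaming or processing work
# # dispose() has already been called here, even on error
```

This mirrors the try/finally pattern the standard library uses for file handles, so a raised exception cannot leak the context's resources.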
T2VStream
class reflection.T2VStream
The T2VStream class represents a streaming session for text-to-video conversion.
Methods:
__init__(context: T2VContext): Create a new streaming session within the given context.
start(): Start the streaming session.
stop(): Stop the streaming session.
send_text(text: str): Send text input to the streaming session.
receive_video_segment() -> VideoSegment: Receive a video segment generated from the text input.
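Calling receive_video_segment() in a loop until it returns None is the natural way to consume a stream. That loop can be packaged as a generator; iter_segments below is a hypothetical helper, not part of the reflection API, and works with any object exposing a receive_video_segment() method:

```python
def iter_segments(stream):
    """Yield video segments from a T2VStream-like object until
    receive_video_segment() returns None (end of stream)."""
    while True:
        segment = stream.receive_video_segment()
        if segment is None:
            return
        yield segment

# Intended usage (assumes an active reflection stream):
# for segment in iter_segments(stream):
#     handle(segment)
```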
TextToVideoProcessor
class reflection.TextToVideoProcessor
The TextToVideoProcessor class processes text input and generates video segments.
Methods:
__init__(context: T2VContext): Create a new text-to-video processor within the given context.
process_text(text: str) -> VideoSegment: Process the given text and generate a video segment.
set_parameter(name: str, value: any): Set a parameter on the text-to-video processor.
get_parameter(name: str) -> any: Get the value of a parameter from the text-to-video processor.
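set_parameter and get_parameter can be combined to apply a batch of settings while remembering the previous values for later restoration. apply_parameters below is a hypothetical helper, not part of the reflection API; it works with any object exposing those two methods:

```python
def apply_parameters(processor, params):
    """Set each name/value pair on the processor and return a dict of
    the values they had before, so the caller can restore them."""
    previous = {name: processor.get_parameter(name) for name in params}
    for name, value in params.items():
        processor.set_parameter(name, value)
    return previous

# Intended usage (assumes a reflection processor):
# saved = apply_parameters(processor, {"resolution": "1080p"})
# ...  # generate at the new settings
# apply_parameters(processor, saved)  # restore
```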
VideoSegment
class reflection.VideoSegment
The VideoSegment class represents a segment of video generated from a portion of text.
Methods:
__init__(data: bytes, metadata: dict): Create a new video segment with the given data and metadata.
get_data() -> bytes: Get the binary data of the video segment.
get_metadata() -> dict: Get the metadata of the video segment.
VideoOutput
class reflection.VideoOutput
The VideoOutput class manages the output stream of the video.
Methods:
__init__(output_path: str): Create a new video output writing to the specified path.
write_segment(segment: VideoSegment): Write a video segment to the output.
close(): Close the video output stream.
Examples
Creating a Context and Starting a Stream
from reflection import T2VContext, T2VStream
# Create a new context with configuration
config = {"resolution": "1080p", "frame_rate": 30}
context = T2VContext(config)
# Create a new streaming session
stream = T2VStream(context)
# Start the streaming session
stream.start()
# Send text input to the streaming session
stream.send_text("Once upon a time, in a faraway land...")
# Receive a video segment generated from the text input
video_segment = stream.receive_video_segment()
# Stop the streaming session
stream.stop()
# Dispose of the context
context.dispose()
Processing Text to Video
from reflection import T2VContext, TextToVideoProcessor
# Create a new context with configuration
config = {"resolution": "720p", "frame_rate": 24}
context = T2VContext(config)
# Create a text-to-video processor
processor = TextToVideoProcessor(context)
# Process text input and generate a video segment
text = "The quick brown fox jumps over the lazy dog."
video_segment = processor.process_text(text)
# Get the video data and metadata
video_data = video_segment.get_data()
metadata = video_segment.get_metadata()
# Dispose of the context
context.dispose()
Writing Video Output
from reflection import T2VContext, T2VStream, VideoOutput
# Create a new context with configuration
config = {"resolution": "1080p", "frame_rate": 30}
context = T2VContext(config)
# Create a new streaming session
stream = T2VStream(context)
# Start the streaming session
stream.start()
# Create a video output
output = VideoOutput("output_video.mp4")
# Send text input to the streaming session
stream.send_text("Once upon a time, in a faraway land...")
# Receive and write video segments to the output
while True:
    video_segment = stream.receive_video_segment()
    if video_segment is None:
        break
    output.write_segment(video_segment)
# Stop the streaming session
stream.stop()
# Close the video output stream
output.close()
# Dispose of the context
context.dispose()
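In practice, the receive-and-write loop above should clean up the stream and the output even if generation fails partway through. The function below is a hedged sketch of that pattern; pump_segments is not part of the reflection API, and it works with any stream/output pair exposing the methods shown:

```python
def pump_segments(stream, output):
    """Drain segments from a started stream and write them to an
    output, guaranteeing stop()/close() run even on failure.
    Returns the number of segments written."""
    written = 0
    try:
        while True:
            segment = stream.receive_video_segment()
            if segment is None:  # end of stream
                break
            output.write_segment(segment)
            written += 1
    finally:
        stream.stop()
        output.close()
    return written

# Intended usage (assumes the reflection package is installed):
# stream.start()
# count = pump_segments(stream, VideoOutput("output_video.mp4"))
```

Separating the loop into a function also makes the cleanup behavior easy to test with stub objects.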
Contributing
Contributions to the reflection project are welcome. Please follow the standard GitHub workflow for contributing:
Fork the repository.
Create a new branch for your feature or bugfix.
Commit your changes and push them to your branch.
Create a pull request.
Ensure your code follows the project's coding standards and includes appropriate tests.
License
The reflection project is licensed under the MIT License. See the LICENSE file for more details.
This documentation provides an overview of the reflection API for streaming text-to-video conversion. For more detailed information and advanced usage, please refer to the source code and additional documentation in the project's repository.