24 changes: 24 additions & 0 deletions bindings/python/fluss/__init__.pyi
@@ -790,6 +790,26 @@ class LogScanner:
or timeout expires.
"""
...
def to_arrow_batch_reader(self) -> pa.RecordBatchReader:
"""Create a lazy Arrow RecordBatchReader that reads until latest offsets.

Returns a ``pyarrow.RecordBatchReader`` that lazily polls batches one at
a time (streaming). Prefer this when you want to process batches without
holding the full result in memory at once.

Do not call ``poll_arrow`` / ``poll_record_batch`` on this scanner while
iterating the reader; they share the same underlying scanner state, and
overlapping consumption paths are not supported. Use one active
polling/consumption path at a time.

Requires a batch-based scanner (created with ``new_scan().create_record_batch_log_scanner()``).
You must call ``subscribe()``, ``subscribe_buckets()``, ``subscribe_partition()``,
or ``subscribe_partition_buckets()`` first.

Returns:
``pyarrow.RecordBatchReader`` yielding ``RecordBatch`` objects.
"""
...
def to_pandas(self) -> pd.DataFrame:
"""Convert all data to Pandas DataFrame.

@@ -802,6 +822,10 @@ class LogScanner:
def to_arrow(self) -> pa.Table:
"""Convert all data to Arrow Table.

Batches are collected in Rust and then combined into one table (no per-batch
Python iteration). Do not interleave with ``poll_arrow`` / ``poll_record_batch``
in the same subscription session; overlapping use is not supported.

Requires a batch-based scanner (created with ``new_scan().create_record_batch_log_scanner()``).
Reads from currently subscribed buckets until reaching their latest offsets.
