[data] Add Dataset.write_datasink_lazy to support intermediate outputs. #52094
base: master
Conversation
One point to be discussed is if and how this should handle the
This is quite nice, thanks for the contribution! Will take a quick look.
    self,
    datasink: Datasink,
    *,
    prefilter_fn: Optional[Callable[[Block], Block]] = None,
Could we remove the prefilter_fn for now to maintain consistency with write_datasink?
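For reference, a sketch of what the signature might look like after dropping `prefilter_fn`, modeled on the existing `write_datasink`. This is not part of the diff; the `ray_remote_args` and `concurrency` keywords are assumptions mirroring `write_datasink`, and the method is shown out of class for brevity:

```python
from typing import Any, Dict, Optional

from ray.data import Dataset
from ray.data.datasource import Datasink


def write_datasink_lazy(  # a Dataset method in the PR; shown standalone here
    self,
    datasink: Datasink,
    *,
    ray_remote_args: Optional[Dict[str, Any]] = None,
    concurrency: Optional[int] = None,
) -> Dataset:
    """Lazily write blocks to ``datasink`` and return the dataset unchanged."""
    ...
```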
def generate_lazy_write_fn(
    datasink_or_legacy_datasource: Union[Datasink, Datasource],
    prefilter_fn: Optional[Callable[[Block], Block]] = None,
    **write_args,
) -> Callable[[Iterator[Block], TaskContext], Iterator[Block]]:
    def fn(blocks: Iterator[Block], ctx: TaskContext) -> Iterator[Block]:
        """Writes the blocks to the given datasink or legacy datasource.

        Outputs the original blocks to be written."""
        # Create a copy of the iterator, so we can return the original blocks.
        it1, it2 = itertools.tee(blocks, 2)
        if isinstance(datasink_or_legacy_datasource, Datasink):
            # Apply the prefilter function to each block before writing.
            if prefilter_fn is not None:
                it1 = (prefilter_fn(block) if len(block) else block for block in it1)
            ctx.kwargs["_datasink_write_return"] = datasink_or_legacy_datasource.write(
                it1, ctx
            )
        else:
            datasink_or_legacy_datasource.write(it1, ctx, **write_args)

        return it2

    return fn
This isn't much different than generate_write_fn right?
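The structural difference is the `itertools.tee`: one copy of the block iterator is consumed by the write, and the untouched copy is returned so downstream operators keep the original data. A minimal, Ray-free sketch of that pass-through pattern (`fake_write` is a purely illustrative stand-in for `Datasink.write`):

```python
import itertools
from typing import Iterator, List


def fake_write(blocks: Iterator[List[int]]) -> int:
    """Illustrative stand-in for Datasink.write: consumes blocks, returns a result."""
    return sum(len(block) for block in blocks)


def lazy_write(blocks: Iterator[List[int]]) -> Iterator[List[int]]:
    # Duplicate the iterator: one copy feeds the write, the other passes through.
    to_write, passthrough = itertools.tee(blocks, 2)
    rows_written = fake_write(to_write)
    print(f"wrote {rows_written} rows")
    return passthrough


downstream = list(lazy_write(iter([[1, 2], [3, 4, 5]])))
assert downstream == [[1, 2], [3, 4, 5]]
```

One caveat: because the write fully consumes its copy before the pass-through copy is read, `itertools.tee` buffers every block of the task in memory for the duration of the write.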
# TODO: figure out how to handle on_write_complete()
return MapOperator.create(
yeah indeed, we'll want to figure this part out (and on_write_failed). @raulchen - any thoughts on how to handle this properly? main questions:
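Purely as a strawman for this discussion (not what the PR implements): per-task failures could be surfaced by wrapping the write call and invoking the datasink's failure hook, assuming it is acceptable to call that hook from inside a write task; `on_write_complete` is harder because it should run once after all tasks finish. Names reuse the snippet above:

```python
def fn(blocks: Iterator[Block], ctx: TaskContext) -> Iterator[Block]:
    it1, it2 = itertools.tee(blocks, 2)
    try:
        # Same write call as above, just wrapped so failures reach the datasink.
        ctx.kwargs["_datasink_write_return"] = datasink_or_legacy_datasource.write(
            it1, ctx
        )
    except Exception as e:
        # Assumption: invoking the failure hook from the worker task is acceptable
        # and the hook is safe to call more than once across task retries.
        datasink_or_legacy_datasource.on_write_failed(e)
        raise
    return it2
```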
Thanks for your contribution. This is a nice feature.
This PR proposes extending the Dataset API with `Dataset.write_datasink_lazy`. I'd be happy to discuss, welcome any comments, and can finalize the PR with additional tests and documentation if there's interest.

Why are these changes needed?
Some Ray Data pipelines benefit from writing intermediate outputs while continuing to process the same data downstream (see the usage sketch below).

This was partly inspired by https://deepseek-ai.github.io/smallpond/'s ability to handle multiple outputs using https://deepseek-ai.github.io/smallpond/generated/smallpond.dataframe.Session.wait.html#smallpond.dataframe.Session.wait. This PR takes a different approach by providing a write node that passes data through transparently for further processing.
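A usage sketch of the intermediate-output idea. `MyCheckpointDatasink` is a hypothetical user-defined `Datasink` (not provided by Ray or this PR), and the `write_datasink_lazy` call reflects the API proposed here, which may still change:

```python
import ray


def add_one(batch):
    batch["id"] = batch["id"] + 1
    return batch


ds = ray.data.range(100).map_batches(add_one)

# Proposed API: write an intermediate copy of the data, then keep processing
# the same blocks downstream. MyCheckpointDatasink() is a placeholder for any
# user-defined Datasink implementation.
ds = ds.write_datasink_lazy(MyCheckpointDatasink())

ds = ds.map_batches(add_one)
print(ds.take(5))
```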
Related issue number
Checks

- I've signed off every commit (`git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I added a new API, e.g. a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.