
[yaml] add RunInference support with VertexAI #33406

Merged
6 commits merged into apache:master on Dec 26, 2024

Conversation

Contributor

@Polber Polber commented Dec 17, 2024

This PR adds support for calling RunInference from Beam YAML, with preliminary support for VertexAI through the VertexAIModelHandlerJSON ModelHandler.

To implement more robust validation for the transform, SafeLineLoader was refactored out of yaml_transform.py into a separate yaml_utils.py file to avoid import cycles.
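
For orientation, a minimal sketch of the Python-level API that the new YAML transform delegates to is shown below; it is illustrative only, the endpoint, project, and location values are placeholders, and the YAML-level configuration surface itself is defined in yaml_ml.py.

# Illustrative sketch only: the plain-Python RunInference +
# VertexAIModelHandlerJSON usage that the new YAML transform wraps.
# Endpoint, project, and location values are placeholders.
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.vertex_ai_inference import VertexAIModelHandlerJSON

model_handler = VertexAIModelHandlerJSON(
    endpoint_id='1234567890',   # placeholder Vertex AI endpoint ID
    project='my-gcp-project',   # placeholder GCP project
    location='us-central1')     # placeholder region

with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | beam.Create([{'prompt': 'hello'}])  # toy JSON-serializable example
      | RunInference(model_handler))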


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch): Build python source distribution and wheels, Python tests, Java tests, Go tests (status badges)

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

Contributor Author

Polber commented Dec 17, 2024

R: @robertwb

Contributor Author

Polber commented Dec 17, 2024

R: @damccorm

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment assign set of reviewers

@robertwb robertwb left a comment

Thanks, this'll be great!

sdks/python/apache_beam/yaml/yaml_ml.py (outdated; resolved)
  def underlying_handler(self):
    return self._handler

  def preprocess_fn(self, row):
Contributor

This won't actually get called until runtime, right? Perhaps we should call this default_preprocess_fn() and have it return a lambda (or raise an error). And then change postprocess_fn for symmetry.

Contributor Author

@Polber Polber Dec 18, 2024

Great suggestion. I ended up making the preprocess parameter required on the VertexAI handler, so the error is redundant in this case, but I think moving forward it could be useful, and the refactor in general is cleaner.
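
To make the shape of that refactor concrete, here is a rough sketch of the default_preprocess_fn() pattern discussed in this thread; the class and parameter names are illustrative rather than the merged code.

# Rough sketch of the default_preprocess_fn() pattern discussed above;
# names are illustrative, not the merged implementation.
class ModelHandlerProvider:
  def __init__(self, handler, preprocess=None):
    self._handler = handler
    self._preprocess = preprocess

  def underlying_handler(self):
    return self._handler

  def default_preprocess_fn(self):
    # Not invoked until pipeline construction/runtime, so raising here
    # only surfaces an error when no preprocess callable was supplied.
    raise ValueError(
        'Model handler requires a preprocess function to be specified.')

  def preprocess_fn(self, row):
    fn = self._preprocess or self.default_preprocess_fn()
    return fn(row)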

sdks/python/apache_beam/yaml/yaml_ml.py (six more review threads, resolved)
@liferoad liferoad added this to the 2.62.0 Release milestone Dec 18, 2024
Signed-off-by: Jeffrey Kinard <[email protected]>
sdks/python/apache_beam/yaml/yaml_ml.py (outdated; resolved)
    pass

  def inference_output_type(self):
    return RowTypeConstraint.from_fields([('example', Any), ('inference', Any),
Contributor

Could we ask the handler for the type of inference? (Presumably the example type is that of the input as well.) Some handlers may not be able to provide this, but some can.

Contributor

We don't generally expose that today, but we could for a very limited set of handlers. The ones I can think of with predictable output types are Hugging Face pipelines and vLLM; the rest are all dependent on the model.

I think this is probably worth doing when we can. It probably requires some slight modification to the PredictionResult type, though, and might be worth scoping into a follow-on PR.
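
For reference, a small sketch of how the fixed output row type is declared today; the generic Any on the inference field is what a follow-on PR could replace with a handler-reported type.

# Sketch of the fixed output schema pattern; a follow-on PR could swap
# the generic Any on 'inference' for a handler-reported type.
from typing import Any
from apache_beam.typehints.row_type import RowTypeConstraint

output_type = RowTypeConstraint.from_fields([
    ('example', Any),    # the input element echoed back
    ('inference', Any),  # the prediction produced by the model handler
])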

sdks/python/apache_beam/yaml/yaml_ml.py (four more review threads, resolved)
Signed-off-by: Jeffrey Kinard <[email protected]>
Signed-off-by: Jeffrey Kinard <[email protected]>
Signed-off-by: Jeffrey Kinard <[email protected]>

codecov bot commented Dec 21, 2024

Codecov Report

Attention: Patch coverage is 55.39568% with 62 lines in your changes missing coverage. Please review.

Project coverage is 57.42%. Comparing base (d7502fa) to head (e8bc920).
Report is 52 commits behind head on master.

Files with missing lines                    Patch %   Lines
sdks/python/apache_beam/yaml/yaml_ml.py     39.21%    62 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff              @@
##             master   #33406      +/-   ##
============================================
+ Coverage     57.38%   57.42%   +0.03%     
  Complexity     1475     1475              
============================================
  Files           973      980       +7     
  Lines        154978   155311     +333     
  Branches       1076     1076              
============================================
+ Hits          88939    89187     +248     
- Misses        63829    63908      +79     
- Partials       2210     2216       +6     
Flag      Coverage Δ
python    81.22% <55.39%> (-0.05%) ⬇️

Flags with carried forward coverage won't be shown.

@damccorm damccorm left a comment

This LGTM, just had one more suggestion. It looks like linting is also broken.

sdks/python/apache_beam/yaml/yaml_ml.py (outdated; resolved)
Signed-off-by: Jeffrey Kinard <[email protected]>
Contributor Author

Polber commented Dec 26, 2024

@damccorm Failing tests now appear to be unrelated.

@damccorm damccorm merged commit 7e077dc into apache:master Dec 26, 2024
94 of 96 checks passed
@damccorm
Contributor

Thanks!
