[yaml] add RunInference support with VertexAI #33406
Conversation
Signed-off-by: Jeffrey Kinard <[email protected]>
R: @robertwb
R: @damccorm
Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control. If you'd like to restart, comment
Thanks, this'll be great!
def underlying_handler(self):
  return self._handler

def preprocess_fn(self, row):
This won't actually get called until runtime, right? Perhaps we should call this default_preprocess_fn() and have it return a lambda (or raise an error), and then change postprocess_fn for symmetry.
Great suggestion, I ended up making the preprocess parameter required on the VertexAI handler, so the error is redundant in this case... but I think moving forward it could be useful, and the refactor in general is cleaner.
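A minimal sketch of what that refactor could look like (the class name, signatures, and error message here are illustrative assumptions, not the exact Beam code):

```python
# Sketch of the discussed refactor: the provider exposes a
# default_preprocess_fn() that fails loudly unless the user supplies one.
from typing import Any, Callable, Optional


class ModelHandlerProvider:
  def __init__(
      self, handler, preprocess: Optional[Callable[[Any], Any]] = None):
    self._handler = handler
    self._preprocess = preprocess

  def underlying_handler(self):
    return self._handler

  def default_preprocess_fn(self) -> Callable[[Any], Any]:
    # Raise at construction time rather than at runtime when no sensible
    # default exists for this handler.
    raise ValueError(
        'This handler requires a preprocess function to be specified.')

  def preprocess_fn(self) -> Callable[[Any], Any]:
    return self._preprocess or self.default_preprocess_fn()
```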
Signed-off-by: Jeffrey Kinard <[email protected]>
pass

def inference_output_type(self):
  return RowTypeConstraint.from_fields([('example', Any), ('inference', Any),
Could we ask the handler for the type of inference? (Presumably the example type is that of the input as well.) Some handlers may not be able to provide this, but some can.
We don't generally do this today, but we could for a very limited set of handlers. The ones I can think of with predictable output types are Hugging Face pipelines and vLLM; the rest all depend on the model.
I think this is probably worth doing when we can. It probably requires some slight modification to the PredictionResult type, though, and might be worth scoping into a follow-on PR.
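To make the tradeoff concrete, here is a small sketch (RowTypeConstraint.from_fields is Beam's real schema API; the provider classes and the narrowed types are hypothetical): the generic fallback keeps both fields as Any, while a handler with a predictable output could override it.

```python
from typing import Any

from apache_beam.typehints.row_type import RowTypeConstraint


class GenericModelHandlerProvider:
  def inference_output_type(self):
    # Both fields stay Any: the example type mirrors the (unknown) input
    # element, and the inference type depends on the model behind the handler.
    return RowTypeConstraint.from_fields(
        [('example', Any), ('inference', Any)])


class TextGenerationLikeProvider(GenericModelHandlerProvider):
  def inference_output_type(self):
    # Hypothetical: a handler with a predictable output (e.g. text
    # generation) could narrow the schema. Not Beam's current behavior.
    return RowTypeConstraint.from_fields(
        [('example', str), ('inference', str)])
```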
Signed-off-by: Jeffrey Kinard <[email protected]>
Signed-off-by: Jeffrey Kinard <[email protected]>
Signed-off-by: Jeffrey Kinard <[email protected]>
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## master #33406 +/- ##
============================================
+ Coverage 57.38% 57.42% +0.03%
Complexity 1475 1475
============================================
Files 973 980 +7
Lines 154978 155311 +333
Branches 1076 1076
============================================
+ Hits 88939 89187 +248
- Misses 63829 63908 +79
- Partials 2210 2216 +6
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
This LGTM, just had one more suggestion. Looks like linting is also broken.
Signed-off-by: Jeffrey Kinard <[email protected]>
Force-pushed from bf92cd8 to e8bc920 (compare).
@damccorm Failing tests now appear to be unrelated.
Thanks!
This PR adds support for calling RunInference, with preliminary support for VertexAI through the use of the VertexAIModelHandlerJSON ModelHandler.

In order to implement more robust validation for the transform, the SafeLineLoader was refactored out of yaml_transform.py into a separate yaml_utils.py file to avoid import cycles.