
test pr build #4768

Open

wants to merge 16 commits into base: default

Conversation

zhaoqizqwang
Contributor

Issue #, if available:

Description of changes:

Testing done:

Merge Checklist

Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request.

  • I have verified that my PR does not contain any new notebook/s which demonstrate a SageMaker functionality already showcased by another existing notebook in the repository
  • I have read the CONTRIBUTING doc and adhered to the guidelines regarding folder placement, notebook naming convention and example notebook best practices
  • I have updated the necessary documentation, including the README of the appropriate folder as well as the index.rst file
  • I have tested my notebook(s) and ensured they run end-to-end
  • I have linted my notebook(s) and code using python3 -m black -l 100 {path}/{notebook-name}.ipynb (one way to script this across a folder is sketched below)
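A minimal sketch of scripting that lint step over a folder of notebooks, assuming black is installed with notebook support (pip install "black[jupyter]") and using a hypothetical folder name:

import pathlib
import subprocess
import sys


def lint_notebooks(folder: str) -> None:
    # Run black with a 100-character line length on every notebook under
    # the folder, mirroring the checklist command for each .ipynb found.
    for nb in sorted(pathlib.Path(folder).rglob("*.ipynb")):
        subprocess.run(
            [sys.executable, "-m", "black", "-l", "100", str(nb)],
            check=True,
        )


lint_notebooks("sagemaker-mlflow")  # hypothetical folder name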

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

viclzhu and others added 16 commits June 24, 2024 15:22

(aws#4678)

* Update SMP v2 notebooks to use latest PT2.3.1-TSM2.4.0 release.

* Update SMP v2 shared_scripts

* Update minimum sagemaker pysdk version to 2.224
* tutorials-after-initial-feedback

Added descriptive text to make the notebooks stand on their own.

* move athena notebook into dedicated folder

* renamed athena end2end notebooks

* moved pyspark notebook into dedicated directory

* minor change: consistent directory naming convention

* Added overview, headers, and explanatory text

Tested the notebook end to end. Added more context for processing jobs and cleaning up. The output is visible in the cells.

* Added overview, headers, explanatory text

Also added troubleshooting note from further testing.

* fix directory locations for new notebooks

* clear notebook outputs

* added integration for ci test results

* updated formatting with black-nb

* update athena notebook: fix parse predictions

* fixed ci integration for pyspark-etl-training notebook

---------

Co-authored-by: Janosch Woschitz <[email protected]>

* Add SageMaker MLflow examples

* Add badges

* Add MLflow setup notebook; upgrade SageMaker Python SDK for deployment notebook

* Linting

* More linting changes

---------

Co-authored-by: Bobby Lindsey <[email protected]>

…e tuning" (https://sim.amazon.com/issues/ML-16440) (aws#4657)

* initial commit of using step decorator for bedrock fine tuning

* ran black command on the notebook

* Added CI badges

* Added CI badges

* fixed typo in notebook title

---------

Co-authored-by: Ashish Rawat <[email protected]>
Co-authored-by: Zhaoqi <[email protected]>

* Deleted 17 duplicate notebooks (aws#4685)

* Updated README, removed broken links and fixed markdown (aws#4687)

* New Folder Structure Implementation - Archived remaining geospatial example notebooks (aws#4691)

* Archived remaining geospatial example notebooks

* Removed geospatial from README.md

* Archived remaining workshop notebooks (aws#4692)

* Archived outdated example notebooks between 1-90 views (aws#4693)

---------

Co-authored-by: jsmul <[email protected]>
This reverts commit 970d88e due to broken blog links

* adding notebook for forecast to canvas workshop

* formatting the notebook using black

Fixed bucket names and external links. No change to underlying code or formatting.

Co-authored-by: sage-maker <[email protected]>

* SageMaker FasterAutoscaling Llama3-8B TGI, real-time endpoints

* Moved trigger autoscaling to shell script. Removed shell=True in subprocess.Popen

---------

Co-authored-by: Aditi Sharma <[email protected]>

* initial commit of using step decorator for bedrock fine tuning

* ran black command on the notebook

* Added CI badges

* Added CI badges

* fixed typo in notebook title

* added comments and reviewer feedback

---------

Co-authored-by: Ashish Rawat <[email protected]>
Co-authored-by: Zhaoqi <[email protected]>

* Removed bug where automatic permission attachment errors

* Adds notebook for monitoring llm with multiple eval libraries

---------

Co-authored-by: Brent Friedman <[email protected]>
Co-authored-by: nileshvd <[email protected]>

Update installed package version to fix broken notebook

Check out this pull request on ReviewNB

See visual diffs & provide feedback on Jupyter Notebooks.



# LLMPerf requires AWS Creds as ENV variables along with endpoint name
def trigger_auto_scaling(creds, region, endpoint_name, num_concurrent_requests):
    # Set environment variables
    os.environ["AWS_ACCESS_KEY_ID"] = creds.access_key
Collaborator

Recommendation generated by Amazon CodeGuru Reviewer.

Your code attempts to override an environment variable that is reserved by the Lambda runtime environment. This can lead to unexpected behavior and might break the execution of your Lambda function.


Similar issue at line numbers 15, 16, and 17.
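
One way to address this recommendation is sketched below under assumptions not stated in the PR: the shell script name is a placeholder and the credential attributes follow the creds.access_key pattern in the excerpt. The credentials are passed only to the child process that runs the benchmark, via subprocess's env mapping, rather than mutating os.environ for the whole runtime.

import os
import subprocess


def trigger_auto_scaling(creds, region, endpoint_name, num_concurrent_requests):
    # Build an environment for the child process only; the parent
    # runtime's reserved variables are left untouched.
    child_env = os.environ.copy()
    child_env.update(
        {
            "AWS_ACCESS_KEY_ID": creds.access_key,
            "AWS_SECRET_ACCESS_KEY": creds.secret_key,
            "AWS_SESSION_TOKEN": creds.token or "",
            "AWS_REGION": region,
        }
    )
    # "trigger_autoscaling.sh" is a hypothetical name standing in for the
    # shell script mentioned in the commit history; no shell=True is needed.
    subprocess.run(
        ["bash", "trigger_autoscaling.sh", endpoint_name, str(num_concurrent_requests)],
        env=child_env,
        check=True,
    )

This keeps the notebook's own process environment unchanged while the benchmark subprocess still sees the credentials it needs.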

logger.error("Could not read file.")
return {}

if len(lines) == 0:
Collaborator

Recommendation generated by Amazon CodeGuru Reviewer.

To check if a container or sequence (string, list, tuple) is empty, use if not val. Do not compare its length using if len(val) == 0.

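Applied to the excerpt above, the idiomatic check looks like the sketch below; the surrounding function is a hypothetical reconstruction (loosely named after the "fix parse predictions" commit), not the notebook's exact code.

import logging

logger = logging.getLogger(__name__)


def parse_predictions(path):
    # Hypothetical reconstruction of the reviewed helper.
    try:
        with open(path) as f:
            lines = f.readlines()
    except OSError:
        logger.error("Could not read file.")
        return {}

    if not lines:  # truthiness check instead of len(lines) == 0
        return {}

    return {i: line.strip() for i, line in enumerate(lines)}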
