release: v1.5.2 #184
Conversation
Approving the PR
For packages that have known test failures, how do we ensure there is no regression? Also, if I understand correctly, the staleness report says the latest relevant upstream version is older than the actual latest upstream release. Why is that?
We would look into the failures and ensure they are the documented failures we know about. This is <1% of the test suite, but it still needs to be fixed.
I agree downgrades should be avoided when possible; that is one of the tenets of the image. In this case, notebook actually isn't getting upgraded, it's staying the same. But between this patch and the previous one, notebook's owners changed the dependencies of the already released version, making our current closure incompatible. So this fix is finding the closure that would have been found previously if notebook had had the correct dependencies from the start.
"Relevant" is the key word here: for a patch release, the latest relevant upstream version refers to the latest patch for the given minor version. This is because during a patch release we can only upgrade a package to a newer patch. We are only concerned with versions we could actually upgrade to, so in a minor release the latest relevant version would be the latest 4.1.*.
I don't think the percentage of the test suite that is passing is relevant. We aim to have zero regressions, and a test failure is supposed to indicate a regression. If these failures are not indicating regressions, how do we know that, and how do we know that the functionality these tests are supposed to verify is not in fact regressing? Is there an alternative way to verify the functionality of these packages?
Understood. Just so we are on the same page: we are knowingly regressing on features. The JupyterLab 4.1 features are being removed in this patch release, which may impact users. I'm not sure if there is any process in place for notifying SageMaker customers about this ahead of time. If we expect this kind of problem to happen in the future, we should brainstorm alternative strategies for navigating it. For example, we could communicate with the
Thanks for clarifying. I notice that the staleness report also says the current version is 4.0.12, which is technically wrong. Just curious, did you have to do any manual hacks to get this working? If so, is that process documented? Asking in case there is an opportunity to avoid manual effort for similar situations in the future.
In my initial PR for the staleness report generator, I had three different columns: the latest relevant patch, minor, and major versions available on conda-forge. I got feedback that this might confuse folks, so we now dynamically display a single column based on the type of release. Since this is a patch version release, it will only look for the latest relevant patch version in 4.0.x and ignore any version that is >=4.1.0. This is because, even if 4.1 exists, a patch release of SM distribution shouldn't bump the minor version.
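For illustration, here is a minimal sketch of that filtering logic (the function name and signature are mine, not the actual report generator code), assuming versions follow PEP 440:

```python
from packaging.version import Version

def latest_relevant_version(current: str, candidates: list[str], release_type: str) -> str:
    """Return the newest candidate we could actually move to for this release type.

    For a patch release, only candidates on the same major.minor are relevant;
    for a minor release, only candidates on the same major.
    """
    cur = Version(current)
    versions = [Version(c) for c in candidates]
    if release_type == "patch":
        relevant = [v for v in versions if (v.major, v.minor) == (cur.major, cur.minor)]
    elif release_type == "minor":
        relevant = [v for v in versions if v.major == cur.major]
    else:  # major release: every newer version is fair game
        relevant = versions
    return str(max(relevant + [cur]))

# With jupyterlab at 4.0.12, a patch release ignores everything >= 4.1.0:
print(latest_relevant_version("4.0.12", ["4.0.13", "4.1.0", "4.1.1"], "patch"))  # 4.0.13
print(latest_relevant_version("4.0.12", ["4.0.13", "4.1.0", "4.1.1"], "minor"))  # 4.1.1
```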
scipy/pandas are known failures. I ran the autogluon test separately and it passed; it seems the GPU autogluon test can run into memory issues when running the full suite. We might have to drop to one pytest worker, or bump up the hardware.
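A possible way to rerun with a single worker (illustrative sketch only; it assumes pytest-xdist is installed, and the test path is a placeholder rather than the actual path in this repository):

```python
import pytest

# Rerun the suspect GPU tests with a single pytest-xdist worker to rule out
# memory pressure from parallel workers.
exit_code = pytest.main(["-n", "1", "test/test_autogluon.py"])
print("pytest exit code:", exit_code)
```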
Staleness Report: 1.5.2 (gpu)
Staleness Report: 1.5.2 (cpu)
Rerunning the full test suite, autogluon succeeds.
Saw a few additional test failures for pandas, but upon investigation it was because 2/29 wasn't recognized as a valid datetime 🐸 pandas-dev/pandas#57672
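For context, a minimal reproduction of the behavior described in that issue (my own sketch, not test code from this repository): without an explicit year, "%m/%d" parsing falls back to a default year that is not a leap year, so Feb 29 is rejected.

```python
import pandas as pd

# Sketch of the behavior referenced in pandas-dev/pandas#57672: parsing a
# month/day string with no year uses a default, non-leap year, so "2/29"
# fails as an invalid date.
try:
    print(pd.to_datetime("2/29", format="%m/%d"))
except ValueError as exc:
    print(f"parse failed: {exc}")
```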
Issue #, if available:
Description of changes: Releasing 1.5.2.
NOTE: we are doing a minor upgrade of the notebook version, from 7.0.7 -> 7.1.1.
Context: previously, notebook depended on jupyterlab >=4.0.7,<5. Thus, we had jupyterlab 4.1.1 and notebook 7.0.7 in image v5.0.1.
Now the notebook maintainers have gone and patched the dependencies of that already-released version (link), so notebook 7.0.7 requires jupyterlab <4.1.0a0. We are manually upgrading notebook to 7.1 to fix this; see the sketch below.
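To make the dependency change concrete, here is a small sketch using the packaging library (illustrative only; the pins are taken from the description above) showing why the old closure no longer resolves:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# notebook 7.0.7's jupyterlab constraint as originally published, versus the
# repinned constraint described above.
original_pin = SpecifierSet(">=4.0.7,<5")
repinned_pin = SpecifierSet("<4.1.0a0")

jupyterlab = Version("4.1.1")
print(jupyterlab in original_pin)  # True:  notebook 7.0.7 + jupyterlab 4.1.1 was a valid closure
print(jupyterlab in repinned_pin)  # False: that closure no longer resolves, hence the move to notebook 7.1
```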
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.