Add Inequality Constraints to SEBO #2938
Conversation
Thanks - if you don't mind fixing that one small linter nit, the rest LGTM!
Actually, there are a couple of other lint failures as well: https://github.com/facebook/Ax/actions/runs/11468956837/job/31951921724?pr=2938
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##             main    #2938      +/-   ##
==========================================
- Coverage   95.82%   95.82%    -0.01%
==========================================
  Files         503      503
  Lines       50792    50806      +14
==========================================
+ Hits        48671    48684      +13
- Misses       2121     2122       +1
Merged the changes from #2935 and then fixed the flake8 complaints. I went ahead and fixed everything flake8 complained about in the repo, rather than just the files I touched. I'm so used to relying on pre-commit that I always forget to lint manually when there's no hook.
@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Actually, you will want to make sure that you take the inequality constraints into account in get_batch_initial_conditions() when you call it inside _optimize_with_homotopy(). The naive solution would be to set it to …; adding something more involved would require a bit more thought.
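For reference, a minimal sketch (not the PR's code) of what constraint-aware initial conditions look like when delegated to BoTorch's stock initializer; the toy model, acquisition function, and the x0 + x1 >= 0.5 constraint are illustrative assumptions:

```python
import torch
from botorch.acquisition import qExpectedImprovement
from botorch.models import SingleTaskGP
from botorch.optim.initializers import gen_batch_initial_conditions

# Toy model and acquisition function standing in for SEBO's internals.
train_X = torch.rand(8, 3, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
acqf = qExpectedImprovement(model=model, best_f=train_Y.max())

bounds = torch.stack([torch.zeros(3), torch.ones(3)]).to(torch.double)
# BoTorch encodes "x0 + x1 >= 0.5" as an (indices, coefficients, rhs) tuple.
ineq = [(torch.tensor([0, 1]), torch.tensor([1.0, 1.0], dtype=torch.double), 0.5)]

ics = gen_batch_initial_conditions(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=4,
    raw_samples=64,
    inequality_constraints=ineq,  # raw candidates are drawn from the feasible polytope
)  # shape: num_restarts x q x d
```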
Is SEBO available with parameter constraints in the latest version of Ax?
No, this was closed because I did a force-push and it broke the connection to this repo. I hope to get this sorted out this week.
@Balandat Finally getting back to this. In order to reduce the behaviour change and stay backward compatible for people using SEBO + L0, I added pytorch/botorch#2636 to BoTorch; we can then use the BoTorch method to sample with constraints and concatenate the points from the Pareto front to those samples. With the L1 setup, I believe that changing the behaviour to also make use of the Pareto points + noise should help, so I shifted the logic such that the same initial batch conditions are used for both the L1 and L0 choices.
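A minimal sketch of that idea, under assumptions: `pareto_X` is a placeholder for the Pareto-front points Ax tracks, and the constraint is illustrative; this is not the PR diff itself.

```python
import torch
from botorch.utils.sampling import get_polytope_samples

d = 3
bounds = torch.stack([torch.zeros(d), torch.ones(d)]).to(torch.double)
# "x0 + x1 >= 0.5" in BoTorch's (indices, coefficients, rhs) form.
ineq = [(torch.tensor([0, 1]), torch.tensor([1.0, 1.0], dtype=torch.double), 0.5)]

# Feasible raw samples via BoTorch's polytope sampling utilities.
raw = get_polytope_samples(n=64, bounds=bounds, inequality_constraints=ineq)

# Perturbed copies of the (sparse) Pareto-front points; pareto_X is a placeholder.
pareto_X = torch.rand(5, d, dtype=torch.double)
noisy = (pareto_X + 0.01 * torch.randn_like(pareto_X)).clamp(bounds[0], bounds[1])
# Note: the perturbation can step outside the polytope; filter or project if needed.

# Shared initial conditions for both the L0 and L1 variants, shape (n, q=1, d).
batch_initial_conditions = torch.cat([raw, noisy], dim=0).unsqueeze(-2)
```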
The failing tests here are due to the PR now depending on pytorch/botorch#2636.
#2610) Summary:

## Motivation
See #2609

### Have you read the [Contributing Guidelines on pull requests](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md#pull-requests)?
Yes

Pull Request resolved: #2610

Test Plan: Basic testing of the code is easy; the challenge is working out what the knock-on implications might be: will this break people's code?

## Related PRs
facebook/Ax#2938

Reviewed By: Balandat
Differential Revision: D66102868
Pulled By: saitcakmak
fbshipit-source-id: b3491581a205b0fbe62edd670510e95f13e08177
Summary:

## Motivation
In order to get facebook/Ax#2938 over the line with initial candidate generation that obeys the constraints, we want to use the existing tooling within `botorch`. The hard-coded logic currently in Ax uses top-k selection to downselect the Sobol samples, so to make a change there that will not impact existing users, we first need to implement top-k downselection in `botorch`.

Pull Request resolved: #2636

Test Plan:
- [x] add tests for initialize_q_batch_topk

## Related PRs
facebook/Ax#2938. (#2610 was initially intended to play a part in this solution, but then I realized that the pattern I wanted to use was conflating repeats and the batch dimension.)

Reviewed By: Balandat
Differential Revision: D66413947
Pulled By: saitcakmak
fbshipit-source-id: 39e71f5cc0468d554419fa25dd545d9ee25289dc
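The top-k downselection being described, as a plain-torch sketch; `acq_value_fn` is a hypothetical scoring callable (not a botorch API), and the PR adds an equivalent helper, `initialize_q_batch_topk`, to botorch itself.

```python
import torch
from torch.quasirandom import SobolEngine

def topk_sobol_candidates(
    acq_value_fn, bounds: torch.Tensor, n: int, raw_samples: int
) -> torch.Tensor:
    """Draw Sobol samples within `bounds` and keep the `n` best-scoring ones."""
    d = bounds.shape[-1]
    sobol = SobolEngine(dimension=d, scramble=True)
    u = sobol.draw(raw_samples).to(bounds)       # samples in the unit cube
    X = bounds[0] + (bounds[1] - bounds[0]) * u  # rescale to the search bounds
    vals = acq_value_fn(X)                       # acquisition value per sample
    return X[vals.topk(n).indices]               # keep the top-n candidates
```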
@saitcakmak @Balandat, I think this is now ready to be upstreamed, now that the related BoTorch PRs have been merged. It should make no difference to existing users of L0 SEBO, but there is a difference for anyone who might be using L1 SEBO, as I moved the initial candidate selection based on the Pareto front to apply to both variants. I haven't benchmarked that change, but it seemed strange to me not to give already-sparse solutions to the L1 optimizer if doing so was deemed necessary for getting good performance with L0.
@saitcakmak I would really appreciate it if this could be looked at/merged before the holidays. The requested change is out of date now and can be cleared based on the work done in pytorch/botorch#2636, which lets us use hit-and-run polytope sampling so that the random samples satisfy the constraints.
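For illustration, a sketch of that hit-and-run sampling; note that this lower-level sampler takes constraints in matrix form (A, b) with A @ x <= b, unlike the tuple form above, and the specific constraint is again an assumption.

```python
import torch
from botorch.utils.sampling import HitAndRunPolytopeSampler

bounds = torch.stack([torch.zeros(3), torch.ones(3)]).to(torch.double)
# "x0 + x1 >= 0.5" rewritten as "-x0 - x1 <= -0.5" for the (A, b) convention.
A = torch.tensor([[-1.0, -1.0, 0.0]], dtype=torch.double)
b = torch.tensor([[-0.5]], dtype=torch.double)

sampler = HitAndRunPolytopeSampler(inequality_constraints=(A, b), bounds=bounds)
samples = sampler.draw(64)  # every sample satisfies the constraints
```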
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Sorry about the delay here @CompRhys, I'm at NeurIPS and so it's been a bit hectic. Sait can help get this over the finish line.
```python
ssd = search_space_digest
bounds = _tensorize(ssd.bounds).t()

optimizer_options["batch_initial_conditions"] = get_batch_initial_conditions(
```
Should this be set on optimizer_options_with_defaults? We now construct optimizer_options_with_defaults but don't pass it down to the optimizers. I think this will also address the pyre failure:

fbcode/ax/models/torch/botorch_modular/sebo.py:246:8 Undefined attribute [16]: Optional type has no attribute __setitem__
optimizer_options["batch_initial_conditions"] = get_batch_initial_conditions( | |
optimizer_options_with_defaults["batch_initial_conditions"] = get_batch_initial_conditions( |
For this, it looks like there is a pyre pre-commit hook, so I will look at adding it to the pre-commit hooks: https://github.com/facebook/pyre-check
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@saitcakmak merged this pull request in b259f1a.
Following pytorch/botorch#2588 being merged, there is now no reason to stop people from using inequality constraints with SEBO. This PR removes the ValueError check and adjusts the tests accordingly.
Connected to #2790
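For context, a hedged sketch of the constraint format that now flows through to SEBO instead of raising: BoTorch expresses linear inequality constraints as (indices, coefficients, rhs) tuples meaning sum(x[indices] * coefficients) >= rhs, so an Ax-style "<=" parameter constraint is sign-flipped on the way in (the constraint here is illustrative).

```python
import torch

# Ax-style parameter constraint "x0 + x1 <= 1.0", flipped into BoTorch's
# ">=" convention as "-x0 - x1 >= -1.0":
inequality_constraints = [
    (torch.tensor([0, 1]), torch.tensor([-1.0, -1.0]), -1.0)
]

# Checking feasibility of a batch of points against the constraint:
X = torch.rand(4, 3)
idx, coeffs, rhs = inequality_constraints[0]
feasible = (X[..., idx] * coeffs).sum(dim=-1) >= rhs  # boolean mask per point
```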