BUG: MBM sums up MOO outputs when given a single-objective acquisition function #2519
Comments
Actually, to my (very limited) knowledge, isn't this how MOBO is supposed to work? If you look at the BoTorch documentation for MOBO, especially where the model is initialized, you find:
(in our case we are discussing …) Edit: This is the case for …
So, the part about constructing a multi-output surrogate model is correct; that should indeed happen. The issue is scalarizing the outputs from the model using an arbitrary sum. We do support …
The issue is doing this silently, using arbitrary weights (well, they're just 1 for maximization and -1 for minimization) with acquisition functions that are not designed for multi-objective optimization.
The same issue happens with the legacy models as well. It is a problem with the way we extract the objective weights from the optimization config in …. For legacy single-objective models, these get passed to …. For MBM with single-objective acquisition functions, they are passed through `ScalarizedPosteriorTransform`.
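For illustration, here is a minimal, self-contained sketch of what that transform does to a multi-output model. The toy data, model, and shapes are made up for illustration; only `ScalarizedPosteriorTransform` itself is from the issue:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.objective import ScalarizedPosteriorTransform

# Toy two-outcome model (arbitrary data, for illustration only).
train_X = torch.rand(8, 2, dtype=torch.double)
train_Y = torch.rand(8, 2, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)

# Weights like the ones Ax derives from the optimization config:
# +1 per maximized metric, -1 per minimized metric.
transform = ScalarizedPosteriorTransform(
    weights=torch.tensor([1.0, 1.0], dtype=torch.double)
)

test_X = torch.rand(4, 2, dtype=torch.double)
print(model.posterior(test_X).mean.shape)  # torch.Size([4, 2]) -- two outcomes
print(model.posterior(test_X, posterior_transform=transform).mean.shape)
# torch.Size([4, 1]) -- the two outcomes are silently collapsed into their sum
```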
Summary: This diff adds a validation that `botorch_acqf_class` is an MO acqf when `TorchOptConfig.is_moo is True`. This should eliminate bugs like facebook#2519, which can happen since the downstream code will otherwise assume SOO. Note that this only solves the MBM side of the bug; legacy code will still have the buggy behavior. Differential Revision: D64563992
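Roughly, the validation could look like the sketch below. This is illustrative only, not the actual diff; it assumes BoTorch's multi-objective base classes are a suitable marker for MO acqfs, and the function name and error message are made up:

```python
from botorch.acquisition.multi_objective.base import (
    MultiObjectiveAnalyticAcquisitionFunction,
    MultiObjectiveMCAcquisitionFunction,
)


def _validate_moo_acqf(botorch_acqf_class: type, is_moo: bool) -> None:
    """Raise if an MOO problem is paired with a single-objective acqf class."""
    if is_moo and not issubclass(
        botorch_acqf_class,
        (MultiObjectiveAnalyticAcquisitionFunction, MultiObjectiveMCAcquisitionFunction),
    ):
        raise ValueError(
            f"{botorch_acqf_class.__name__} is a single-objective acquisition "
            "function, but the optimization config is multi-objective. Use a "
            "multi-objective acqf or scalarize the objectives explicitly."
        )
```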
If the generation strategy uses MBM with a single-objective acquisition function on an MOO problem, the outputs are simply summed together in the acquisition function using a `ScalarizedPosteriorTransform`.

Discovered while investigating #2514
Repro:
Notebook for Meta employees: N5489742
Set up the problem using AxClient
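A minimal sketch of such a setup. The search space, metric names, trial counts, and evaluation function are made up for illustration; the key part is an MOO config combined with the single-objective `qNoisyExpectedImprovement`:

```python
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient, ObjectiveProperties
from botorch.acquisition.monte_carlo import qNoisyExpectedImprovement

# MBM step configured with a *single-objective* acquisition function.
gs = GenerationStrategy(
    steps=[
        GenerationStep(model=Models.SOBOL, num_trials=5),
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            model_kwargs={"botorch_acqf_class": qNoisyExpectedImprovement},
        ),
    ]
)

ax_client = AxClient(generation_strategy=gs)
ax_client.create_experiment(
    name="moo_with_soo_acqf",
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
        {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
    ],
    # Two objectives -> this is an MOO problem.
    objectives={
        "a": ObjectiveProperties(minimize=False),
        "b": ObjectiveProperties(minimize=False),
    },
)

for _ in range(6):  # 5 Sobol trials + 1 MBM trial
    params, idx = ax_client.get_next_trial()
    ax_client.complete_trial(
        trial_index=idx,
        raw_data={"a": (params["x1"] ** 2, 0.0), "b": (params["x2"] ** 2, 0.0)},
    )
```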
This runs fine and generates candidates.
Investigate arguments to acquisition function
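One way to inspect those kwargs (a sketch using `unittest.mock`; it assumes the setup above and that MBM constructs the acqf with keyword arguments):

```python
from unittest import mock

# Patch the acqf constructor so we can capture what MBM passes to it.
with mock.patch.object(qNoisyExpectedImprovement, "__init__") as mock_init:
    try:
        ax_client.get_next_trial()
    except Exception as e:
        print(e)  # the mocked __init__ breaks generation -- expected, ignore it

print(mock_init.call_args.kwargs["posterior_transform"])
```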
This will raise an exception. Ignore it and check kwargs.
This is a `ScalarizedPosteriorTransform` with weights `tensor([1., 1.], dtype=torch.float64)`.

We can check the opt config to verify that this is not an experiment setup issue.
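For example (assuming the `ax_client` from the setup above):

```python
opt_config = ax_client.experiment.optimization_config
print(type(opt_config).__name__)  # MultiObjectiveOptimizationConfig
print(opt_config.objective)       # MultiObjective over metrics "a" and "b"
```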
Expected behavior
We can't do MOO using a single-objective acquisition function. We should not be silently scalarizing the outputs; it should raise an informative error.