Out-of-design trials may cause repeated trials #1568
This shouldn't really be happening in the first place. Seems like the candidate generation produces outputs that slightly violate the rather tight constraints. From your constraint it looks like what you'd actually want is to just constrain the parameters to live on the simplex? Seems like the ideal solution would be to actually support equality constraints? cc @mpolson64
In principle, getting slightly out-of-bounds outputs doesn't matter for my use cases, as I can just try with a new trial. I'm willing to wait a little longer to get a valid output with these narrow constraints. However, getting stuck is a problem; I would need some way to nudge the generator to give me a new point.
Yes, supporting equality constraints would allow more flexible use cases without workarounds.
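For anyone who needs to cope with the slight violations in the meantime, one option is to project an out-of-bounds candidate back onto the simplex by clipping and renormalizing. The sketch below is plain numpy, not an Ax API; the helper name and the total of 10 are illustrative only (they match the reproducers later in this thread).

import numpy as np

# Hypothetical helper (plain numpy, not part of Ax): project a slightly
# infeasible composition back onto {x_i >= 0, sum(x) == total}.
def project_to_simplex(x, total=10.0):
    x = np.clip(np.asarray(x, dtype=float), 0.0, None)  # drop negative components
    s = x.sum()
    if s == 0.0:
        # Degenerate case: spread the total evenly across the components
        return np.full_like(x, total / len(x))
    return x * (total / s)  # rescale so the components sum to the total

# Example: a candidate that slightly overshoots x1 + x2 + x3 == 10
print(project_to_simplex([5.47, 4.54, 0.0]))  # sums to 10 up to float precision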
I'm having very similar issues, with the optimization getting stuck on an out-of-design sample and generating it over and over again as a new trial.
I believe that the Sobol fallback on stuck optimization, which @saitcakmak is planning to work on, will help with this, so assigning this to him.
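Until that fallback lands, here is a rough sketch of a manual workaround; it is illustrative only and assumes the AxClient setup from the reproducers below, where the search space is x1, x2 in [0, total] with x1 + x2 <= total. attach_trial is the existing Service API call used later in this thread; the sampling helper is hypothetical.

import numpy as np

# Hypothetical helper: draw a random point that satisfies x1 + x2 <= total,
# so it can be attached as a new trial when the generator keeps repeating itself.
def random_feasible_point(total, rng):
    x1 = rng.uniform(0.0, total)
    x2 = rng.uniform(0.0, total - x1)  # guarantees x1 + x2 <= total
    return {"x1": x1, "x2": x2}

# Usage sketch (assumes an existing ax_client and total as defined below):
# rng = np.random.default_rng(0)
# parameters, trial_index = ax_client.attach_trial(random_feasible_point(total, rng))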
I can give a small update on this if you put
I have another case where this is surfacing, even though the constraints are relatively lax compared to what @ailitw has shown. A stock Branin function is no problem, but to illustrate the idea behind reparameterizing a linear equality constraint as an inequality constraint per #727, I changed the function to add a term based on a hidden "x3", and then the issue shows up. See the Colab reproducer, with some of it copied below for provenance.

import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties
obj1_name = "branin"
def branin(x1, x2):
    y = float(
        (x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
        + 10
    )
    # Contrived way to incorporate the hidden x3 into the objective
    y = y * (1 + 0.1 * x1 * x2 * (1 - x1 - x2))
    return y
# Define total for compositional constraint, where x1 + x2 + x3 == total
total = 10.0
ax_client = AxClient(random_seed=42)
# note how lower bound of x1 is now 0.0 instead of -5.0, which is for the sake of illustrating a composition, where negative values wouldn't make sense
ax_client.create_experiment(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, total]},
        {"name": "x2", "type": "range", "bounds": [0.0, total]},
    ],
    objectives={
        obj1_name: ObjectiveProperties(minimize=True),
    },
    parameter_constraints=[
        f"x1 + x2 <= {total}",  # reparameterized compositional constraint, which is a type of sum constraint
    ],
)
for _ in range(21):
    parameterization, trial_index = ax_client.get_next_trial()
    # extract parameters
    x1 = parameterization["x1"]
    x2 = parameterization["x2"]
    results = branin(x1, x2)
    ax_client.complete_trial(trial_index=trial_index, raw_data=results)
best_parameters, metrics = ax_client.get_best_parameters()

I end up with the following:

...
[INFO 02-08 21:11:24] ax.service.ax_client: Generated new trial 19 with parameters {'x1': 5.465987, 'x2': 4.534013}.
[INFO 02-08 21:11:24] ax.service.ax_client: Completed trial 19 with data: {'branin': (-595.518689, None)}.
[INFO 02-08 21:11:24] ax.modelbridge.base: Leaving out out-of-design observations for arms: 16_0
[INFO 02-08 21:11:24] ax.modelbridge.torch: The observations are identical to the last set of observations used to fit the model. Skipping model fitting.
[INFO 02-08 21:11:30] ax.service.ax_client: Generated new trial 20 with parameters {'x1': 5.465987, 'x2': 4.534013}.
[INFO 02-08 21:11:30] ax.service.ax_client: Completed trial 20 with data: {'branin': (-595.518689, None)}.
[INFO 02-08 21:11:30] ax.modelbridge.base: Leaving out out-of-design observations for arms: 16_0

In each case, the constraint is only violated slightly.
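Since the repeated trials in the log above are literally identical, one can at least detect the loop in the Service API and stop (or switch to some fallback) instead of re-evaluating the same point. This is only a sketch that reuses the ax_client and branin from the reproducer above; the early-exit logic is not an Ax feature.

# Sketch: stop when get_next_trial proposes the same parameterization
# twice in a row, as happens in the log above.
previous_params = None
for _ in range(21):
    parameterization, trial_index = ax_client.get_next_trial()
    if parameterization == previous_params:
        print(f"Trial {trial_index} repeats {parameterization}; stopping early.")
        break
    previous_params = parameterization
    x1 = parameterization["x1"]
    x2 = parameterization["x2"]
    ax_client.complete_trial(trial_index=trial_index, raw_data=branin(x1, x2))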
This is the best workaround I could come up with so far, since I'm not sure how to adjust the parameterization associated with the current trial in the Service API, and I wasn't sure if marking the trial as failed or abandoned and then adding a point effectively right next to it might throw things off. This isn't ideal, because it means I attach the same trial twice, once with the original invalid parameters and then again with adjusted parameters.

import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties
obj1_name = "branin"
def branin3(x1, x2, x3):
    y = float(
        (x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
        + 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
        + 10
    )
    # Contrived way to incorporate x3 into the objective
    y = y * (1 + 0.1 * x1 * x2 * x3)
    return y
total = 10.0
ax_client = AxClient(random_seed=42)
ax_client.create_experiment(
    parameters=[
        {"name": "x1", "type": "range", "bounds": [0.0, total]},
        {"name": "x2", "type": "range", "bounds": [0.0, total]},
    ],
    objectives={
        obj1_name: ObjectiveProperties(minimize=True),
    },
    parameter_constraints=[
        f"x1 + x2 <= {total}",  # compositional constraint
    ],
)
for _ in range(21):
    parameterization, trial_index = ax_client.get_next_trial()
    # Calculate x3 based on the compositional constraint, x1 + x2 + x3 == total
    x1 = parameterization["x1"]
    x2 = parameterization["x2"]
    x3 = total - (x1 + x2)
    results = branin3(x1, x2, x3)
    ax_client.complete_trial(trial_index=trial_index, raw_data=results)
    # If x1 + x2 is slightly greater than total within a certain tolerance, adjust it
    if x1 + x2 > total:
        excess = (x1 + x2) - total
        # Adjust x1 and x2 proportionally to their current values
        # (capture the sum before modifying x1 so both adjustments use the same denominator)
        current_sum = x1 + x2
        x1 -= excess * (x1 / current_sum)
        x2 -= excess * (x2 / current_sum)
        x3 = total - (x1 + x2)
        results = branin3(x1, x2, x3)
        parameterization, trial_index = ax_client.attach_trial({"x1": x1, "x2": x2})
        ax_client.complete_trial(trial_index=trial_index, raw_data=results)
best_parameters, metrics = ax_client.get_best_parameters()
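An alternative sketch that avoids attaching essentially the same point twice: evaluate the objective at the rescaled, feasible composition but report it against the original trial, accepting that the stored parameters then differ very slightly from the evaluated ones when the violation is tiny. Same names and assumptions as the workaround above; this is only an illustration, not a recommended Ax pattern.

# Sketch: complete the original trial with the objective evaluated at a
# rescaled, feasible composition instead of attaching a second trial.
for _ in range(21):
    parameterization, trial_index = ax_client.get_next_trial()
    x1 = parameterization["x1"]
    x2 = parameterization["x2"]
    if x1 + x2 > total:
        # Slight constraint violation from the generator: rescale so that
        # x1 + x2 == total exactly (and hence x3 == 0)
        scale = total / (x1 + x2)
        x1, x2 = x1 * scale, x2 * scale
    x3 = total - (x1 + x2)
    ax_client.complete_trial(trial_index=trial_index, raw_data=branin3(x1, x2, x3))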
I have a setting where parameter constraints and the number of digits make the search space quite narrow, and as a result some trials from ax_client.get_next_trial() end up being out-of-design. Whenever that happens, all subsequent trials end up being the exact same point and I'm unable to get new trials. I suspect this has something to do with the filtering of out-of-design trials: when those points are filtered out, the acquisition function optimization completes exactly as in the last step that produced the out-of-design trial. I have tried marking the trials as abandoned with ax_client.abandon_trial(trial_index=trial_index), but it seems abandoned trials are also filtered so that out-of-design points are not included. Below is a minimal example to reproduce the issue: