cc @SebastianAment re qLogEI having a "hole". The model actually seems fine here (thanks @esantorella for the great diagnostics), so this is probably just b/c the incumbent is so high (8.8638 in this case if I got that right from the other issue, by far the largest observed value).
As a first step I would recommend using qLogNoisyExpectedImprovement here, which usually has better numerical behavior.
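For concreteness, a minimal sketch of what that swap might look like, assuming a fitted SingleTaskMultiFidelityGP named model and training inputs train_X with the fidelity as the last column; the bounds and optimizer settings below are illustrative, not taken from the original report:

```python
# Hedged sketch: use qLogNoisyExpectedImprovement in place of qLogEI, which is
# usually better behaved numerically. `model` and `train_X` are assumed to exist.
import torch
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction
from botorch.acquisition.logei import qLogNoisyExpectedImprovement
from botorch.optim import optimize_acqf

d = train_X.shape[-1]  # full input dimension; fidelity assumed to be the last column

# qLogNEI works off the observed baseline points rather than a fixed incumbent value.
acqf = qLogNoisyExpectedImprovement(model=model, X_baseline=train_X)

# Fix the fidelity column (to 0, as in the report) and optimize the remaining columns.
ff_acqf = FixedFeatureAcquisitionFunction(
    acq_function=acqf, d=d, columns=[d - 1], values=[0.0]
)
bounds = torch.stack(
    [torch.zeros(d - 1, dtype=train_X.dtype), torch.ones(d - 1, dtype=train_X.dtype)]
)  # illustrative unit-cube bounds over the non-fixed columns
candidate, value = optimize_acqf(
    acq_function=ff_acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=256
)
```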
Thanks to ToennisStef for raising this in #2393.
🐛 Bug
I'm looking at an example with a SingleTaskMultiFidelityGP, evaluating acquisition values where both the x and the objective are at fidelities other than the highest fidelity. This produces NaN acquisition values and causes optimize_acqf to error out. While optimizing for a fidelity other than the highest may not make sense, this also happens when optimizing qMultiFidelityKnowledgeGradient for the highest fidelity. I'm seeing the following behavior:

- The posterior variance is sometimes initially computed as negative and then clipped to 1e-10. When computing it, I get gpytorch/distributions/multivariate_normal.py:319: NumericalWarning: Negative variance values detected. This is likely due to numerical instabilities. Rounding negative variances up to 1e-10.
- qLogEI returns NaN at the same locations where the posterior variance can be negative.
- optimize_acqf with a FixedFeatureAcquisitionFunction, fixing the fidelity to 0 and using qLogEI, errors out.
- Following the setup of the multi-fidelity tutorial, optimizing qMultiFidelityKnowledgeGradient for the highest fidelity also errors out.

What the posterior looks like:

acqf values if we were to just work with fidelity=0:
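Those values can be probed directly. A hedged sketch of how one might do that, assuming a fitted model named model and training data train_X/train_Y with a single design variable plus a trailing fidelity column (the grid and names are illustrative):

```python
# Hedged sketch: evaluate posterior variance and qLogEI on a grid at fidelity 0.
# `model`, `train_X`, and `train_Y` are assumed to exist; the grid is illustrative.
import torch
from botorch.acquisition.logei import qLogExpectedImprovement

x_grid = torch.linspace(0, 1, 101, dtype=train_X.dtype).unsqueeze(-1)  # design variable
X_fid0 = torch.cat([x_grid, torch.zeros_like(x_grid)], dim=-1)         # append fidelity = 0

with torch.no_grad():
    posterior = model.posterior(X_fid0)
    # This is where the gpytorch NumericalWarning about negative variances shows up.
    variance = posterior.variance.squeeze(-1)

    acqf = qLogExpectedImprovement(model=model, best_f=train_Y.max())
    acq_vals = acqf(X_fid0.unsqueeze(1))  # q-batches of size 1: shape (n, 1, d)

print("min posterior variance:", variance.min().item())
print("any NaN acquisition values:", torch.isnan(acq_vals).any().item())
```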
To reproduce
See the gist for the full code; it ends with the call whose stack trace is shown below.
Alternatively, skipping the cost function setup, the same error can be produced more simply with a snippet along the lines of the sketch below.
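A hedged sketch of what such a minimal setup might look like; the synthetic data, fidelity encoding, bounds, and optimizer settings here are illustrative assumptions, not the original code:

```python
# Hedged sketch of a minimal reproduction: fit a SingleTaskMultiFidelityGP, fix the
# fidelity feature to 0 via FixedFeatureAcquisitionFunction, and optimize qLogEI with
# optimize_acqf. All data and settings are illustrative assumptions.
import torch
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction
from botorch.acquisition.logei import qLogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskMultiFidelityGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.manual_seed(0)

# Synthetic training data: one design dimension plus a fidelity column in {0, 1}.
n = 20
x = torch.rand(n, 1, dtype=torch.double)
fid = torch.randint(0, 2, (n, 1)).to(torch.double)
train_X = torch.cat([x, fid], dim=-1)
train_Y = torch.sin(6 * x) + 0.1 * torch.randn(n, 1, dtype=torch.double)

model = SingleTaskMultiFidelityGP(train_X, train_Y, data_fidelities=[1])
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)

# qLogEI with the observed incumbent, then fix the fidelity column to 0.
acqf = qLogExpectedImprovement(model=model, best_f=train_Y.max())
ff_acqf = FixedFeatureAcquisitionFunction(
    acq_function=acqf, d=2, columns=[1], values=[0.0]
)

# Bounds over the remaining (design) dimension only.
bounds = torch.tensor([[0.0], [1.0]], dtype=torch.double)
candidate, value = optimize_acqf(
    acq_function=ff_acqf, bounds=bounds, q=1, num_restarts=10, raw_samples=128
)
```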
Stack trace/error message
Expected Behavior
Numerical inaccuracy is not uncommon in optimization; however, this typically should not lead to exceptions, since multi-restart optimization may allow for finding an optimum nonetheless. In this case, it is clear there is an optimum, so optimize_acqf should find it.

System information
Please complete the following information: