[GENERAL SUPPORT]: Running into user warning exception stating an objective was not 'observed' #2794
Comments
Hello there! Could you also provide your implementation for …?
Yes! I attached the two functions below:
Thanks! Your … If it does, can you also provide the full stack trace of the error?
I believe I have the most up-to-date version and the issue is still happening. I can still run my trials fine, but I don't know whether this will affect optimization. Here is the trace I'm seeing:
This is what I'm importing as well:
Thank you for the logs! The warning is being logged here: https://github.com/facebook/Ax/blob/main/ax/modelbridge/cross_validation.py#L433-L469. The … That being said, the exception you're seeing is coming from https://github.com/facebook/Ax/blob/main/ax/modelbridge/torch.py#L348. I would add a breakpoint on that exception and inspect the value of X.
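Concretely, the suggested breakpoint can be a one-line, temporary edit to the locally installed copy of Ax (a fragment to paste into that file, not standalone code; `X` is whatever the exception at the linked lines refers to in your version):

```python
# Temporary, local-only debugging edit inside ax/modelbridge/torch.py, placed
# just before the exception at the lines linked above. Remove it when done.
breakpoint()  # then inspect the value interactively, e.g. `p X`, `p X.shape`
```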
Thanks for getting back to me, @Cesar-Cardoso! How would I inspect the value of X? In the above example I only fed it one trial with an X value of 1.5 and a Y value of 2.5; I'm not sure how this would throw an exception. Also, I'm considering getting rid of the thresholds for both of these objectives. Would this affect the model if I force it to only run 100 trials per experiment? If that were the case, at the end of every experiment would it just come down to evaluating …?
The simplest way is probably to just add a print statement in https://github.com/facebook/Ax/blob/main/ax/modelbridge/torch.py#L348-L349 and see what you get when this exception is logged, and go from there. But again, this is all happening in our analytics logic, which is only there to help you assess model performance and debug. I'm not sure I understand your second question well. Yes, eliminating the thresholds will affect modeling, as more of the search space will be considered, including points in the Pareto front. And yes, attaching the parameters and outcomes from the first 100 trials and then running 100 more should yield the same results as running them all at once.
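A minimal Service API sketch of that warm-starting step (with placeholder parameter/metric names `x`, `y1`, `y2` rather than the names from the original notebook):

```python
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="warm_started_experiment",
    parameters=[{"name": "x", "type": "range", "bounds": [1.0, 10.0]}],
    objectives={
        "y1": ObjectiveProperties(minimize=False),
        "y2": ObjectiveProperties(minimize=False),
    },
)

# Placeholder data standing in for the batch of already-evaluated trials.
previously_run_trials = [
    ({"x": 1.5}, {"y1": 2.5, "y2": 0.7}),
]

# Re-attach each previously evaluated trial along with its observed outcomes.
for params, outcomes in previously_run_trials:
    _, trial_index = ax_client.attach_trial(parameters=params)
    ax_client.complete_trial(trial_index=trial_index, raw_data=outcomes)

# Trials generated from here on are informed by the attached observations.
parameters, trial_index = ax_client.get_next_trial()
```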
Okay, trying to complete a trial gave these additional messages. I will try to add in the print line as you mentioned, @Cesar-Cardoso, and see what might be going on:
Update:
So all of the exception/warning messages go away if I start with two initial trials instead of one. I'm wondering whether the optimization will continue to work if I only use one initial trial.
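One way to make that initialization explicit, as a sketch (assuming your Ax version accepts the `choose_generation_strategy_kwargs` passthrough on `create_experiment`; parameter and metric names are placeholders):

```python
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="two_init_trials",
    parameters=[{"name": "x", "type": "range", "bounds": [1.0, 10.0]}],
    objectives={
        "y1": ObjectiveProperties(minimize=False),
        "y2": ObjectiveProperties(minimize=False),
    },
    # Ask the auto-selected generation strategy for two quasi-random (Sobol)
    # trials before it hands generation over to the GP-based model, so the
    # surrogate is never fit or cross-validated on a single observation.
    choose_generation_strategy_kwargs={"num_initialization_trials": 2},
)
```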
Also, is there a smarter way to limit my parameter search space to only be in increments of 0.5, or to just one decimal place? I've considered changing to choice parameters and doing something like np.arange(1, 10.5, 0.5).tolist(), but was wondering whether a different model would be more effective if I made all my parameters like this. To clarify my question: … Thank you!
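A sketch of the ordered choice-parameter approach described above (hypothetical names `x1` / `y1`; `is_ordered=True` keeps the step values from being treated as unordered categories):

```python
import numpy as np
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="half_step_search_space",
    parameters=[
        {
            "name": "x1",
            "type": "choice",
            "values": np.arange(1, 10.5, 0.5).tolist(),  # 1.0, 1.5, ..., 10.0
            "value_type": "float",
            "is_ordered": True,
        },
    ],
    objectives={"y1": ObjectiveProperties(minimize=False)},
)
```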
Question
Hello,
I'm trying to run a MOBO experiment using the Service API and ran into an issue. Everything seems to be running properly, but warnings are being thrown, which makes me wonder where the problem is coming from.
The specific message is:
I've attached my code below for further context. This takes place in a Jupyter notebook: the user manually inputs an initial reference trial, and the rest of the experiment is run through a user dialog for ten trials. The surrogate is a SingleTaskGP and the acquisition function is qNEHVI (not sure if it's implemented correctly).
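One possible way to pin the surrogate to `SingleTaskGP` and the acquisition function to qNEHVI is through an explicit `GenerationStrategy`; the sketch below assumes current Ax/BoTorch import paths and is not necessarily how the notebook in question does it:

```python
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.surrogate import Surrogate
from ax.service.ax_client import AxClient
from botorch.acquisition.multi_objective.monte_carlo import (
    qNoisyExpectedHypervolumeImprovement,
)
from botorch.models import SingleTaskGP

generation_strategy = GenerationStrategy(
    steps=[
        # Quasi-random initialization before any GP is fit.
        GenerationStep(model=Models.SOBOL, num_trials=2),
        # Modular BoTorch step with the surrogate and acquisition function pinned.
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,  # no limit; used for the rest of the experiment
            model_kwargs={
                "surrogate": Surrogate(botorch_model_class=SingleTaskGP),
                "botorch_acqf_class": qNoisyExpectedHypervolumeImprovement,
            },
        ),
    ]
)
ax_client = AxClient(generation_strategy=generation_strategy)
```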