[Bug] Implicit assumption of double precision can cause failures when single precision is used #2596
NOTE: I have verified locally that changing the line of code that I mentioned under "Additional context" solves the problem.
Yep, that makes sense.
That would be very welcome!

Great, then I'll draft the PR :)
Great. Btw, have you been using BoTorch more generally with MPS on Mac? I have tried this in the past, but a lot of operations that we (and gpytorch) use under the hood weren't supported by MPS at the time; curious if that has changed more recently.
I am currently trying to bring GPU support to a package that I co-develop (happy to share the link if shameless self-advertisement is fine 😆) that uses BoTorch in the backend. This issue here was the first one I encountered, and I hadn't investigated in more detail until now.
@Balandat I found another occurrence of the same issue:

```python
import numpy as np
import torch
from botorch.optim.utils.numpy_utils import set_tensors_from_ndarray_1d
from torch import tensor

torch.set_default_device("mps")

tensors = (tensor([-2.2532]),)
array = np.asarray([-2.25321913], dtype=np.float64)
set_tensors_from_ndarray_1d(tensors, array)
```

This fails inside `set_tensors_from_ndarray_1d` (`botorch/optim/utils/numpy_utils.py`, lines 113 to 128 in 9d37e90).
The error happens in the last line, and replacing that line with `torch.as_tensor(...)`, or changing the default value of the argument to a somewhat whacky lambda expression, `as_tensor: Callable[[ndarray], Tensor] = lambda x: torch.as_tensor(x, dtype=torch.get_default_dtype())`, seems to solve the issue (at least on the BoTorch version I pointed out here).
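For illustration, the crux is how `torch.as_tensor` treats a float64 ndarray; this sketch runs on any machine, no MPS required:

```python
import numpy as np
import torch

arr = np.asarray([-2.25321913], dtype=np.float64)

# torch.as_tensor preserves the ndarray's dtype, so the result is float64
# regardless of the current default dtype (float32 here):
print(torch.as_tensor(arr).dtype)  # torch.float64

# Passing the default dtype explicitly casts during conversion, which keeps
# float32-only set-ups such as MPS working:
print(torch.as_tensor(arr, dtype=torch.get_default_dtype()).dtype)  # torch.float32
```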
It seems like there are several places where there is an implicit assumption of double precision rather than the default dtype. Should I just create small PRs whenever I find these, or is there a better way to go forward?
The main reason this implicit assumption is present in various places in the codebase is that we've found that when working with GPs we often end up with rather ill-conditioned matrices, and performing things like matrix decompositions or linear solves on those in FP32 precision can be quite hairy, resulting in poor accuracy and often also outright failures.
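A quick sketch of the conditioning point (the Hilbert matrix here is just an ill-conditioned stand-in for a GP kernel matrix, not BoTorch code):

```python
import torch

# The Hilbert matrix is a classic ill-conditioned example; an 8x8 instance
# already has a condition number around 1e10, similar in spirit to the
# kernel matrices that show up in GP computations.
n = 8
idx = torch.arange(n, dtype=torch.float64)
hilbert = 1.0 / (idx[:, None] + idx[None, :] + 1.0)
rhs = hilbert @ torch.ones(n, dtype=torch.float64)  # exact solution is all ones

x64 = torch.linalg.solve(hilbert, rhs)
x32 = torch.linalg.solve(hilbert.float(), rhs.float())

# FP64 recovers the solution to roughly 1e-6; FP32 is typically off by orders
# of magnitude more, and decompositions on such matrices can fail outright.
print((x64 - 1.0).abs().max().item())
print((x32.double() - 1.0).abs().max().item())
```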
That would be great, and I think it makes sense to fix those as they come up.
Actually, regarding the issue about `set_tensors_from_ndarray_1d`: looking at the code, I don't actually ever see the `as_tensor` argument being passed as anything other than the default.
Regarding the first issue, with the bounds in test functions: why use `torch.get_default_dtype()`?

Regarding `set_tensors_from_ndarray_1d`: this makes sense to me.
@esantorella Regarding the first point: I'll try to design a solution for this in the currently existing PR #2597, and I think it makes sense to continue the discussion there.
That would be great, and yes, this would best happen in a separate PR.
…2597)

Summary:

## Motivation

This PR replaces the hard-coded double precision that was used in the initialization of `test_functions/base.py` to use `torch.get_default_dtype()` instead. See #2596 for more details.

### Have you read the [Contributing Guidelines on pull requests](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md#pull-requests)?

Yes, I have read it.

Pull Request resolved: #2597

Test Plan: I ran code formatting via `ufmt` and checked the code via `pytest -ra`. All tests related to the changes here were adjusted in the second commit of the branch. Locally, two tests failed for me, but it seems to me that these are not related to the fix implemented here. If it turns out they are, I'd be more than happy to further adjust.

## Related PRs

None, but #2596 is related.

Reviewed By: saitcakmak
Differential Revision: D65066231
Pulled By: Balandat
fbshipit-source-id: 4beac1fc9a1e5094fd4806958ac2441a12506eb7
test_functions/base.py are hard-coded to double precision

Summary:

## Motivation

This PR removes the `as_tensor` argument of the `set_tensors_from_ndarray_1d` function in `numpy_utils.py`. The reason for this change is that the previous implementation

- messed up the float precision: that is, even when trying to use `float32` precision, this always transformed floats into `float64` precision
- did only use the default function anyway.

This change was discussed previously with Balandat and esantorella in #2596.

### Have you read the [Contributing Guidelines on pull requests](https://github.com/pytorch/botorch/blob/main/CONTRIBUTING.md#pull-requests)?

Yes, I have read it.

Pull Request resolved: #2615

Test Plan: I checked the format using `ufmt format .` and verified that all tests are still running using `pytest -ra`.

## Related PRs

This PR is related to #2597 and #2596, fixing a similar issue (`float32` being broken by some internal calculations).

Reviewed By: Balandat
Differential Revision: D65537491
Pulled By: saitcakmak
fbshipit-source-id: f178a761691ce092a1b06d020158ea367fc6c99e
🐛 Bug
Within `test_functions/base.py`, the bounds are hard-coded to double precision. This makes it impossible to use single precision, which is necessary for, e.g., MPS support on Mac.

To reproduce
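As a minimal sketch (assuming a Mac with MPS available and a BoTorch version from before the fix; `Branin` stands in for any of the standard test functions, and the original snippet may have differed):

```python
import torch
from botorch.test_functions import Branin

torch.set_default_device("mps")  # MPS tensors only support float32

# Fails when the bounds buffer is created, because BaseTestProblem.__init__
# builds it with dtype=torch.double, which MPS cannot represent:
Branin()
```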
This yields the following error message (paths shortened for better readability):
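On recent PyTorch builds, the failure is typically PyTorch's standard MPS float64 error, roughly:

```
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
```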
Expected Behavior

The initialization of the `BaseTestProblem` class should not enforce double precision in the buffer for the bounds, but should probably use `torch.get_default_dtype()` instead.
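A minimal sketch of what that could look like (`MinimalTestProblem` is a hypothetical stand-in, not the actual BoTorch class; the bounds used here are Branin's):

```python
import torch
from torch import nn


class MinimalTestProblem(nn.Module):
    """Toy stand-in for BaseTestProblem with a dtype-respecting bounds buffer."""

    _bounds = [(-5.0, 10.0), (0.0, 15.0)]  # Branin's box bounds

    def __init__(self) -> None:
        super().__init__()
        # Respect the active default dtype instead of hard-coding torch.double:
        bounds = torch.tensor(self._bounds, dtype=torch.get_default_dtype())
        self.register_buffer("bounds", bounds.transpose(-1, -2))


torch.set_default_dtype(torch.float32)
print(MinimalTestProblem().bounds.dtype)  # torch.float32
```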
System information
Additional context
These seem to be the problematic lines: `botorch/test_functions/base.py`, lines 49 to 51 in 9d37e90.
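The registration of the bounds buffer in `BaseTestProblem.__init__` at those lines looks roughly like this (a reconstruction, not a verbatim copy):

```python
# Inside BaseTestProblem.__init__ (reconstructed sketch):
self.register_buffer(
    "bounds", torch.tensor(self._bounds, dtype=torch.double).transpose(-1, -2)
)
```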
If the error is in fact just this one line of code, I'd be more than happy to create a mini Pull Request if it makes sense :)