RadialBasis Float64 faster than Float32 #23

Open

agoscinski opened this issue Jul 22, 2023 · 0 comments

@abmazitov observed that float64 is faster than float32 for the alchemical expansion. I observed the same with #21. The difference was more severe when there was no warmup of the benchmarks (using just --quick). I think it is because the NeighborlistTransformer takes the dtype of the given numpy arrays, which is usually float64, while in the radial basis the one-hot encoding uses torch.get_default_dtype(), which is usually float32: https://github.com/frostedoyster/torch_spex/blob/08cfe0d296a1296b1b05596a868639df9a9ba6d1/torch_spex/radial_basis.py#L44
I assume the type conversion where the two dtypes meet causes the slowdown seen in #21.
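A minimal sketch of the suspected mismatch (standalone repro, not the actual torch_spex code paths):

```python
import numpy as np
import torch

# numpy arrays default to float64, and torch.from_numpy preserves that,
# so data coming in from numpy ends up float64
r = torch.from_numpy(np.random.rand(1000))   # torch.float64

# tensors created inside the model use torch.get_default_dtype(),
# which is float32 unless changed, e.g. a one-hot encoding
one_hot = torch.eye(4)                        # torch.float32

# mixing the two triggers an implicit promotion to float64 on every
# forward pass, which is the suspected source of the slowdown
mixed = r.unsqueeze(-1) * one_hot[0]
print(r.dtype, one_hot.dtype, mixed.dtype)    # float64 float32 float64
```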

Temporary fix:
Use `torch.set_default_dtype(torch.float64)` before running the code.
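For example (a minimal sketch; it must run before any tensors are created):

```python
import torch

# make everything torch creates float64 so it matches the numpy inputs
torch.set_default_dtype(torch.float64)
```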

Real fix:
I want to take the chance and integrate asv benchmarks so we can actually track the change in performance when fixing this. The fix itself should be rather trivial (e.g. using the dtype of r). I started this in PR #21.
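A hypothetical sketch of the trivial fix: derive the dtype of the one-hot encoding from the input r instead of from torch.get_default_dtype(). The names species and n_species are assumptions, not the actual torch_spex signatures:

```python
import torch

def one_hot_like(r: torch.Tensor, species: torch.Tensor, n_species: int) -> torch.Tensor:
    # build the encoding, then follow r's dtype so no implicit
    # promotion happens downstream
    one_hot = torch.nn.functional.one_hot(species, num_classes=n_species)
    return one_hot.to(dtype=r.dtype)
```

And a minimal asv benchmark sketch to track the float32/float64 gap; the suite and parameter names are hypothetical, only the asv conventions (params, setup, time_* methods) are real:

```python
# benchmarks/radial.py
import torch

class TimeDtype:
    params = ["float32", "float64"]
    param_names = ["dtype"]

    def setup(self, dtype):
        self.r = torch.rand(100_000, dtype=getattr(torch, dtype))

    def time_radial_eval(self, dtype):
        # stand-in for a RadialBasis forward pass
        torch.exp(-self.r ** 2)
```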
