Increase cache size #2264

Draft · wants to merge 1 commit into main

Conversation

@anmyachev (Contributor) commented Sep 16, 2024

Checking the impact:

CI: https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/10890455942
CI: https://github.com/intel/intel-xpu-backend-for-triton/actions/runs/10891992012 (without IPEX)

AFAIK, clearing a 256 MB buffer isn't enough to flush the L2 cache on our main benchmarking system. The performance numbers get slightly worse with the larger buffer, but it seems we should make the change anyway (though it may not be strictly necessary, since we only use one tile).

Signed-off-by: Anatoly Myachev <[email protected]>
```diff
 # before each kernel call to make sure that the L2 cache
 # doesn't contain any input data before the run
-cache_size = 256 * 1024 * 1024
+cache_size = 512 * 1024 * 1024
```
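
For context, this constant sizes the scratch buffer that the benchmarking helper zeroes between timed launches. A minimal sketch of that pattern (the exact upstream code in testing.py may differ, and the device string is an assumption):

```python
import torch

# Sketch of the flush pattern around the changed constant: allocate a
# buffer at least as large as the L2 cache, then zero it before each
# timed launch so no input data survives in cache.
cache_size = 512 * 1024 * 1024
cache = torch.empty(cache_size // 4, dtype=torch.int, device="xpu")

def run_once(kernel_call):
    cache.zero_()   # evict prior contents of the L2 cache
    kernel_call()   # the kernel launch being timed
```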
@anmyachev (Contributor, Author) commented:

This increases the divergence from upstream, so we'll need a mechanism in Triton itself to parameterize this value.

Note: we can't make this change only in benchmark_testing.py, because the testing.py file is also used for benchmarking upstream PyTorch.
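
A minimal sketch of one possible parameterization, assuming an environment variable (TRITON_BENCH_CACHE_SIZE is a hypothetical name, not an existing Triton option):

```python
import os

# Hypothetical knob: allow overriding the flush-buffer size while keeping
# the 256 MiB upstream default, so testing.py need not diverge from upstream.
cache_size = int(os.getenv("TRITON_BENCH_CACHE_SIZE", str(256 * 1024 * 1024)))
```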

Another contributor commented:

PyTorch passes device properties. I wonder if we could use info::device::global_mem_cache_size to determine the cache size here somehow.
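
A hedged sketch of that idea: derive the buffer size from the device's reported cache size instead of hard-coding it. On CUDA, PyTorch exposes L2_cache_size; whether the XPU runtime surfaces an equivalent field (backed by SYCL's info::device::global_mem_cache_size) is an assumption here.

```python
import torch

def flush_buffer_size(device: torch.device) -> int:
    """Pick a flush-buffer size from device properties when available."""
    if device.type == "cuda":
        # PyTorch reports the L2 size for CUDA devices.
        return 2 * torch.cuda.get_device_properties(device).L2_cache_size
    if device.type == "xpu":
        # "global_mem_cache_size" is an assumed attribute name, mirroring
        # SYCL's info::device::global_mem_cache_size.
        props = torch.xpu.get_device_properties(device)
        cache = getattr(props, "global_mem_cache_size", None)
        if cache:
            return 2 * cache  # comfortably larger than the cache itself
    # Fall back to the hard-coded 512 MiB from this PR.
    return 512 * 1024 * 1024
```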
