Log PT2 chromium events to scuba #2424

Open · wants to merge 1 commit into base: main
Conversation

jamesjwu (Contributor)

Summary:
X-link: pytorch/pytorch#133859

This diff implements a set of views for internal Scuba viewing.

TODOs that I might punt to another diff:

  • Saving cache stats via a counter is questionable here, but there's currently no good way to track "fx graph cache hit for this compile phase". Will think about this more.
  • We should log frame id, compile id, etc.
  • We should also log configs, so that we can A/B test based on whether a config is turned on.
  • compile_uuid is still in flux, but it's useful when you want to look at samples from a single run. If we had MAST job info this field wouldn't be needed, but it's nice to be able to drill down to a single run and get its chrome trace view or icicle view. (See the sketch after this list for the kind of event and columns involved.)
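The sketch below is illustrative only and is not the code in this diff: it shows the shape of a Chrome-trace-format event with the extra columns discussed above (frame id, compile id, a per-run compile_uuid, the active configs, cache stats) attached before the row is handed to a Scuba-style logger. The helper names (make_chromium_event, log_to_scuba) and the table and column names are hypothetical.

```
# Illustrative sketch only; helper names, table name, and column names are assumptions.
import json
import time
import uuid
from typing import Any, Dict

# One UUID per run/process, so every event emitted by a single run can be drilled into.
COMPILE_UUID = str(uuid.uuid4())


def make_chromium_event(
    name: str,
    phase: str,  # "B" (begin), "E" (end), or "X" (complete) in Chrome trace format
    frame_id: int,
    compile_id: str,
    configs: Dict[str, Any],
    cache_stats: Dict[str, int],
) -> Dict[str, Any]:
    """Assemble a Chrome-trace-style event plus the extra columns discussed above."""
    return {
        "name": name,                      # e.g. "dynamo", "inductor_compile"
        "ph": phase,
        "ts": time.time_ns() // 1000,      # Chrome trace timestamps are in microseconds
        "pid": 0,
        "tid": 0,
        "args": {
            "frame_id": frame_id,
            "compile_id": compile_id,
            "compile_uuid": COMPILE_UUID,  # lets a dashboard drill down to one run
            "configs": configs,            # enables A/B comparisons by config
            "cache_stats": cache_stats,    # e.g. {"fx_graph_cache_hit": 1}
        },
    }


def log_to_scuba(table: str, event: Dict[str, Any]) -> None:
    """Stand-in for an internal logging call; here the row is just printed as JSON."""
    print(f"[{table}] {json.dumps(event)}")


if __name__ == "__main__":
    event = make_chromium_event(
        name="inductor_compile",
        phase="X",
        frame_id=0,
        compile_id="0/0",
        configs={"fx_graph_cache": True},
        cache_stats={"fx_graph_cache_hit": 1},
    )
    log_to_scuba("pt2_compile_events", event)
```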

Differential Revision: D61392607

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D61392607

facebook-github-bot pushed a commit that referenced this pull request Aug 19, 2024

jamesjwu added a commit to pytorch/pytorch that referenced this pull request Aug 19, 2024
Summary:
X-link: pytorch/benchmark#2424

Pull Request resolved: #133859

Test Plan:
All of the views above are exercised with the nanogpt benchmark:

```
buck run mode/opt caffe2/benchmarks/dynamo:torchbench -- --training --backend=inductor --only nanogpt --performance
```
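
For reference, a roughly equivalent open-source invocation (an assumption based on the public benchmarks/dynamo/torchbench.py entry point; exact flag spellings may vary between PyTorch versions) would be:

```
python benchmarks/dynamo/torchbench.py --training --backend=inductor --only nanogpt --performance
```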

Reviewed By: ezyang

Differential Revision: D61392607

jamesjwu added a commit that referenced this pull request Aug 19, 2024
jamesjwu added a commit to pytorch/pytorch that referenced this pull request Aug 19, 2024
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request Aug 20, 2024
facebook-github-bot pushed a commit that referenced this pull request Aug 20, 2024
jamesjwu added a commit to pytorch/pytorch that referenced this pull request Aug 20, 2024
jamesjwu added a commit that referenced this pull request Aug 20, 2024
facebook-github-bot pushed a commit that referenced this pull request Aug 20, 2024
facebook-github-bot pushed a commit that referenced this pull request Aug 20, 2024
facebook-github-bot pushed a commit that referenced this pull request Aug 20, 2024
jamesjwu added a commit that referenced this pull request Aug 20, 2024