feat: added benchmarks for CESQL #1050
Conversation
Signed-off-by: Calum Murray <[email protected]>
LGTM, but I've never really used the Go benchmark tooling before... so a potentially silly question: I assume this will print stats about the test cases (e.g. timing, etc.), but is there a mechanism in any of this that compares the values with the state of the code before whatever PR is being tested? How will people know if a PR impacts performance dramatically without the historical info?
@duglin we tried to think this through in Knative and didn't come to a great conclusion. The main challenge with historical info is that it is really dependent on the machine the code is running on. We found that the GitHub Actions runners were too noisy for good benchmarking. So, our solution was to just run the benchmarks once on the
Nope - that seems reasonable. Would be really cool if the PR testing infra could do that on each PR and flag it when things look bad. But that's just a dream :-)
LGTM
cc @cloudevents/sdk-go-maintainers
cc @embano1
rebase needed
Fixes #927
This PR adds benchmarks that run on every TCK test case we have, as that seemed like the easiest way to get a good set of test cases. For each scenario, this benchmarks:
Since all of these cases are already tested for correctness, the benchmarks do not re-check that the results are correct.
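The per-case benchmarks described above can be sketched with Go's standard `testing` package. This is a minimal, hypothetical illustration, not the code from this PR: `evaluate` is a stand-in for the CESQL parse/evaluate step, and the expression string is a made-up example rather than a real TCK case.

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// evaluate is a placeholder for evaluating a CESQL expression against
// an event; the real benchmarks would call the SDK's parser/evaluator
// for each TCK test case instead.
func evaluate(expr string) bool {
	return strings.Contains(expr, "=")
}

func main() {
	// testing.Benchmark runs the function with an increasing b.N until
	// the timing stabilizes, which is the same machinery that
	// `go test -bench .` uses to report ns/op per test case.
	result := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			evaluate("source = 'example'")
		}
	})
	fmt.Println(result)
}
```

In a real test file this would instead be a `BenchmarkXxx(b *testing.B)` function (typically looping over the TCK cases with `b.Run`), so that `go test -bench .` discovers and reports each case separately.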