
Allow units and precision in benchmark #266

Merged: 1 commit merged into onsi:master on Jul 13, 2016
Conversation

@jveski (Contributor) commented Jul 13, 2016

Addresses issue #176

This change adds support for benchmark values with varying units by adding a RecordValueWithPrecision method to the Benchmarker interface. The method behaves like RecordValue, but additionally lets the caller provide a measurement unit and a floating-point truncation precision.

The stenographer is also updated to support variable precision.
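
For illustration, a sketch of how the new method might be called from a Measure block (the spec name, helper, and values are hypothetical; the signature is the one this PR adds):

Measure("allocating widgets", func(b Benchmarker) {
  heapMB := currentHeapMB() // hypothetical helper returning a float64
  // Record the value in MB, truncated to 3 decimal places.
  b.RecordValueWithPrecision("heap in use", heapMB, "MB", 3)
}, 10)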
@jveski (Contributor, Author) commented Jul 13, 2016

@onsi you mentioned a TimeWithPrecision function in #176 - let me know if you’d like me to implement it here

@onsi (Owner) commented Jul 13, 2016

This is great! Thanks!

If you're up for the TimeWithPrecision work, go for it - though I don't think it's strictly necessary.
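
For reference, a hypothetical shape for the TimeWithPrecision helper being discussed - it was never added to the shipped Benchmarker interface; the name comes from #176 and the signature below is an assumption modeled on RecordValueWithPrecision:

Measure("querying the index", func(b Benchmarker) {
  // Hypothetical: like b.Time, but reported in the given unit with
  // the given precision. Never shipped; signature is assumed here.
  b.TimeWithPrecision("query", func() {
    index.Query("term") // hypothetical operation under test
  }, "ms", 3)
}, 20)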

@onsi merged commit 01a0bc3 into onsi:master on Jul 13, 2016

@kralicky (Contributor) commented
@onsi I was just looking for a TimeWithPrecision function and found this thread. Have you taken a look at #732 above? It would be excellent to have this available!

@onsi (Owner) commented Mar 10, 2021

I'm actually planning on deprecating the benchmark behavior as part of the v2 work currently in flight. My sense is that it isn't widely used, adds complexity to the codebase, and doesn't need to be part of core Ginkgo (it's probably better off as a separate library that is focused on benchmarking and grows more featureful with time). With that said, if you and/or others are using it extensively, I'd love to learn more about the use case/value it's bringing and how painful removing it would be.

@kralicky (Contributor) commented
Since I am using Ginkgo for all my tests, the measure functionality integrates nicely where needed. It helps to be able to replace an It() with a Measure() here and there to benchmark some sections without having to rewrite code or move it elsewhere. I do agree that turning it into a separate library would be an improvement, though.

@onsi (Owner) commented Mar 10, 2021 via email

@onsi (Owner) commented Mar 10, 2021

As I think about replacing the functionality, I'm leaning towards adding a new Gomega subpackage that specializes in measurements/benchmarking and is less tightly coupled to Ginkgo. That would give folks a relatively straightforward migration path and resolve my main concerns with it being in Ginkgo core. Could you see that working for your use case?

@kralicky (Contributor) commented
The main project I am working on is private (unfinished), but I will get back to you this weekend with a gist link to some sample code.

I think a Gomega subpackage is a good idea. A drop-in replacement for my use case would be something like:

It("Should do the thing", func() {
  Measure("the thing, func() {
    // some code
  }, iterations)
})

Whereas before, my code would look like:

Measure("Doing the thing", func(b Benchmarker) {
  b.Time("the thing", func() {
    // some code
  })
}, iterations)

This could let you have matchers like Expect(...).To(TakeOnAverage("<", 1*time.Second)) or something.

@onsi (Owner) commented Mar 10, 2021

Thanks @cobalt77! Yes, I'd imagine something along the lines of what you're describing, along with a suite of associated matchers.

A couple more questions as I think about the user experience:

  • Do you find the benchmark report that Ginkgo emits useful? Do you actually look at it when tests run?
  • Or is it more valuable to analyze the results and make assertions like TakeOnAverage("<"...), and only then see the results?
  • Have you generally used timing benchmarks, or do you also use RecordValue()?

@kralicky (Contributor) commented
My main use is observing benchmarks to get insight into performance rather than asserting that things take under a particular amount of time, though I can see how assertions like that might be useful in some cases depending on the tests.

The fast/slow/average outputs are great and can be very helpful in identifying which sections of code might be causing a bottleneck. For example, a benchmark helped me identify a bug caused by writing into a buffered channel much faster than I could read from it: I had a RecordValue on len(channel), and the output showed a huge spike in the maximum value compared to the average, which clued me in to the problem. So based on that, I think matchers that can check for outliers (the max or min being unusually far from the average), or that measurements trend in a certain direction, would be quite helpful.
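
For illustration, a minimal sketch of that backlog measurement, assuming a buffered channel ch plus hypothetical produceBatch and drain helpers:

Measure("channel backlog", func(b Benchmarker) {
  produceBatch(ch) // hypothetical: writes a batch of items into ch
  // A max far above the average across iterations signals that the
  // consumer is falling behind.
  b.RecordValue("backlog (items)", float64(len(ch)))
  drain(ch) // hypothetical: reads everything currently in ch
}, 100)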

I mostly use Measure, but RecordValue can come in handy. Another use case for me is using Measure in place of a for loop to get benchmark output for free, i.e. replacing:

It("should fill the container" func() {
  for i := 0; i < 100; i++ {
    container.Add(i)
  }
  Expect(container.Len()).To(Equal(100))
})

with:

i := 0
Measure("filling the container", func(b Benchmarker) {
  b.Time("adding an item", func() {
    container.Add(i)
  })
  i++
}, 100)
It("should contain 100 items", func() {
  Expect(container.Len()).To(Equal(100))
})

Assuming container.Add() could potentially be expensive, this can produce useful output.

@kralicky (Contributor) commented
I've also used Time() to check how long Eventually() takes to complete. Here's an actual code sample:

b.Time("Handling provider add callbacks", func() {
	testCh := make(chan struct{}, len(providers))
	// OnProviderAdded registers a callback (non blocking)
	l.OnProviderAdded(func(ctx context.Context, uuid string) {
		testCh <- struct{}{}
	})
	Eventually(func() int {
		return len(testCh)
	}, 1*time.Second, 1*time.Millisecond).Should(Equal(len(providers)))
})

This would bench how long it took for OnProviderAdded to be called len(providers) times.

@onsi (Owner) commented Mar 10, 2021 via email

@onsi (Owner) commented May 27, 2021

Hey @kralicky, if you're interested: I've put together a beta version of a new benchmarking package in Gomega. It's called gmeasure and is in the latest Gomega release. The package is documented, but I'm working on additional documentation with richer examples for the onsi.github.io/gomega docsite. Let me know if you have any thoughts/feedback. It's a slightly different mental model, but I think it will provide a ton more flexibility going forward.
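
For a rough idea of the new model, a sketch of basic gmeasure usage, assuming Ginkgo v2's AddReportEntry (the experiment name and workload are placeholders):

experiment := gmeasure.NewExperiment("filling the container")
// Attach the experiment to the spec report so its stats get emitted.
AddReportEntry(experiment.Name, experiment)

experiment.Sample(func(idx int) {
  experiment.MeasureDuration("adding an item", func() {
    container.Add(idx) // placeholder workload
  })
}, gmeasure.SamplingConfig{N: 100})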

@kralicky (Contributor) commented
I will definitely check it out and let you know!

@onsi (Owner) commented May 27, 2021

Thanks! I'll let you know when the docs are finished too :)

@onsi (Owner) commented May 29, 2021

Docs are up now at https://onsi.github.io/gomega/#gmeasure-benchmarking-code

@kralicky (Contributor) commented
@onsi Tried out gmeasure a bit here, and it looks pretty good so far! One thing I would recommend is a way to control the sample rate in the SamplingConfig; it looks like it just samples as fast as possible right now. My intuition was that specifying both a sample count and a duration would perform N samples spaced evenly across that duration.

@onsi (Owner) commented Jun 16, 2021

Hey @kralicky - super cool to see how you're using this! Looks great, and please keep the feedback coming!

I'll add a MinSamplingInterval to the SamplingConfig: if a sample takes less time than MinSamplingInterval, gmeasure will wait until at least MinSamplingInterval has elapsed before collecting the next sample; otherwise the next sample is taken immediately.

The sample count and duration are intended to be maximums, so you can say things like "do this 10 times or for up to 1 minute - whichever comes first".
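
For illustration, a sketch of those semantics with the SamplingConfig fields as later released (the workload is a placeholder):

experiment.Sample(func(idx int) {
  experiment.MeasureDuration("poll", func() {
    poll() // placeholder workload
  })
}, gmeasure.SamplingConfig{
  N:                   10,                     // at most 10 samples...
  Duration:            time.Minute,            // ...or up to 1 minute, whichever comes first
  MinSamplingInterval: 100 * time.Millisecond, // space samples at least 100ms apart
})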
