
Commit dea246d

Merge pull request #217 from tecosaur/prettier-trial-display
Overhaul display of a Trial
2 parents 91ccce0 + 600cca3 commit dea246d

File tree

4 files changed: +338 -202 lines changed

README.md

Lines changed: 11 additions & 12 deletions
@@ -50,18 +50,17 @@ julia> using BenchmarkTools
 # The `setup` expression is run once per sample, and is not included in the
 # timing results. Note that each sample can require multiple evaluations
 # benchmark kernel evaluations. See the BenchmarkTools manual for details.
-julia> @benchmark sin(x) setup=(x=rand())
-BenchmarkTools.Trial:
-  memory estimate:  0 bytes
-  allocs estimate:  0
-  --------------
-  minimum time:     4.248 ns (0.00% GC)
-  median time:      4.631 ns (0.00% GC)
-  mean time:        5.502 ns (0.00% GC)
-  maximum time:     60.995 ns (0.00% GC)
-  --------------
-  samples:          10000
-  evals/sample:     1000
+julia> @benchmark sort(data) setup=(data=rand(10))
+BenchmarkTools.Trial: 10000 samples with 972 evaluations.
+ Range (min … max):  69.399 ns …  1.066 μs  ┊ GC (min … max): 0.00% … 0.00%
+ Time  (median):     83.850 ns              ┊ GC (median):    0.00%
+ Time  (mean ± σ):   89.471 ns ± 53.666 ns  ┊ GC (mean ± σ):  3.25% ±  5.16%
+
+          ▁▄▇█▇▆▃▁
+  ▂▁▁▂▂▃▄▆████████▆▅▄▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂
+  69.4 ns        Histogram: frequency by time          145 ns (top 1%)
+
+ Memory estimate: 160 bytes, allocs estimate: 1.
 ```
 
For quick sanity checks, one can use the [`@btime` macro](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#benchmarking-basics), which is a convenience wrapper around `@benchmark` whose output is analogous to Julia's built-in [`@time` macro](https://docs.julialang.org/en/v1/base/base/#Base.@time):
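As a minimal sketch of the `@btime` usage mentioned above (assuming BenchmarkTools is installed), note that `@btime` prints the minimum time and allocation count and then returns the value of the benchmarked expression:

```julia
using BenchmarkTools

data = rand(10)
# Interpolating `$data` makes the benchmark measure `sort` itself,
# rather than the cost of accessing the global variable `data`.
sorted = @btime sort($data)
# `@btime` returns the expression's value, so `sorted` is usable afterwards.
@assert issorted(sorted)
```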

docs/src/index.md

Lines changed: 11 additions & 12 deletions
@@ -16,18 +16,17 @@ julia> using BenchmarkTools
 # The `setup` expression is run once per sample, and is not included in the
 # timing results. Note that each sample can require multiple evaluations
 # benchmark kernel evaluations. See the BenchmarkTools manual for details.
-julia> @benchmark sin(x) setup=(x=rand())
-BenchmarkTools.Trial:
-  memory estimate:  0 bytes
-  allocs estimate:  0
-  --------------
-  minimum time:     4.248 ns (0.00% GC)
-  median time:      4.631 ns (0.00% GC)
-  mean time:        5.502 ns (0.00% GC)
-  maximum time:     60.995 ns (0.00% GC)
-  --------------
-  samples:          10000
-  evals/sample:     1000
+julia> @benchmark sort(data) setup=(data=rand(10))
+BenchmarkTools.Trial:
+  10000 samples with 968 evaluations took a median time of 90.902 ns (0.00% GC)
+  Time  (mean ± σ):   94.936 ns ± 47.797 ns  (GC: 2.78% ± 5.03%)
+  Range (min … max):  77.655 ns … 954.823 ns  (GC: 0.00% … 87.94%)
+
+            ▁▃▅▆▇█▇▆▅▂▁
+  ▂▂▃▃▄▅▆▇███████████▇▆▄▄▃▃▂▂▂▂▂▂▂▂▂▂▂▁▂▁▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂
+  77.7 ns      Histogram: frequency by time      137 ns
+
+ Memory estimate: 160 bytes, allocs estimate: 1.
 ```
 
For quick sanity checks, one can use the [`@btime` macro](https://github.com/JuliaCI/BenchmarkTools.jl/blob/master/doc/manual.md#benchmarking-basics), which is a convenience wrapper around `@benchmark` whose output is analogous to Julia's built-in [`@time` macro](https://docs.julialang.org/en/v1/base/base/#Base.@time):
