Run benchmark executables directly. For example, to run store benchmarks:
```bash
./build/src/libstore-tests/nix-store-benchmarks
```
As more benchmark executables are added, run them similarly from their respective build directories.
### Filtering Benchmarks
Run specific benchmarks using regex patterns:
```bash
# Run only derivation parser benchmarks
./build/src/libstore-tests/nix-store-benchmarks --benchmark_filter="derivation.*"
# Run only benchmarks for hello.drv
./build/src/libstore-tests/nix-store-benchmarks --benchmark_filter=".*hello.*"
```
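The names matched by `--benchmark_filter` come from how each benchmark is registered. As a purely illustrative sketch (the `BM_ParseDerivation` name and body are hypothetical, not the actual Nix benchmark code), `BENCHMARK_CAPTURE` registers a parameterised benchmark under a name like `BM_ParseDerivation/hello`, which can then be selected with `--benchmark_filter="BM_ParseDerivation/.*"`:
```cpp
#include <string_view>

#include <benchmark/benchmark.h>

// Hypothetical parameterised benchmark; BENCHMARK_CAPTURE registers it
// under the name "BM_ParseDerivation/hello".
static void BM_ParseDerivation(benchmark::State & state, std::string_view drvName)
{
    for (auto _ : state) {
        // Placeholder for the code that would actually parse drvName
        benchmark::DoNotOptimize(drvName);
    }
}

BENCHMARK_CAPTURE(BM_ParseDerivation, hello, "hello.drv");
```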
### Output Formats
Generate benchmark results in different formats:
```bash
# JSON output
./build/src/libstore-tests/nix-store-benchmarks --benchmark_format=json > results.json
# CSV output
./build/src/libstore-tests/nix-store-benchmarks --benchmark_format=csv > results.csv
```
### Advanced Options
```bash
# Run benchmarks multiple times for better statistics
./build/src/libstore-tests/nix-store-benchmarks --benchmark_repetitions=10
# Set a minimum measurement time per benchmark (useful for micro-benchmarks;
# recent Google Benchmark releases expect a unit suffix such as "2s")
./build/src/libstore-tests/nix-store-benchmarks --benchmark_min_time=2s
# Write results to a file in addition to the console output
# (see "Continuous Performance Testing" below for comparing two runs)
./build/src/libstore-tests/nix-store-benchmarks --benchmark_out=results.json --benchmark_out_format=json
# Display time in custom units
./build/src/libstore-tests/nix-store-benchmarks --benchmark_time_unit=ms
```
## Writing New Benchmarks
To add new benchmarks:
1. Create a new `.cc` file in the appropriate `*-tests` directory
2. Include the benchmark header:
```cpp
#include <benchmark/benchmark.h>
```
3. Write benchmark functions (a fuller sketch follows this list):
```cpp
static void BM_YourBenchmark(benchmark::State & state)
{
    // Setup code here (runs outside the timed loop)
    for (auto _ : state) {
        // Code to benchmark
    }
}

BENCHMARK(BM_YourBenchmark);
```
4. Add the file to the corresponding `meson.build`:
```meson
benchmarks_sources = files(
  'your-benchmark.cc',
  # existing benchmarks...
)
```
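Putting the steps above together, here is a minimal self-contained sketch (the `BM_BuildString` name and the measured body are placeholders, not existing Nix code) that also shows a few commonly useful Google Benchmark facilities, such as `state.range`, `benchmark::DoNotOptimize`, and registering a range of input sizes:
```cpp
#include <cstdint>
#include <string>

#include <benchmark/benchmark.h>

// Hypothetical benchmark: build a string of state.range(0) characters.
// Replace the loop body with the Nix code path you want to measure.
static void BM_BuildString(benchmark::State & state)
{
    // Setup outside the timed loop is not measured.
    const std::size_t n = state.range(0);

    for (auto _ : state) {
        std::string s(n, 'x');
        // Keep the compiler from optimising the unused result away.
        benchmark::DoNotOptimize(s);
    }

    // Optional: report throughput so results show bytes per second.
    state.SetBytesProcessed(state.iterations() * static_cast<int64_t>(n));
}

// Register the benchmark for a range of input sizes (8 to 8192 bytes).
BENCHMARK(BM_BuildString)->Range(8, 8 << 10);
```
Whether the file also needs a `BENCHMARK_MAIN()` depends on how the existing benchmark executable in that directory provides its `main` function; check the neighboring benchmark sources.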
## Profiling with Benchmarks
For deeper performance analysis, combine benchmarks with profiling tools:
```bash
# Using Linux perf
perf record ./build/src/libstore-tests/nix-store-benchmarks
perf report
```
### Using Valgrind Callgrind
Valgrind's Callgrind tool provides detailed profiling information that can be visualized with KCachegrind:
```bash
# Profile with callgrind
valgrind --tool=callgrind ./build/src/libstore-tests/nix-store-benchmarks
# Visualize the results with kcachegrind
kcachegrind callgrind.out.*
```
This provides:
- Function call graphs
- Instruction-level profiling
- Source code annotation
- Interactive visualization of performance bottlenecks
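When profiling a single benchmark this way, it can help to restrict Callgrind's instrumentation to the measured loop so that setup code does not dominate the profile. A sketch, assuming the Valgrind client-request headers are available and the benchmark is run under `valgrind --tool=callgrind --instr-atstart=no`:
```cpp
#include <benchmark/benchmark.h>
#include <valgrind/callgrind.h>

// Hypothetical benchmark that only collects Callgrind data inside the
// timed loop; the macros are harmless no-ops when not run under Valgrind.
static void BM_Profiled(benchmark::State & state)
{
    CALLGRIND_START_INSTRUMENTATION;
    for (auto _ : state) {
        // Code to profile
        benchmark::DoNotOptimize(state.iterations());
    }
    CALLGRIND_STOP_INSTRUMENTATION;
    CALLGRIND_DUMP_STATS;
}

BENCHMARK(BM_Profiled);
```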
## Continuous Performance Testing
To catch regressions, save a baseline run and compare later runs against it using the `compare.py` script shipped in the Google Benchmark repository (`tools/compare.py`):
```bash
# Save baseline results
./build/src/libstore-tests/nix-store-benchmarks --benchmark_format=json > baseline.json
# After making changes, record new results
./build/src/libstore-tests/nix-store-benchmarks --benchmark_format=json > contender.json
# Compare the two runs (e.g. in CI)
compare.py benchmarks baseline.json contender.json
```
## Troubleshooting
### Benchmarks not building
Ensure benchmarks are enabled:
```bash
meson configure build | grep benchmarks
# Should show: benchmarks true
```
### Inconsistent results
- Ensure your system is not under heavy load
- Disable CPU frequency scaling (for example by selecting the `performance` governor) for consistent results
- Run benchmarks multiple times with `--benchmark_repetitions`
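Repetitions can also be requested at registration time, so that a known-noisy benchmark always runs several times and only its aggregates (mean, median, standard deviation) are reported. A sketch with a hypothetical `BM_Noisy` benchmark:
```cpp
#include <benchmark/benchmark.h>

// Hypothetical noisy benchmark: always run 10 repetitions and report
// only the aggregate statistics instead of every individual run.
static void BM_Noisy(benchmark::State & state)
{
    int x = 0;
    for (auto _ : state) {
        benchmark::DoNotOptimize(x += 1); // placeholder workload
    }
}

BENCHMARK(BM_Noisy)->Repetitions(10)->ReportAggregatesOnly(true);
```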
## See Also
- [Google Benchmark documentation](https://github.com/google/benchmark/blob/main/docs/user_guide.md)