largeNbDicts
=====================

`largeNbDicts` is a benchmarking tool
dedicated to the specific scenario of
dictionary decompression using a very large number of dictionaries.
When dictionaries are constantly changing, they are always "cold",
suffering from increased latency due to cache misses.
The tool was created to investigate performance in this scenario and to experiment with mitigation techniques.
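
To make the scenario concrete, here is a minimal sketch (not the tool's actual code) of what cold-dictionary decompression looks like with libzstd's public API: each block is decoded against a different pre-digested dictionary (`ZSTD_DDict`), so a dictionary's internal tables have typically been evicted from CPU caches by the time it is reused. The function name and the buffer arrays are hypothetical placeholders.

```
/* Minimal sketch, not the benchmark's actual implementation.
 * Assumes libzstd is installed; all buffers are caller-provided. */
#include <stdlib.h>
#include <zstd.h>

size_t decompress_with_cold_dicts(
        void *dst, size_t dstCapacity,
        const void *const *blocks, const size_t *blockSizes, size_t nbBlocks,
        const void *const *dictBufs, const size_t *dictSizes, size_t nbDicts)
{
    ZSTD_DCtx *dctx = ZSTD_createDCtx();
    ZSTD_DDict **ddicts = malloc(nbDicts * sizeof *ddicts);
    size_t totalOut = 0;
    if (dctx == NULL || ddicts == NULL) goto cleanup;

    /* Digest every dictionary once, up front. */
    for (size_t d = 0; d < nbDicts; d++)
        ddicts[d] = ZSTD_createDDict(dictBufs[d], dictSizes[d]);

    /* Rotate through the dictionary pool: with enough dictionaries,
     * each one is cold (out of cache) whenever it is reused. */
    for (size_t b = 0; b < nbBlocks; b++) {
        size_t const r = ZSTD_decompress_usingDDict(
                dctx, (char *)dst + totalOut, dstCapacity - totalOut,
                blocks[b], blockSizes[b], ddicts[b % nbDicts]);
        if (ZSTD_isError(r)) break;
        totalOut += r;
    }

    for (size_t d = 0; d < nbDicts; d++) ZSTD_freeDDict(ddicts[d]);
cleanup:
    free(ddicts);
    ZSTD_freeDCtx(dctx);
    return totalOut;
}
```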
Command line :
```
largeNbDicts [Options] filename(s)

Options :
-z          : benchmark compression (default)
-d          : benchmark decompression
-r          : recursively load all files in subdirectories (default: off)
-B#         : split input into blocks of size # (default: no split)
-#          : use compression level # (default: 3)
-D #        : use # as a dictionary (default: create one)
-i#         : nb benchmark rounds (default: 6)
--nbBlocks=#: use # blocks for bench (default: one per file)
--nbDicts=# : create # dictionaries for bench (default: one per block)
-h          : help (this text)

Advanced Options (see zstd.h for documentation) :
--dedicated-dict-search
--dict-content-type=#
--dict-attach-pref=#
```
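
For instance, assuming an input file named `silesia.tar` (illustrative), typical invocations might look like this:

```
# benchmark decompression in 4096-byte blocks, one fresh dictionary per block
largeNbDicts -d -B4096 silesia.tar

# benchmark compression at level 5, with a pool of 1000 dictionaries
largeNbDicts -z -5 -B4096 --nbDicts=1000 silesia.tar

# benchmark decompression with a pre-built dictionary file
largeNbDicts -d -D dictFile silesia.tar
```

For the numeric arguments of `--dict-content-type=#` and `--dict-attach-pref=#`, see the `ZSTD_dictContentType_e` and `ZSTD_dictAttachPref_e` enums in `zstd.h`.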