The zstd symlinks, notably `zstdcat`, weren't working as expected
because only the `tests/cli-tests/bin/zstd` wrapper was symlinked. We
still invoked `zstd` with the name `zstd`. The fix is to create a
directory of zstd symlinks in `tests/cli-tests/bin/symlinks`, one for each
name that zstd recognizes. When `tests/cli-tests/bin/zstd` is
invoked, it selects the correct symlink to call.
See the test `zstd-cli/zstdcat.sh` for an example of how it would work.
* playTests.sh: fix a bug in macOS's /bin/sh where a temporary env var set before a function call persists after it
* cli-tests/run.py: Do not use existing ZSTD* environment variables
The use of --long alters the window size internally in the underlying
library (lib/compress/zstd_compress.c:ZSTD_getCParamsFromCCtxParams),
which changes the memory required for decompression. This means that the
reported requirement from the zstd binary when -vv is specified is
incorrect.
A full fix for this would be to add an API call to retrieve
the required decompression memory from the library, but as a
lighter-weight fix we can simply account for the fact that we've enabled
long mode and update our verbose output accordingly.
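As a rough illustration (not the CLI's actual code), the decompression memory requirement tracks the frame's window size, which the experimental `ZSTD_estimateDStreamSize()` can turn into a byte estimate; the window sizes below are assumed values for illustration only:

```c
/* Rough sketch, not the CLI's actual code: decompression memory tracks the
 * frame's window size, and --long raises the window log. The window sizes
 * below are illustrative assumptions. */
#define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_estimateDStreamSize() is experimental */
#include <zstd.h>
#include <stdio.h>

int main(void)
{
    size_t const smallWindow = (size_t)1 << 23;   /* assumed typical default window */
    size_t const longWindow  = (size_t)1 << 27;   /* assumed window once --long is enabled */
    printf("decompression needs ~%zu B with a %zu B window\n",
           ZSTD_estimateDStreamSize(smallWindow), smallWindow);
    printf("decompression needs ~%zu B with a %zu B window\n",
           ZSTD_estimateDStreamSize(longWindow), longWindow);
    return 0;
}
```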
Fixes #2968
credit to oss-fuzz
This issue could happen when using the new Sequence Compression API in Explicit Delimiter Mode
with a dstCapacity that is too small: there was one place where the buffer size wasn't checked.
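For context, a minimal sketch of the API usage in question (experimental API; the details here are illustrative, not the fuzzer's reproducer): a single all-literals block described by one delimiter sequence, with a dst buffer that is deliberately too small, which must now fail cleanly rather than write out of bounds.

```c
/* Hedged sketch of the scenario (experimental API; behavior may differ
 * across versions). One all-literals block is described by a single
 * delimiter sequence, and dst is deliberately undersized, so the call
 * should fail cleanly instead of overrunning the buffer. */
#define ZSTD_STATIC_LINKING_ONLY
#include <zstd.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char src[] = "input fully covered by explicit sequences";
    ZSTD_CCtx* const cctx = ZSTD_createCCtx();
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_blockDelimiters,
                           ZSTD_sf_explicitBlockDelimiters);

    /* offset==0 && matchLength==0 marks a block delimiter;
     * litLength carries the block's trailing literals. */
    ZSTD_Sequence seq;
    memset(&seq, 0, sizeof(seq));
    seq.litLength = (unsigned)(sizeof(src) - 1);

    char dst[8];   /* dstCapacity far too small on purpose */
    size_t const r = ZSTD_compressSequences(cctx, dst, sizeof(dst),
                                            &seq, 1, src, sizeof(src) - 1);
    if (ZSTD_isError(r))
        printf("expected failure: %s\n", ZSTD_getErrorName(r));
    ZSTD_freeCCtx(cctx);
    return 0;
}
```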
* It couldn't detect that `fastCoverParams` can't be null, since that was only guaranteed by an assertion.
* It thought we were accessing `wksp->dtable` beyond its bounds because we were using it to set the `workSpace` value. Instead, compute the workspace size used in a different way.
discovered by oss-fuzz
It's a bug in the test itself:
ZSTD_compressBound() is an upper bound on the compressed size
only for data compressed "normally".
But in situations where many flushes are forcefully introduced,
this creates many more blocks,
each of which can increase the size by 3 bytes.
In extreme cases (lots of small incompressible blocks), the expansion can go beyond ZSTD_compressBound().
The situation is similar when using the CompressSequences() API
with Explicit Block Delimiters:
each explicit block acts like a deliberate flush.
When employed by a fuzzer, it's possible to generate scenarios like the one described above,
with tons of incompressible blocks of small sizes,
thus going beyond ZSTD_compressBound().
Fix: when using Explicit Block Delimiters, use a larger bound to account for this scenario.
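To make the arithmetic concrete, here is a hedged sketch of a looser bound for that mode. The 3-byte figure is the block header size from the format specification; `frameOverhead` and the function itself are illustrative assumptions, not the exact formula adopted by the fix.

```c
#include <zstd.h>
#include <stddef.h>

/* Illustrative looser bound for explicit-block-delimiter mode:
 * every block (even a tiny incompressible one stored raw) pays a
 * 3-byte block header, so with many small blocks the worst case is
 * roughly srcSize + 3 * nbBlocks, which can exceed ZSTD_compressBound(). */
static size_t looseBound_explicitDelimiters(size_t srcSize, size_t nbBlocks)
{
    size_t const blockHeaderSize = 3;    /* per the zstd format specification */
    size_t const frameOverhead   = 22;   /* assumed margin for frame header + checksum */
    size_t const rawWorstCase = srcSize + nbBlocks * blockHeaderSize + frameOverhead;
    size_t const classicBound = ZSTD_compressBound(srcSize);
    return rawWorstCase > classicBound ? rawWorstCase : classicBound;
}
```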
It's a bug in the test itself, triggered only in exceptional circumstances (no more space for an additional sequence).
There should now be enough room for all cases to work fine,
and if not, an additional `assert()` catches that situation.
Add cli-tests to `make test`. This adds a `python3` dependency to `make
test`, but not `make check`. We could make this dependency optional by
skipping the tests if `python3` is not present.
`datagen` was printing a `\n` even when it had no other output. Raise
the output level for the final `\n` to the minimum output level used.
This minor bug was caught by the new testing framework.
The option `--auto-threads` should still be accepted and parsed, even if
`ZSTD_MULTITHREAD` is not defined. It has no effect in that case, but we
should still accept the option, since we want scripts to be able to work
generically.
This bug was caught by tests I added to the new testing framework.
Passing 0 inputs to `DiB_shuffle()` caused an assertion failure where
it should just return.
A test is added in a later commit, with the initial introduction of the
new testing framework.
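A sketch of the guard idea (names are illustrative, not the exact code in `programs/dibio.c`): the shuffle should simply return on an empty input set instead of asserting.

```c
#include <stdlib.h>

/* Illustrative Fisher-Yates shuffle with the early-return guard:
 * zero files is a valid input and must not trip an assert. */
static void shuffleFileNames(const char** names, unsigned nbFiles)
{
    unsigned i;
    if (nbFiles == 0) return;               /* nothing to shuffle */
    for (i = nbFiles - 1; i > 0; i--) {
        unsigned const j = (unsigned)(rand() % (i + 1));   /* simplified RNG choice */
        const char* const tmp = names[j];
        names[j] = names[i];
        names[i] = tmp;
    }
}
```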
Fixes #3007.
credit to oss-fuzz
In rare circumstances, the block-splitter might cut a block at the exact beginning of a repcode.
In that case, since litLength == 0, if the repcode expected 1+ literals in front, its meaning changes.
This scenario is controlled in ZSTD_seqStore_resolveOffCodes(),
and the repcode is transformed into a raw offset when its new meaning is incorrect.
In more complex scenarios, the previous block might be emitted as uncompressed after all,
thus modifying the expected repcode history.
In the case discovered by oss-fuzz, the first block is emitted as uncompressed,
so the repcode history remains at default values: 1,4,8.
But since the starting repcode is repcode3 and the literal length is 0,
its meaning becomes repcode1 - 1.
Since repcode1 == 1, this results in an offset value of 0, which is invalid.
So that's what the `assert()` was verifying: the result of the repcode translation should be a valid offset.
But actually, it doesn't matter, because this result is then compared to reality,
and since it's an invalid offset, it will necessarily be found incorrect and discarded,
and the repcode will be replaced by a raw offset.
So the `assert()` is not useful.
Furthermore, it's incorrect, because it assumes this situation cannot happen, but it does, as described in the scenario above.
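For reference, a small illustration of the repcode rule from the format specification (the function and names are illustrative, not the library's internals), showing how the reported scenario produces an offset of 0:

```c
#include <stddef.h>

/* Repcode resolution per the zstd format spec: with litLength == 0,
 * repcode1/2/3 shift to rep2, rep3, and rep1 - 1 respectively. */
static size_t resolveRepOffset(const size_t rep[3], unsigned repIdx, size_t litLength)
{
    if (litLength != 0) return rep[repIdx - 1];   /* repIdx in {1,2,3} */
    switch (repIdx) {
        case 1:  return rep[1];
        case 2:  return rep[2];
        default: return rep[0] - 1;               /* repcode3 becomes rep1 - 1 */
    }
}

/* Scenario from the report: default history {1,4,8}, repcode3, litLength == 0
 * => rep1 - 1 == 0, an invalid offset, which is what tripped the assert(). */
```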
Knowing the version of zlib/lz4/lzma we're linking against is very
useful for debugging issues with those libraries, so print it out in the
verbosity 4 version output.
Also print this information at the top of `playTests.sh`.
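A minimal sketch of the idea (the three version-query functions are the libraries' real entry points; everything else, including how the CLI conditions them on build macros, is omitted here):

```c
#include <stdio.h>
#include <zlib.h>     /* zlibVersion() */
#include <lz4.h>      /* LZ4_versionString() */
#include <lzma.h>     /* lzma_version_string() */

/* Print the versions of the compression libraries we are linked against. */
static void printLinkedLibVersions(void)
{
    printf("zlib %s, lz4 %s, lzma %s\n",
           zlibVersion(), LZ4_versionString(), lzma_version_string());
}
```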
which properly represents the maximum bit size of compressed literals (11) as defined in the specification.
To be preferred over HUF_TABLELOG_DEFAULT, which represents the same value, but only by accident.
Name selected to keep the same convention as existing width definitions,
MLFSELog, LLFSELog and OffFSELog.
In rare cases, the default Huffman depth selector is a bit too harsh,
requiring brutal adaptations to the tree,
resulting in some loss of compression ratio.
This new heuristic avoids the worst cases, favoring compression ratio.
As an example, compression of a specific distribution of 771 literals
is now improved to 441 bytes, from 601 bytes before.