Credit to oss-fuzz.
In rare circumstances, the block splitter may cut a block at the exact beginning of a repcode.
In that case, since litLength == 0, if the repcode expected one or more literals in front of it, its meaning changes.
This scenario is handled in ZSTD_seqStore_resolveOffCodes(),
where the repcode is transformed into a raw offset when its new meaning is incorrect.
In more complex scenarios, the previous block might be emitted as uncompressed after all,
thus modifying the expected repcode history.
In the case discovered by oss-fuzz, the first block is emitted as uncompressed,
so the repcode history remains at its default values: {1, 4, 8}.
But since the starting repcode is repcode3, and the literal length is 0,
its meaning becomes repcode1 - 1.
Since repcode1 == 1, this results in an offset value of 0, which is invalid.
That's what the `assert()` was verifying: the result of the repcode translation should be a valid offset.
But actually, it doesn't matter: this result is then compared against the actual match,
and since it's an invalid offset, it will necessarily be discarded,
and the repcode replaced by a raw offset.
So the `assert()` is not useful.
Furthermore, it's incorrect: it assumes this situation cannot happen, but it does, as described in the scenario above.
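For reference, here is a minimal sketch of the translation rule at play (illustrative only: the function name and shape are assumptions, not the actual ZSTD_seqStore_resolveOffCodes() code):

```c
#include <assert.h>
#include <stdio.h>

/* rep[] holds the three most recent offsets; the default history is {1, 4, 8}. */
static unsigned repcodeToOffset(const unsigned rep[3],
                                unsigned repcode,    /* 1, 2 or 3 */
                                unsigned litLength)
{
    assert(repcode >= 1 && repcode <= 3);
    if (litLength == 0) {
        /* with no literals in front, indices shift by one,
         * and repcode3 means "repcode1 - 1" */
        return (repcode == 3) ? rep[0] - 1 : rep[repcode];
    }
    return rep[repcode - 1];
}

int main(void)
{
    unsigned const rep[3] = { 1, 4, 8 };          /* default history */
    printf("%u\n", repcodeToOffset(rep, 3, 0));   /* prints 0 : an invalid offset */
    return 0;
}
```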
Knowing the version of zlib/lz4/lzma we're linking against is very
useful for debugging issues with those libraries, so print it out in the
verbosity 4 version output.
Also print this information at the top of `playTests.sh`.
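A possible shape for that printout (a sketch: the guard macros mirror zstd's optional-format build flags and are assumptions here; the version calls are each library's standard API):

```c
#include <stdio.h>
#ifdef ZSTD_GZCOMPRESS
#  include <zlib.h>
#endif
#ifdef ZSTD_LZMACOMPRESS
#  include <lzma.h>
#endif
#ifdef ZSTD_LZ4COMPRESS
#  include <lz4.h>
#endif

/* print the runtime version of each optional library we linked against */
static void printLinkedLibVersions(void)
{
#ifdef ZSTD_GZCOMPRESS
    printf("zlib version %s\n", zlibVersion());
#endif
#ifdef ZSTD_LZMACOMPRESS
    printf("lzma version %s\n", lzma_version_string());
#endif
#ifdef ZSTD_LZ4COMPRESS
    printf("lz4 version %s\n", LZ4_versionString());
#endif
}
```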
Introduced a constant which properly represents the maximum bit size of compressed literals (11), as defined in the specification.
It is to be preferred over HUF_TABLELOG_DEFAULT, which represents the same value, but only by accident.
Name selected to keep the same convention as existing width definitions,
MLFSELog, LLFSELog and OffFSELog.
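Illustratively, the definition amounts to the following (the constant name is an assumption here, chosen only to match that convention):

```c
/* maximum tablelog for compressed-literals Huffman, per the format specification */
#define LitHufLog 11
```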
In rare cases, the default Huffman depth selector is a bit too harsh,
requiring drastic adaptations of the tree
and resulting in some loss of compression ratio.
This new heuristic avoids the worst cases, favoring compression ratio.
As an example, compression of a specific distribution of 771 literals
is now improved to 441 bytes, from 601 bytes before.
`-r` on an empty directory used to result in zstd waiting for input from stdin. Now zstd exits without error and prints a warning message explaining why no processing happened (no files or directories to process).
* Refactored fileio.c:
- Extracted asyncio code to fileio_asyncio.c/.h
- Moved type definitions to fileio_types.h
- Moved common macro definitions needed by both fileio.c and fileio_asyncio.c to fileio_common.h
* Bugfix - rename fileio_asycio to fileio_asyncio
* Added copyrights & license to new files
* CR fixes
* Async IO decompression:
- Added --[no-]asyncio flag for CLI decompression.
- Replaced dstBuffer in decompression with a pool of write jobs.
- Added the ability to execute write jobs in a separate thread.
- Added the ability to wait (join) on all jobs in a thread pool (queued and running); a condensed sketch of this design follows below.
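A condensed sketch of the write-job idea, assuming POSIX threads (names and structure are illustrative, not the actual fileio_asyncio code): decompressed buffers are queued as jobs, a dedicated writer thread consumes them, and join blocks until every queued and running job has completed.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct WriteJob {
    void* buf;
    size_t size;
    struct WriteJob* next;
} WriteJob;

typedef struct {
    FILE* dstFile;
    WriteJob* head;
    WriteJob* tail;
    unsigned pending;             /* jobs queued + currently being written */
    int shutdown;
    pthread_mutex_t mutex;
    pthread_cond_t jobAvailable;  /* signaled on enqueue or shutdown */
    pthread_cond_t allDone;       /* signaled when pending drops to 0 */
    pthread_t writerThread;
} WritePool;

/* dedicated writer thread: pops jobs and performs the actual file writes */
static void* writerLoop(void* opaque)
{
    WritePool* const pool = (WritePool*)opaque;
    for (;;) {
        pthread_mutex_lock(&pool->mutex);
        while (pool->head == NULL && !pool->shutdown)
            pthread_cond_wait(&pool->jobAvailable, &pool->mutex);
        if (pool->head == NULL) {  /* shutdown requested and queue drained */
            pthread_mutex_unlock(&pool->mutex);
            return NULL;
        }
        WriteJob* const job = pool->head;
        pool->head = job->next;
        if (pool->head == NULL) pool->tail = NULL;
        pthread_mutex_unlock(&pool->mutex);

        fwrite(job->buf, 1, job->size, pool->dstFile);
        free(job->buf);
        free(job);

        pthread_mutex_lock(&pool->mutex);
        if (--pool->pending == 0) pthread_cond_broadcast(&pool->allDone);
        pthread_mutex_unlock(&pool->mutex);
    }
}

static void WritePool_init(WritePool* pool, FILE* dstFile)
{
    pool->dstFile = dstFile;
    pool->head = pool->tail = NULL;
    pool->pending = 0;
    pool->shutdown = 0;
    pthread_mutex_init(&pool->mutex, NULL);
    pthread_cond_init(&pool->jobAvailable, NULL);
    pthread_cond_init(&pool->allDone, NULL);
    pthread_create(&pool->writerThread, NULL, writerLoop, pool);
}

/* enqueue a filled buffer; ownership of buf transfers to the pool */
static void WritePool_enqueue(WritePool* pool, void* buf, size_t size)
{
    WriteJob* const job = (WriteJob*)malloc(sizeof(*job));
    job->buf = buf; job->size = size; job->next = NULL;
    pthread_mutex_lock(&pool->mutex);
    if (pool->tail) pool->tail->next = job; else pool->head = job;
    pool->tail = job;
    pool->pending++;
    pthread_cond_signal(&pool->jobAvailable);
    pthread_mutex_unlock(&pool->mutex);
}

/* block until every queued and running job has completed */
static void WritePool_join(WritePool* pool)
{
    pthread_mutex_lock(&pool->mutex);
    while (pool->pending > 0)
        pthread_cond_wait(&pool->allDone, &pool->mutex);
    pthread_mutex_unlock(&pool->mutex);
}

static void WritePool_destroy(WritePool* pool)
{
    WritePool_join(pool);
    pthread_mutex_lock(&pool->mutex);
    pool->shutdown = 1;
    pthread_cond_signal(&pool->jobAvailable);
    pthread_mutex_unlock(&pool->mutex);
    pthread_join(pool->writerThread, NULL);
}
```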
We previously triggered release artifact generation on release creation. We
sometimes observed that the action failed to run. I hypothesized that we were
hitting rate limiting or something. I just stumbled across
[this documentation](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#release), which says:
> Note: Workflows are not triggered for the `created`, `edited`, or `deleted`
> activity types for draft releases. When you create your release through the
> GitHub browser UI, your release may automatically be saved as a draft.
This must have been what was happening. This commit therefore changes the
trigger to the `published` activity. This should be more reliable.
This does have the unfortunate side effect that artifacts won't be generated
or attached until *after* the release has been published, which is what I was
trying to avoid by using the `created` activity. Oh well.
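For reference, the resulting trigger looks like this (a sketch; the surrounding workflow content is omitted):

```yaml
on:
  release:
    types: [published]   # previously: [created]
```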