2154 Commits

Nick Terrell
a91e7ec175 Fix corruption that rarely occurs in 32-bit mode with wlog=25
Fix an off-by-one error in the compressor that emits corrupt blocks if:

* Zstd is compiled in 32-bit mode
* The windowLog is exactly 25
* An offset of 2^25-3, 2^25-2, 2^25-1, or 2^25 is emitted
* The bitstream has 7 bits leftover before writing the offset

This bug has been present since before v1.0, but couldn't easily be
triggered, since until fairly recently zstd wasn't able to find
matches that were within 128KB of the window size.

Add a test case, and fix 2 bugs in `ZSTD_compressSequences()`:
* The `ZSTD_isRLE()` check was incorrect. It wouldn't produce
  corruption, but it could waste CPU and fail to emit RLE even when the
  block was RLE.
* One windowSize was `1 << windowLog`, not `1u << windowLog`

Thanks to @tansy for finding the issue, and giving us a reproducer!

Fixes Issue #3350.
2022-12-15 14:41:50 -08:00
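
Regarding the `1u << windowLog` fix above, here is a minimal standalone sketch (editor's illustration, not zstd source) of why the unsigned suffix matters when computing a window size:

```c
#include <stdio.h>

int main(void)
{
    unsigned const windowLog = 31;  /* hypothetical upper bound, for illustration */
    /* (1 << windowLog) shifts a signed int: for windowLog == 31 the result
     * overflows int, which is undefined behavior in C. */
    /* (1u << windowLog) shifts an unsigned int: well-defined, yields 2^31. */
    unsigned const windowSize = 1u << windowLog;
    printf("windowSize = %u\n", windowSize);
    return 0;
}
```
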
daniellerozenblit
e2fc93340f
Merge branch 'dev' into http-to-https 2022-12-15 10:46:13 -05:00
Danielle Rozenblit
4dffc35f2e Convert references to https from http 2022-12-14 06:58:35 -08:00
Danielle Rozenblit
69ec75f0d5 fix window resizing edge case 2022-12-13 08:35:20 -08:00
Yann Collet
80cf73fd66
Merge pull request #3320 from cwoffenden/msvc-arm64-size_t
Fix for MSVC C4267 warning on ARM64 (which becomes error C2220 with /WX)
2022-12-04 18:55:30 -08:00
Yann Collet
ecd7601c36 minor: proper pledgedSrcSize trace 2022-11-22 06:00:45 -08:00
Elliot Gorokhovsky
c8d870fe52 Improve LDM cparam validation logic 2022-11-21 15:39:18 -05:00
Carl Woffenden
0547c3d3f8 Random edit to re-run the CI
I don't believe the (x64) Mac failure is related to this error, since x64 would take the SSE path.
2022-11-19 19:04:08 +01:00
Carl Woffenden
0168914490 Fix for MSVC C4267 error 2022-11-18 11:31:17 +01:00
Nick Terrell
dcc7228de9
[lazy] Use switch instead of indirect function calls. (#3295)
Use a switch statement to select the search function instead of an
indirect function call. This results in a sizable performance win.

This PR is a modification of the approach taken in PR #2828.
When I measured performance for that commit, it was neutral.
However, I now see a performance regression on gcc, but still
neutral on clang. I'm measuring on the same platform, but with
newer compilers. The new approach beats both the current dev
branch and the baseline before PR #2828 was merged.

This PR is necessary for Issue #3275, to update zstd in the kernel.
Without this PR there is a large regression in greedy through btlazy2
compression speed. With this PR it is about neutral.

gcc version: 12.2.0
clang version: 14.0.6
dataset: silesia.tar

| Compiler | Level | Dev Speed (MB/s) | PR Speed (MB/s) | Delta  |
|----------|-------|------------------|-----------------|--------|
| gcc      |     5 |            102.6 |           113.7 | +10.8% |
| gcc      |     7 |             66.6 |            74.8 | +12.3% |
| gcc      |     9 |             51.5 |            58.9 | +14.3% |
| gcc      |    13 |             14.3 |            14.3 |  +0.0% |
| clang    |     5 |            108.1 |           114.8 |  +6.2% |
| clang    |     7 |             68.5 |            72.3 |  +5.5% |
| clang    |     9 |             53.2 |            56.2 |  +5.6% |
| clang    |    13 |             14.3 |            14.7 |  +2.8% |

The binary size stays just about the same for clang and gcc, measured
using the `size` command:

| Compiler | Branch | Text    | Data | BSS | Total   |
|----------|--------|---------|------|-----|---------|
| gcc      | dev    | 1127950 | 3312 | 280 | 1131542 |
| gcc      | PR     | 1123422 | 2512 | 280 | 1126214 |
| clang    | dev    | 1046254 | 3256 | 216 | 1049726 |
| clang    | PR     | 1048198 | 2296 | 216 | 1050710 |
2022-10-21 17:14:02 -07:00
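
A minimal sketch of the dispatch pattern described in the commit above, with hypothetical function names (not zstd's actual matchfinder selector): replacing an indirect call through a function-pointer table with a switch lets the compiler see, and potentially inline, each call target.

```c
#include <stddef.h>

typedef enum { search_hashChain, search_binaryTree, search_rowHash } searchMethod_e;

static size_t searchHashChain(const char* ip)  { (void)ip; return 0; }  /* stub */
static size_t searchBinaryTree(const char* ip) { (void)ip; return 0; }  /* stub */
static size_t searchRowHash(const char* ip)    { (void)ip; return 0; }  /* stub */

/* Indirect version: the call target is opaque at the call site, so the
 * compiler cannot inline or specialize it. */
static size_t (* const kSearchFuncs[])(const char*) =
    { searchHashChain, searchBinaryTree, searchRowHash };

static size_t searchIndirect(searchMethod_e m, const char* ip)
{
    return kSearchFuncs[m](ip);
}

/* Switch version: every case is a direct call the compiler can inline. */
static size_t searchSwitch(searchMethod_e m, const char* ip)
{
    switch (m) {
    case search_hashChain:  return searchHashChain(ip);
    case search_binaryTree: return searchBinaryTree(ip);
    default:                return searchRowHash(ip);
    }
}
```
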
Danielle Rozenblit
a910489ff5 No longer pass srcSize to minTableLog 2022-10-17 08:03:44 -07:00
Danielle Rozenblit
b34729018c Minor simplification: no longer need to check src size if using cardinality for minTableLog 2022-10-17 07:55:07 -08:00
Danielle Rozenblit
75cd42afd7 Update regression results and better variable naming for HUF_cardinality 2022-10-14 13:37:19 -07:00
Danielle Rozenblit
e60cae33cf Additional ratio optimizations 2022-10-14 10:37:35 -07:00
Danielle Rozenblit
fa7d9c1139 Set threshold to use optimal table log 2022-10-11 14:33:25 -07:00
Danielle Rozenblit
8888a2ddcc CI failure fixes 2022-10-11 13:12:19 -07:00
Qiongsi Wu
1b445c1c2e
Fix hash4Ptr for big endian (#3227) 2022-08-01 10:41:24 -07:00
udayanbapat
43f21a600e
Initial commit to address #3090. Added support to decompress empty block. (#3118)
* Initial commit to address #3090. Added support to decompress empty block

* Update zstd_decompress_block.c

Addressed review comments for the case of 'set_basic'

* Update lib/decompress/zstd_decompress_block.c

Co-authored-by: Nick Terrell <nickrterrell@gmail.com>

* Update lib/decompress/zstd_decompress_block.c

Co-authored-by: Nick Terrell <nickrterrell@gmail.com>

Co-authored-by: Nick Terrell <nickrterrell@gmail.com>
2022-07-14 11:54:34 -07:00
Elliot Gorokhovsky
cb9e341129 Nits 2022-06-23 16:59:21 -04:00
Elliot Gorokhovsky
93b89fb24b Add docs 2022-06-22 16:13:07 -04:00
Elliot Gorokhovsky
2a128110d0 Add prefetchCDictTables CCtxParam 2022-06-22 16:13:07 -04:00
Elliot Gorokhovsky
f6ef14329f
"Short cache" optimization for level 1-4 DMS (+5-30% compression speed) (#3152)
* first attempt at fast DMS short cache

* significant wins for some scenarios

* fix all clang regressions

* nits

* fix 1.5% gcc11 regression on hot 110Kdict scenario

* fix CI

* nit

* Add tags to doublefast hash table

* use tags in doublefast DMS

* Fix CI

* Clean up some hardcoded logic / constants

* Switch forCCtx to an enum

* nit

* add short cache to ip+1 long search

* Move tag size into hashLog

* Minor nits

* Truncate dictionaries greater than 16MB in short cache mode

* Helper function for tag comparison

* Cap short cache hashLog at 24 to prevent overflow

* size_t dictTagsMatch -> int dictTagsMatch

* nit

* Clean up and comment dictionary truncation

* Move ZSTD_tableFillPurpose_e next to ZSTD_dictTableLoadMethod_e

* Comment and expand helper functions

* Asserts and documentation

* nit
2022-06-21 17:27:19 -04:00
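
An editor's sketch of the "short cache" idea from the commit above (names hypothetical, not the actual zstd helpers): each hash-table entry packs an 8-bit tag derived from the hash into its low bits, so most non-matching dictionary candidates can be rejected with a register compare instead of a dereference into cold dictionary memory. With 8 tag bits, only 24 bits remain for the position, consistent with capping hashLog at 24 and truncating dictionaries above 16MB.

```c
#include <stdint.h>

#define SHORT_CACHE_TAG_BITS 8u
#define SHORT_CACHE_TAG_MASK ((1u << SHORT_CACHE_TAG_BITS) - 1)

/* Store: position goes in the upper 24 bits, the tag in the lower 8.
 * hashAndTag is assumed to hold (hash << 8) | tag. The index must fit in
 * 32 - 8 = 24 bits, hence the 16MB dictionary cap. */
static void writeTaggedIndex(uint32_t* hashTable, size_t hashAndTag, uint32_t index)
{
    size_t const hash  = hashAndTag >> SHORT_CACHE_TAG_BITS;
    uint32_t const tag = (uint32_t)(hashAndTag & SHORT_CACHE_TAG_MASK);
    hashTable[hash] = (index << SHORT_CACHE_TAG_BITS) | tag;
}

/* Probe: compare tags first; only a tag match warrants reading the
 * (possibly cache-cold) dictionary bytes to confirm a real match. */
static int tagsMatch(uint32_t entry, size_t hashAndTag)
{
    return ((entry ^ (uint32_t)hashAndTag) & SHORT_CACHE_TAG_MASK) == 0;
}
```
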
Daniel Kutenin
05f3f415ce
Fix big endian ARM NEON path
The big-endian path does not use the NEON acceleration, but the bit grouping was still being applied.
2022-06-13 09:16:24 +01:00
Elliot Gorokhovsky
f313a773a4
Merge pull request #3157 from embg/huge_dict_bugfix
Bugfix for huge dictionaries
2022-06-09 15:35:29 -04:00
Elliot Gorokhovsky
31bd6402c6 Bugfix for huge dictionaries 2022-06-09 11:39:30 -04:00
Nick Terrell
7c05b9aec3 Remove expensive assert in --rsyncable hot loop
This assert slows the loop down by 10x. We can get similar
coverage by asserting at the beginning & end of the loop.

We need this fix because Debian compiles zstd with asserts
enabled. Separately, we should ask them why, and whether they would
consider disabling asserts in their builds, since we don't
optimize for assert-enabled builds.

Fixes Issue #3150.
2022-06-06 11:56:13 -07:00
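
An editor's illustration of the pattern (hypothetical code, not the --rsyncable loop itself): an expensive invariant asserted on every iteration can dominate the loop in assert-enabled builds, while checking cheap conditions only at the loop boundaries retains most of the coverage.

```c
#include <assert.h>
#include <stddef.h>

static size_t countZeroBytes(const unsigned char* buf, size_t n)
{
    size_t zeros = 0;
    assert(buf != NULL || n == 0);   /* cheap check once, before the loop */
    for (size_t i = 0; i < n; ++i) {
        /* An assert that re-validates state on every iteration, e.g.
         * assert(recount(buf, i) == zeros), would make this loop quadratic
         * in assert-enabled builds; it is deliberately omitted here. */
        zeros += (buf[i] == 0);
    }
    assert(zeros <= n);              /* cheap check once, after the loop */
    return zeros;
}
```
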
ihsinme
5081ccb056
Update zstd_compress.c 2022-05-30 14:08:19 +03:00
Danila Kutenin
9166c6ae20 Fix another unused-variable warning 2022-05-23 14:51:47 +00:00
Danila Kutenin
6b561d230f Move NEON version to a separate function and fix indentation 2022-05-23 14:49:35 +00:00
Danila Kutenin
778f639be9 Disable unused variable warning 2022-05-22 10:50:33 +00:00
Danila Kutenin
e11783b04d [lazy] Optimize ZSTD_row_getMatchMask for level 8-10
We found that the movemask emulation was either not used properly or
consumed too much CPU. This effort optimizes the movemask emulation on ARM.

For levels 8-9 we saw 3-5% improvements. For level 10 we saw a 1.5%
improvement.

The key idea is not to use pure movemasks but to have groups of bits.
For rowEntries == 16 and 32 we have groups of size 4 and 2
respectively, meaning each bit is duplicated within its group.

We then AND so that only one bit is set per group, so that iterating
with the bit-clearing idiom `a &= (a - 1)` still works.

Also, aarch64 has rotate instructions only for 32 and 64 bits, not for
16; that's why we see more improvement for levels 8-9.

The vshrn_n_u16 instruction is used to achieve this: it shifts every
u16 right by 4 and narrows it to the lower 8 bits. It's also used in
[Folly](c570259008/folly/container/detail/F14Table.h (L446)),
and it takes 2 cycles according to the Neoverse-N{1,2} guidelines.

The 64-bit movemask is already well optimized. We have ongoing experiments
but have not been able to validate that other implementations are reliably
faster.
2022-05-22 10:44:24 +00:00
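
An editor's sketch of the vshrn trick for the rowEntries == 16 case (illustrative and simplified; the real code also rotates the mask by the row head): the compare result is reinterpreted as eight u16 lanes, each shifted right by 4 and narrowed to 8 bits, so every 0xFF/0x00 compare byte collapses into a 4-bit group of a single 64-bit mask; keeping one bit per group then makes the `a &= (a - 1)` iteration valid.

```c
#include <arm_neon.h>   /* requires an ARM NEON target */
#include <stdint.h>

/* Returns a 64-bit mask with exactly one bit set per matching byte of the
 * 16-entry row (one bit per 4-bit group). */
static uint64_t matchMask16(const uint8_t* row, uint8_t tag)
{
    uint8x16_t const chunk     = vld1q_u8(row);
    uint8x16_t const equalMask = vceqq_u8(chunk, vdupq_n_u8(tag)); /* 0xFF per match */
    /* Shift each u16 lane right by 4 and narrow to u8: each compare byte
     * becomes one nibble of the 64-bit result. */
    uint8x8_t const narrowed   = vshrn_n_u16(vreinterpretq_u16_u8(equalMask), 4);
    uint64_t const grouped     = vget_lane_u64(vreinterpret_u64_u8(narrowed), 0);
    /* Keep a single bit per group so `m &= m - 1` visits each match once. */
    return grouped & 0x8888888888888888ULL;
}
```
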
Elliot Gorokhovsky
f349d18776
Merge pull request #3127 from embg/repcode_history
Correct and clarify repcode offset history logic
2022-05-12 13:50:15 -04:00
Elliot Gorokhovsky
3620a0a565 Nits 2022-05-12 12:53:15 -04:00
W. Felix Handte
1dd046a507 Fix Comments Slightly 2022-05-11 12:38:45 -04:00
W. Felix Handte
cd1f582943 Hoist Hash Table Writes Up into Each Match Found Block
Refactoring this way avoids the bad write in the case that `step > 4`, and
is a bit more straightforward. It also seems to perform better!
2022-05-11 11:27:34 -04:00
W. Felix Handte
040986a4f4 ZSTD_fast_noDict: Minimize Checks When Writing Hash Table for ip1
This commit avoids checking whether a hashtable write is safe in two of the
three match-found paths in `ZSTD_compressBlock_fast_noDict_generic`. This
produces a ~0.5% speed-up in compression.

A comment in the code describes why we can skip this check in those two
paths (the repcode check and the first match check in the unrolled loop).

A downside is that in the new position where we make this check, we have not
yet computed `mLength`. We therefore have to avoid writing *possibly* dangerous
positions, rather than only the *actually* dangerous positions the old check
avoided. This leads to a minuscule loss in ratio (remember that this scenario
can only be triggered at very negative levels or under incompressibility
acceleration).
2022-05-10 14:29:39 -07:00
Elliot Gorokhovsky
22875ece61 Nits 2022-05-09 21:01:38 -04:00
Elliot Gorokhovsky
97aabc496e Correct and clarify repcode offset history logic 2022-05-09 21:01:38 -04:00
Elliot Gorokhovsky
ac371be27b Remove hasStep variant (not enough wins to justify the code size increase) 2022-04-28 18:06:24 -04:00
Elliot Gorokhovsky
ce6b69f5c5 Final nit 2022-04-28 14:49:45 -04:00
Elliot Gorokhovsky
6a2e1f7c69 Revert "Hardcode repcode safety check, fix cosmetic nits"
This reverts commit 518cb83833074d304dfcaa93cfc16039ea4683c8.
2022-04-27 18:16:21 -04:00
Elliot Gorokhovsky
518cb83833 Hardcode repcode safety check, fix cosmetic nits 2022-04-26 17:54:25 -04:00
Elliot Gorokhovsky
809f652912 Optimize repcode predicate, hardcode hasStep == 0 scenario, cosmetic fixes 2022-04-20 14:40:52 -04:00
Elliot Gorokhovsky
2820efe7ec Nits 2022-04-19 11:39:52 -04:00
Elliot Gorokhovsky
3536262f70 Port noDict pipeline 2022-04-15 12:16:16 -04:00
Elliot Gorokhovsky
64efba4c5e
Software pipeline for ZSTD_compressBlock_fast_dictMatchState (#3086)
* prefetch dict content inside loop

* ip0/ip1 pipeline

* add L2_4 prefetch to dms pipeline

* Remove L1 prefetch

* Remove L2 prefetching

* Reduce # of gotos

* Cosmetic fixes

* Check final position sometimes

* Track step size as in bc768bc

* Fix nits
2022-03-17 12:35:11 -04:00
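
A hypothetical sketch of the ip0/ip1 pipelining idea from the bullet list above (toy hash and table, not zstd's): by carrying two positions, the hash of the next position is computed while the current one is probed, overlapping hash latency with table work.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy 4-byte hash into a 12-bit table index (not zstd's hash). */
static uint32_t hash4(const uint8_t* p)
{
    uint32_t const v = (uint32_t)p[0] | ((uint32_t)p[1] << 8)
                     | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    return (v * 2654435761u) >> 20;
}

/* Two-position pipeline: the hash for ip1 is computed while ip0 is being
 * handled, so each iteration starts its table probe without waiting on the
 * hash of the position it is about to process.
 * `table` is assumed to have at least 4096 zero-initialized entries. */
static size_t countCandidates(const uint8_t* src, size_t srcSize, uint32_t* table)
{
    if (srcSize < 5) return 0;
    const uint8_t* ip0 = src;
    const uint8_t* ip1 = src + 1;
    const uint8_t* const end = src + srcSize - 4;
    uint32_t h0 = hash4(ip0);
    uint32_t h1 = hash4(ip1);
    size_t hits = 0;
    while (ip1 < end) {
        hits += (table[h0] != 0);           /* probe for ip0: hash ready last step */
        table[h0] = (uint32_t)(ip0 - src);  /* record ip0, as a matchfinder would */
        ip0 = ip1;
        ip1 = ip1 + 1;
        h0 = h1;                            /* advance the pipeline */
        h1 = hash4(ip1);
    }
    return hits;
}
```
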
Dominique Pelle
b772f53952 Typo and grammar fixes 2022-03-12 08:58:04 +01:00
Elliot Gorokhovsky
529cd7b821 Fix nits 2022-02-14 14:24:50 -05:00
Elliot Gorokhovsky
db2f4a6532 Move bitwise builtins into bits.h 2022-02-14 11:16:03 -05:00
binhdvo
b9566fc558
Add rails for huffman table log calculation (#3047) 2022-02-02 15:12:48 -05:00