* Extract the overflow correction into a helper function.
* Load the dictionary `ZSTD_CHUNKSIZE_MAX = 512 MB` bytes at a time and overflow correct between each chunk.

Data corruption could happen when all of these conditions are true:

* You are using multithreading mode
* Your overlap size is >= 512 MB (implies window size >= 512 MB)
* You are using a strategy >= `ZSTD_btlazy2`
* You are compressing more than 4 GB

The problem is that when loading a large dictionary we don't do overflow correction. We can only load 512 MB at a time, and may need to do overflow correction before each chunk.
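Below is a minimal C sketch of the chunked-load pattern described above, not zstd's actual internals: the `MatchState` layout and the helpers `needOverflowCorrection`, `correctOverflow`, `loadChunk`, and `loadDictionaryChunked` are hypothetical stand-ins, the overflow threshold is illustrative, and `CHUNKSIZE_MAX` merely mirrors the 512 MB `ZSTD_CHUNKSIZE_MAX` limit mentioned in the message.

```c
#include <stddef.h>
#include <stdio.h>

/* Stand-in for ZSTD_CHUNKSIZE_MAX (512 MB). */
#define CHUNKSIZE_MAX ((size_t)512 * 1024 * 1024)

/* Hypothetical, simplified matchfinder state: the real state also holds
 * hash/chain tables whose entries must be rebased on correction. */
typedef struct {
    size_t windowSize; /* how far back matches may reach */
    unsigned lowLimit; /* smallest index still valid in the window */
    unsigned nextIdx;  /* index the next loaded byte will receive */
} MatchState;

/* Returns nonzero when loading `chunkSize` more bytes would push the
 * 32-bit indices too close to wrapping (illustrative threshold). */
static int needOverflowCorrection(const MatchState *ms, size_t chunkSize)
{
    return (size_t)ms->nextIdx + chunkSize > 0xF0000000u;
}

/* Rebase indices so only the last windowSize bytes stay reachable.
 * A real implementation also subtracts the correction from every table entry. */
static void correctOverflow(MatchState *ms)
{
    unsigned correction = ms->nextIdx - (unsigned)ms->windowSize;
    ms->lowLimit = (ms->lowLimit > correction) ? ms->lowLimit - correction : 0;
    ms->nextIdx -= correction;
}

/* Feed one chunk to the matchfinder (table updates elided). */
static void loadChunk(MatchState *ms, const void *src, size_t size)
{
    (void)src;
    ms->nextIdx += (unsigned)size;
}

/* The fix: load the dictionary at most CHUNKSIZE_MAX bytes at a time,
 * checking for (and applying) overflow correction before each chunk. */
static void loadDictionaryChunked(MatchState *ms, const char *dict, size_t dictSize)
{
    const char *ip = dict;
    const char *const iend = dict + dictSize;
    while (ip < iend) {
        size_t chunk = (size_t)(iend - ip);
        if (chunk > CHUNKSIZE_MAX) chunk = CHUNKSIZE_MAX;
        if (needOverflowCorrection(ms, chunk))
            correctOverflow(ms);
        loadChunk(ms, ip, chunk);
        ip += chunk;
    }
}

int main(void)
{
    static const char dict[1024] = { 0 };
    MatchState ms = { /* windowSize */ (size_t)1 << 27, /* lowLimit */ 0, /* nextIdx */ 0 };
    loadDictionaryChunked(&ms, dict, sizeof(dict));
    printf("loaded %u bytes worth of indices\n", ms.nextIdx);
    return 0;
}
```

The point of the sketch is the loop structure: the overflow check runs before every chunk, not just once before the whole dictionary, so indices cannot silently wrap partway through a dictionary larger than the correction threshold.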