
Clamp the gzip fuzzer output even more aggressively.

Even at O(N) scaling, the fuzzer infrastructure is kinda slow.

Bug: 940393
Change-Id: I0261e80e0cb24fbcbced52cd7a1d1de7ad8af652
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1520883
Commit-Queue: David Benjamin <davidben@chromium.org>
Commit-Queue: Matt Menke <mmenke@chromium.org>
Auto-Submit: David Benjamin <davidben@chromium.org>
Reviewed-by: Matt Menke <mmenke@chromium.org>
Cr-Commit-Position: refs/heads/master@{#640465}
Author: David Benjamin, 2019-03-13 20:39:05 +00:00 (committed by Commit Bot)
parent 74379f8b44, commit 4078281dfc

@@ -24,10 +24,9 @@ extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
   // Gzip has a maximum compression ratio of 1032x. While, strictly speaking,
   // linear, this means the fuzzer will often get stuck. Stop reading at a more
-  // modest compression ratio of 10x, or 2 MiB, whichever is larger. See
+  // modest compression ratio of 2x, or 512 KiB, whichever is larger. See
   // https://crbug.com/921075.
-  size_t max_output =
-      std::max(10u * size, static_cast<size_t>(2 * 1024 * 1024));
+  size_t max_output = std::max(2u * size, static_cast<size_t>(512 * 1024));
   const net::SourceStream::SourceType kGzipTypes[] = {
       net::SourceStream::TYPE_GZIP, net::SourceStream::TYPE_DEFLATE};