Order dictionary indices to prevent 2x-50x size regression
Summary:
I checked Vader [results](https://fburl.com/daiquery/4kk9jnm1) and found significant size differences between Nimble JNI and Velox rewrites for some files.

After some debugging I learned that we almost never get exactly the same Nimble files. You can do the rewrite 100 times and get 100 files with different sizes, meaning that the Nimble writer has non-deterministic behavior. For example, in P1630232570, **rewriting the same file multiple times yields files with sizes ranging from 112MB to 364MB**.
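
For context, a minimal standalone sketch of the underlying behavior (not part of this diff): absl::flat_hash_map seeds its hashing per process, so iterating the same map contents can yield a different order on every run.

```cpp
#include <cstdio>

#include "absl/container/flat_hash_map.h"

int main() {
  absl::flat_hash_map<int, int> uniqueCounts;
  for (int i = 0; i < 16; ++i) {
    uniqueCounts[i] = 1;
  }
  // The iteration order below typically differs between runs because the
  // hash is seeded per process; anything built from this order (such as a
  // dictionary alphabet) becomes non-deterministic as well.
  for (const auto& [key, count] : uniqueCounts) {
    std::printf("%d:%d ", key, count);
  }
  std::printf("\n");
  return 0;
}
```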

After checking the encodings and enabling NIMBLE_ENCODING_SELECTION_DEBUG in the EncodingSelectionPolicy I **root-caused it to the dictionary encoding: all rewrites of the same file had the same alphabet, but in a different order, which in turn made the dictionary indices completely different.**

**The random order of the dictionary alphabet was caused by the use of absl::flat_hash_map for Statistic.uniqueCounts**. After switching the Statistic.uniqueCounts type to F14FastMap, which has a consistent iteration order, different rewrites of the same file became consistent.
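
For reference, the experiment described here boils down to swapping the map type behind Statistic.uniqueCounts. A rough sketch of that change (the diff below ultimately keeps absl::flat_hash_map and orders the alphabet instead):

```cpp
#include <cstdint>

#include <folly/container/F14Map.h>

// Sketch only: back uniqueCounts with folly::F14FastMap, which (per the
// observation above) iterates in a consistent order across rewrites,
// unlike absl::flat_hash_map.
template <typename T>
using UniqueCountsMap = folly::F14FastMap<T, std::uint64_t>;
```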

After a bit of digging and comparing encoding selection logs I noticed that the **main size contributor in the files with different sizes is the varint encoding of the dictionary indices**. The size of the encoded data depends on the encoded numeric value: the bigger the value, the more bytes it needs in the encoded form.
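
To illustrate (a sketch, not Nimble's actual varint routine): a varint spends 7 payload bits per byte, so the byte count grows with the magnitude of the value.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Bytes needed to store a value in a 7-bits-per-byte varint encoding.
std::size_t varintSize(std::uint64_t value) {
  std::size_t bytes = 1;
  while (value >= 128) {
    value >>= 7;
    ++bytes;
  }
  return bytes;
}

int main() {
  // 0..127 -> 1 byte, 128..16383 -> 2 bytes, 16384..2097151 -> 3 bytes, ...
  std::printf("%zu %zu %zu %zu\n",
              varintSize(127), varintSize(128),
              varintSize(16383), varintSize(16384));
  return 0;
}
```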

So, **I sorted the alphabet by key frequency and put the most frequent keys first**, which means the most frequent keys get the smallest indices. Indices 0-127 need only one byte each, indices up to 16383 need two bytes, then three, and so on. **This consistently produced small files, though still with somewhat different sizes overall. I also did the reverse experiment (least frequent keys first), and it consistently produced large files.**
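
A back-of-the-envelope sketch of why the ordering matters, using a hypothetical Zipf-like frequency distribution (not real table data): giving the smallest indices to the most frequent keys minimizes the total varint bytes spent on indices, while the reverse order maximizes it.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Bytes needed to store a value in a 7-bits-per-byte varint encoding.
std::size_t varintSize(std::uint64_t value) {
  std::size_t bytes = 1;
  while (value >= 128) {
    value >>= 7;
    ++bytes;
  }
  return bytes;
}

int main() {
  // Hypothetical skewed alphabet: entry i occurs roughly 1'000'000 / (i + 1)
  // times, so counts are already sorted in descending frequency order.
  const std::size_t alphabetCount = 100'000;
  std::vector<std::uint64_t> counts(alphabetCount);
  for (std::size_t i = 0; i < alphabetCount; ++i) {
    counts[i] = 1'000'000 / (i + 1) + 1;
  }

  std::uint64_t frequencyOrdered = 0;  // most frequent key -> smallest index
  std::uint64_t reverseOrdered = 0;    // most frequent key -> largest index
  for (std::size_t i = 0; i < alphabetCount; ++i) {
    frequencyOrdered += counts[i] * varintSize(i);
    reverseOrdered += counts[i] * varintSize(alphabetCount - 1 - i);
  }
  std::printf("frequency-ordered: %llu bytes, reverse-ordered: %llu bytes\n",
              static_cast<unsigned long long>(frequencyOrdered),
              static_cast<unsigned long long>(reverseOrdered));
  return 0;
}
```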


# Samples
Sorting the alphabet by frequency will obviously have some writer performance cost, but IMHO it's worth it. In some examples the size regression is up to 580x (!!!): instead of 60MB files we can get a 35GB file for ifr_test_hive_table.

Check out https://docs.google.com/spreadsheets/d/1d7m--4x6e0YddyfTdJvjrkYz-pB11O3sBdNM8usxmZM/edit?usp=sharing

I tested on some of the top-sized Nimble tables and on some tables where Vader shows significant size differences between JNI and Velox. `Lil` is the version from this diff, `big` is the version with reverse-sorted indices.

Some notable examples - top Nimble tables:
P1634619561

Some notable examples - top size differences:
P1634619725

# Other Notes 1
As you can see in the spreadsheet, in some cases the original Nimble file is still much smaller, which means there are still optimization and size-fixing opportunities left. Maybe sorting the alphabet itself would also lead to a size decrease, or maybe there is something else.

# Compression
It looks like we are not compressing the Varint encoding; we should absolutely start doing that.

Differential Revision: D63964981
sdruzkin authored and facebook-github-bot committed Oct 7, 2024
1 parent 43054ea commit 50b7a3d
Showing 2 changed files with 47 additions and 4 deletions.
45 changes: 41 additions & 4 deletions dwio/nimble/encodings/DictionaryEncoding.h
@@ -15,6 +15,7 @@
*/
#pragma once

#include <algorithm>
#include <span>
#include "dwio/nimble/common/Buffer.h"
#include "dwio/nimble/common/EncodingPrimitives.h"
@@ -205,10 +206,46 @@ std::string_view DictionaryEncoding<T>::encode(
alphabetMapping.reserve(alphabetCount);
Vector<physicalType> alphabet{&buffer.getMemoryPool()};
alphabet.reserve(alphabetCount);
uint32_t index = 0;
for (const auto& pair : selection.statistics().uniqueCounts()) {
alphabet.push_back(pair.first);
alphabetMapping.emplace(pair.first, index++);

/// Indices are usually stored with VARINT encoding, whose size depends on
/// the magnitude of the value (the position of its highest set bit).
///
/// 127 ((1 << 7) - 1) is the maximum number that can be stored in one byte
/// with VARINT encoding, meaning that if the alphabet has at most 128 unique
/// values, then all of them can be stored as one byte per index value.
///
/// If the alphabet has more than 128 unique values, then we need to put the
/// more frequent alphabet values at the beginning of the alphabet to reduce
/// the number of bytes needed to store the indices encoded as VARINT.
///
/// This sorting optimization gives a 3-5x size reduction when comparing the
/// most and least optimal order of indices encoded as VARINT.
if (alphabetCount > (1 << 7)) {
Vector<std::pair<physicalType, uint64_t>> sortedAlphabet{
&buffer.getMemoryPool()};
sortedAlphabet.reserve(alphabetCount);
for (const auto& pair : selection.statistics().uniqueCounts()) {
sortedAlphabet.push_back(pair);
}
std::sort(
sortedAlphabet.begin(),
sortedAlphabet.end(),
[](const std::pair<physicalType, uint64_t>& a,
const std::pair<physicalType, uint64_t>& b) {
return a.second > b.second;
});

uint32_t index = 0;
for (const auto& pair : sortedAlphabet) {
alphabet.push_back(pair.first);
alphabetMapping.emplace(pair.first, index++);
}
} else {
uint32_t index = 0;
for (const auto& pair : selection.statistics().uniqueCounts()) {
alphabet.push_back(pair.first);
alphabetMapping.emplace(pair.first, index++);
}
}

Vector<uint32_t> indices{&buffer.getMemoryPool()};
6 changes: 6 additions & 0 deletions dwio/nimble/encodings/Statistics.h
@@ -30,6 +30,12 @@ namespace facebook::nimble {
template <typename T, typename InputType = T>
class UniqueValueCounts {
public:
// NOTICE: absl::flat_hash_map has an inconsistent iteration order on every
// run. This might be a problem for encodings that depend on the order of
// the elements, such as dictionary indices encoded with VARINT encoding. It
// also means that the order of the elements in the dictionary alphabet will
// be different every time, giving inconsistent file sizes.
using MapType = absl::flat_hash_map<T, uint64_t>;

struct Iterator {