The benchmark result in the README shows that the compressed sizes are close to each other, but lz4 is much faster, taking about half the time of snappy.
However, the input file in that test seems a little small (24 KB).
So I tried a large file, about 1.5 GB, one of our real data files.
I am using Python 2.7.6 on a MacBook Air (1.7 GHz i5):
Python 2.7.6 |Anaconda 1.6.1 (x86_64)| (default, Jan 10 2014, 11:23:15)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
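For reference, this is roughly how I timed the two libraries. A minimal sketch, assuming the one-shot `lz4.frame.compress`/`decompress` and `snappy.compress`/`decompress` APIs of the current python-lz4 and python-snappy packages (the older python-lz4 exposed a block-level `lz4.compress` instead); the file path is a placeholder, and the whole file is read into memory.

```python
import time

import lz4.frame
import snappy


def bench(name, compress, decompress, data):
    # Time one-shot compression and decompression and report sizes.
    t0 = time.time()
    packed = compress(data)
    t1 = time.time()
    unpacked = decompress(packed)
    t2 = time.time()
    assert unpacked == data  # sanity check: round-trip must be lossless
    print("%s: %d -> %d bytes, compress %.2fs, decompress %.2fs"
          % (name, len(data), len(packed), t1 - t0, t2 - t1))


with open("big_input.dat", "rb") as f:  # placeholder for our ~1.5 GB file
    data = f.read()

bench("lz4", lz4.frame.compress, lz4.frame.decompress, data)
bench("snappy", snappy.compress, snappy.decompress, data)
```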
My result shows that LZ4 has a much better compression ratio: the LZ4 output is 184 MB while the snappy output is 238 MB, so lz4's compressed size is about 23% smaller.
But it takes more than 4x the time for compression and more than 1.7x for decompression.
So it seems that LZ4 is considerably slower than snappy on a large data set.
Is this expected, or do I need to tune the block size?
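To illustrate what I mean by tuning: a hedged sketch of sweeping the frame block size, assuming the `lz4.frame` API where `compress` takes a `block_size` argument with `BLOCKSIZE_MAX*` constants (the older block-level API may not expose an equivalent knob).

```python
import time

import lz4.frame

with open("big_input.dat", "rb") as f:  # same placeholder file as above
    data = f.read()

# Compare compressed size and time across the available frame block sizes.
for label, bs in [("64KB", lz4.frame.BLOCKSIZE_MAX64KB),
                  ("256KB", lz4.frame.BLOCKSIZE_MAX256KB),
                  ("1MB", lz4.frame.BLOCKSIZE_MAX1MB),
                  ("4MB", lz4.frame.BLOCKSIZE_MAX4MB)]:
    t0 = time.time()
    packed = lz4.frame.compress(data, block_size=bs)
    print("block %s: %d bytes, %.2fs" % (label, len(packed), time.time() - t0))
```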