Hi,

Thanks for this awesome project!

A small request to help with my situation: I have a 50 MB gzip file containing 180 MB of text data that I'd like to seek() and tell() through quickly while using less than 128 MB of RAM.

The problem with lz4.dumps() is that you have to read all 180 MB into memory in order to get the LZ4-compressed data.

It would be cool if this library supported lz4.dump(iter_obj), which would read data iteratively from iter_obj or any file-like object, so the final LZ4-compressed output could be produced without having to read the full original data into memory first.

Thanks!
I assume it's not just that the Python interface needs to support iterator/generator objects, but that the underlying implementation also needs to support the LZ4 streaming format?
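For what it's worth, the LZ4 frame format does allow incremental compression, so peak memory can stay bounded by the chunk size rather than the full input. Below is a minimal sketch of that approach, assuming the LZ4FrameCompressor API that later releases of python-lz4 expose under lz4.frame; the file names and chunk size are just placeholders:

```python
import lz4.frame

def compress_stream(f_in, f_out, chunk_size=64 * 1024):
    """Compress f_in into f_out chunk by chunk, so peak memory
    stays near chunk_size instead of the full uncompressed size."""
    compressor = lz4.frame.LZ4FrameCompressor()
    f_out.write(compressor.begin())      # emit the frame header
    while True:
        chunk = f_in.read(chunk_size)
        if not chunk:
            break
        f_out.write(compressor.compress(chunk))
    f_out.write(compressor.flush())      # emit the end-of-frame mark

with open('input.txt', 'rb') as f_in, open('output.lz4', 'wb') as f_out:
    compress_stream(f_in, f_out)
```

Later versions of python-lz4 also provide lz4.frame.open(), which returns a file-like wrapper and would cover the iter_obj / file-object use case directly.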