Feature: Hibernation Layer and plugins. #1036
Open
forensicxlab wants to merge 27 commits into volatilityfoundation:develop from forensicxlab:feature/hibernation-layer
Changes from 10 commits

Commits (27):
b0cbd3c  Adding hibernation support (k1nd0ne)
c91e417  Fixing Huffman (forensicxlab)
5c44467  Adapting comments to the codec instead of the volatility3 prefetch pl… (k1nd0ne)
a13e105  Update the Xpress LZ77+Huffman decoder (k1nd0ne)
3568d3f  Code comments and cleaning. (k1nd0ne)
7e87e2d  Plugins added : hibernation.Info and hibernation.Dump. Support for ad… (k1nd0ne)
142baa6  Adding support for Windows 10 2016 1607 (k1nd0ne)
cfab14f  Only parse the kernel section if the user is using the 'windows.hiber… (k1nd0ne)
f6b960a  Quick code review and comments to make it more readable. Enhanced plu… (k1nd0ne)
8db03c3  using black on the codecs (k1nd0ne)
d152b48  fixing mistakes in the lz77+huffman decompression algorithm (k1nd0ne)
6213d45  Adding codecs + black (k1nd0ne)
2bebe91  Formatting using good black version (k1nd0ne)
9fd1567  Deported the decompression algorithm in a dedicated python3 package (k1nd0ne)
9b855f1  Deported the decompression algorithm in a dedicated python3 package (k1nd0ne)
2ffa3a9  conflict (k1nd0ne)
9379e16  Merging to resolve conflict (k1nd0ne)
b17eab6  requirements (k1nd0ne)
3f428f7  Fixing requirements (k1nd0ne)
ad31052  Fixing unused import (k1nd0ne)
9d478f3  Merge branch 'volatilityfoundation:develop' into feature/hibernation-… (forensicxlab)
d779c0e  Upgrading pipeline and python3 version minimum requirement (k1nd0ne)
c947999  Upgrading pipeline and python3 version minimum requirement (k1nd0ne)
0d6c1ea  Using symbol Tables (1/2) (k1nd0ne)
f378a1a  Merge branch 'volatilityfoundation:develop' into feature/hibernation-… (forensicxlab)
3b1f2ae  Sync
3778c97  Merge branch 'develop' into feature/hibernation-layer (k1nd0ne)
@@ -1,8 +1,322 @@
# This file is Copyright 2024 Volatility Foundation and licensed under the Volatility Software License 1.0
# which is available at https://www.volatilityfoundation.org/license/vsl-v1.0
#

"""Codecs used for encoding or decoding data should live here.

# Temporary comment:
# Both decompression algorithms are implemented differently.
# The LZ77+Huffman decompression algorithm is largely inspired by the velocidex implementation.
# We need to decide whether to raise an exception inside the decompression algorithm (like plain LZ77) or to return it along with the result (like Huffman+LZ77).
# Both methods are presented here, but this will need some discussion in the PR to choose a convention.
"""

import logging
import struct
from typing import List, Tuple, Union

vollog = logging.getLogger(__name__)
class BitStream:
    def __init__(self, source: bytes, in_pos: int):
        self.source = source
        self.index = in_pos + 4
        # Read two little-endian UInt16 values to build the 32-bit lookahead mask
        mask = struct.unpack_from("<H", source, in_pos)[0] << 16
        mask += struct.unpack_from("<H", source, in_pos + 2)[0]
        self.mask = mask
        self.bits = 32

    def lookup(self, n: int) -> int:
        # Peek at the top n bits of the mask without consuming them
        if n == 0:
            return 0
        return self.mask >> (32 - n)

    def skip(self, n: int) -> Union[None, Exception]:
        self.mask = (self.mask << n) & 0xFFFFFFFF
        self.bits -= n
        if self.bits < 16:
            if self.index + 2 > len(self.source):
                return Exception("EOF Error")
            # Refill the mask with the next little-endian UInt16
            self.mask += (
                (struct.unpack_from("<H", self.source, self.index)[0])
                << (16 - self.bits)
            ) & 0xFFFFFFFF
            self.index += 2
            self.bits += 16

        return None

    def __str__(self):
        return f"BitStream: index {self.index} mask {self.mask:#010x} bits {self.bits}"
class PREFIX_CODE_NODE:
    def __init__(self):
        self.id = 0
        self.symbol = 0
        self.leaf = False
        self.child = [None, None]

    def __str__(self):
        return f"Node {self.id}: symbol {self.symbol} leaf {self.leaf}"


class PREFIX_CODE_SYMBOL:
    def __init__(self):
        self.id = 0
        self.symbol = 0
        self.length = 0

    def __str__(self):
        return f"Symbol {self.id}: symbol {self.symbol} length {self.length}"
def prefix_code_tree_add_leaf(
    treeNodes: List[PREFIX_CODE_NODE], leafIndex: int, mask: int, bits: int
) -> int:
    node = treeNodes[0]
    i = leafIndex + 1
    childIndex = None

    while bits > 1:
        bits -= 1
        childIndex = (mask >> bits) & 1
        if node.child[childIndex] is None:
            # Create the missing internal node on the path to this leaf
            node.child[childIndex] = treeNodes[i]
            treeNodes[i].leaf = False
            i += 1
        node = node.child[childIndex]

    node.child[mask & 1] = treeNodes[leafIndex]

    return i
def prefix_code_tree_rebuild(input: bytes) -> PREFIX_CODE_NODE:
    treeNodes = [PREFIX_CODE_NODE() for _ in range(1024)]
    symbolInfo = [PREFIX_CODE_SYMBOL() for _ in range(512)]

    # Each of the first 256 input bytes packs two 4-bit code lengths:
    # the low nibble for symbol 2*i, the high nibble for symbol 2*i + 1
    for i in range(256):
        value = input[i]

        symbolInfo[2 * i].id = 2 * i
        symbolInfo[2 * i].symbol = 2 * i
        symbolInfo[2 * i].length = value & 0xF

        value >>= 4

        symbolInfo[2 * i + 1].id = 2 * i + 1
        symbolInfo[2 * i + 1].symbol = 2 * i + 1
        symbolInfo[2 * i + 1].length = value & 0xF

    symbolInfo = sorted(symbolInfo, key=lambda x: (x.length, x.symbol))

    # Skip unused symbols (code length 0)
    i = 0
    while i < 512 and symbolInfo[i].length == 0:
        i += 1

    mask = 0
    bits = 1

    root = treeNodes[0]
    root.leaf = False

    j = 1
    while i < 512:
        treeNodes[j].id = j
        treeNodes[j].symbol = symbolInfo[i].symbol
        treeNodes[j].leaf = True
        mask = mask << (symbolInfo[i].length - bits)
        bits = symbolInfo[i].length
        j = prefix_code_tree_add_leaf(treeNodes, j, mask, bits)
        mask += 1
        i += 1

    return root
def prefix_code_tree_decode_symbol(
    bstr: BitStream, root: PREFIX_CODE_NODE
) -> Tuple[int, Union[None, Exception]]:
    node = root

    # Walk the prefix tree one bit at a time until a leaf is reached
    while True:
        bit = bstr.lookup(1)
        err = bstr.skip(1)
        if err is not None:
            vollog.warning("Some data could not be decompressed.")
            return 0, err

        node = node.child[bit]
        if node is None:
            vollog.warning("Corruption detected when decompressing the data.")
            return 0, Exception("Corruption detected")

        if node.leaf:
            break
    return node.symbol, None
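# A minimal round-trip sketch with a hypothetical table: symbols 0 and 1 get a
# 1-bit code each (the low and high nibbles of table byte 0); every other
# symbol is unused (length 0):
#
#     >>> table = bytes([0x11]) + bytes(255)
#     >>> root = prefix_code_tree_rebuild(table)
#     >>> bstr = BitStream(b"\x00\x80\x00\x00", 0)   # top bits are 10...
#     >>> prefix_code_tree_decode_symbol(bstr, root)
#     (1, None)
#     >>> prefix_code_tree_decode_symbol(bstr, root)
#     (0, None)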
def lz77_huffman_decompress_chunk(
    in_idx: int, input: bytes, out_idx: int, output: bytearray, chunk_size: int
) -> Tuple[int, int, Union[None, Exception]]:
    # Ensure there are at least 256 bytes available to read
    if in_idx + 256 > len(input):
        return 0, 0, Exception("EOF Error")

    root = prefix_code_tree_rebuild(input[in_idx:])
    bstr = BitStream(input, in_idx + 256)

    i = out_idx

    while i < out_idx + chunk_size:
        symbol, err = prefix_code_tree_decode_symbol(bstr, root)

        if err is not None:
            return int(bstr.index), i, err

        if symbol < 256:
            # Literal byte
            output[i] = symbol
            i += 1
        else:
            # Match: the symbol encodes the offset width and the low 4 bits of the length
            symbol -= 256
            length = symbol & 15
            symbol >>= 4

            offset = 0
            if symbol != 0:
                offset = int(bstr.lookup(symbol))

            offset |= 1 << symbol
            offset = -offset

            if length == 15:
                # Extended length: one extra byte, then possibly a UInt16
                length = bstr.source[bstr.index] + 15
                bstr.index += 1

                if length == 270:
                    length = struct.unpack_from("<H", bstr.source, bstr.index)[0]
                    bstr.index += 2

            err = bstr.skip(symbol)
            if err is not None:
                return int(bstr.index), i, err

            length += 3
            while length > 0:
                if i + offset < 0:
                    vollog.warning("Some data could not be decompressed.")
                    return int(bstr.index), i, Exception("Decompression Error")

                output[i] = output[i + offset]
                i += 1
                length -= 1
                if length == 0:
                    break

    return int(bstr.index), i, None
def lz77_huffman_decompress(
    input: bytes, output_size: int
) -> Tuple[bytes, Union[None, Exception]]:
    """
    Refs:
        - https://learn.microsoft.com/en-us/windows/win32/cmpapi/using-the-compression-api
        - https://raw.githubusercontent.com/Velocidex/go-prefetch/master/lzxpress.go
    """
    output = bytearray(output_size)
    err = None

    # Index into the input buffer.
    in_idx = 0

    # Index into the output buffer.
    out_idx = 0

    while True:
        # How much data belongs in the current chunk.
        # Each chunk decompresses to at most 65536 bytes.
        chunk_size = output_size - out_idx
        if chunk_size > 65536:
            chunk_size = 65536

        in_idx, out_idx, err = lz77_huffman_decompress_chunk(
            in_idx, input, out_idx, output, chunk_size
        )
        if err is not None:
            return bytes(output), err
        if out_idx >= len(output) or in_idx >= len(input):
            break
    return bytes(output), None
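# A minimal usage sketch (hypothetical input): the caller must know the
# decompressed size up front, e.g. a 4 KiB page from a hibernation file.
# Errors come back alongside the (possibly partial) result:
#
#     data, err = lz77_huffman_decompress(compressed_page, 4096)
#     if err is not None:
#         vollog.warning(f"Page only partially decompressed: {err}")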
def lz77_plain_decompress(in_buf):
    """
    Refs:
        - https://learn.microsoft.com/en-us/windows/win32/cmpapi/using-the-compression-api
    """
    out_idx = 0
    in_idx = 0
    nibble_idx = 0

    flags = 0
    flag_count = 0

    out_buf = []

    while in_idx < len(in_buf):
        if flag_count == 0:
            # Each flag word holds 32 literal/match indicator bits
            flags = struct.unpack("<I", in_buf[in_idx : in_idx + 4])[0]
            in_idx += 4
            flag_count = 32

        flag_count -= 1

        if (flags & (1 << flag_count)) == 0:
            # Literal byte
            out_buf.append(in_buf[in_idx])
            in_idx += 1
            out_idx += 1
        else:
            if in_idx == len(in_buf):
                return bytes(out_buf)

            # Match: 13 bits of offset, 3 bits of length, with extensions
            length = struct.unpack("<H", in_buf[in_idx : in_idx + 2])[0]
            in_idx += 2

            offset = (length // 8) + 1
            length = length % 8

            if length == 7:
                # First extension: a nibble shared between two matches
                if nibble_idx == 0:
                    length = in_buf[in_idx] % 16
                    nibble_idx = in_idx
                    in_idx += 1
                else:
                    length = in_buf[nibble_idx] // 16
                    nibble_idx = 0

                if length == 15:
                    # Second extension: a full byte
                    length = in_buf[in_idx]
                    in_idx += 1

                    if length == 255:
                        # Third extension: a UInt16, or a UInt32 if that is 0
                        length = struct.unpack("<H", in_buf[in_idx : in_idx + 2])[0]
                        in_idx += 2

                        if length == 0:
                            length = struct.unpack("<I", in_buf[in_idx : in_idx + 4])[0]
                            in_idx += 4

                        if length < 15 + 7:
                            raise ValueError("CorruptedData")
                        length -= 15 + 7
                    length += 15
                length += 7
            length += 3

            for _ in range(length):
                if offset > out_idx:
                    raise ValueError("CorruptedData")
                out_buf.append(out_buf[out_idx - offset])
                out_idx += 1

    return bytes(out_buf)
I wonder what's the best way to integrate algorithms into the "codecs" from the framework standpoint.
What are your recommendations about that? Should I implement the algorithms in separate files? I feel like I should, but I have put both algorithms here for now :)
Yeah, I'd probably put them both in different files, but there needs to be a way to register them (either the main code seeks them out, or they add themselves to a shared singleton object when they're imported). I don't know if it'll be overkill, but it might be pretty cool to build them into the existing python codec framework?
https://docs.python.org/3/library/codecs.html
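For reference, a minimal sketch of that idea using the standard codecs machinery; the codec name "lz77_plain" and the search function are assumptions for illustration, not anything the framework defines today:

    import codecs

    def _lz77_search(name):
        # Hypothetical codec name; returning None lets other lookups proceed
        if name != "lz77_plain":
            return None

        def encode(data, errors="strict"):
            raise NotImplementedError("compression is not implemented")

        def decode(data, errors="strict"):
            # lz77_plain_decompress as defined in this PR, assumed importable here;
            # codecs.CodecInfo expects (output, bytes consumed)
            out = lz77_plain_decompress(bytes(data))
            return out, len(data)

        return codecs.CodecInfo(encode=encode, decode=decode, name="lz77_plain")

    codecs.register(_lz77_search)

    # Afterwards: codecs.decode(compressed_bytes, "lz77_plain")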
Thanks, I'll get right on that.