[2024] AArch64 support #1088
base: develop
Changes from 8 commits
@@ -4,20 +4,21 @@
 import logging
 import os
-from typing import Optional, Tuple, Type
+from typing import Optional, Tuple, Type, Union

-from volatility3.framework import constants, interfaces
+from volatility3.framework import constants, interfaces, exceptions
 from volatility3.framework.automagic import symbol_cache, symbol_finder
 from volatility3.framework.configuration import requirements
-from volatility3.framework.layers import intel, scanners
+from volatility3.framework.layers import intel, scanners, arm
 from volatility3.framework.symbols import linux

 vollog = logging.getLogger(__name__)


-class LinuxIntelStacker(interfaces.automagic.StackerLayerInterface):
+class LinuxStacker(interfaces.automagic.StackerLayerInterface):
     stack_order = 35
     exclusion_list = ["mac", "windows"]
+    join = interfaces.configuration.path_join

     @classmethod
     def stack(
@@ -39,11 +40,10 @@ def stack(
         # Bail out by default unless we can stack properly
         layer = context.layers[layer_name]
-        join = interfaces.configuration.path_join

-        # Never stack on top of an intel layer
+        # Never stack on top of a linux layer
         # FIXME: Find a way to improve this check
-        if isinstance(layer, intel.Intel):
+        if isinstance(layer, intel.Intel) or isinstance(layer, arm.AArch64):
             return None

         identifiers_path = os.path.join(
@@ -63,66 +63,230 @@ def stack(
         for _, banner in layer.scan(
             context=context, scanner=mss, progress_callback=progress_callback
         ):
-            dtb = None
             vollog.debug(f"Identified banner: {repr(banner)}")

             isf_path = linux_banners.get(banner, None)
             if isf_path:
-                table_name = context.symbol_space.free_table_name("LintelStacker")
+                table_name = context.symbol_space.free_table_name("LinuxStacker")
                 table = linux.LinuxKernelIntermedSymbols(
                     context,
                     "temporary." + table_name,
                     name=table_name,
                     isf_url=isf_path,
                 )
                 context.symbol_space.append(table)
-                kaslr_shift, aslr_shift = cls.find_aslr(
-                    context, table_name, layer_name, progress_callback=progress_callback
-                )
-
-                layer_class: Type = intel.Intel
-                if "init_top_pgt" in table.symbols:
-                    layer_class = intel.Intel32e
-                    dtb_symbol_name = "init_top_pgt"
-                elif "init_level4_pgt" in table.symbols:
-                    layer_class = intel.Intel32e
-                    dtb_symbol_name = "init_level4_pgt"
-                else:
-                    dtb_symbol_name = "swapper_pg_dir"
-
-                dtb = cls.virtual_to_physical_address(
-                    table.get_symbol(dtb_symbol_name).address + kaslr_shift
-                )
-
                 # Build the new layer
-                new_layer_name = context.layers.free_layer_name("IntelLayer")
-                config_path = join("IntelHelper", new_layer_name)
-                context.config[join(config_path, "memory_layer")] = layer_name
-                context.config[join(config_path, "page_map_offset")] = dtb
+                new_layer_name = context.layers.free_layer_name("LinuxLayer")
+                config_path = cls.join("LinuxHelper", new_layer_name)
+                context.config[cls.join(config_path, "memory_layer")] = layer_name
                 context.config[
-                    join(config_path, LinuxSymbolFinder.banner_config_key)
+                    cls.join(config_path, LinuxSymbolFinder.banner_config_key)
                 ] = str(banner, "latin-1")

-                layer = layer_class(
-                    context,
-                    config_path=config_path,
-                    name=new_layer_name,
-                    metadata={"os": "Linux"},
-                )
-                layer.config["kernel_virtual_offset"] = aslr_shift
+                linux_arch_stackers = [cls.intel_stacker, cls.aarch64_stacker]

+                for linux_arch_stacker in linux_arch_stackers:
Review comment: Here, the stacking is attempted blindly. However, we could also rely on symbols to determine the architecture to stack on. I identified ...
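A minimal sketch of that idea, using symbols that appear elsewhere in this diff as architecture markers; the helper name and the exact marker list are illustrative assumptions, not part of the patch:

# Hypothetical helper: guess the architecture from ISF symbols so the matching
# stacker can be tried first instead of attempting every stacker blindly.
def guess_linux_architecture(table) -> str:
    intel_markers = ("init_top_pgt", "init_level4_pgt")
    aarch64_markers = ("_kernel_flags_le", "_kernel_flags_le_lo32", "vabits_actual")
    if any(symbol in table.symbols for symbol in intel_markers):
        return "intel"
    if any(symbol in table.symbols for symbol in aarch64_markers):
        return "aarch64"
    return "unknown"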
+                    try:
+                        layer = linux_arch_stacker(
+                            context=context,
+                            layer_name=layer_name,
+                            table=table,
+                            table_name=table_name,
+                            config_path=config_path,
+                            new_layer_name=new_layer_name,
+                            banner=banner,
+                            progress_callback=progress_callback,
+                        )
+                        if layer:
+                            return layer
+                    except Exception as e:
+                        vollog.log(
+                            constants.LOGLEVEL_VVVV,
+                            f"{linux_arch_stacker.__name__} exception: {e}",
+                        )

-        if layer and dtb:
-            vollog.debug(f"DTB was found at: 0x{dtb:0x}")
-            return layer
         vollog.debug("No suitable linux banner could be matched")
         return None

+    @classmethod
+    def intel_stacker(
+        cls,
+        context: interfaces.context.ContextInterface,
+        layer_name: str,
+        table: linux.LinuxKernelIntermedSymbols,
+        table_name: str,
+        config_path: str,
+        new_layer_name: str,
+        banner: str,
+        progress_callback: constants.ProgressCallback = None,
+    ) -> Union[intel.Intel, intel.Intel32e, None]:
+
+        layer_class: Type = intel.Intel
+        if "init_top_pgt" in table.symbols:
+            layer_class = intel.Intel32e
+            dtb_symbol_name = "init_top_pgt"
+        elif "init_level4_pgt" in table.symbols:
+            layer_class = intel.Intel32e
+            dtb_symbol_name = "init_level4_pgt"
+        else:
+            dtb_symbol_name = "swapper_pg_dir"
+
+        kaslr_shift, aslr_shift = cls.find_aslr(
+            context,
+            table_name,
+            layer_name,
+            layer_class,
+            progress_callback=progress_callback,
+        )
+
+        dtb = cls.virtual_to_physical_address(
+            table.get_symbol(dtb_symbol_name).address + kaslr_shift
+        )
+
+        # Build the new layer
+        context.config[cls.join(config_path, "page_map_offset")] = dtb
+
+        layer = layer_class(
+            context,
+            config_path=config_path,
+            name=new_layer_name,
+            metadata={"os": "Linux"},
+        )
+        layer.config["kernel_virtual_offset"] = aslr_shift
+        linux_banner_address = table.get_symbol("linux_banner").address + aslr_shift
+        test_banner_equality = cls.verify_translation_by_banner(
+            context=context,
+            layer=layer,
+            layer_name=layer_name,
+            linux_banner_address=linux_banner_address,
+            target_banner=banner,
+        )
+
+        if layer and dtb and test_banner_equality:
+            vollog.debug(f"DTB was found at: 0x{dtb:0x}")
+            vollog.debug("Intel image found")
+            return layer
+
+        return None
+
+    @classmethod
+    def aarch64_stacker(
+        cls,
+        context: interfaces.context.ContextInterface,
+        layer_name: str,
+        table: linux.LinuxKernelIntermedSymbols,
+        table_name: str,
+        config_path: str,
+        new_layer_name: str,
+        banner: bytes,
+        progress_callback: constants.ProgressCallback = None,
+    ) -> Optional[arm.AArch64]:
+
+        layer_class = arm.AArch64
+        kaslr_shift, aslr_shift = cls.find_aslr(
+            context,
+            table_name,
+            layer_name,
+            layer_class,
+            progress_callback=progress_callback,
+        )
+        dtb = table.get_symbol("swapper_pg_dir").address + kaslr_shift
+
+        # CREDIT : https://github.com/crash-utility/crash/blob/28891d1127542dbb2d5ba16c575e14e741ed73ef/arm64.c#L941
+        kernel_flags = 0
+        if "_kernel_flags_le" in table.symbols:
+            kernel_flags = table.get_symbol("_kernel_flags_le").address
+        if "_kernel_flags_le_hi32" in table.symbols:
+            kernel_flags |= table.get_symbol("_kernel_flags_le_hi32").address << 32
+        if "_kernel_flags_le_lo32" in table.symbols:
+            kernel_flags |= table.get_symbol("_kernel_flags_le_lo32").address
+
+        # TODO: There is no simple way to determine the TTB0 page size for now, so assume TTB0 and TTB1 use the same (which is most likely).
+        # https://www.kernel.org/doc/Documentation/arm64/booting.txt
+        page_size_bit = (kernel_flags >> 1) & 3
+        linux_banner_address = table.get_symbol("linux_banner").address + aslr_shift
+
+        # v6.7/source/arch/arm64/include/asm/memory.h#L186 - v5.7/source/arch/arm64/include/asm/memory.h#L160
+        if "vabits_actual" in table.symbols:
+            vabits_actual_phys_addr = (
+                table.get_symbol("vabits_actual").address + kaslr_shift
+            )
+            va_bits = int.from_bytes(
+                context.layers[layer_name].read(vabits_actual_phys_addr, 8),
+                "little",
+            )
+        else:
+            # TODO: If the KASAN space is large enough, it *might* push kernel addresses higher and generate inaccurate results?
+            # TODO: There is no simple way to determine T0SZ for now, so assume T1SZ and T0SZ are the same (which is most likely).
+            # We count the number of high bits equal to 1, which gives us the kernel space address mask and ultimately TCR_EL1.T1SZ.
+            va_bits = (linux_banner_address ^ (2**64 - 1)).bit_length() + 1
+
+        if page_size_bit != 0:
+            page_size_bit_map = {1: 4, 2: 16, 3: 64}
+            page_size = page_size_bit_map[page_size_bit]
+            tcr_el1_tnsz = 64 - va_bits
+
+            context.config[cls.join(config_path, "page_map_offset")] = dtb
+            context.config[cls.join(config_path, "page_map_offset_kernel")] = dtb
+            context.config[cls.join(config_path, "tcr_el1_tnsz")] = tcr_el1_tnsz
+            context.config[cls.join(config_path, "page_size")] = page_size
+
+            layer = layer_class(
+                context,
+                config_path=config_path,
+                name=new_layer_name,
+                metadata={"os": "Linux"},
+            )
+            layer.config["kernel_virtual_offset"] = aslr_shift
+            test_banner_equality = cls.verify_translation_by_banner(
+                context=context,
+                layer=layer,
+                layer_name=layer_name,
+                linux_banner_address=linux_banner_address,
+                target_banner=banner,
+            )
+            if layer and dtb and test_banner_equality:
+                vollog.debug(f"Kernel DTB was found at: 0x{dtb:0x}")
+                vollog.debug("AArch64 image found")
+                return layer
+
+        return None
+
+    @classmethod
+    def verify_translation_by_banner(
+        cls,
+        context: interfaces.context.ContextInterface,
+        layer,
+        layer_name: str,
+        linux_banner_address: int,
+        target_banner: bytes,
+    ) -> bool:
+        """Determine if a stacked layer is correct or a false positive, by calling the underlying
+        _translate method against the linux_banner symbol virtual address. Then, compare it with
+        the detected banner to verify the correct translation.
+        This will directly raise an exception, as any failed attempt indicates a wrong layer selection.
+        """
+
+        test_banner_equality = True
+        try:
+            banner_phys_address = layer._translate(linux_banner_address)[0]
+            banner_value = context.layers[layer_name].read(
+                banner_phys_address, len(target_banner)
+            )
+        except exceptions.InvalidAddressException:
+            raise Exception('Cannot translate "linux_banner" virtual address')
+
+        if not banner_value == target_banner:
+            raise Exception(
+                f'Translated "linux_banner" virtual address mismatches detected banner : \n{banner_value}\n!=\n{target_banner}'
+            )
+
+        return test_banner_equality

     @classmethod
     def find_aslr(
         cls,
         context: interfaces.context.ContextInterface,
         symbol_table: str,
         layer_name: str,
+        layer_class,
         progress_callback: constants.ProgressCallback = None,
     ) -> Tuple[int, int]:
         """Determines the offset of the actual DTB in physical space and its
@@ -162,9 +326,12 @@ def find_aslr(
                 init_task.files.cast("long unsigned int")
                 - module.get_symbol("init_files").address
             )
-            kaslr_shift = init_task_address - cls.virtual_to_physical_address(
-                init_task_json_address
-            )
+            if layer_class == arm.AArch64:
+                kaslr_shift = init_task_address - init_task_json_address
+            else:
+                kaslr_shift = init_task_address - cls.virtual_to_physical_address(
+                    init_task_json_address
+                )
             if address_mask:
                 aslr_shift = aslr_shift & address_mask
@@ -196,5 +363,5 @@ class LinuxSymbolFinder(symbol_finder.SymbolFinder):
     banner_config_key = "kernel_banner"
     operating_system = "linux"
     symbol_class = "volatility3.framework.symbols.linux.LinuxKernelIntermedSymbols"
-    find_aslr = lambda cls, *args: LinuxIntelStacker.find_aslr(*args)[1]
+    find_aslr = lambda cls, *args: LinuxStacker.find_aslr(*args)[1]
     exclusion_list = ["mac", "windows"]
Review comment: To improve this, why not add an explicit layer property, like _is_top_layer, inside volatility3/framework/layers/intel.py#Intel and volatility3/framework/layers/arm.py#AArch64, and check with something like the following? If we keep the current implementation, we have to change the Linux, Windows and Mac stackers for each new architecture layer.
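A rough sketch of what that check could look like, assuming a hypothetical _is_top_layer attribute (which does not exist in the current code base) were set on the Intel and AArch64 layer classes:

# Hypothetical: each "top" layer class advertises itself, e.g. `_is_top_layer = True`
# on volatility3.framework.layers.intel.Intel and volatility3.framework.layers.arm.AArch64.
# The stacker-side check then no longer needs to name each architecture:
if getattr(layer, "_is_top_layer", False):
    return None

With such a flag, the isinstance checks in the Linux, Windows and Mac stackers would not have to grow with every new architecture layer.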
Reply: Part of the reason is virtualization: it's possible to have an arm layer inside an intel layer (and it's certainly possible to have an intel layer on top of an intel layer). You're right, it's not really a scalable solution (hence the FIXME right beneath this line), but it's also not a trivial attribute...
Reply: Which stacker is supposed to do this (LinuxStacker, WindowsIntelStacker and MacIntelStacker block it), for example if a VM managed by qemu-system-aarch64 on an Intel host was running when the memory was dumped? Will a "VM (qemu)" layer be available from the global context.layers variable too?

Reply: I don't think there is anything automatic in core that would do that, but it would be nice to have. There is an issue from a while ago that talks about that kind of thing: #464
Reply: Thanks for pointing it out :) So in the current state, as it is not implemented, we will continue to strictly refuse stacking on top of Linux and AArch64. Adding an explicit flag on those two might be a temporary and more scalable solution?
I leave this as a side note for a potential future reader interested in the AArch64 hypervisor execution mode (and what it might imply when treating a layer from the hypervisor's point of view): https://developer.arm.com/documentation/102412/0103/Privilege-and-Exception-levels/Exception-levels
Reply: We do have a vmscan plugin, and in the future we will likely generate a config file that allows virtual machine guests to have volatility plugins run on them. That would likely be either Intel on Intel, or IntelVM (which doesn't exist yet) on Intel. The point is, it is a capability we'd like to leave open for the future, so planning for it now is useful.