[2024] AArch64 support #1088
base: develop
Conversation
Thanks very much for this, it looks awesome! I haven't been through it all, I've mostly concentrated on the arm layer at the moment (and I haven't made my way through all of that). The comments pointing to the location of useful documentation are massively useful and very much appreciated! :D There are a few little bits that can be cleaned up, but on the whole it looks perfectly in line with the rest of volatility, so good work! It'll take me a while to find time to go through more of it, and check the various references to fully get what's going on.
I'm trying to figure out the split between `mapping` and `_mapping`. It looks like `_mapping` returns every chunk, and `mapping` coalesces them together, is that about right? I think that's probably for the best, but it feels like it makes for bulky code somehow. Really excited to see where this goes (and to find some Windows-on-arm samples to try it against too!). :D
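To make the coalescing question concrete, here is a minimal, framework-free sketch of what merging contiguous chunks might look like; the tuple layout and names are assumptions for illustration, not the PR's actual `mapping`/`_mapping` code.

```python
from typing import Iterable, List, Tuple

# Hypothetical chunk shape: (virtual offset, mapped offset, length, base layer name)
Chunk = Tuple[int, int, int, str]

def coalesce(chunks: Iterable[Chunk]) -> List[Chunk]:
    """Merge chunks that are contiguous in both the virtual and the mapped space."""
    merged: List[Chunk] = []
    for offset, mapped_offset, length, layer in chunks:
        if merged:
            p_off, p_mapped, p_len, p_layer = merged[-1]
            if (
                p_layer == layer
                and p_off + p_len == offset
                and p_mapped + p_len == mapped_offset
            ):
                merged[-1] = (p_off, p_mapped, p_len + length, p_layer)
                continue
        merged.append((offset, mapped_offset, length, layer))
    return merged

# Two 4 kB pages contiguous in both spaces collapse into a single 8 kB chunk
print(coalesce([(0x1000, 0x40000, 0x1000, "memory_layer"),
                (0x2000, 0x41000, 0x1000, "memory_layer")]))
```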
)
self._base_layer = self.config["memory_layer"]
# self._swap_layers = [] # TODO
self._page_map_offset = self.config["page_map_offset"]
There are two page maps as part of AArch64, aren't there? Which one should people provide here? Might be better to use the terminology used by the architecture here, rather than reusing values used elsewhere...
In the Linux Stacker, we are instantiating the kernel layer. We pass the kernel DTB (Translation Table Base 1 base address) to the AArch64 layer. There is in fact a unique page map for the whole kernel, stored in the ARM register TTBR1, and a page map for user land stored in TTBR0. TTBR0 is not constant, and changes on every process context switch. What this means is that TTBR0 is changed to the value of the current process DTB (which happens continuously, see the sketch below).
For me, there are only two possible callers of this layer :
- LinuxStacker (kernel, only once)
- "Processes"
People should not have to instantiate the kernel layer twice, only user space layers (programs). In another comment I detailed how I implemented a check to facilitate this.
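A minimal sketch of the TTBR0/TTBR1 split described above, assuming T0SZ == T1SZ and ignoring features such as TBI; the function name and return strings are made up for illustration, this is not the PR's code.

```python
def select_ttbr(vaddr: int, tnsz: int) -> str:
    """Tell which translation table base an AArch64 virtual address goes through."""
    va_bits = 64 - tnsz          # each region covers 2**(64 - TnSZ) bytes
    top = vaddr >> va_bits       # bits above the translated range must be all 0s or all 1s
    if top == 0:
        return "TTBR0 (user space, per-process DTB)"
    if top == (1 << tnsz) - 1:
        return "TTBR1 (kernel space, single DTB)"
    raise ValueError("address falls outside both translation ranges")

print(select_ttbr(0x0000_0012_3456_7000, 25))   # user-space address -> TTBR0
print(select_ttbr(0xffff_ffc0_0000_0000, 25))   # kernel-space address -> TTBR1
```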
volatility3/framework/layers/arm.py
Outdated
# self._swap_layers = [] # TODO
self._page_map_offset = self.config["page_map_offset"]
self._tcr_el1_tnsz = self.config["tcr_el1_tnsz"]
self._page_size = self.config["page_size"]
Is this part of the architecture, or is it dynamic, or is it something pre-defined?
- `_page_map_offset` is the DTB (either kernel or specific to a process). For the kernel, it can be calculated thanks to the existing Intel implementation. For the process, I don't know exactly where, but it is stored in the `task_struct` directly if I'm not mistaken.
- `_tcr_el1_tnsz` : the size offset of the memory region addressed by TTBR1_EL1. The region size is 2^(64-T1SZ) bytes -> see page 7081 of the doc (worked example below). It is defined in the kernel config, but we cannot predict it (it is calculated in LinuxStacker).
- `_page_size` : page size (4, 16 or 64 kB), also defined in the kernel config, but we cannot predict it (it is calculated in LinuxStacker).
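A quick worked example of the formula quoted above (region size = 2^(64 - TnSZ) bytes); the TnSZ value here is only for illustration, and the printed ranges follow directly from the formula rather than from any specific sample.

```python
def region_size(tnsz: int) -> int:
    # ARM ARM: the region addressed through TTBRn_EL1 is 2**(64 - TnSZ) bytes
    return 1 << (64 - tnsz)

t1sz = 25
size = region_size(t1sz)
print(f"region size : {size:#x} bytes ({size // 2**30} GiB)")
print(f"TTBR0 range : 0x0 .. {size - 1:#x}")                          # user space grows up from 0
print(f"TTBR1 range : {(1 << 64) - size:#x} .. {(1 << 64) - 1:#x}")   # kernel space ends at 2**64 - 1
```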
volatility3/framework/layers/arm.py
Outdated
self._tcr_el1_tnsz = self.config["tcr_el1_tnsz"]
self._page_size = self.config["page_size"]

# Context : TTB0 (user) or TTB1 (kernel)
So this layer will only cope with one of user or kernel? That might be complex to handle in the future?
This layer needs to be instantiated one time for the kernel, and as many times as there are processes. End users should not have to instantiate the kernel layer twice, and instantiating a process layer is basically the same as the Intel logic (I'm thinking mostly of malfind, which I used to validate the process layer translations).
volatility3/framework/layers/arm.py
Outdated
),
requirements.IntRequirement(name="page_map_offset", optional=False),
requirements.IntRequirement(name="page_map_offset_kernel", optional=False),
requirements.IntRequirement(name="tcr_el1_tnsz", optional=False),
You can actually provide a description field with requirements, which might be quite handy here?
Noted, I will take a look !
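For reference, a hedged sketch of what the descriptions could look like; the parameter names follow volatility3's `requirements` API, but the wording of each description is made up here rather than taken from the PR.

```python
from volatility3.framework.configuration import requirements

config_requirements = [
    requirements.IntRequirement(
        name="page_map_offset",
        optional=False,
        description="DTB of this virtual address space (TTBR1 value for the "
        "kernel layer, per-process TTBR0/DTB for user space layers)",
    ),
    requirements.IntRequirement(
        name="tcr_el1_tnsz",
        optional=False,
        description="TnSZ value from TCR_EL1; the addressed region is 2**(64 - TnSZ) bytes",
    ),
    requirements.IntRequirement(
        name="page_size",
        optional=False,
        description="Translation granule (page size) in kB: 4, 16 or 64",
    ),
]
```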
Hi @ikelos, thanks for your quick replies ! I will check each of your comments right now, as I have some spare time. Regarding your general concern regarding
Hehehe, yeah, I figured it might be. That's not too sweet; the intel layer isn't overly complicated but it's fine for now. If you find any ways of simplifying it we can back port them to intel too... ;D
Here is what a typical layer instantiation debug looks like :
Kernel layer, instantiated by the LinuxStacker :
DEBUG volatility3.framework.automagic.linux: Linux ASLR shift values determined: physical -ffffffbfc7e00000 virtual 172a600000
DEBUG volatility3.framework.layers.arm: Base layer : Elf64Layer
DEBUG volatility3.framework.layers.arm: Virtual address space : kernel
DEBUG volatility3.framework.layers.arm: Virtual addresses space range : ('0xffffffc000000000', '0xffffffffffffffff')
DEBUG volatility3.framework.layers.arm: Page size : 4
DEBUG volatility3.framework.layers.arm: T1SZ : 25
DEBUG volatility3.framework.layers.arm: Page map offset : 0x41963000
DEBUG volatility3.framework.layers.arm: Translation mappings : [(38, 30), (29, 21), (20, 12)]
DEBUG volatility3.framework.automagic.linux: Kernel DTB was found at: 0x41963000
DEBUG volatility3.framework.automagic.linux: AArch64 image found
DEBUG volatility3.framework.layers.arm: Base layer : memory_layer
DEBUG volatility3.framework.layers.arm: Virtual address space : kernel
DEBUG volatility3.framework.layers.arm: Virtual addresses space range : ('0xffffffc000000000', '0xffffffffffffffff')
DEBUG volatility3.framework.layers.arm: Page size : 4
DEBUG volatility3.framework.layers.arm: T1SZ : 25
DEBUG volatility3.framework.layers.arm: Page map offset : 0x41963000
DEBUG volatility3.framework.layers.arm: Translation mappings : [(38, 30), (29, 21), (20, 12)]
Processes layers, instantiated by a plugin (e.g. malfind) :
DEBUG volatility3.framework.layers.arm: Base layer : memory_layer
DEBUG volatility3.framework.layers.arm: Virtual address space : user
DEBUG volatility3.framework.layers.arm: Virtual addresses space range : ('0x0', '0x3fffffffff')
DEBUG volatility3.framework.layers.arm: Page size : 4
DEBUG volatility3.framework.layers.arm: T0SZ : 25
DEBUG volatility3.framework.layers.arm: Page map offset : 0x47266000
DEBUG volatility3.framework.layers.arm: Translation mappings : [(38, 30), (29, 21), (20, 12)]
DEBUG volatility3.framework.layers.arm: Base layer : memory_layer
DEBUG volatility3.framework.layers.arm: Virtual address space : user
DEBUG volatility3.framework.layers.arm: Virtual addresses space range : ('0x0', '0x3fffffffff')
DEBUG volatility3.framework.layers.arm: Page size : 4
DEBUG volatility3.framework.layers.arm: T0SZ : 25
DEBUG volatility3.framework.layers.arm: Page map offset : 0x47928000
DEBUG volatility3.framework.layers.arm: Translation mappings : [(38, 30), (29, 21), (20, 12)]
Check out memory samples here [7 days] :
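The "Translation mappings" tuples in the logs above are the (high bit, low bit) slices of the virtual address used as the table index at each level, with bits 11..0 left as the offset inside a 4 kB page. A small illustration of slicing an address with those ranges (not the PR's actual walking code):

```python
def table_indices(vaddr: int, mappings):
    """Extract the per-level table indices from a virtual address."""
    indices = []
    for high, low in mappings:
        mask = (1 << (high - low + 1)) - 1   # e.g. (38, 30) -> 9 bits -> 512 entries per table
        indices.append((vaddr >> low) & mask)
    return indices

mappings = [(38, 30), (29, 21), (20, 12)]
vaddr = 0x123456789          # arbitrary user-space address, purely for illustration
print(table_indices(vaddr, mappings), hex(vaddr & 0xFFF))  # three table indices + page offset
```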
# Never stack on top of an intel layer
# Never stack on top of a linux layer
To improve this, why not add an explicit layer property, like `_is_top_layer`, inside volatility3/framework/layers/intel.py#Intel and volatility3/framework/layers/arm.py#AArch64, and check with the following :
if getattr(layer, "_is_top_layer", False):
    return None
If we keep the current implementation, we have to change the Linux, Windows and Mac stackers for each new architecture layer.
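A generic sketch of the proposed flag; the class names below are illustrative stand-ins, not the framework's actual layer classes.

```python
class LayerSketch:
    _is_top_layer = False    # default: other layers may be stacked on top

class IntelLayerSketch(LayerSketch):
    _is_top_layer = True     # architecture layers advertise "nothing stacks on me"

class AArch64LayerSketch(LayerSketch):
    _is_top_layer = True

def can_stack_on(layer) -> bool:
    # getattr with a default keeps this safe for layers that never define the flag
    return not getattr(layer, "_is_top_layer", False)

print(can_stack_on(AArch64LayerSketch()))   # False: the stacker would skip this layer
```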
Part of the reason is virtualization: it's possible to have an arm layer inside an intel layer (and it's certainly possible to have an intel layer on top of an intel layer). You're right, it's not really a scalable solution (hence the FIXME right beneath this line), but it's also not a trivial attribute...
Which stacker is supposed to do this (LinuxStacker, WindowsIntelStacker and MacIntelStacker all block it), for example if a VM managed by qemu-system-aarch64 on an Intel host was running when the memory was dumped ?
Will a "VM (qemu) layer" be available from the global `context.layers` variable too, e.g. :
- FileLayer
- LimeLayer
- IntelLayer
- Memory Layer
- QemuLayer (not sure about this one)
- IntelLayer
- Memory Layer
- IntelLayer
- IntelLayer
- LimeLayer
I don't think there is anything automatic in core that would do that, but it would be nice to have.
There's an issue from a while ago that talks about that kind of thing: #464
Thanks for pointing it out :) So in the current state, as it is not implemented, we will continue to strictly refuse stacking on top of Linux and AArch64. Adding an explicit flag on those two might be a temporary and more scalable solution ?
I leave this as a side note, for a potential reader in the future interested in the AArch64 hypervisor execution mode (and what it might imply, if treating a layer from the hypervisor point of view) : https://developer.arm.com/documentation/102412/0103/Privilege-and-Exception-levels/Exception-levels
We do have a vmscan plugin, and in the future we will likely generate a config file that will allow virtual machine guests to have volatility plugins run on them. This would likely be either Intel on Intel, or IntelVM (which doesn't exist yet) on Intel. The point is, it is a capability we'd like to leave open for the future, so planning for it now is useful.
)
layer.config["kernel_virtual_offset"] = aslr_shift
linux_arch_stackers = [LinuxIntelSubStacker, LinuxAArch64SubStacker]
for linux_arch_stacker in linux_arch_stackers:
Here, the stacking is operated blindly. However, we could also rely on symbols to determine the architecture to stack on. I identified `idmap_pg_dir` as arm64-exclusive, and present since the first 3.7 release. Relying on a symbol avoids "forced exception rounds", which shout about missing symbols when an AArch64 image is first passed to the Intel stacker.
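A hedged sketch of that symbol-based routing; the `symbol_table` object and the broad exception handling are assumptions for illustration, not the PR's actual stacker code.

```python
def looks_like_aarch64(symbol_table) -> bool:
    """Return True if an arm64-only symbol such as idmap_pg_dir is present."""
    try:
        symbol_table.get_symbol("idmap_pg_dir")   # raises if the symbol is unknown
        return True
    except Exception:
        return False

# e.g. pick the sub-stacker up front instead of letting the Intel stacker fail first:
# sub_stacker = LinuxAArch64SubStacker if looks_like_aarch64(table) else LinuxIntelSubStacker
```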
Hi 👋,
This PR adds AArch64 support to the Volatility3 framework, as well as a design rework of the current Linux stacker.
The implementation follows the official ARM documentation, and includes essential APIs for higher level code (plugins etc.).
You can follow the roadmap here :
My resources :
Testing :
- Linux version 6.1.21
- Linux version 3.18.94
- Linux version 6.1.0-rpi8-rpi-v8
- Linux version 5.10.157-android13-4-00003-g830b023b88f3-dirty
- Linux version 4.9.201-tegra
- Linux version 6.1.36
Thanks to everyone who took the time to test this PR on their devices !
Unstable plugins :
- `linux.bash` : sections have no size on Android related samples. Android doesn't use `bash` by default, as it's not GNU/Linux running on the system. Either we need a dedicated plugin (`.ash_history` ?), or a check to tell users why the plugin isn't working.
- `linux.lsof` : `dentry` members can be NULL, and this behaviour isn't handled (more information in the dedicated Slack thread).

Please note that this is still experimental, and testing is still going on. If you want to try this PR out, here are the steps :
- check out the PR branch (`git checkout aarch64-support`)
- generate the symbols for your kernel with `dwarf2json`
If you have any real-life samples to provide, or if you encounter any error, feel free to comment below !