Introduce scatter-gather scatterlists #1338

Open · wants to merge 3 commits into base: develop
Conversation

Abyss-W4tcher (Contributor)

Hello,

Scatter-gather lists allow data to be scattered (stored) or gathered (retrieved), and are notably used for DMA data transfers. They are useful when allocating large contiguous chunks of memory is not possible or easy. Here, obviously, we only need the gather mechanism :). These capabilities will be needed for potential future DMA buffer extraction features.

General documentation for the curious: https://jungo.com/windriver/scatter-gather-or-contiguous-dma-data-transfer/
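As a standalone illustration of the gather mechanism described above (a minimal sketch, not Volatility's actual API: `gather`, the segment list, and the flat `mem` buffer are all hypothetical stand-ins), each scatterlist entry names a physical region, and gathering simply walks the entries in order:

```python
# Hypothetical sketch of "gather": reassemble a logically contiguous buffer
# from physically scattered (offset, length) segments. Real scatterlists
# carry more state (DMA addresses, chaining), but the traversal is the same.
from typing import Iterator, List, Tuple


def gather(physical: bytes, segments: List[Tuple[int, int]]) -> Iterator[bytes]:
    """Yield each (offset, length) segment of the scatterlist, in order."""
    for offset, length in segments:
        yield physical[offset : offset + length]


# A 16-byte "physical memory" whose payload is split across two segments.
mem = b"..AB....CD......"
assert b"".join(gather(mem, [(2, 2), (8, 2)])) == b"ABCD"
```

In the kernel's real API this traversal is what `for_each_sg()` does over `struct scatterlist` entries; the extension in this PR performs the equivalent walk from a memory image.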

@ikelos (Member) left a comment


Generally looks good; the comments and typing information are much appreciated. The main sticking point is how to determine which layer is the physical layer, for which there's no guaranteed method at the moment (and this makes assumptions about the Intel layer). We should really solve that before merging this (and yes, we may already have done similar things in the codebase, but that just adds to the case for needing to fix the situation).

volatility3/framework/symbols/linux/extensions/__init__.py (outdated, resolved):
Returns:
An iterator of bytes
"""
physical_layer = self._context.layers["memory_layer"]
Member

So this is... fragile. For Intel layers, the underlying layer will likely be memory_layer (but not in the case of a virtualized intel system, for instance), and there's no telling that Arm's lower layer will be called memory_layer. I don't immediately have a better way of doing this. The layer's dependencies list would tell you all the layers it lives on top of, and they're likely to be in order, but short of mapping an address down a layer (and if you pick the wrong one, you might hit swap or something else) there's no guaranteed way of knowing how the layers all fit together... Any thoughts how best to handle this?

Abyss-W4tcher (Contributor, Author) — Nov 10, 2024

Ok, so here it can be a bit tricky, as we are "blind" to the layer's position in the stacking list. However, pdbscan, linux.py, and mac.py all refer to the physical layer via the config key "memory_layer", which is a TranslationLayerRequirement for Intel. We can use this information to determine where we are. FYI, the AArch64 implementation has no reason to change this naming, which would only complicate things, I think.

Consequently, this implementation should do the trick:

# Either physical is the layer below (we are a module layer),
# or physical is the current layer itself
physical_layer_name = self.context.layers[self.vol.layer_name].config.get(
    "memory_layer", self.vol.layer_name
)
physical_layer = self.context.layers[physical_layer_name]
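As a standalone illustration of this fallback pattern (a minimal sketch with mock objects, not Volatility's real layer or configuration classes), a virtual layer's config names its physical backing under "memory_layer", while a physical layer's config has no such key, so the lookup falls back to the layer itself:

```python
# Mock stand-ins for Volatility's layer/config objects (hypothetical names).
class MockLayer:
    def __init__(self, name: str, config: dict):
        self.name = name
        self.config = config


layers = {
    # Virtual (e.g. Intel) layer: its config points at its physical backing.
    "primary": MockLayer("primary", {"memory_layer": "base_layer"}),
    # Physical layer: no "memory_layer" key in its config.
    "base_layer": MockLayer("base_layer", {}),
}


def resolve_physical(layers: dict, layer_name: str) -> "MockLayer":
    # Either physical sits one layer below (we are on a virtual layer),
    # or the current layer is already the physical one.
    physical_name = layers[layer_name].config.get("memory_layer", layer_name)
    return layers[physical_name]


assert resolve_physical(layers, "primary").name == "base_layer"
assert resolve_physical(layers, "base_layer").name == "base_layer"
```

The design choice here is that the lookup degrades gracefully: if the object's layer is already the physical layer, `config.get` simply returns the current layer name, so no hard-coded "memory_layer" layer name is ever assumed to exist.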

Member

Yep, definitely need to use the code above to get it from this object's layer. At the moment all plugins only deal with one image, but that's absolutely not the intention long term.

Abyss-W4tcher (Contributor, Author)

Change has been applied.
