Is there a method to limit global memory usage?
Say I want to do something simple such as copying one array to another. Implemented as follows, this can consume a lot of memory if the arrays are large:
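For example, something along these lines with the Python API (a sketch; the specs and paths are placeholders, and both datasets are assumed to already exist):

```python
import tensorstore as ts

# Placeholder specs for the source and destination arrays.
src = ts.open({'driver': 'zarr',
               'kvstore': {'driver': 'file', 'path': '/tmp/src.zarr'}}).result()
dst = ts.open({'driver': 'zarr',
               'kvstore': {'driver': 'file', 'path': '/tmp/dst.zarr'}}).result()

# Naive whole-array copy: the entire source is materialized in memory
# before anything is written out.
dst.write(src.read().result()).result()
```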
I usually end up having to manually loop over the arrays chunk by chunk to force the copy to happen in parts (see the sketch below). I'm aware of the Context API and concurrency limits, but they do not seem to effectively constrain operations such as the one above.
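The manual workaround looks roughly like this (continuing with the same `src`/`dst` handles as above; the block size is arbitrary):

```python
block = 64  # rows along the first dimension per copy step (arbitrary choice)

for start in range(0, src.shape[0], block):
    stop = min(start + block, src.shape[0])
    # Read one block into memory, write it out, then move on to the next.
    dst[start:stop].write(src[start:stop].read().result()).result()
```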
Is there some way to limit the total memory used, so as to avoid the out-of-memory (OOM) killer?
There is no such built-in semaphore at the moment, though I'm working towards some internal updates that will allow such a feature in the future. You can do part of this with concurrency limits, but they will not provide any guarantees.
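For example, a shared Context along these lines bounds how many chunk operations run at once, though not how much memory those operations use (a sketch; the limit values are arbitrary and `src_spec`/`dst_spec` are placeholders):

```python
import tensorstore as ts

# Shared context with illustrative concurrency limits.
context = ts.Context({
    'data_copy_concurrency': {'limit': 4},
    'file_io_concurrency': {'limit': 4},
})

src = ts.open(src_spec, context=context).result()  # src_spec is a placeholder
dst = ts.open(dst_spec, context=context).result()  # dst_spec is a placeholder
```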
For an example of how you might do this yourself (in C++), look here:
Basically, that benchmark uses a counter to track the expected in-flight bytes, and starts as many full tensorstore reads as will fit into the limit, using the completion of each future to trigger the next read.
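In Python, a simplified blocking version of that pattern might look like the following; the function and parameter names are just illustrative, not part of the tensorstore API:

```python
import collections
import math

def bounded_copy(src, dst, limit_bytes=2**30, block=64):
    """Copy src -> dst while keeping roughly limit_bytes of reads in flight.

    Issue block reads until the byte budget is used up, then wait on the
    oldest read (and write its data out) before issuing the next one.
    """
    itemsize = src.dtype.numpy_dtype.itemsize  # bytes per element
    in_flight = collections.deque()  # (read_future, destination_slice, nbytes)
    used = 0
    for start in range(0, src.shape[0], block):
        stop = min(start + block, src.shape[0])
        part = src[start:stop]
        nbytes = math.prod(part.shape) * itemsize  # expected bytes for this read
        # Retire the oldest in-flight reads until this one fits in the budget.
        while in_flight and used + nbytes > limit_bytes:
            fut, out, n = in_flight.popleft()
            out.write(fut.result()).result()  # wait for the read, write it out
            used -= n
        in_flight.append((part.read(), dst[start:stop], nbytes))
        used += nbytes
    # Flush whatever is still outstanding.
    for fut, out, n in in_flight:
        out.write(fut.result()).result()
```

The benchmark does this with future callbacks (each completion starts the next read) rather than blocking, but the bookkeeping is the same idea.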