Currently, the Lidar code is able to return a `vector<repere::Position>` object representing all other robots. It works without memory: at each frame it computes a time-and-space average of the detected points to approximate the adversary robots' positions.
A relatively easy update would be to memorize the `N` previous frames and add some new processing on top of them:

- Determine the relationship between robots in two consecutive frames: decide which robot of the first frame corresponds to which robot of the second, or detect that a robot has appeared or disappeared, so that each robot can be tagged and followed across frames (see the sketch after this list).
- Provide some helpful getters over these data, for example the average speed vector over the last `N` seconds/frames, so that we can anticipate adversary moves.
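Below is a minimal sketch of what such a tracker could look like, assuming `repere::Position` exposes `x` and `y` coordinates in metres. The `TrackedRobots` class, its parameters and the greedy nearest-neighbour matching are purely illustrative, not an existing part of the codebase:

```cpp
// Minimal sketch of frame-to-frame association and velocity estimation.
// Assumption: repere::Position has x and y members; the real project type
// should be used instead of the stand-in defined here.
#include <cmath>
#include <cstddef>
#include <deque>
#include <optional>
#include <vector>

namespace repere { struct Position { double x; double y; }; }  // assumed layout

struct Track {
    int id;                                // stable tag kept across frames
    std::deque<repere::Position> history;  // last N positions, oldest first
};

class TrackedRobots {
public:
    TrackedRobots(std::size_t max_history, double max_jump)
        : max_history_(max_history), max_jump_(max_jump) {}

    // Associate each detection with the nearest existing track (greedy
    // nearest neighbour); unmatched detections start new tracks, and tracks
    // with no match in this frame are dropped (robot disappeared).
    void update(const std::vector<repere::Position>& detections) {
        std::vector<Track> next;
        std::vector<bool> used(tracks_.size(), false);
        for (const auto& det : detections) {
            std::size_t best = tracks_.size();
            double best_d = max_jump_;
            for (std::size_t i = 0; i < tracks_.size(); ++i) {
                if (used[i]) continue;
                const auto& last = tracks_[i].history.back();
                double d = std::hypot(det.x - last.x, det.y - last.y);
                if (d < best_d) { best_d = d; best = i; }
            }
            if (best < tracks_.size()) {           // matched an existing robot
                used[best] = true;
                Track t = tracks_[best];
                t.history.push_back(det);
                if (t.history.size() > max_history_) t.history.pop_front();
                next.push_back(std::move(t));
            } else {                               // a new robot appeared
                next.push_back(Track{next_id_++, {det}});
            }
        }
        tracks_ = std::move(next);
    }

    // Average displacement per frame over the stored history, usable as a
    // rough speed vector to anticipate where the robot is heading.
    std::optional<repere::Position> averageVelocity(int id) const {
        for (const auto& t : tracks_) {
            if (t.id != id || t.history.size() < 2) continue;
            const auto& first = t.history.front();
            const auto& last  = t.history.back();
            double frames = static_cast<double>(t.history.size() - 1);
            return repere::Position{(last.x - first.x) / frames,
                                    (last.y - first.y) / frames};
        }
        return std::nullopt;
    }

    const std::vector<Track>& tracks() const { return tracks_; }

private:
    std::vector<Track> tracks_;
    std::size_t max_history_;  // N frames to remember
    double max_jump_;          // reject matches farther than this (metres)
    int next_id_ = 0;
};
```

With a structure like this, `update()` would be called once per Lidar frame with the current `vector<repere::Position>`, and `averageVelocity(id)` gives a rough per-frame displacement that the strategy code could extrapolate to guess where an adversary is going.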
This update could improve our strategy by detecting in advance where adversaries intend to go. It would also let us distinguish our secondary robot from the other detected robots by tracking it, and thus approximately know which task it is performing.