Implementation of Schelling's segregation model example #853
base: master
Conversation
    return FLAMEGPU->getVariable<unsigned int>("available");
}
FLAMEGPU_AGENT_FUNCTION(output_available_locations, flamegpu::MessageNone, flamegpu::MessageArray) {
    FLAMEGPU->message_out.setIndex(FLAMEGPU->getThreadIndex());
Why is getThreadIndex() still required? I doubt Paul will like this being used.
I guess it's because you only have a random selection of locations available, which aren't a contiguous block, so you need a way for agents to identify valid spots, and this is a fairly simple method.
Not sure of a better approach off the top of my head; maybe one for discussion in the meeting.
IIRC, Primage's bonding was more all-to-all, so it didn't suffer from this.
Having thought about this, I guess the 'proper' way would be for the spaces_available property to be binned, and a spaces_available brute-force message list created.
Spaces available output a message which only contains their ID; agents then grab the list, check its length, and generate a random number to pick the message at that index. (Brute-force messages don't currently have direct access like an array, so the list would need iterating, but there's no practical reason it couldn't be added.)
One benefit of this 'proper' version is that it would better facilitate agents with more specific preferences, as the spaces_available message could be extended to list any number of features of the space which the agent must approve of. (Although we might want to think about a more generalised version of spatial messaging for multi-dimensional data if this were to be a real thing.)
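For reference, a rough sketch of that approach against the FLAME GPU 2 agent API. The message and variable names (space_available, id, chosen_space) are placeholders rather than names from this PR, the brute-force list is counted by iterating (since there's no direct indexed access today), and an inclusive uniform<unsigned int>(min, max) overload is assumed for the random index:

// Available spaces output only their ID to a brute-force message list (placeholder names).
FLAMEGPU_AGENT_FUNCTION(output_space_available, flamegpu::MessageNone, flamegpu::MessageBruteForce) {
    FLAMEGPU->message_out.setVariable<flamegpu::id_t>("id", FLAMEGPU->getID());
    return flamegpu::ALIVE;
}

// Moving agents pick a random index into the list, then iterate to it.
FLAMEGPU_AGENT_FUNCTION(pick_random_space, flamegpu::MessageBruteForce, flamegpu::MessageNone) {
    // First pass: count the available spaces.
    unsigned int count = 0;
    for (const auto &msg : FLAMEGPU->message_in) {
        (void)msg;  // only counting on this pass
        ++count;
    }
    if (count == 0)
        return flamegpu::ALIVE;  // nowhere to move this step
    // Second pass: walk to a randomly chosen index and record that space's ID.
    const unsigned int target = FLAMEGPU->random.uniform<unsigned int>(0, count - 1);
    unsigned int i = 0;
    for (const auto &msg : FLAMEGPU->message_in) {
        if (i++ == target) {
            FLAMEGPU->setVariable<flamegpu::id_t>("chosen_space", msg.getVariable<flamegpu::id_t>("id"));
            break;
        }
    }
    return flamegpu::ALIVE;
}

The brute-force message list and the chosen_space agent variable would of course also need declaring in the model description.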
}

FLAMEGPU_AGENT_FUNCTION(select_winners, flamegpu::MessageBucket, flamegpu::MessageArray) {
    // First agent in the bucket wins
It would be more expensive, but technically more 'fair', to have agents roll a random float which they attach to the message, and the highest/lowest float wins.
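A rough sketch of that variant, assuming a bucket message keyed on the contested cell index and an array message for the result; all names here (chosen_cell, bidder, tiebreak, cell_index, winner) are placeholders, not from this PR:

// Each bidding agent attaches a random float to its bid for its chosen cell.
FLAMEGPU_AGENT_FUNCTION(output_bid, flamegpu::MessageNone, flamegpu::MessageBucket) {
    FLAMEGPU->message_out.setKey(static_cast<int>(FLAMEGPU->getVariable<unsigned int>("chosen_cell")));
    FLAMEGPU->message_out.setVariable<flamegpu::id_t>("bidder", FLAMEGPU->getID());
    FLAMEGPU->message_out.setVariable<float>("tiebreak", FLAMEGPU->random.uniform<float>());
    return flamegpu::ALIVE;
}

// The cell scans its bucket of bids; the lowest tiebreak value wins.
FLAMEGPU_AGENT_FUNCTION(select_winner_by_tiebreak, flamegpu::MessageBucket, flamegpu::MessageArray) {
    const unsigned int my_cell = FLAMEGPU->getVariable<unsigned int>("cell_index");
    float best = 2.0f;             // uniform<float>() returns values in [0, 1)
    flamegpu::id_t winner = 0;
    bool found = false;
    for (const auto &bid : FLAMEGPU->message_in(static_cast<int>(my_cell))) {
        const float t = bid.getVariable<float>("tiebreak");
        if (t < best) {
            best = t;
            winner = bid.getVariable<flamegpu::id_t>("bidder");
            found = true;
        }
    }
    if (found) {
        FLAMEGPU->message_out.setIndex(my_cell);
        FLAMEGPU->message_out.setVariable<flamegpu::id_t>("winner", winner);
    }
    return flamegpu::ALIVE;
}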
    return flamegpu::ALIVE;
}

FLAMEGPU_AGENT_FUNCTION_CONDITION(is_available) {
Might benefit from a comment above this, to denote that the remaining functions are from the iterative submodel.
(My initial thought was to split the entire submodel into a separate header or source file, like I've done in Primage to aid readability, but that's probably overkill for this small model, and I doubt I did it for Sugarscape.)
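e.g. something as simple as a divider comment above the condition would do (wording hypothetical):

// ---------------------------------------------------------------------------
// The remaining agent functions belong to the iterative submodel that
// resolves movement to available locations.
// ---------------------------------------------------------------------------
FLAMEGPU_AGENT_FUNCTION_CONDITION(is_available) {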
You can probably rename the PR; it's no longer particularly hacky.
// Configurable properties
constexpr unsigned int GRID_WIDTH = 100;
constexpr float THRESHOLD = 0.70;
0.70f
(This is the Windows CI fail; MSVC warns about truncating the double literal to float.)
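i.e., a one-line sketch of the suggested fix:

constexpr float THRESHOLD = 0.70f;  // 'f' suffix keeps the literal a float, silencing the truncation warning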
// Functions
auto& outputLocationsFunction = agent.newFunction("output_available_locations", output_available_locations);
outputLocationsFunction.setMessageOutput("available_location_message");
My gut feeling is that branching inside agent functions would be cheaper than an agent function condition (which involves a sort of the agents after launching an agent function to create the filter value).
I think agent function conditions exist as a way to partition agents in stateful models.
But we probably don't have any models which demonstrate that, so eh.
Again, one for discussion at the meeting.
It'll depend on how complex the function with the branching is, how big the populations are, whether there's room for concurrency by splitting the population, and how grouped together the agents which follow each branch would be (plus other things). It's something we should probably investigate at some point, to make recommendations about when to use it (if it follows a sane pattern w.r.t. the above points).
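For illustration, a minimal sketch of the two ways the same filtering could be expressed; the happy flag and function names are hypothetical, not from this PR:

// (a) Agent function condition: the runtime partitions the population first,
//     and only agents for which this returns true run the attached function.
FLAMEGPU_AGENT_FUNCTION_CONDITION(is_unhappy) {
    return FLAMEGPU->getVariable<unsigned int>("happy") == 0;
}

// (b) Branching inside the agent function: every agent launches, but happy
//     agents exit immediately and do no further work.
FLAMEGPU_AGENT_FUNCTION(move_if_unhappy, flamegpu::MessageNone, flamegpu::MessageNone) {
    if (FLAMEGPU->getVariable<unsigned int>("happy"))
        return flamegpu::ALIVE;  // early exit for happy agents
    // ... relocation logic for unhappy agents would go here ...
    return flamegpu::ALIVE;
}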
Fixes Windows warning.
A revised, FLAME GPU 2.0.0-rc compatible version of this is in FLAMEGPU/ABM_Framework_Comparisons: https://github.com/FLAMEGPU/ABM_Framework_Comparisons/blob/FLAMEGPU2/FLAMEGPU2/src/schelling.cu. The size will want to be reduced.
This works, but currently relies on some questionable code, specifically using the CUDA thread index to allow an unusual access pattern to a message list.