One way to specify Cellular Automata rules is with rule tables. A rule table enumerates all possible neighbourhood states and maps each one to a node state: for any given neighbourhood state, the table provides the node's next state. Netomaton provides a built-in function for creating random rule tables. The following snippet demonstrates its usage:
import netomaton as ntm
rule_table, actual_lambda, quiescent_state = ntm.random_rule_table(lambda_val=0.45, k=4, r=2,
                                                                   strong_quiescence=True, isotropic=True)
network = ntm.topology.cellular_automaton(n=128, r=2)
initial_conditions = ntm.init_random(128, k=4)
# use the built-in table_rule to use the generated rule table
trajectory = ntm.evolve(initial_conditions=initial_conditions, network=network, timesteps=200,
                        activity_rule=ntm.table_rule(rule_table))
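To make the structure of a rule table concrete, here is a minimal, purely illustrative sketch (not Netomaton's internal representation, whose key format may differ) of the table for the elementary rule 30 CA, with k=2 states and radius r=1, written as a plain Python dictionary that maps each three-node neighbourhood to the node's next state; the names rule_30_table and step are hypothetical helpers introduced only for this example:

# a rule table for an elementary CA (k = 2 states, radius r = 1): each of the
# 2^3 = 8 possible neighbourhoods is mapped to the node's next state
# (this particular mapping is NKS rule 30)
rule_30_table = {
    (1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(row):
    # apply the rule table to every node, using periodic boundary conditions
    n = len(row)
    return [rule_30_table[(row[(i - 1) % n], row[i], row[(i + 1) % n])] for i in range(n)]

row = [0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
for _ in range(5):
    row = step(row)

With k=4 and r=2, as in the snippet above, the neighbourhood consists of 2r+1 = 5 nodes and the full table contains 4^5 = 1024 entries, which is why a generator like random_rule_table is convenient.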
The following plots demonstrate the effect of varying the lambda parameter:
The source code for the example above can be found here.
C. G. Langton describes the lambda parameter, and the transition from order to criticality to chaos in Cellular Automata while varying the lambda parameter, in the paper:
Langton, C. G. (1990). Computation at the edge of chaos: phase transitions and emergent computation. Physica D: Nonlinear Phenomena, 42(1-3), 12-37.
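The lambda parameter itself is straightforward to compute from a rule table: it is the fraction of transitions that map to a state other than the quiescent state. The sketch below illustrates this definition, assuming a rule table represented as a dictionary of neighbourhood-to-state mappings and a known quiescent state; the function name langton_lambda is hypothetical, and this is not the calculation performed inside random_rule_table:

def langton_lambda(rule_table, quiescent_state):
    # fraction of rule table transitions that do NOT lead to the quiescent state
    transitions = list(rule_table.values())
    non_quiescent = sum(1 for state in transitions if state != quiescent_state)
    return non_quiescent / len(transitions)

# for the rule 30 table shown earlier, with quiescent state 0,
# 4 of the 8 transitions are non-quiescent, so lambda = 0.5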
Netomaton provides various built-in functions that can act as measures of complexity in the automata being examined.
Average node entropy can reveal something about the presence of information within automata dynamics. The built-in function average_node_entropy provides the average Shannon entropy per node in a given automaton. The following snippet demonstrates the calculation of the average node entropy:
import netomaton as ntm
network = ntm.topology.cellular_automaton(n=200)
initial_conditions = ntm.init_random(200)
trajectory = ntm.evolve(initial_conditions=initial_conditions, network=network, timesteps=1000,
                        activity_rule=ntm.rules.nks_ca_rule(30))
# calculate the average node entropy; the value will be ~0.999 in this case
avg_node_entropy = ntm.average_node_entropy(trajectory)
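As a rough sketch of what such a measure captures, the average node entropy can be thought of as the Shannon entropy of each node's time series, averaged over all nodes. The illustrative implementation below assumes the automaton's activities are available as a list of rows, one row of node states per time step; the helper names shannon_entropy and avg_node_entropy are hypothetical, and this is not the library's implementation:

import math
from collections import Counter

def shannon_entropy(sequence):
    # Shannon entropy (in bits) of a sequence of discrete states
    counts = Counter(sequence)
    total = len(sequence)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def avg_node_entropy(activities):
    # activities: one row of node states per time step; compute the entropy
    # of each node's time series and average the result over all nodes
    num_nodes = len(activities[0])
    columns = [[row[i] for row in activities] for i in range(num_nodes)]
    return sum(shannon_entropy(col) for col in columns) / num_nodes

For binary states the maximum entropy is 1 bit per node, so a rule 30 evolution from a random initial state, whose node histories are nearly unpredictable, yields a value close to 1, consistent with the ~0.999 noted in the comment above.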
The source code for the example above can be found here.
The following plots illustrate how average node entropy changes as a function of lambda:
The degree to which a node's state is correlated with its state in the next time step can be described using mutual information. Ideal levels of correlation are required for the effective processing of information. The built-in function average_mutual_information provides the average mutual information between a node and itself in the next time step (the temporal distance can be adjusted). The following snippet demonstrates the calculation of the average mutual information:
import netomaton as ntm
network = ntm.topology.cellular_automaton(n=200)
initial_conditions = ntm.init_random(200)
trajectory = ntm.evolve(initial_conditions=initial_conditions, network=network, timesteps=1000,
                        activity_rule=ntm.rules.nks_ca_rule(30))
# calculate the average mutual information between a node and itself in the next time step
avg_mutual_information = ntm.average_mutual_information(trajectory)
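As with node entropy, the underlying idea can be sketched directly: for each node, compute the mutual information between its time series and the same series shifted by the chosen temporal distance, then average over nodes. The sketch below again assumes activities given as one row of node states per time step; the helper names mutual_information and avg_mutual_information are hypothetical, and this is not the library's implementation:

import math
from collections import Counter

def mutual_information(xs, ys):
    # mutual information (in bits) between two equal-length sequences of discrete states
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def avg_mutual_information(activities, temporal_distance=1):
    # average, over all nodes, of the mutual information between a node's
    # time series and the same series shifted by `temporal_distance` steps
    num_nodes = len(activities[0])
    mis = []
    for i in range(num_nodes):
        series = [row[i] for row in activities]
        mis.append(mutual_information(series[:-temporal_distance], series[temporal_distance:]))
    return sum(mis) / num_nodes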
The source code for the example above can be found here.
The following plots illustrate how average mutual information changes as a function of lambda:
The groups of plots above were created using the source code found here.