# "Small Universes" : Pattern hunting with the Wolfram Physics Project

Hi everyone,

A couple of days ago I came across a fascinating interview with Stephen Wolfram in which he discusses the concepts behind his latest and most ambitious project: the Wolfram Physics Project. If you are curious about how his Theory of Everything relates to physics, quantum computation or the origin of life, I highly recommend having a look at the short clips available on Lex Fridman’s channel; they are much easier to digest than the full 3+ hour interview.

Among the different topics covered I was particularly interested in the central notion of hypergraphs and their visual aesthetics in 3D space.

From what I understand these graphs are similar to cellular automata in that they follow a discrete model of computation, but they are also reminiscent of Lindenmayer’s L-systems because of the rewriting system they are built on. Those who are familiar with these types of algorithms (and I know there are many in the Processing community) will easily catch up with this new model, as it mainly consists of applying a rule over and over again until it produces something that looks really complicated.

What I find particularly fascinating is not only the complexity that emerges from simple rules but also the remarkable diversity of hypergraphs they can produce. Without any notion of geometry, the rewriting system can nevertheless lead to a wide variety of shapes in 2D and 3D space: trees, open/closed networks, meshes, reticular systems, Sierpiński structures, etc. As with cellular automata, the Wolfram community keeps a registry of the notable “universe” models (as they call them) found so far, together with their corresponding rules. Here is a naive interpretation, in Python mode, of the first and most popular example used by Stephen Wolfram in his presentations.

Hypergraph Example (Python Mode)
``````
add_library('toxiclibs')
add_library('peasycam')

from collections import defaultdict
from itertools import chain

W, H = 1400, 800

####### Multiple relations example (see part 2.7)
####### -> https://www.wolframphysics.org/technical-introduction/basic-form-of-models/rules-depending-on-more-than-one-relation/

# RULE (if edges corresponding to pattern 'a' -> replace by new edges corresponding to pattern 'b')
a = [[0, 1], [0, 2]]
b = [[0, 2], [0, 3], [1, 3], [2, 3]]

# Starting axiom
s = [[0, 0], [0, 0]]

# Number of steps/generations
G = 13

def setup():
    size(W, H, P3D)
    stroke(105, 140, 200)
    smooth(8)

    global physics, uniques

    cam = PeasyCam(this, 1000)
    physics = VerletPhysics3D()
    physics.setDrag(.2)

    # Transform / translate axiom according to rule
    a_, b_, s_ = [list(chain(*e)) for e in (a, b, s)]
    ws = set(k for k in b_ if k > max(a_))

    while ws:
        a_ += [ws.pop()]
        s_ += [max(s_) + 1]

    d = {k: i for i, k in enumerate(a_)}
    t = [map(lambda x: s_[d[x]], e) for e in b]

    print "Translated axiom: ", t

    # Finding matching relations in the array of edges
    for g in xrange(G - 1):
        temp = []
        ids = set()
        w = max(chain(*t)) + 1

        # This matching block (and the replacement block below) should be modified depending on the input rule
        for i, (xa, ya) in enumerate(t):
            for j, (xb, yb) in enumerate(t):
                if i != j:
                    if xa == xb and yb > ya: # reminder: [[xa, ya], [xb, yb]] = [[0, 1], [0, 2]]
                        if {i, j}.isdisjoint(ids):
                            temp.append((xa, ya, xb, yb))
                            ids.update([i, j])

        for id in sorted(ids, reverse=True): del t[id]

        for xa, ya, xb, yb in temp[::-1]:
            t.extend([[xa, yb], [xa, w], [ya, w], [yb, w]])
            w += 1

    # Get id of each node
    uniques = list(set(chain(*t)))

    print "Hypergraph after %.i steps: %.i nodes // %.i edges" % (G, len(uniques), len(t))

    # Store neighboring nodes for each node in a dictionary (handle both binary and ternary edges)
    d = defaultdict(set)
    for edge in t:
        n = len(edge)
        if n == 2:
            n1, n2 = edge
            d[n1].add(n2)
            d[n2].add(n1)
        elif n == 3:
            n1, n2, n3 = edge
            d[n2].update([n1, n3])

    # Find smallest and largest number of neighbors (not mandatory, used for computing edge lengths later)
    nn = sorted(len(d[k]) for k in d)
    lo, hi = nn[0], nn[-1] + 1

    # Create a particle for each node + give it a repulsion force
    # (reconstructed: the body of this loop was missing from the original post)
    for n in uniques:
        p = VerletParticle3D(Vec3D.randomVector().scale(random(100)))
        physics.addParticle(p)
        physics.addBehavior(AttractionBehavior3D(p, 20, -.5))

    # Create a spring for each edge (handle both binary and ternary edges)
    for edge in t:
        n = len(edge)
        if n == 2:
            n1, n2 = edge
            p1, p2 = [physics.particles.get(uniques.index(n)) for n in edge]
            l = (len(d[n1]) + len(d[n2])) * .5 # edge length depends on the average number of neighbors of its end vertices
            f = map(l, lo, hi, 1, .3) # the higher the degree of the end vertices, the lower the factor
            physics.addSpring(VerletSpring3D(p1, p2, l*l*f, .3))
        elif n == 3:
            n1, n2, n3 = edge
            p1, p2, p3 = [physics.particles.get(uniques.index(n)) for n in edge]
            l1 = (len(d[n1]) + len(d[n2])) * .5
            l2 = (len(d[n2]) + len(d[n3])) * .5
            f1 = map(l1, lo, hi, 1, .2)
            f2 = map(l2, lo, hi, 1, .2)
            physics.addSpring(VerletSpring3D(p1, p2, l1*l1*f1, .2))
            physics.addSpring(VerletSpring3D(p2, p3, l2*l2*f2, .2))

def draw():
    background('#000000')
    translate(-W >> 1, -H >> 1)

    # Update physics
    physics.update()

    # Draw springs
    for s in physics.springs:
        line(s.a.x(), s.a.y(), s.a.z(), s.b.x(), s.b.y(), s.b.z())

    # Draw nodes
    pushStyle()
    strokeWeight(3.5)
    stroke('#BEF0FF')
    for p in physics.particles:
        point(p.x(), p.y(), p.z())
    popStyle()
``````

It works with other rules (pictures above), but you will have to modify the pattern-matching block (flagged with a comment in the code) according to the pattern you are trying to find.

My hope is that someone will come up with a better implementation in Java mode and share it here for everyone to play with. Who knows, some day one of us might come across an amazing pattern worth including in the Wolfram registry!

10 Likes

Simpler examples in 2D:

rule WM148 : `[(x, y)] -> [(x, y), (y, z)]`

``````
x, y, z = 0, 1, 2
arr = [(x, y)]
steps = 4

for step in xrange(steps):
    temp = [] # temporary array to store new pairs of nodes

    for i, (x, y) in enumerate(arr):
        new_pair = (y, z) # create a new pair connecting 'y' to 'z'
        idx = 2*i + 1 # compute its future index in the array (next generation)
        temp.append((new_pair, idx)) # store it with its index
        z += 1

    for p, i in temp:
        arr.insert(i, p) # interleave previous array with temporary array -> [(x, y), (y, z), (x, y), (y, z), ...]
``````

or in a more pythonic way:

``````
for step in xrange(steps):
    temp = [(y, z + i + len(arr) - 1) for i, (x, y) in enumerate(arr)]
    arr = sum(zip(arr, temp), ())
``````
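As a quick sanity check, every step doubles the edge count, so four steps turn the single starting edge into 16. Here is the compact version run in plain Python (with `range` standing in for Jython's `xrange`):

```python
x, y, z = 0, 1, 2
arr = [(x, y)]
steps = 4

# Rule WM148: each edge (x, y) spawns a companion edge (y, z) with a fresh node z
for step in range(steps):
    temp = [(y, z + i + len(arr) - 1) for i, (x, y) in enumerate(arr)]
    arr = list(sum(zip(arr, temp), ()))  # interleave old and new edges

print(len(arr))  # -> 16
```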

Then attaching the pairs of nodes with springs using Toxiclibs:

Full sketch
``````
### A basic implementation of rule WM148 in Python mode ###
### doc --> https://www.wolframphysics.org/universes/wm148/ ###
### explanation --> https://www.wolframphysics.org/technical-introduction/basic-form-of-models/first-example-of-a-rule/index.html ###

add_library('toxiclibs')

from collections import defaultdict

W, H = 1000, 600 # dimensions of canvas
x, y, z = 0, 1, 2
a = [(x, y)] # starting axiom
sw = [] # stroke weight of each spring

def setup():
    size(W, H)

    global physics, a, sw

    # Instantiate Verlet physics + set drag
    physics = VerletPhysics2D()
    physics.setDrag(.2)

    # RULE: [(x, y)] --> [(x, y), (y, z)]
    for step in xrange(11):
        temp = [(y, z + i + len(a) - 1) for i, (x, y) in enumerate(a)]
        a = sum(zip(a, temp), ())

    # Store neighboring nodes for each node in a dictionary
    d = defaultdict(set)
    for n1, n2 in a:
        d[n1].add(n2)
        d[n2].add(n1)

    neighbors_count = sorted(len(d[k]) for k in d)
    minl, maxl = neighbors_count[0], neighbors_count[-1]

    # Get id of each node
    uniques = set(sum(a, ())) # [0, 1, 2, ..., n]

    # Add a particle for each node, in id order so indices match node ids
    # (reconstructed: the body of this loop was missing from the original post)
    for id in sorted(uniques):
        p = VerletParticle2D(Vec2D.randomVector().scale(random(10)).add(W >> 1, H >> 1))
        physics.addParticle(p)
        physics.addBehavior(AttractionBehavior2D(p, 10, -.5))

    # Create a spring for each pair of nodes
    for n1, n2 in a:
        p1 = physics.particles.get(n1)
        p2 = physics.particles.get(n2)
        l = (len(d[n1]) + len(d[n2])) * .5
        f = map(l, minl, maxl, 1, 0.1)
        physics.addSpring(VerletSpring2D(p1, p2, l*l*f, .3))

        sw.append(l*l*.015)

def draw():
    background('#FFFFFF')

    physics.update() # update physics

    # Draw springs + nodes
    for i, s in enumerate(physics.springs):
        strokeWeight(sw[i])
        line(s.a.x(), s.a.y(), s.b.x(), s.b.y())

    pushStyle()
    strokeWeight(2)
    for p in physics.particles:
        point(p.x(), p.y())
    popStyle()
``````

rule WM686 : `[(x, y)] --> [(y, z), (z, x)]`

This rule can be used as a simple circle-growth algorithm if we update only one pair per iteration (instead of the whole stack).

Full sketch
``````
### How rule WM686 can be used as a circle growth algorithm ###
### doc --> https://www.wolframphysics.org/universes/wm686/ ###

add_library('toxiclibs')

W, H = 1000, 600 # dimensions of canvas
a = [(0, 1), (0, 2), (1, 2)] # starting axiom
z = max(sum(a, ())) + 1 # id of the next node to add

def setup():
    size(W, H)

    global physics

    # Instantiate Verlet physics + set drag
    physics = VerletPhysics2D()
    physics.setDrag(.2)

    # Get id of each node
    uniques = set(sum(a, ())) # [0, 1, 2]

    # Add a particle for each node, in id order so indices match node ids
    # (reconstructed: the body of this loop was missing from the original post)
    for id in sorted(uniques):
        p = VerletParticle2D(Vec2D.randomVector().add(W >> 1, H >> 1))
        physics.addParticle(p)
        physics.addBehavior(AttractionBehavior2D(p, 10, -.5))

    # Create a spring for each pair of nodes
    for n1, n2 in a:
        p1 = physics.particles.get(n1)
        p2 = physics.particles.get(n2)
        physics.addSpring(VerletSpring2D(p1, p2, 4, .5))

def draw():
    background('#FFFFFF')

    global z

    physics.update() # update physics

    # Process rule: [(x, y)] --> [(y, z), (z, x)]
    id = int(random(len(a))) # pick an edge at random
    x, y = a[id] # end nodes of the selected edge
    del a[id] # remove edge
    a.extend([(y, z), (z, x)]) # add 2 new edges according to the replacement rule

    # Manage physics accordingly
    px = physics.particles.get(x) # particle of node x
    py = physics.particles.get(y) # particle of node y
    pz = VerletParticle2D(px.add(py).scale(.5)) # create a new particle in between
    physics.addParticle(pz)

    s = physics.getSpring(px, py) # find the spring of the deleted edge
    physics.removeSpring(s) # remove that spring

    physics.addBehavior(AttractionBehavior2D(pz, 10, -.5)) # attach a repulsion behavior to the new particle

    physics.addSpring(VerletSpring2D(py, pz, 4, .5)) # spring for the 1st new edge
    physics.addSpring(VerletSpring2D(pz, px, 4, .5)) # spring for the 2nd new edge

    z += 1 # increment 'z'

    # Draw springs
    for s in physics.springs:
        line(s.a.x(), s.a.y(), s.b.x(), s.b.y())
``````
9 Likes

Same sketch with different rules. I like the idea of randomly selecting a matching pair at each time step rather than updating the whole stack at once. This opens the way to a wider range of patterns and makes it possible to treat the model as a kind of growth system.

• Rule WM2695 mixed with some random rule that creates loops at regular intervals
• Sierpiński patterns with rule WM97655
• Mesh knitting with rule WM7714
7 Likes

I’ve done what I can to get started; however, I know little about Python, so I’ve left out the parts I did not understand.

``````//add_library('toxiclibs')
import toxi.physics3d.*;
import toxi.physics3d.behaviors.*;
import toxi.geom.*;
import peasy.*;

int W = 1400, H = 800;

//####### Multiple relations example (see part 2.7)
//####### -> https://www.wolframphysics.org/technical-introduction/basic-form-of-models/rules-depending-on-more-than-one-relation/

//# RULE (if edges corresponding to pattern 'a' -> replace by new edges corresponding to pattern 'b')
int[][] a = {{0, 1}, {0, 2}};
int[][] b = {{0, 2}, {0, 3}, {1, 3}, {2, 3}};

// Starting axiom
int[][] s = {{0, 0}, {0, 0}};

//# Number of steps/generations
int G = 13;

void setup(){
size(W, H, P3D);
stroke(105, 140, 200);
smooth(8);

//not sure of type
global physics, uniques, a, b, s;

cam = PeasyCam(this, 1000);
physics = VerletPhysics3D();
physics.setDrag(.2);

//#Transform / Translate axiom according to rule

//--------------------------------------------------------
a_,b_,s_ = [list(chain(*e)) for e in (a,b,s)]
ws = set(k for k in b_ if k > max(a_));
//--------------------------------------------------------
// not sure where for loop ends
while (ws){
a_ += [ws.pop()]
s_ += [max(s_)+1]
}

//---------------------------------------------------what does this do?
// its a for loop but what is happening to d, is t in the for loop?
//for(int i=k;i<?;i++){

//}
d = {k:i for i, k in enumerate(a_)};
t = [map(lambda x: s_[d[x]], e) for e in b];

print ("Translated axiom: ", t);

//#Finding matching relations in the array of edges
for (g in xrange(G-1)){
// need temp type
temp = [];
// what is set
ids = set();
//chain ?
w = max(chain(*t))+1;
}

//#This part (until line 66 + line 70) should be modified depending on the input rule

// not sure what to comment here not sure java enhanced loops can take multiple vars as in (xa, ya)
for i, (xa, ya) in enumerate(t):
for j, (xb, yb) in enumerate(t):
if (i != j){
// not sure what to do here
if xa == xb and yb > ya: # reminder: [[xa, xb], [ya, yb]] = [[0, 1], [0, 2]];
if {i, j}.isdisjoint(ids){
temp.append((xa, ya, xb, yb));
ids.update([i, j]);
}}
// del ?? also unusual for loop
for (id in sorted(ids, reverse=True)) del t[id];
// what does temp[::-1] do?
for (xa, ya, xb, yb in temp[::-1]){
// extend ??
t.extend([[xa, yb], [xa, w], [ya, w], [yb, w]]);
w += 1

}

//#Get id of each node
// what is list ?
uniques = list(set(chain(*t)));

print ("Hypergraph after %.i steps: %.i nodes // %.i edges" % (G, len(uniques), len(t)));

//#Store neighboring nodes for each node in a dictionary (handle both binary and ternary edges)
// what is java equiv of dictionary?
d = defaultdict(set);
for (edge in t){
n = len(edge);
if (n == 2)
n1, n2 = edge;
else if (n == 3){
n1, n2, n3 = edge;
d[n2].update([n1, n3]);
}else pass;

}

//#Find smallest and largest number of neighbors (not mandatory, using this for computing edge length later)
nn = sorted(len(d[k]) for k in d)
lo, hi = nn, nn[-1]+1

//#Create a particle for each node + give it a repulsion force
for (n in uniques){
}

//#Create spring between each pair of nodes (handle both binary and ternary edges)
for( edge in t){
n = len(edge);
if n == 2:
n1, n2 = edge
p1, p2 = [physics.particles.get(uniques.index(n)) for n in edge]
l = ( len(d[n1]) + len(d[n2]) ) * .5 # make length of edge depend on the average number of neighbors of its end vertices
f = map(l, lo, hi, 1, .3) #the higher the degree of the end vertices, the lower the factor
s = VerletSpring3D(p1, p2, l*l*f, .3)
else if (n == 3){
n1, n2, n3 = edge;
p1, p2, p3 = [physics.particles.get(uniques.index(n)) for n in edge];
l1 = ( len(d[n1]) + len(d[n2]) ) * .5;
l2 = ( len(d[n2]) + len(d[n3]) ) * .5;
f1 = map(l1, lo, hi, 1, .2);
f2 = map(l2, lo, hi, 1, .2);
s1 = VerletSpring3D(p1, p2, l1*l1*f1, .2);
s2 = VerletSpring3D(p2, p3, l2*l2*f2, .2);
}else pass
}

}

void draw(){
background(0);
// >>????
translate(-W>>1, -H>>1);

//#Update physics
physics.update();

//#Draw springs
for (s in physics.springs)line(s.a.x(),s.a.y(), s.a.z(), s.b.x(), s.b.y(), s.b.z());

//#Draw nodes
pushMatrix();
strokeWeight(3.5);
stroke(0);
for (p in physics.particles){
point(p.x(), p.y(), p.z());
}
popMatrix();
};
``````
2 Likes

@paulgoux @solub Just for a laugh I translated the circle growth algorithm sketch to JRubyArt:-

``````
require 'toxiclibs'
W = 1000
H = 600

attr_reader :axiom, :physics, :z

def settings
  size(W, H)
end

def setup
  sketch_title 'Wolfram Circle Growth'
  @axiom = [[0, 1], [0, 2], [1, 2]]
  @z = axiom.flatten.max + 1
  @physics = Physics::VerletPhysics2D.new
  physics.set_drag(0.2)
  # Get id of each node
  uniques = axiom.flatten.uniq
  uniques.each do
    p = Physics::VerletParticle2D.new(TVec2D.randomVector.add(TVec2D.new(W >> 1, H >> 1)))
    physics.add_particle(p)
  end
  axiom.each do |(n1, n2)|
    p1 = physics.particles.get(n1)
    p2 = physics.particles.get(n2)
    physics.add_spring(Physics::VerletSpring2D.new(p1, p2, 4, 0.5))
  end
end

def draw
  background(color('#FFFFFF'))
  physics.update
  edge = axiom.sample # pick an edge at random
  x, y = *edge # splat edge
  axiom.delete edge
  axiom << [y, z]
  axiom << [z, x]
  # Manage physics accordingly
  px = physics.particles.get(x) # particle of node x
  py = physics.particles.get(y) # particle of node y
  pz = Physics::VerletParticle2D.new(px.add(py).scale(0.5)) # create a new particle in between
  physics.add_particle(pz)
  s = physics.get_spring(px, py) # find the spring of the deleted edge
  physics.remove_spring(s) # remove that spring
  physics.add_behavior(Physics::AttractionBehavior2D.new(pz, 10, -0.5)) # attach a repulsion behavior to it
  physics.add_spring(Physics::VerletSpring2D.new(py, pz, 4, 0.5)) # spring for the 1st new edge
  physics.add_spring(Physics::VerletSpring2D.new(pz, px, 4, 0.5)) # spring for the 2nd new edge
  @z += 1 # increment 'z'
  # Draw springs
  physics.springs.each do |spring|
    line(spring.a.x, spring.a.y, spring.b.x, spring.b.y)
  end
end
``````

Toxiclibs is available as a gem in JRubyArt (TVec2D is used to avoid a conflict with the JRubyArt Vec2D). I think I will have a go at translating more examples. I wonder whether it might be possible to create a generalized approach, as with L-systems, in Java and here in JRubyArt?

3 Likes

Thank you for the JRubyArt translation. It is always insightful to see how languages compare to each other, and I must say that Ruby looks elegant to my eye; at least I find it clear and expressive (`axiom.flatten.uniq`!)

It should be, but I think there are two key aspects that differ from L-systems and make the implementation of a generalized approach a bit trickier:

• the “interpretation” step
• the search for matching pairs

1/ Interpretation

By “interpretation” I mean the initial transformation of the axiom according to the input rule. For example, let’s consider rule wm2327:

(notice how the dimensions of the axiom and its transformation match the dimensions of ‘a’ and ‘b’ respectively)

If, and only if, the axiom is composed of identical values (0’s in this case, 1’s in Wolfram’s examples), then the translation is relatively simple. In Python:

``````
m = max(chain(*a)) # max value of 'a' = 4
t = [[i - m if i >= m else 0 for i in l] for l in b] # subtract m from values >= m in b

# -> [[1, 2, 1], [3, 1, 0], [0, 1], [0, 4]]
``````

However, if the axiom is composed of distinct values (I have not yet come across this configuration), then we probably need to perform a complete remapping based on their indices (the logic of the diagram above). In Python:

``````
s = [[9, 4, 9], [2, 1]]

a_, s_ = [list(chain(*e)) for e in (a, s)]
m = max(a_)
d = {k: i for i, k in enumerate(a_)}
t = [[s_[d[i]] if i in d else i - m for i in l] for l in b]

# -> [[1, 2, 1], [3, 1, 2], [9, 1], [2, 4]]
``````

This routine should handle sub-arrays of any size. In this case we have one sub-array of size 3 and one sub-array of size 2 (1213), but it could be five sub-arrays of size 4 (54) or two sub-arrays of size 1 + three sub-arrays of size 2 (2132), … etc.

2/ Matching pairs

So far I have been using `for` loops to find matching pairs, but this naive approach turns out to be very costly when the ‘a’ part of the rule (the pattern) has more than 2 sub-arrays. Since each sub-array accounts for an additional nested loop, the time complexity of this method (`O(n^k)`, where `k` is the number of sub-arrays) becomes an issue.

For example, let’s consider the pattern of rule wm97655 (cf. 2nd gif in my previous post): `a = [[1, 2], [1, 3], [1, 4]]` or put differently `[[a, b], [a, c], [a, d]]`

To find this pattern I had to iterate over the collection of pairs at 3 levels:

``````
for i, (a1, b) in enumerate(pairs):
    for j, (a2, c) in enumerate(pairs):
        for k, (a3, d) in enumerate(pairs):
            if i != j != k:
                if a1 == a2 == a3 and b < c < d:
                    found = True
``````

I believe a more appropriate way to tackle this would be to vectorize the array and maybe use set theory (I have to think about it; please let me know if you find a workaround).
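For what it’s worth, here is a hedged sketch (plain Python, function and variable names are mine) of one possible shortcut for the star pattern `[[a, b], [a, c], [a, d]]`: bucket the edges by their shared first vertex with a dictionary, then only enumerate triples inside each bucket. The worst case is still cubic when every edge shares the same head, but it avoids scanning all `n^3` index combinations:

```python
from collections import defaultdict
from itertools import combinations

def find_star_matches(edges):
    # Group edge indices by their first vertex: the pattern
    # [[a, b], [a, c], [a, d]] can only match edges sharing a head 'a',
    # so candidate triples are drawn per group instead of from the whole
    # collection three times over (the naive O(n^3) triple loop).
    by_head = defaultdict(list)
    for i, (a, x) in enumerate(edges):
        by_head[a].append((x, i))
    matches = []
    for a, tails in by_head.items():
        tails.sort()
        for (b, i), (c, j), (d, k) in combinations(tails, 3):
            if b < c < d:  # same strict ordering as the naive version
                matches.append((i, j, k))
    return matches

edges = [[0, 1], [0, 2], [0, 3], [1, 2]]
print(find_star_matches(edges))  # -> [(0, 1, 2)]
```

The same bucketing idea should extend to other patterns by grouping on whichever vertex the sub-arrays share.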

Also, as in step 1, this routine should ideally (for a general implementation) handle sub-arrays of any size. Here I am dealing with sub-arrays of fixed size (2) but an ultimate solution should be able to process a collection containing edges of various sizes (binary, ternary, quaternary and possibly higher).

3 Likes

Thanks for sharing this very interesting rendering, and pointing us towards this material.

A matching and rewriting language for hypergraph edges is a very interesting project to implement from scratch. I’ll just note as an aside that another possible approach (untested!) would be to use the Wolfram client for Python, which is Python 2 compatible and so might work in Processing.py…

…and use that to access their project’s set-replacement tools directly, which are implemented as C++ libraries for a Wolfram Language extension here: https://github.com/maxitg/SetReplace

2 Likes

One of the interesting things about the Wolfram Physics approach, for me, is that it seems like an unusual generalization of the subgraph isomorphism problem, sometimes called “subgraph matching”.

NOTE: confusingly, “graph matching” has nothing to do with what is called “a matching” in graph theory, which is a collection of disconnected edges, none touching. Many graph libraries have “matching” functions or classes that refer to this second, unrelated idea, which isn’t relevant to Wolfram Physics.

For background, here are some papers surveying algorithms in this space:

The Wolfram language has very recently added some methods for doing just this, like FindOrderedHypergraphIsomorphism (2019-11). However, the “Wolfram Physics: Implementation” notes and the notes on SetReplace present the use case as very different from the normal way you find a motif in a graph (or in a hypergraph, if any edges have more than two vertices).

Although IANA Mathematician, the question “is {{x, y, z}, {x, w}} in {{1, 2, 3}, {1, 1, 1}, {1, 1}}?” isn’t a strict isomorphism question, because the Wolfram Physics search pattern doesn’t have a fixed morphology. x!=y!=z!=w and x=y=z=w are both allowed, so that pattern can match subgraphs with 1, 2, 3, or 4 nodes. I haven’t read all of the Wolfram Physics material, but this way of defining a match seems specifically tailored to allow graphs to grow from a single-node axiom, “big-bang” style. As long as a single node has a sufficient number of self-edges of sufficient lengths, it can always match any rule, so any rule can start from a single node. That seems rhetorically important to the way the project wants to hypothesize about a generative / emergent physics.

So the “axiom” Wolfram usually gives isn’t any different from any other step – it is a normal graph with a single node and as many self-loops as are necessary to match the edge count of the rule. On the other hand, the results at step 1 are often different than if you provided a base copy of the replacement rule as the first step. Here is an example:

`{{{1, 1, 2}} -> {{3, 3, 1}, {2, 1, 1}}}`

The axiom is `{1, 1, 1}`, and, although the rewrite has 3 node ids `{{3, 3, 1}, {2, 1, 1}}`, the result at step one is only 2 nodes – `{{2, 2, 1}, {1, 1, 1}}` – because the rule matches the single node and propagates it in an undifferentiated way.
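To make this step concrete, here is a minimal single-edge matcher-and-rewriter in plain Python (a sketch of my own, not the actual Wolfram implementation) that reproduces exactly this result by treating repeated pattern symbols as equality constraints and unseen symbols as fresh node ids:

```python
def rewrite_once(pattern, replacement, state):
    # 'pattern' holds a single symbolic edge; a symbol repeated inside it
    # must bind to the same concrete node, which is the only constraint.
    for i, edge in enumerate(state):
        if len(edge) != len(pattern[0]):
            continue
        binding = {}
        for sym, node in zip(pattern[0], edge):
            if binding.setdefault(sym, node) != node:
                break  # repeated symbol bound to two different nodes
        else:
            # Matched: drop the edge and emit the replacement edges,
            # inventing fresh ids for symbols absent from the pattern.
            new_id = max(max(e) for e in state) + 1
            out = [e for j, e in enumerate(state) if j != i]
            for rep in replacement:
                new_edge = []
                for sym in rep:
                    if sym not in binding:
                        binding[sym] = new_id
                        new_id += 1
                    new_edge.append(binding[sym])
                out.append(new_edge)
            return out
    return state  # no match: state unchanged

print(rewrite_once([[1, 1, 2]], [[3, 3, 1], [2, 1, 1]], [[1, 1, 1]]))
# -> [[2, 2, 1], [1, 1, 1]]
```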

In other words, a pattern describes collections of edges of specific lengths, and sometimes requires that they share one or more vertices. `{{x, y, z}, {x, w}}` says “I need an edge of length 3 and an edge of length 2 that share the same first vertex”: {{x, _, _}, {x, _}}.

I’m familiar with graph query paradigms that work in the opposite way: requiring fixed nodes but allowing variable lengths, as in “I need any chain of friendships up to length 6 that includes both solub and Kevin Bacon”. The Wolfram approach is more like “show me all grandchild-grandparent relations in the family tree”. This makes me think of the WolframModel as an edge-centric rather than node-centric paradigm.

3 Likes

@solub Thanks so much for the link to the Fridman interview. For an amateur follower of quantum physics (from Feynman to Carlo Rovelli et al.) it was absolutely fascinating.

2 Likes

Me too! As a long-time Fridman channel subscriber, these are the interviews I liked most:

• Lee Smolin, because I prefer loop quantum gravity over string theory.
• Roger Penrose, for his courage to exhibit out-of-the-box thinking.
• Leonard Susskind, for his Feynman-like ability to teach.
2 Likes

That was an extremely insightful post, and your explanation of how the polymorphic nature of the hyperedges is the condition for emergence was really helpful. Thank you very much!

One thing that is still bothering me, on a practical level at least, is the transformation of the initial mono-nodal axiom.

In your example, the starting axiom `{1, 1, 1}` matches the pattern `{1, 1, 2}` because the comparison step relies only on equality operators (`if x==y`). The same logic applies to some other cases: for example, the initial axiom of rule wm2695, `{1,1},{1,1}`, will match the pattern `{1,2},{2,3}` because `ya==xb`.

However, many comparisons rely on other relational operators like `<` or `>`. The pattern of rule wm97655, for example, `{1,2},{1,3},{1,4}`, not only checks whether `xa==xb==xc` but also requires that `ya<yb<yc`. As a result the axiom `{1,1},{1,1},{1,1}` cannot count as a valid match, since `1` cannot be lower or greater than itself.

Hence the question: how can you treat step one like any other step if the initial axiom cannot be validated by the matching process? Shouldn’t it be transformed once prior to the matching process, or at least be ignored by it, as in this example?

``````
for i, (xa, ya) in enumerate(axiom):
    for j, (xb, yb) in enumerate(axiom):
        for k, (xc, yc) in enumerate(axiom):
            if i != j != k:
                if frameCount == 1 or (xa == xb == xc and ya < yb < yc): # 'frameCount == 1' bypasses the comparison for the initial axiom and applies the transformation anyway
                    if {i, j, k}.isdisjoint(ids):
                        found = True
``````
Rule WM97655 (full sketch)
``````
### Rule WM97655 ###
### doc -> https://www.wolframphysics.org/universes/wm97655/ ###

add_library('toxiclibs')

from itertools import chain

W, H = 1000, 600 # dimensions of canvas

pattern = [[0, 1], [0, 2], [0, 3]]
transform = [[4, 5], [5, 4], [4, 6], [5, 6], [6, 4], [6, 5], [4, 1], [5, 2], [6, 3]]
axiom = [[0, 0], [0, 0], [0, 0]]
w = 1

L = 4 # spring length
F = 60 # repulsion force

pat_ = list(chain(*pattern))
m = max(pat_) + 1

def setup():
    size(W, H)

    global physics

    # Instantiate Verlet physics + set drag and bounds
    physics = VerletPhysics2D()
    physics.setWorldBounds(Rect(0, 0, W, H))
    physics.setDrag(.2)

    # Add particle corresponding to 1st node (reconstructed: omitted in the original post)
    physics.addParticle(VerletParticle2D(W >> 1, H >> 1))

def draw():
    background('#FFFFFF')

    global axiom, w

    physics.update() # update physics

    # Find matching edges
    temp = None
    ids = set()

    for i, (xa, ya) in enumerate(axiom):
        for j, (xb, yb) in enumerate(axiom):
            for k, (xc, yc) in enumerate(axiom):
                if i != j != k:
                    if frameCount == 1 or (xa == xb == xc and ya < yb < yc): # 'frameCount == 1' bypasses the comparison for the initial axiom and applies the transformation anyway
                        if {i, j, k}.isdisjoint(ids):
                            temp = (xa, ya, xb, yb, xc, yc)
                            ids.update([i, j, k])
                            break
            else: continue
            break
        else: continue
        break

    # Remove the matched edges
    for id in sorted(ids, reverse=True): del axiom[id]

    # Compute their transformation according to the rule
    d = {k: temp[i] for i, k in enumerate(pat_)}
    transformed = [[d[k] if k in d else w + k - m for k in l] for l in transform]

    axiom.extend(transformed)

    # Update 'w' (base id for the next new node(s))
    w = max(chain(*transformed)) + 1

    # Manage physics accordingly
    xa, ya, xb, yb, xc, yc = temp

    pxa = physics.particles.get(xa) # particle of node xa
    pya = physics.particles.get(ya) # particle of node ya
    pxb = physics.particles.get(xb) # particle of node xb
    pyb = physics.particles.get(yb) # particle of node yb
    pxc = physics.particles.get(xc) # particle of node xc
    pyc = physics.particles.get(yc) # particle of node yc

    # Create the 3 new particles near pxa (reconstructed: the original post only showed the first one)
    pw1, pw2, pw3 = [VerletParticle2D(Vec2D.randomVector().add(pxa)) for _ in range(3)]
    for p in (pw1, pw2, pw3):
        physics.addParticle(p)
        physics.addBehavior(AttractionBehavior2D(p, F, -.5)) # attach a repulsion behavior to each

    sxaya = physics.getSpring(pxa, pya) # find the springs of the deleted edges
    sxbyb = physics.getSpring(pxb, pyb)
    sxcyc = physics.getSpring(pxc, pyc)
    physics.removeSpring(sxaya) # remove those springs
    physics.removeSpring(sxbyb)
    physics.removeSpring(sxcyc)

    # Create springs corresponding to the new edges
    physics.addSpring(VerletSpring2D(pw1, pw2, L, .5))
    physics.addSpring(VerletSpring2D(pw1, pw3, L, .5))
    physics.addSpring(VerletSpring2D(pw2, pw3, L, .5))
    physics.addSpring(VerletSpring2D(pw1, pya, L, .5))
    physics.addSpring(VerletSpring2D(pw2, pyb, L, .5))
    physics.addSpring(VerletSpring2D(pw3, pyc, L, .5))

    # Draw springs
    for s in physics.springs:
        line(s.a.x(), s.a.y(), s.b.x(), s.b.y())

    pushStyle()
    strokeWeight(2)
    for p in physics.particles:
        point(p.x(), p.y())
    popStyle()
``````

This leads me to another important problem. In the previous example I stated that the pattern `{1,2},{1,3},{1,4}` required making sure that `2<3<4`. I said so because I deduced this logic by trial and error. As you rightly said, both `x!=y!=z` and `x=y=z` are allowed, and in this case nothing indicates a priori that `ya` must be lower than `yb` and `yc`. In other words, I inferred a condition that is not explicitly shown in the pattern.

This raises the following question: how do you interpret (in code) patterns where the relations between nodes are ambiguous?

For example, how should the pattern `{1,2,1},{3,4,5}` in rule wm4394 be interpreted? I can look for equality between `xa` and `za` for the first edge, but what condition should I state in order to filter the second edge? Is `3` supposed to be larger than `2`? Is `4` supposed to be lower than `5` and larger than `3`? From the tests I have run so far, none of these conditions seems to be correct.

``````
for i, (xa, ya, za) in enumerate(axiom):
    for j, (xb, yb, zb) in enumerate(axiom):
        if i != j:
            if xa == za and ????:
                matches = True
``````
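Following the binding view from the earlier post (both `x!=y` and `x=y` are allowed), one hedged way around hand-picked `<`/`>` conditions might be to match by symbol binding: the only constraints come from repeated symbols, like the two 1’s in `{1,2,1}`, while `3`, `4`, `5` stay free. A plain-Python sketch (names are mine, not the actual Wolfram semantics):

```python
def bind(pattern, edges):
    # Depth-first search: bind each symbolic edge of 'pattern' to a distinct
    # concrete edge. A repeated symbol must always map to the same node id;
    # distinct symbols carry no ordering constraint at all.
    def helper(p_edges, remaining, binding):
        if not p_edges:
            return binding
        head = p_edges[0]
        for i, edge in enumerate(remaining):
            if len(edge) != len(head):
                continue
            trial = dict(binding)
            if all(trial.setdefault(s, n) == n for s, n in zip(head, edge)):
                found = helper(p_edges[1:], remaining[:i] + remaining[i+1:], trial)
                if found is not None:
                    return found
        return None
    return helper(pattern, edges, {})

# [1, 2, 1] only requires the edge to start and end on the same node:
print(bind([[1, 2, 1], [3, 4, 5]], [[7, 8, 7], [0, 1, 2]]))
# -> {1: 7, 2: 8, 3: 0, 4: 1, 5: 2}
print(bind([[1, 2, 1]], [[7, 8, 9]]))  # -> None
```

If `bind` returns a binding, the replacement edges can be instantiated from it, inventing fresh ids for unbound symbols.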

Something tells me that I am going in the wrong direction and that maybe I should be looking at other approaches, like the subgraph matching algorithms you mentioned. Of course I have been thinking about accessing the `SetReplace` function from the Wolfram client library for Python, but lingering curiosity and the desire to do things oneself prevent me from doing so quite yet.

1 Like

Really glad to hear you enjoyed the topic. I personally don’t know anything about quantum mechanics and must admit I have difficulty understanding most of it, but the tiny bit I think I get right makes me question our reality in such a radical way that it somewhat “affects” me (if that makes sense).

For example, the idea that the very existence of the universe is “undecidable to any entity that is embedded in it” (source) is both fascinating and profoundly unsettling to me.

Similarly, the recent conjecture that space-time itself is a code (following evidence of a connection between quantum error correction and gravity) leaves me in awe and puzzled at the same time.

I don’t know if there is a word for this.

Anyway, thank you for your feedback and the links, I’ll have a look at them for sure.

@solub @noel Wolfram is definitely a maverick, and he might be way off the mark regarding the universe and everything. But I still enjoyed the talk. Richard Feynman was also a maverick, and his ideas have stood the test of time. There are, it seems, many colourful characters in physics, including Carlo Rovelli, who has written some very accessible books.

1 Like

@monkstone @solub I found this link, and also this one, both so beautifully and clearly visualizing quantum field theory and general relativity. What about reproducing the visualization of the fields in P5?

2 Likes