ixnayokay

About Zoologic

Updated: Apr 23, 2022

Zoologic #50, testnet



Zoologic is a population dynamics simulation built using cellular automata on a hexagonal grid. Small virtual organisms called "crobes" move through the grid and rely on a combination of inherited traits and remembered strategies to decide how to act. Crobes that act intelligently survive. The ones that don't... become lunch.

Crobes moving on a hexagonal grid

As work on Zoologic draws to a close, I'd like to share some notes on how it works, the story and process of creating it, and what it means to me. Part 1 of this article will talk about the main features of Zoologic; Part 2 will provide additional context for its creation and go deeper on technical aspects of the algorithm.




Part 1: Zoologic Algorithm and Features


Decide parameters


First, all of the parameters for the simulation are randomly chosen based on the token hash, within acceptable ranges. This includes a number of variables that contribute to the listed features for a mint, as well as a number that are internal and not surfaced as features.


If the mint is selected to be a "preset," a set of constraints related to that preset is then applied to the parameters -- for example, the cubez preset constrains crobes to be hexagonal and drawn as cubes. Presets give mints a particular character, though each still has its own unique features and attributes within those constraints.


Finally, a set of universal constraints are applied to make sure the selected features work together (for example, both crobes and their tails can't be hidden, or there would be nothing to render!). These rules were discovered over the course of many random outputs by finding combinations that didn't work. For these, I'd lock the token hash in place and investigate which mixes of features were causing the issue -- then add a constraint to prevent that mix.


Once the parameters are selected for the simulation, it's time to set it up!



Seed the simulation - placement


Some parameters determine the characteristics of the 'starting placement' for the mint. There are six or so buckets of different placement types, each with a few subcategories.




"Sinusoidal" placements

Sinusoidal Multiwave placement, testnet #80

Sinusoidal: Crobes are placed according to a sin function


Sinusoidal Multiwave: Crobes are placed according to the sum of two sin functions with different phases and amplitudes


Deoxyribo: Crobes are placed according to a sin function with short phases that overlap multiple times, giving a helical appearance





"Circular" placements


Concentric placement, testnet #76

Circular: Crobes are placed in a circle


Concentric: Crobes are placed in an outer circle and an inner circle


Eternal Eye / Blind Eye / Squared Circles: Various warped versions of the circular placements including multiple ovals or squares






"Cross" placements

Diagonal Cross placement, testnet #60



Vertical Cross: Crobes are placed in a plus-shaped pattern


Criss-Cross: Crobes are placed in an x-shaped pattern


Diagonal Cross: Crobes are placed in two lines that cross diagonally









"Colony" placements

Colonies (uniform color) placement, testnet #92



Colonies (uniform color): Crobes are placed in small groups, where each group has its own unique colors


Colonies (mixed color): Crobes are placed in small groups, where each group is a mix of differently-colored crobes








Other placements


Textbook: Crobes are placed in evenly-spaced lines across the canvas

Window: Crobes are placed in four square areas on the canvas

All Over: Crobes are placed within some central region of the canvas



To increase diversity of the starting placements, sometimes they're rotated or flipped. Each placement has its own unique parameters that can be seeded, and each relies on placing crobes randomly within certain regions of the screen. Changing the random seed and rerunning the placement algorithm provides a similar placement that evolves in a different direction (press [N] to change this seed and try the simulation with a new starting placement!). Diverse placement is critical for achieving snapshot diversity and giving the mints unique behaviors.
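To make the "seeded placement" idea concrete, here's a rough sketch of a sinusoidal-style placement (not the actual Zoologic code -- the function names, parameters, and scatter width are simplified stand-ins). The shape of the wave comes from the mint's parameters, while a small reseedable PRNG decides exactly which cells get crobes:

// A minimal sketch of a seeded sinusoidal placement (names and parameters are hypothetical).
// A tiny deterministic PRNG, so rerunning with the same seed reproduces the same scatter.
function seededRandom(seed) {
    let s = seed >>> 0;
    return function () {
        s = (s * 1664525 + 1013904223) >>> 0;    // linear congruential generator
        return s / 4294967296;                   // value in [0, 1)
    };
}

function sinusoidalPlacement(seed, gridWidth, gridHeight, amplitude, period, count) {
    const rand = seededRandom(seed);
    const positions = [];
    for (let i = 0; i < count; i++) {
        const x = Math.floor(rand() * gridWidth);
        // Follow a sine wave through the middle of the grid, plus a little random scatter.
        const waveY = gridHeight / 2 + amplitude * Math.sin((x / period) * 2 * Math.PI);
        const y = Math.round(waveY + (rand() - 0.5) * 4);
        positions.push({ x: x, y: y });
    }
    return positions;    // crobes are then created at these (x, y) cells
}

Pressing [N] corresponds roughly to changing seed here and rerunning: the wave stays in place, but the individual crobes land in different cells, so the simulation evolves differently.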



Seed the simulation - creating new crobes


Armed with a bunch of parameters and some starting placement regions, we can create and place some crobes at (x, y) coordinates in the grid. But what goes into making a new crobe?


Each crobe is essentially just a bucket of stats and values -- some unique to each crobe, and some determined by the selected simulation parameters. Each is initialized with stepSize parameters controlling how it moves. They receive a varying amount of strength -- the resource used to determine crobe stats like attack and defense. Strength is the lifeblood of crobes -- it's critically important for their success in battle and in reproduction. Strength is conserved across all crobes in the simulation in every frame. More strength floating around in the grid means, roughly speaking, more crobes can live in the simulation.


Crobes also get a preference for whether to allocate stat points more to their attack or defense stats. At one extreme, crobes behave like "glass cannons" with lots of attack and little defense. At the other, they're "tanks" with high defense and low attack.


Each crobe contains three color values to use when it's rendered. When using a preset color mode, these color values correspond to the index of a color in the palette. When in a calculated color mode, they correspond to some hue from 0-359 on the HSL wheel. A variety of features (like "crobe and tail colors match") play a role in how these colors are determined and their relationships to one another.


If the simulation is set to use crobes with neural nets, we either clone a new neural net for each crobe, or if using hivemind mode, give each crobe a pointer to the shared neural net.


Visual aspects like rotation and transparency are determined. Reproductive preferences like age and strength required to reproduce, child count, and reproductive strategy are initialized here, within constraints set by the random parameters for the simulation. Age is set to zero, generation to one. Various data structures the crobe will need to operate (like its memory or tail location) are initialized to empty arrays.
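Put together, creating a crobe looks roughly like the sketch below. The field names are hypothetical stand-ins (the real code has many more variables, plus mutation and preset handling), but it shows the general shape: a bucket of stats seeded from the simulation parameters and a per-crobe random source.

// Rough sketch of crobe creation (field and parameter names are hypothetical).
function createCrobe(x, y, params, rand) {
    const strength = params.baseStrength * (0.5 + rand());    // share of the conserved strength pool
    const attackBias = rand();                                 // 1 = glass cannon, 0 = tank
    return {
        x: x,
        y: y,
        strength: strength,
        attack: strength * attackBias,
        defense: strength * (1 - attackBias),
        stepSizes: {                                           // wandering step counts per axis
            a: 1 + Math.floor(rand() * 20),
            b: 1 + Math.floor(rand() * 20)
        },
        colors: [rand() * 360, rand() * 360, rand() * 360],    // hues, or palette indices in preset modes
        energy: params.startEnergy,
        age: 0,
        generation: 1,
        reproduction: {
            minAge: params.maturityAge,
            minStrength: params.reproductionStrength,
            childCount: 2 + Math.floor(rand() * 5)
        },
        // Clone a fresh net per crobe, or share a single pointer in hivemind mode
        // (assuming the base network is stored as plain weight arrays).
        nn: params.hivemind ? params.sharedNet : structuredClone(params.baseNet),
        memory: [],
        tail: []
    };
}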



The main program loop


The first part of the animation loop for a CA is to calculate the next state for the simulation. In Zoologic, the algorithm iterates through each of the crobes in the grid, and for each determines if it will be reproducing or moving in the next round. If it will be moving, the new position is stored in the planned move table for evaluation. If it reproduces, the new positions of its children are stored in the same planned move table. Then, the planned move table is evaluated to see which crobes will get to claim each cell in the next state, and which will become food.


Once the next state has been calculated, the second part of the animation loop is to draw that state on the screen. Based on the selected simulation parameters and each crobe's internal state, their bodies and tails are drawn on the screen. The framework updates the frames elapsed, and the next iteration of the loop begins. Let's look at each of these functions a little more in-depth.



Calculating the next state


Crobe Movement


Energy is a resource crobes use to move. Each step the crobe takes consumes one energy. Assuming a crobe has enough energy to move (> 0), but is not mature or strong enough to reproduce, it must decide how to move. There are two types of movement behavior for crobes -- a pre-coded movement algorithm, and movement controlled by a neural net.


Control crobes always use pre-coded movement, whereas Smart crobes use pre-coded movement when nothing is close, and rely on their neural net to make movement choices when predators or prey are within a couple of spaces.



Control Crobes First Movement Phase - Reaction

In the reaction movement phase, the crobe responds to stimuli in its environment and checks whether any of its neighbors or second-nearest neighbors are predators or prey. Predators are neighbors with higher stats that could eat the crobe. Prey are crobes with lower stats that can be eaten. Cells to avoid are given negative scores, with unavailable cells (off the edge, barriers, etc.) getting the lowest possible score. Cells containing predators or adjacent to predators also receive a negative score. Cells containing prey or adjacent to prey are given positive scores according to how much strength the prey has (tastier prey is worth more points).


There are a few exceptional cases. If the Rock Paper Scissors (RPS) feature is enabled, crobes won't attack other crobes that would trump them in a fight. If autotrain mode (more on this later) is on, control crobes will also avoid eating their own teammates, unless they get really hungry (energy < 10).


Once scores for adjacent cells are totaled, the crobe moves to the cell with the highest positive score, if there is one. If there isn't, it moves on to the second movement phase -- wandering.
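As a rough sketch, the scoring pass for one crobe might look like the following. This is a simplification: it only scores immediate neighbors by the occupant's strength, while the real code also scores cells adjacent to predators/prey and handles RPS, teammates, and second neighbors.

// Simplified sketch of the reaction-phase scoring for control crobes
// (the neighbor structure and score weights are illustrative, not the actual values).
function pickReactionMove(crobe, neighbors) {
    // neighbors: array of { blocked, occupant } for each adjacent cell
    let bestScore = 0;
    let bestIndex = -1;
    neighbors.forEach(function (cell, i) {
        let score = 0;
        if (cell.blocked) {
            score = -Infinity;                        // off the edge, barriers, etc.
        } else if (cell.occupant) {
            if (cell.occupant.strength > crobe.strength) {
                score -= 100;                         // predator: stay away
            } else {
                score += cell.occupant.strength;      // prey: tastier prey is worth more points
            }
        }
        if (score > bestScore) {
            bestScore = score;
            bestIndex = i;
        }
    });
    return bestIndex;    // -1 means no positive score, so fall through to wandering
}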


Important to note -- there are at least a few suboptimal behaviors for the control crobes in the reaction phase. They don't actively flee predators so much as they simply chase prey. When RPS is on, they don't chase otherwise stronger crobes that they can trump. Immediate neighbors that are predators/prey don't affect the scores of other neighboring cells the way that second neighbors do. The point assignment may be suboptimal for some combinations of neighbors. Needing explicit logic to handle the many possible scenarios is a drawback of this algorithmic approach.



Smart Crobes First Movement Phase - Reaction

Similar to the control crobes, the smart crobes' first movement phase is reaction. This phase is the only place where the output from neural nets (NN) is used in the simulation, so it's pretty key. Smart crobes with no nearby crobes proceed directly to the wandering phase. While I'd experimented with having wandering movement controlled by NNs, the working memory and NN size available for crobes wasn't sufficient to have them act intelligently when wandering.


Instead of making algorithmic decisions based on surroundings, the reaction phase for smart crobes is mostly concerned with packaging information about the surroundings into a format the NN can understand, and asking for its decision on where to move. This means passing values into the NN in binary form, as a series of true/false values.


The crobe's NN (let's call our crobe Alice) has 43 inputs, which are as follows:


- 6 bits for whether the neighbor cells are occupied (each cell is assigned one bit)

- 6 bits for whether the occupant of the cell is stronger than Alice

- 12 bits for whether the 2nd neighbor cells are occupied

- 12 bits for whether the 2nd neighbor cells are stronger than Alice

- 7 bits for the direction Alice last moved (with the 7th being "no movement")

Diagram of Neural Network for each crobe

The NN for each crobe contains an input layer with 43 neurons, three hidden layers with 28/18/12 neurons, and an output layer with 7 neurons (6 for directions to move, and 1 for no movement) -- pictured above. Each neuron in each layer is connected to the neurons of the subsequent layer. I've included the connections between the last two layers as an illustration of how many connections there are.


To prevent the NN inputs from exploding in size, the packaging code makes the determinations about the relative strength of Alice to each neighboring crobe, and prepares this data in binary format for the NN. There are some shortcuts used to make the most of the NN's inputs. Unavailable cells (like walls) are treated as unoccupied cells stronger than Alice. When we want a weaker crobe on the same team to be ignored as a food source (during autotrain mode), we treat it as an empty cell. Each neighbor has four possible values, expressed as 2 bits (existence and strength):


00 - there is nothing in this cell

01 - there is a weaker crobe in this cell

10 - there is an obstacle in this cell

11 - there is a stronger crobe in this cell
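A sketch of that per-cell packaging, matching the table above (the cell structure is illustrative; the real code then lays these bits out into the 43-element input vector):

// Sketch of packaging one cell into the two input bits described above.
// Returns [strongerBit, occupiedBit]: 00 nothing, 01 weaker crobe, 10 obstacle, 11 stronger crobe.
function encodeCellForNN(cell, myStrength, autotrain) {
    if (cell.blocked) return [1, 0];                            // walls: "stronger", unoccupied
    if (!cell.occupant) return [0, 0];                          // nothing here
    if (autotrain && cell.occupant.sameTeam) return [0, 0];     // mask teammates out as empty cells
    return [cell.occupant.strength > myStrength ? 1 : 0, 1];    // weaker (01) or stronger (11) crobe
}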


After giving Alice's NN the inputs, an output is generated -- a number between 0 and 1 for how strongly Alice feels about each potential movement option. The strongest preference is used for movement, as long as the confidence is high enough (> 0.6). If Alice isn't sure where to move, she moves on to movement phase 2 -- wandering.


You may have noticed some of the inputs to the NN are the previous outputs. This input serves as a very basic form of memory and was used to allow the NN to avoid counter-productive behaviors like moving repeatedly back-and-forth or always in a straight line. It was more useful when the NNs played an active role in wandering, but can still assist in predator evasion in the reaction phase by allowing the crobe to continue moving away from predators or dodging unpredictably.


To give the NNs their own distinctive behaviors in each mint, the "neural net fidelity" feature was created. This feature randomly resets a certain number of the neurons from their pre-trained state. Values range from savant (where no neurons are reset) to tabula rasa (where all neurons are reset). No matter the default behavior of the NN, it can be trained over time to become stronger, saved, and reloaded using the AI menu (more on this below).


For more details on how the NN is saved, loaded, and stored on the blockchain, please see Part 2 of this article.



Second Movement Phase - Wandering

Any crobes that didn't have neighbors or a decisive opinion on where to move after the initial reaction movement phase (both control and smart) proceed to a wandering movement phase, where they wander in rhythmic patterns across the canvas. Each crobe is generated with some "step counts" for different axes, an internal movement counter that increments when it moves, and a "movement tempo" that determines how often it moves or rests. Modulo operations are used on the movement counter to divide the movement over time into three directions on the axis -- positive, negative, and zero.


For example, a crobe may be generated with a step count of 10 on the B axis. If left to wander an empty grid, it would move left for 10 frames, not move on the B axis for 10 frames, and then move right for 10 frames. Repeating this for multiple axes independently with different counters creates a wide variety of behaviors -- zig-zags, corkscrews, circles, etc. Since the step counts can be fairly large, they may also move in a straight line along an axis for a while.
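In code, the modulo trick looks something like this rough sketch (axis names and details are illustrative):

// Sketch of modulo-based wandering on one axis.
// With stepCount = 10, the crobe moves one way for 10 frames, rests for 10, then moves back for 10.
function wanderDirection(moveCounter, stepCount) {
    const phase = Math.floor(moveCounter / stepCount) % 3;
    if (phase === 0) return 1;     // positive direction along this axis
    if (phase === 1) return 0;     // no movement on this axis
    return -1;                     // negative direction
}

// Combining axes with independent counters and step counts produces
// zig-zags, corkscrews, circles, and long straight runs.
function wanderStep(crobe) {
    crobe.moveCounter = (crobe.moveCounter || 0) + 1;
    return {
        a: wanderDirection(crobe.moveCounter, crobe.stepSizes.a),
        b: wanderDirection(crobe.moveCounter, crobe.stepSizes.b)
    };
}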

Crobes wander independently along three axes

There are a few variations on this wandering behavior. Some crobes attempt to use two axes, X and Y, and choose from a slightly larger number of potential destinations -- potentially two spaces away. The choice is then constrained to a closer, immediate neighbor. Occasionally, some crobes will wander randomly during this phase.


Nearly all of the variables associated with wandering movement styles are heritable and can mutate with each generation. Sometimes, a particularly successful movement style comes to dominate the simulation. On toroidal (unbounded) topologies, it's not uncommon to see crobes start swarming in a particular direction to overrun their prey and escape predators. Sometimes aggressive circling motions develop. Different populations seem to have different movements that work best for them, so it's interesting seeing what kind of behaviors evolve.


In addition to their back-and-forth movement across various axes, the crobes have a movement tempo feature that can adjust their behavior for a given mint. With hustle the behavior is as previously described -- but with stop and go the crobes sometimes rest for a while before continuing their movement. This has a different aesthetic, but also seems to affect the dynamics of the simulation. Since movement costs energy and running out of energy means death, sometimes waiting for prey to get close can be a good strategy to conserve energy! We see this strategy in nature with ambush predators like snakes and spiders.


For most variables related to movement, I'm not sure what the optimal strategy is, and it almost certainly depends on the unique context of each mint. What's important is that the crobes that are more successful competing in their environment survive and adapt.



Reproduction


Reproduction and mutation are a critical part of an evolutionary simulation. During reproduction, crobes use some of their resources to spawn slightly mutated children in neighboring cells. When and how does reproduction happen?



Choosing to reproduce


Instead of choosing to move, a crobe with enough resources may choose to reproduce instead. There are two main criteria for determining when the time is right -- strength, and age. Each mint uses these criteria in one of two ways based on the reproduction requirements feature:


Strength OR Age: When the crobe has enough strength, OR reaches maturity, it attempts to reproduce. This means strong adults can reproduce -- but also weak adults, or strong babies. An over-abundance of strength means having many children, and reaching maturity serves as an opportunity for reproduction even for less successful crobes.


Strength AND Age: The crobe must have enough strength AND reach maturity to reproduce. This is a more realistic condition. Crobes must reach adulthood and have sufficient strength to reproduce. Crobes that never achieve enough success in battle or don't survive long enough don't get to reproduce.


While the second of these is more realistic, it's not as dynamic and exciting to watch. Other variables in the simulation (like the initial number of crobes) are adjusted to keep the animation active and interesting. Crobes must also wait a certain amount of time after eating to reproduce (controlled by the "post-feed reproduction wait time" feature), which helps prevent population explosions in crowded simulations.
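As a rough sketch (field names are assumed, not the actual ones), the check for the two requirement modes might read:

// Sketch of the "when to reproduce" check for the two requirement modes.
function canReproduce(crobe, mode, frame) {
    const oldEnough = crobe.age >= crobe.reproduction.minAge;
    const strongEnough = crobe.strength >= crobe.reproduction.minStrength;
    // Post-feed reproduction wait time: no reproducing too soon after eating.
    if (frame - crobe.lastAteFrame < crobe.reproduction.postFeedWait) return false;
    return mode === 'strengthOrAge'
        ? (strongEnough || oldEnough)
        : (strongEnough && oldEnough);
}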



Creating and Placing Children


Once a crobe decides to reproduce, there are a few ways to go about it. Generally speaking, crobe reproduction is asexual, by cloning with mutation. The two main forms are cell division, where a parent splits into multiple cells, and budding, where a parent spawns multiple clones. In both cases, the strength of the parent is divided amongst its children. Which form of reproduction is used is controlled by the "reproductive strategy" feature.

Cell division and budding reproductive strategies for example parent with 60 strength

The image above illustrates these strategies. Clockwise from the upper-right:


1+2, cell division: The parent splits into 2-6 children with equal strength

3, budding (small buds): The parent spawns 2-6 children with less strength than it has.

4, budding (large buds): The parent spawns 2-6 children that are equal in strength to it.


Children are placed in randomly selected cells adjacent to the parent. Like many other variables in Zoologic, it's not immediately obvious which of these strategies is best, and it likely depends on other variables. Having many, smaller children may be unsuccessful, since they'll be unable to win fights against other crobes. Having too few children means missed opportunities to eat, grow, spread, and adapt.
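A sketch of how a parent's strength might be divided, keeping the total conserved (the budding fraction here is an illustrative number, and the small/large bud variants shift that split; this is not the exact Zoologic formula):

// Sketch of dividing a parent's strength among children (illustrative, not the exact rules).
function divideStrength(parentStrength, childCount, strategy) {
    if (strategy === 'cellDivision') {
        // The parent disappears; its strength splits equally among the children.
        return { parent: 0, child: parentStrength / childCount };
    }
    // Budding: the parent survives and donates part of its strength to its buds.
    const budShare = 0.5;    // assumed fraction; small vs. large buds would change this
    return {
        parent: parentStrength * (1 - budShare),
        child: (parentStrength * budShare) / childCount
    };
}

// e.g. divideStrength(60, 3, 'cellDivision') returns { parent: 0, child: 20 }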


Reproductive age plays a role, too. Is it better to have many children young, or invest a lot of resources in a single child after gathering a lot of strength? These questions are at the heart of r/K selection theory, which plays an important role in Zoologic both mechanically and thematically.


Resolving the planned move queue

As movements and reproductions are planned for each frame, they are added to a queue of all planned state changes. Once all crobes have made a plan, this queue is then resolved. For some cells, multiple crobes may want to move there, so some process is needed to determine which crobe wins. This is also the process where crobes fight and eat each other for resources! Their attack and defense stats, determined by their strength and preference for stat allocations, determine how they match up in combat.
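A minimal sketch of resolving one contested cell is shown below. The combat comparison here is an assumption for illustration; the actual formula is more involved, but the shape of the step is the same: pick a winner, let it absorb the losers' strength, and move on.

// Sketch of resolving one contested cell in the planned move table.
// claimants: crobes that want to occupy the same cell in the next state.
function resolveCell(claimants) {
    let winner = claimants[0];
    for (let i = 1; i < claimants.length; i++) {
        const challenger = claimants[i];
        // Illustrative combat comparison using attack and defense stats.
        if (challenger.attack - winner.defense > winner.attack - challenger.defense) {
            winner = challenger;
        }
    }
    // The winner eats the losers, absorbing their strength so the total stays conserved.
    for (const crobe of claimants) {
        if (crobe !== winner) winner.strength += crobe.strength;
    }
    return winner;
}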


Once winners are chosen for all cells in conflict, the next state is set and the animation moves on to the drawing phase. For more information about how the planned move queue/table operates, please see "A table of planned moves" section in Part 2 of this article.



Drawing State


With the animation state set up and changing with each frame, all that remains for the core animation loop is to draw the state on the canvas. Generally speaking, there are two parts to a crobe: the body, which is a regular polygon, and the tail, which traces the crobe's path as it moves through the grid. Let's take a closer look at how these operate and some of the features that drive them.



Drawing Tails


The first part of a crobe to be drawn is its tail (since I felt it looked better having bodies generally be on top of their own tails). As a crobe moves, each position is recorded in a list of (x, y) coordinates attached to the crobe. The length of the list is a hidden feature known as "tail length" -- and ranges from very short (~10 positions) to very long (~200 positions), so earlier positions are removed from the list as the length limit is reached.
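Recording the tail is essentially just a capped list of past positions, something like:

// Sketch of tail recording: a capped list of the crobe's past (x, y) positions.
function recordTailPosition(crobe, tailLength) {
    crobe.tail.push({ x: crobe.x, y: crobe.y });
    while (crobe.tail.length > tailLength) {
        crobe.tail.shift();    // drop the oldest position once the length limit is reached
    }
}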


There are a number of ways tails can be drawn, controlled by the "crobe tails" feature:


off: crobe tails aren't drawn at all

angular: crobe tails are straight lines through the center of each cell visited by the crobe

curvy (strength): crobe tails are quadratic curves where strength determines the control point

curvy (energy): crobe tails are quadratic curves where energy determines the control point


The "jiggle"feature, when enabled for lines, causes each point along the tail to be drawn at a random offset from the point where it would otherwise be, which gives it a frenetic, lifelike appearance. The "crobe and tail colors match" feature, as it sounds, determines if tail color should match body color.


Some animations consist solely of lines, giving them a characteristic appearance. This is used for presets like "manuscript" and "scribbles."



Drawing Bodies


The crobe bodies are regular polygons with 3-8 sides as determined by the "side count" feature (or in some cases, all of them, if this feature is set to "various"). Bodies are centered on the cell where a crobe is located, but may be hidden when the "show crobes" feature is set to off. Like tails, bodies may be affected by the "jiggle" feature, and in each frame are drawn with a randomized offset from their actual position, to make them feel more alive.


Crobe bodies may be solid colors, or built with a gradient of multiple colors (for more on selecting colors, see "Improving Colors" in Part 2 of this article), as determined by the "crobe gradient" feature:


off: crobes are solid colors

on: crobes are colored with a gradient from center to edges, using two colors from state, and their bodies have a rounded appearance

cell-like: the center color for the body is made dark, giving the appearance of a cell nucleus

shapes: like on, but the body retains its original polygonal shape

cell-like shapes: combination of the previous two


The "electron shells" feature is related, and requires a gradient to be enabled. It creates a number of radial gradients between the center and edges of the body, alternating back and forth to give the appearance of rings or targets.


The "borders" feature determines if crobe bodies are drawn with borders. The "hollow crobes" feature uses body color to draw thick borders on the crobe, while leaving the center transparent. "Cubic hexagons" feature is only available when drawing hexagons, and adds three additional lines to the center of the body to give it the appearance of a cube.


The "crobe size" feature serves as a multiplier on body size, and a limit for maximum size. The "size by strength" feature draws stronger, more dominant crobes as larger (in this case, large crobes eat smaller ones, and then grow larger).


The "rotation" feature determines what role rotation plays in rendering the bodies:


off: all crobes face in the same direction; this never changes

mutating: all crobes face different directions, and this can mutate when reproducing

constant and mutating: like mutating, but crobes can also rotate between individual frames



Waste Crobes

"Waste" crobes are those that have run out of energy and can no longer move. They have a different appearance than typical crobes to differentiate them. They and their tails no longer jiggle, making them still. They can be drawn in black or in gradually shifting colors (toggleable with [W]), which helps to add some negative space for crowded canvases (which is also why they are drawn last). The "waste type" feature determines their behavior:


normal: default to waste colors off (toggleable with [W])

toxic: default to waste colors on

oozing: waste crobes still reproduce, but children are also waste. Causes waste to spread across the canvas, until living crobes can easily find it and start sweeping it up. The prey-seeking behavior of crobes and inability of waste crobes to fight back means it's common to see a path get eaten through waste areas once they are discovered.

toxic and oozing: both of the above



Other Drawing-related features


There are several features that apply to tails, bodies, and waste -- and have a significant impact on the appearance.


The "fade" feature determines whether the animation is wiped after each frame. When set to no fade, no wiping is done and each frame is drawn on top of the previous one on the canvas. This can make it easier to see trends over time and fills the canvas, but can make it more difficult to see the exact state of the animation during any particular frame. The other options draw over the previous frame with the background color at various transparency levels -- very low alpha for slow fade, ranging to a total canvas wipe between each frame at the extreme end of the fast fade option.


The "space warp" feature adjusts the sizing of tails and bodies depending on where they're drawn on the canvas, giving a sense of depth:


off: the default behavior

curve towards: crobes are larger at the edges, as if the viewer is inside a curved surface

curve away: crobes are smaller at the edges, as if the viewer is outside a curved surface

corner: crobes grow larger as they approach one corner

receding: crobes grow larger as they approach the bottom

approaching: crobes grow larger as they approach the top


The "crobe transparency" feature determines how much alpha transparency crobes are allowed to have -- which is determined by various hidden variables. The "rounded corners" feature determines whether lines should be drawn with rounded joins, or if they have a sharper appearance.


The "psychedelic" feature has a significant effect on colors. When enabled, it defaults the animation to using calculated colors, and uses the elapsed frames count to gradually move through an RGB color space. It's less concerned with color coordination or evolution, and moreso with using constantly shifting colors to create an eye-catching dynamism. When set to pulsing, this feature has more of a "two steps forward, one step back" progression. The "color phase speed" feature plays a role in determining how fast the colors shift.


The "dark matter" feature plays with the luminance of colors. On dark backgrounds, it can make tails darken over time, and bodies darken at the edges or center. On light background, this effect is reversed, lightening tails and bodies. When this feature is set to inverted, it causes darkening on light backgrounds and lightening on dark backgrounds. The strength and use of the effect depends on some hidden variables.



Interactive Controls


Zoologic contains a number of interactive controls designed to enhance the appearance of the animation to suit the viewer's preferences, and to enable exploration and play. While many of these are self-explanatory or described within the built-in help menu, I wanted to provide some additional context for some of them.



Adjusting Colors

Each mint comes with effectively 12 different color combinations controllable with [Space], [T], and [D] -- three presets, three calculated colors, and light/dark mode. These play a substantial role in the aesthetics and evolution of the animation, so definitely try them out! For a deeper dive into how colors work in Zoologic, please see Part 2 of this article.


Autoblank mode

This control is useful for long-form presentation and set creation. The basic blank control [B] can be used to wipe the background, which is helpful for mints with fade: off, but I wanted a way to automate this for display purposes. By enabling this setting, these mints can be periodically wiped, so that the viewer can enjoy seeing the individual crobes and watching the canvas fill again. The related auto color change (see help menu) is really nice for experiencing the different color presets offered for a mint in a hands-off way that's perfect for display on a wall or over time.


Autopause doesn't have much function beyond allowing the viewer to pause at a specific interval before blanking begins. This can be helpful for saving a variety of images that are similar in appearance to create a set or choose a favorite.



Diagnostic Modes

Zoologic includes a number of "diagnostic modes" that change the output appearance to make visualizing some of the underlying variables easier. While enabled, these modes generally hide tails, set crobes to a uniform size, and adjust coloration to reflect the variable of interest. More information on these modes is available in the built-in help menu.



Cell Count / Level of Detail

This effectively controls how many cells wide and tall the grid for the simulation is. Larger cell counts run more slowly, so increasing the target framerate can be helpful. They also provide more room in the animation for evolution, speciation, and stability. Smaller cell counts are faster, but tend to have less room for different strategies or populations to evolve, causing the crobes to be more uniform, and evolution to be a little more arbitrary.


The differences in cell counts also affect how detailed the simulation appears, which may change its visual character. The number of crobes (and therefore the total amount of "strength" resources, which are conserved) placed in the simulation depends on a variety of factors, including cell count. And, some of the rendering details are related to crobe size, so in some cases adjustments to the cell count may have further impacts on appearance. The starting placement also shows different amounts of detail at different scales. While well-tuned to provide a good experience, all of this may cause some subtle behavior and display differences. Every mint is a little different in this regard, so enjoy playing with it!


Example comparing Zoologic #12 on testnet with low and high cell counts, respectively. Note the significantly more pronounced appearance of the sinusoidal starting placement at high cell count. It takes on a variety of appearances as cell count changes, and this is true of other starting placements.



In general, in evolution simulations, more extreme or competitive environments (like Zoologic with a low cell count) more strongly select for specific traits that offer a benefit, and approach a "local maximum" in the space of evolutionary possibilities. Less competitive environments, where there are abundant resources or relative safety, allow greater diversity and speciation. A great real-world example of this is the diverse and sometimes very large bird species of New Zealand, which evolved without significant competition from mammals, and so were able to diversify in ways not often seen in more competitive environments.



Speciation Mode

While it's easy enough to claim the crobes are evolving over time, I really wanted a way to demonstrate this and play with it. Visually and conceptually, it's interesting to see different behaviors and traits evolve -- and if they're truly evolving, it seems like we should see different species appear. How can we be sure this happens?


In nature, species evolve to fill different niches within the same environment or via genetic drift when populations split and evolve in isolation. Given that we only have one type of organism -- crobes -- that uses other crobes as a food source, and the simplicity of the simulation relative to the natural world in terms of factors like movement, grid size, food types, sexual reproduction, small state size, photosynthesis, weather, natural resources, etc., it's difficult to get them to evolve to exploit different niches. That leaves isolation as an option -- as it turns out, a good one.


This mode allows dividing the grid into multiple sections, isolating the existing crobes from each other and allowing them to "genetically drift" into different populations over time, without becoming too uniform in their [default] small and highly competitive environment. Sections can be recombined to force competition between the newly evolved species, and this process can be repeated. It's fascinating to watch them drift apart and then see which wins when recombined (and reminds me of collecting insects as a kid -- which would win in a jar, a bee or spider? A praying mantis or wasp? A fire ant or black ant?)


Since each divided section develops its own population dynamics, the crobes can become quite different in terms of reproduction and movement strategies. Smart crobes develop neural nets with noticeably different behaviors. Colors change as well (and calculated colors are a great fit for this mode). In general, diagnostic modes are a helpful tool for seeing what's driving the differences in each section.


AI Menu

The help menu is fairly comprehensive on this, but I wanted to mention a couple of nuances and helpful tips.


  • When using autotrain mode for any period of time, you'll want to play with the cell count. Lower cell counts finish each round more quickly, since it's easier for one team to be eliminated, but there is a higher chance that the winning team isn't really "better" and is just winning by random chance. Set the number too low, and your crobes may actually become less intelligent as they train, since inferior strategies may be selected by chance. Set the number too high, and crobes will still learn over time -- but it may take a while for a team to be eliminated to seed the next round. When a team does win, it's more certain that it won because it has a better strategy in some statistical sense. Slower, but higher fidelity.


  • The algorithm used to select the "best" crobe during autotrain is similarly noisy. I tried a number of selection mechanics and ultimately landed on selecting the strongest (largest) crobe on the winning team, since it has survived long enough and well enough to eat other crobes. But there are cases where this doesn't reflect an effective strategy. For example, if all crobes foolishly migrate into one corner, the strongest crobe may just be a giant idiot that sits there bumping into the corner and eating things that wander into it. Once selected as a winner, the next round will be seeded with idiots that just want to sit in the corner. I like to save good NN outputs periodically to reduce the chances of this happening. A good time is right after one round finishes and the next begins, since all crobes on the team are likely to have a similar strategy to the best one from the previous round. If you like the behavior you're seeing, pause and save.


  • Autotrain mode doesn't work well with speciation mode, since separating the population may make it impossible to eliminate one of the teams. Avoid combining them for best results.

  • The save/load/copy buttons don't work within an iframe (e.g. embedded in the Art Blocks detail view) because the clipboard and local storage are inaccessible. You can still manually copy from the text area to save the network, though. If you'd like to use the buttons, ensure you're looking at the live view.


  • Hivemind mode, as mentioned in the menu, is experimental. It can be entertaining, and trains quickly if you're lucky, but it can also quickly go off the rails and cause AI devolution. Use it at your own risk, and save often!

  • Advanced mode is also experimental and incurs a significant performance hit. It's not recommended for smooth animations or mobile devices. This mode uses an untrained NN that is substantially larger in size than the default one and that accepts many more inputs. It would require substantial training to make it better than the default network, and it's not clear if the additional inputs play a significant role. It's being included as an experiment in using larger nets, but also in including additional unexplored functionality that is turned off by default, but can be enabled and explored post-launch by the community.

For more details on how the neural networks for Zoologic were created, a deeper dive is available in Part 2 of this article.



Summary


While all outputs for Zoologic use similar shapes and movements, a diverse selection of parameters ensures each has its own unique character. A variety of interactive settings enable exploration of the many different output variations each mint can generate. The ability to save and load data allows viewers to return to the same mint repeatedly for new experiences, or to try their trained AI in different mints. It allows friends to experience the work together by battling their AIs.


Zoologic makes some of the forces driving evolution visible for observation and experimentation, and serves as an example of "intelligent art" that can learn, remember, and communicate. It demonstrates that cellular automata with sufficiently complex state can be used as models for organisms, with a lifelike aesthetic.



Part 2 -- Going Deeper



Context


The more immediate context for Zoologic was my recent generative artwork "mono no aware" released on Art Blocks (more on this project at https://www.ixnayokay.art/post/mono-no-aware-overview ). While I'd worked with evolutionary or randomly parameterized programs before, this was my first experience with building a generative algorithm robust enough to trust with uncurated output generation. It was also my first experience developing artwork intended for commercial release and public display, which differs substantially from building hobby simulations for my own exploration. I definitely ran into the limits of my artistic experience, both in terms of aesthetics and technical craftsmanship / code.


After the launch of "mono no aware" I was still very much thinking within the context of generative art, and was looking forward to building a new generative piece "from the ground up." Whereas "mono" was largely taking some of my existing concepts and turning them into something generative, I was excited about starting from scratch with concepts intended to be a good fit for a generative set. And, about starting from scratch technically. While I usually work in plain JS, I went with the p5js framework for "mono" since I wasn't familiar with Art Blocks' platform and felt more comfortable using an approved library. I wanted to get back to vanilla.


For a couple of months, I enjoyed doing some sketches to try out new ideas. Some were new themes (insect simulations, faces, text), and others explored new technical techniques (memoization, color palettes, vanilla JS framework meeting Art Blocks constraints for resolution agnosticism, automata optimization, hexagonal automata). I thought a lot about what direction I might go with a future project, and how I could improve my technique to incorporate feedback I'd been given and lessons learned. Some of my goals were:


  1. Improve performance of the simulation. I wanted it to be performant on desktop and mobile, and even at high levels of detail.

  2. Improve colors. I've always had a soft spot for math-driven RGB color, but to push aesthetics forward, I needed some color palettes.

  3. No libraries. I felt confident I could get my usual approaches working with AB platform.

  4. Reusability. If I'm going to be making serial generative art projects, I need to have a toolset and a process to reuse. Every project can't be a total one-off.

  5. Community. That's so important to successful NFT projects, and I heard it again and again at NFTNYC '21. How could I make the mints something people would want to compare or talk about? How could I make them interact with one another?

  6. Embrace the medium (web and mobile). This is part of the reason I've enjoyed working with JS all these years -- native, natural animation using a canvas. Web art can be more than static pictures; it can be alive! I wanted to embrace animation, interactivity, and hypertext as part of the experience.

  7. Push deeper in my craft. How can I fundamentally change my approach to fractals or automata to explore them in a new way? How can I contribute new ideas or techniques to the microgenres of CA art or fractal art?

  8. Additional features. Rather than identifying parameters in an existing program and using them as features, I wanted to make features a first-class part of the design process.

  9. Feature Design. Exposing a bunch of raw numbers as features isn't a great user experience. How can features be chosen to be explained with a few words? How can features be clustered together under a single name to avoid all of them being simple on/off switches?


Thematically, I had a few changes/directions in mind:


  1. Symmetry isn't a visual theme. "mono" definitely relied heavily on symmetry, but that takes a lot of the viewer's attention. I wanted to focus on other visual aspects.

  2. Organic/lifelike. I have a soft spot for population dynamics, so thought it would be cool to explore evolution/reproduction somehow.

  3. It doesn't feel like watching math happen. When I say I like to make math art, I mean art about math rather than art made with math. Even within the realm of art about math -- math art -- I feel like the clear visual demonstration of some relationship between numbers can really be the topic of some pieces. Sometimes you want that, and sometimes you don't, and for my next piece, I didn't. I wanted it to feel more like a "simulation of something happening" at a little higher level of abstraction.

  4. It's not really about beauty, a moment in time, or anything theoretical like that. In this sense, I did want my next piece to be more "art about math" -- a simulation that thematically leans a little more into being math for math's sake.

  5. It's not pixel art. When working with CA, discrete cells like pixels feel really natural. But even if working with CA, I wanted to play with lines, polygons, transparency, and layering.

  6. No cycles of rebirth or dying out. I wanted a simulation that was going to run continuously and change over time without becoming unstable and collapsing ("die out") or exploding (see "glitched out" feature on "mono"). This sort of balance is key when working with CA, so this is another place to hone the craft.


 


Improving Performance


I was talking with a friend from college about "mono", and he asked "did you ever think, when we were in school, that you'd end up doing that with your [CS] degree?" We laughed about it, but as it turns out, it's been a big help in some ways. Looking at the complexity of some of the main parts of the CA draw loop was a key area for improving performance.


Improving the CA algorithm's performance


Consider a naive 2D CA algorithm taking place on a grid (this was what "mono" was):

function F () {
    // Naively calculate the next state for every cell in the grid.
    let nextState = [];
    for(let x=0; x<xMax; x++){
        nextState[x] = [];
        for(let y=0; y<yMax; y++){
            nextState[x][y] = calculateNextState(x,y);
        }
    }
    state = nextState;
}

Here we assume that calculateNextState is a [potentially very expensive] function that, given an (x,y) coordinate in the grid, can examine it and its neighbors and determine what the next state for that cell in the grid should be.



Assume a screen size where y = x*c. So if a screen is x*y cells, this algorithm runs in x*x*c time, with complexity O(x^2). Probably not going to perform great in general. How can we improve on it? What assumptions can we make about the problem of calculating the next CA state that might help?




Examining the core CA algorithm

One point of interest is that empty cells with no neighbors don't change state, so their next state is also empty. How many of these cells are there? It depends on how dense the cells in the CA animation are. If 10% of the cells are occupied at a given time, and all cells without neighbors have no transitions, then most of the cells will have no state change. Conversely, if 90% of the cells are occupied at a given time, then almost none of the cells will have no state change. If there were a way for us to omit all the cells that need no state change from the calculation loop, we could probably speed things up a bit. So let's consider the things we want to include in the loop.


We'll need to do calculations for all of the occupied cells, and all of the cells neighboring occupied cells. These are the cells that could potentially have a state change. It seems that generally, here, the number of neighboring cells will be larger than the number of occupied cells. A single occupied cell with no neighbors, for example, has 4 neighboring cells on a square grid and 6 on a hexagonal one. The relationship looks something like:

neighboring = occupied * 4

except that when multiple occupied cells cluster together, there are fewer neighbors per occupied cell. In any case, the number of neighboring cells is larger than the number of occupied cells.



Constraining the core CA algorithm

The relationship between neighboring and occupied cells suggests a way we might constrain our CA algorithm to obtain a performance improvement. If we could omit the neighboring cells from our loop (as well as the empty ones with no neighbors), and only do calculations for the occupied cells, we'd be left with a small portion of our original calculations. So, we add this rule:

Empty cells never do anything. Calculations only happen for occupied cells.

Essentially, this means that we never call calculateNextState(x, y) for an (x, y) pair that isn't occupied. Considered as a whole, the function F changes from F(allCells) to G(occupiedCells). CA like Game of Life are a little more complicated to calculate with this kind of algorithm. Consider an empty cell with 3 neighbors in GoL that changes to be alive in the next state. How would it be calculated with G? We'd probably have to do something silly like having each occupied cell calculate the next state of all its neighbors, but it's likely we'd calculate the same cells more than once in some cases. GoL isn't a great fit for this algo, but there are other types of CA that are. What do they look like?



Constructing an improved CA algorithm

What about a system where the grid contains some occupied cells that are allowed to move to a neighboring cell during G? That seems like a natural sort of thing to do for a physics simulation, and sounds reasonable. But how complex is G?


If F is O(x^2), then it sounds like G is O(occupiedCells). How many occupied cells (oc) are there? I'm not totally sure about this, and fortunately I don't need to do a proof for this project -- but intuition tells me it's probably something like O(oc) ≈ O(x*log(x)). It might be closer to O(x) for sparse CA algorithms or would be closer to O(x^2) for extremely dense ones.


Also fortunate is that we don't necessarily need to know the exact complexity before we construct the new algorithm. G iterates over a list of occupied cells. Rather than a 2D array, it's a 1D array / list (and intuitively, this also seems to suggest that G is lower in complexity than F).


One issue here is that it's going to be really common in a physics or population dynamics simulation to want to know what's in the surrounding cells for a given cell. While this was easy to do in a 2D grid, it becomes a lot more complicated with a 1D list where we don't have O(1) lookup for neighboring cells. How can we adapt to ensure each cell knows its surrounding cells' contents?



Maintaining neighbor awareness

One option that comes to mind is that each occupied cell would have a pointer to its neighboring cells in the list. That sounds a little unfortunate, in that every time an occupied cell moves, it would need to start pointing to different cells. How would it even know where in the list they were to point to them?


What if we still maintained a 2D array M that just contained pointers to the things in the list L of occupied cells? Then, we iterate over the list with G(L) and use the (x,y) of each thing in L to look up its neighbors in M. No additional copies of the cells are stored in M, only pointers -- so that's the only memory overhead. Now, we can get the performance benefits of using G instead of F, and still look up information about neighbors in M.
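Putting those pieces together, a sketch of the list-plus-pointer-grid idea looks something like this (a simplification of the concept, not the actual Zoologic code):

// L holds the occupied cells; M is a 2D array of pointers to those same objects (or null).
function createWorld(xMax, yMax) {
    const M = [];
    for (let x = 0; x < xMax; x++) {
        M[x] = new Array(yMax).fill(null);
    }
    return { L: [], M: M };
}

function addOccupant(world, occupant) {
    world.L.push(occupant);
    world.M[occupant.x][occupant.y] = occupant;
}

function moveOccupant(world, occupant, newX, newY) {
    world.M[occupant.x][occupant.y] = null;    // clear the old pointer
    occupant.x = newX;
    occupant.y = newY;
    world.M[newX][newY] = occupant;            // point the new cell at the same object
}

// G iterates only over the occupied cells, using M for O(1) neighbor lookups.
function G(world) {
    for (const occupant of world.L) {
        const column = world.M[occupant.x + 1];
        const east = column ? column[occupant.y] : null;    // example neighbor lookup via M
        if (east) { /* react to the neighbor, plan a move, etc. */ }
    }
}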



Algorithm Improvement Summary

Intuitively, the improvement to the algorithm is that we do calculations just for the occupied cells, instead of for all the cells. Because we can easily choose a CA algorithm where most cells are left empty in each state (e.g. by filling only 1/20 cells with occupants), we can also eliminate most of the calculations that need to be done (in the 1/20 case, this would be a 95% reduction).



 



Memoization


When I started looking at improving performance, I started looking for operations that were both expensive and were being run many times. There are a couple of classes of these -- the ones running once a loop for each crobe (O(oc)) and the ones running n times for each crobe for each loop (O(n*oc)).



Neighbor Memoization

A good example of one of the O(oc) operations was finding the neighbors for each occupied cell. It's not an expensive operation, but especially on a hexagonal grid, there are some calculations involved in finding the six surrounding cells and packaging them in a way that's convenient for other code to use (code that makes a decision based on the neighbors). Zoologic also uses second-nearest neighbors, of which there are 12 on a hexagonal grid - so there are 18 neighbors in total for each occupied cell. One problem with this operation is that it's required by literally every occupied cell on every draw loop, so the costs really add up.


Memoization is helpful in reducing this cost. The first time we look up the neighbors of an occupied cell at (x, y), we can store a list of pointers to the 18 neighbors of (x, y) at a hash key like `${x}_${y}`. Then, in the future when we need to examine neighboring cells for occupants, we can easily retrieve the list from the hash. The hash is roughly O(x^2) in memory -- for the max detail level of 200, that's around 40000 cells * (18 neighbors + overhead). I'm not sure exactly how big the data is for each cell, but even at 0.5 KB ea. (an overestimate), it would still only be ~20MB -- not too bad on a modern device. The neighbor memoization made the simulation substantially more performant.
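A sketch of that memoization, using axial hexagonal coordinates where the six neighbor offsets are fixed (one common hex convention, not necessarily the one Zoologic uses; the real cache also stores the twelve second-nearest neighbors):

// Sketch of neighbor memoization on a hexagonal grid (axial coordinates assumed).
const HEX_OFFSETS = [[1, 0], [-1, 0], [0, 1], [0, -1], [1, -1], [-1, 1]];
const neighborCache = {};

function getNeighborCoords(x, y) {
    const key = x + '_' + y;
    if (!neighborCache[key]) {
        // Compute once, then reuse on every subsequent frame.
        neighborCache[key] = HEX_OFFSETS.map(function (offset) {
            return { x: x + offset[0], y: y + offset[1] };
        });
    }
    return neighborCache[key];
}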



Tail memoization

The crobe tails are a good example of an O(n*oc) operation that really needed some memoization to perform reasonably. Each tail stores a variable number n of past crobe coordinates, up to around 200 or so. This can get expensive when, for example, the "jiggle" attribute for tails is on, requiring randomization of the tail positioning with each frame. Multiplied by all of the crobes, it adds up fast.


Instead, I memoized the "jiggled" tail position for each crobe at each coordinate, and then applied a multiplier that changes each loop when rendering the tails. This maintains the appearance of thrashing tails without such a high performance cost.



Other memoization

A number of colors calculated for waste cells and psychedelic features rely on sine functions for smooth color. Since these are pretty expensive to calculate every frame, I instead memoize the color outputs for certain inputs of 3-4 variables like x, y, cellState, and fMod. fMod is the number of frames elapsed in the simulation, mod 256, which allows colors to change over time. The modulo constrains possible inputs to a smaller number of possibilities that can be memoized. Each of these memoizations trades additional memory for improved performance, which helps the animation run smoothly.
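Roughly, the memoized lookup works like the sketch below. The sine-based color formula here is just an example of the kind of calculation being cached, not the actual one:

// Sketch of memoizing sine-based colors, keyed on a small set of inputs.
// fMod = framesElapsed % 256 keeps the key space bounded so the cache stays small.
const colorCache = {};

function shiftingColor(x, y, cellState, fMod) {
    const key = x + '_' + y + '_' + cellState + '_' + fMod;
    if (!colorCache[key]) {
        const r = Math.floor(128 + 127 * Math.sin((x + fMod) * 0.05));
        const g = Math.floor(128 + 127 * Math.sin((y + fMod) * 0.05 + 2));
        const b = Math.floor(128 + 127 * Math.sin((cellState + fMod) * 0.05 + 4));
        colorCache[key] = 'rgb(' + r + ',' + g + ',' + b + ')';
    }
    return colorCache[key];
}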



 


Improving Colors


My approach to color has almost always been to tie the numbers from the simulation directly to an RGB color space. I like the way it makes it easy to see what the simulation is doing and how it's changing over time, so the viewer can appreciate the aesthetic of the system. That said, rolling through RGB isn't really the best experience visually. I wanted to try something new and use some predefined color palettes.



Choosing Color Palettes

If I was going to be using predefined colors, I wanted to be sure I'd still have some diverse color in my set. I wanted to have many different palettes. Over the course of ~4 weeks, I worked to select a couple of palettes at a time. I'd test each new palette with a variety of different features and outputs to see how it was working, and then adjust some of the colors until I liked how they were looking together.


Over time, as I saw the different palettes across thousands of outputs, I liked some more than others. I removed a lot of the palettes, combined some, split others. It was a new experience for me to be willing to throw a lot of the colors I'd spent time adding onto the trash heap. Some colors just don't work the way you expect when you try them. For me, sometimes it can be hard to envision how they'll look until they're plugged in. Choosing color palettes that work for a piece seems like an iterative process. I'll be interested to see how I can reuse some of these colors in future work.


For my palettes, I also switched to using HSL (instead of RGB), because I like some of the opportunities available with controlling luminance or saturation independently. The "dark matter" feature of Zoologic relies on being able to adjust the luminance value, for example. And if you stick with predefined palettes, you don't get as much of that "running through the rainbow" effect that comes with shifting hue values for HSL.



Organizing Color Palettes

Ultimately, I ended up with ~45 color palettes that I liked. But I think it's worth exploring how I chose some of those out of all the possibilities, and that was dependent on how the palettes are organized.


I've noticed the AB community seems to value creation of sets, so I wanted each mint to include multiple palettes. This helps with a few things:


  1. Each mint includes multiple differently-colored perspectives on the same algorithm.

  2. Each mint has a higher likelihood of including some color that the minter wants.

  3. More mints capable of generating an output with each color will be created, making it easier for collectors to build sets in a particular palette. Even for non-collectors, this will make a higher number of each color available to view and appreciate.


In order to make multiple palettes work, though, it seems there are a few reasonable constraints:


  1. No mint should have the same color more than once.

  2. No mint should have multiple palettes that look alike.

  3. Avoid harming the diversity of the "primary color" that is shown in snapshots. All color palettes available should be able to come up in a snapshot.

My thinking was that I would want to somehow group the palettes into buckets, where the palettes within each bucket looked roughly alike. That way, whatever color palette selection algorithm I used could just choose from different buckets to avoid similar colors. I took my large list of palettes and did a side-by-side comparison between each pair of palettes in the list. When two palettes looked somewhat similar, I put them into the same bucket.


Some buckets had only a single palette, but others had more like five. Where possible, I split buckets apart -- but in some cases, I threw my least favorite of the overcrowded palettes away. This also helped me find certain color schemes that I didn't have a lot of representation for in my set, and find new palettes to fill in the gaps.


Ultimately, I ended up with 15 buckets of similar palettes, each with a primary/secondary/tertiary color palette. Now, I could satisfy constraints 1+2 above with an algorithm like:


  1. Choose 3 random buckets - a primary, secondary, and tertiary bucket

  2. Take the primary color from the primary bucket, secondary color from secondary bucket, and tertiary color from tertiary bucket. These are the colors for your mint.

Since we've chosen palettes from 3 different buckets, none of them can be the same palette, and constraint 1 is satisfied. Because palettes in different buckets look different from each other, we've guaranteed the three colors will all look different from each other and satisfied constraint 2. Unfortunately, we haven't satisfied the 3rd. We select the primary color from only 1/3 of the full set of colors, limiting the diversity of our snapshots.


The last part of the selection algorithm is essentially to shuffle the list of 3 selected colors between slots. Any of the colors could be primary, secondary, or tertiary. So now we can have diverse snapshots, and every mint gets three unique color palettes -- awesome!
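To make that concrete, here's a minimal sketch of the selection algorithm described above -- the names and structure are illustrative, not the actual Zoologic code, and `rand` stands in for the mint's deterministic PRNG:

```javascript
// Each bucket holds a primary/secondary/tertiary palette. Picking from three
// distinct buckets satisfies constraints 1 and 2; the final shuffle lets any
// of the chosen palettes land in the primary (snapshot) slot.
function choosePalettes(buckets, rand) {
  // 1. Pick three distinct buckets.
  const [a, b, c] = shuffle([...buckets.keys()], rand).slice(0, 3);

  // 2. Primary from the first, secondary from the second, tertiary from the third.
  const chosen = [buckets[a].primary, buckets[b].secondary, buckets[c].tertiary];

  // 3. Shuffle the three palettes across the primary/secondary/tertiary slots.
  return shuffle(chosen, rand);
}

// Fisher-Yates shuffle driven by the supplied PRNG.
function shuffle(arr, rand) {
  const out = arr.slice();
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}
```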



Naming Color Palettes

Having distinct, predefined color palettes affords an opportunity to name them creatively. I let names come to me over the course of weeks as I was working on the project. A lot of them are plucked directly from the stream of consciousness. Several of them came from friends and family who got previews of what I was working on. I'd ask "what would you call this one?"


I think naming features (like colors) is also an interesting way to set the mood for a viewer of a mint -- and in the case of the colors, where each mint has three, whichever names are chosen are going to be shown together. So I went with a less serious set of names that's a little all over the place; there isn't really a theme for the names. Some of them are just made-up words. I think it's interesting seeing the various color palette names in combination with the different features of Zoologic.


On a fun note for anyone reading this far into the section about colors, there's a special palette called polychroic that allows selecting from all 45 color palettes in a single mint. Keep an eye out for it!

 


No Libraries, Reusable Framework


While I'd used p5js for "mono", I was a little less than enthused about the performance I was getting relative to some of my previous work with vanilla JS. That may just be my inexperience working with p5js -- but in any case, I wanted to get back to vanilla JS.


Since I would need to manage things like the draw loop and rendering to canvas myself, this was a good opportunity to build a bit of a framework for my own use that would handle these things and be reusable for future projects. This framework is responsible for features like framerate, aspect ratio, high DPI support, saving the image on the canvas, full-screen support, wiping the background, detecting keyboard input, and displaying on-screen messages.


A fair amount of the up-front work on Zoologic went into building this framework. This is exciting, since the effort shouldn't need to be repeated for future projects! The framework was also really useful for sketching, since I could just swap out the animation script and have everything "just work" out of the box.


Generally speaking, I've been pleased with the performance offered by creating my own little framework. Creating a framework comes with some downsides, like the additional time required to fix bugs and solve problems that other people have already solved. The logic needed for these things also needs to be uploaded to the blockchain, instead of relying on embedded libraries, so there can be a cost as well. But it gives me a lot of control over the optimization and features, so I think it's worked out well.


Skipping a rendering library also means I still have an available 3rd-party dependency when creating a project on Art Blocks, so I could include a sound component using tone.js. I actually experimented with this a bit, but pretty quickly discovered it was going to be outside the scope of this project. I'd like to revisit this in the future with a project built from the ground up to incorporate sound -- an option that wasn't available when relying on p5js.



 


Embracing the Medium (web and mobile)


I think it's great that we have web browsers as a medium for art. The interactivity offered is really powerful, so I wanted to take advantage of that. I wanted to help viewers customize their experience with each mint, to help them get the most out of the experience (while, of course, still ensuring that the default interaction-free experience is a good one).


I added a rich variety of controls for colors, framerate, detail level, and a number of other things to help with this -- but very soon ran into another problem. I had pretty much exhausted the keys on the keyboard, and there was no way the average viewer would find it obvious what all the different controls were. I needed some way to communicate these controls to the viewer.


So, I added some interactive help menus using HTML. While projects usually only make use of the canvas, the entire document body is really available for interactive use as an application interface. I wanted this ability to add interactive help to be part of the framework. One downside to this is that it's pretty expensive to store all the HTML needed for these help menus on-chain -- but if that's what it takes to achieve this degree of interactivity, it seems worth it.


The interactive menus also made it easier to support a variety of user inputs on mobile devices. While I added some swipe controls for mobile to control a few key features, it was pretty limiting -- maybe around 6 different inputs. By opening an interactive menu with controls, I was able to allow a variety of inputs for the user to enrich their experience.



 


Going deeper with Cellular Automata



A hexagonal grid


I've spent a lot of time working with algorithms similar to Game of Life on a 2D rectangular grid over the years, so I was interested to see how I could use CA in a different way. I started by experimenting with some 2nd and 3rd-nearest-neighbor variations of CA that were interesting, but from there moved on to thinking about creating automata on a hexagonal grid. It looked like there was an extra degree of freedom and connectedness that should give richer behavior for CAs, since each cell has two additional neighbors.


It took me a few sketches to get a working hexagonal grid. There are some intricacies to addressing the cells of a hex grid with (x, y) coordinates and to calculating their neighbors and 2nd-nearest neighbors. I exercised this code by building some fairly traditional GoL-style automata that could create symmetrical snowflake patterns. Having code to address and draw cells on a hexagonal grid enabled a variety of CA animations.
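For anyone curious what that addressing looks like, here's a minimal sketch assuming an "odd-r" offset layout (odd rows shifted half a cell to the right) -- Zoologic's actual layout and code may differ:

```javascript
// Neighbor lookup on a hex grid addressed with (x, y) coordinates. The
// neighbor order mirrors the Left / Upper Left / Upper Right / Right /
// Lower Right / Lower Left order mentioned later in this article.
const OFFSETS = {
  even: [[-1, 0], [-1, -1], [0, -1], [1, 0], [0, 1], [-1, 1]],
  odd:  [[-1, 0], [0, -1], [1, -1], [1, 0], [1, 1], [0, 1]],
};

function neighbors(x, y, cols, rows) {
  const parity = y % 2 === 0 ? 'even' : 'odd';
  return OFFSETS[parity]
    .map(([dx, dy]) => [x + dx, y + dy])
    .filter(([nx, ny]) => nx >= 0 && nx < cols && ny >= 0 && ny < rows);
}
```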



A movement-based CA algorithm


I mentioned before in the section about CA optimization that the new algorithm only performs calculations for occupied cells, instead of all cells. The crobes in occupied cells can, in effect, move to a different adjacent cell during each state update -- but empty space doesn't do anything. Unlike the traditional GoL algorithm, this results in a new problem -- collisions. What if two crobes decide to move to the same cell at the same time?


Solving collisions

Usually with CA algorithms, a state transition involves one 2D array representing the current state (S) and another representing the next state (S'); using our state transition function G from earlier, S' = G(S). Those arrays need to be kept separate -- otherwise, in a CA like Game of Life, if you start rewriting the board as you loop through it, the new values of earlier cells affect the calculated values of later cells. By using a separate next state, the algorithm can work as if time passes at the same rate everywhere in each frame.
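As a minimal illustration (not the Zoologic code itself), the classic double-buffered update looks something like this:

```javascript
// Compute S' entirely from S, so the order we visit cells in never
// affects the result.
function step(S, G) {
  const next = S.map(row => row.slice());   // S', same dimensions as S
  for (let y = 0; y < S.length; y++) {
    for (let x = 0; x < S[y].length; x++) {
      next[y][x] = G(S, x, y);              // reads come only from S
    }
  }
  return next;                              // becomes the new current state
}
```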


A first naive attempt at solving the collision issue is something like this:


  1. If the crobe trying to move to (x, y) in S' finds an empty space, put the crobe in that space.

  2. If the crobe trying to move to (x, y) in S' finds a crobe, the two crobes compete to see which one gets to occupy the space.

  3. The losing crobe is removed and its resources are consumed by the winner.


The issue here is that if multiple different crobes move to the same space and fight in the order they move, then the order they move starts to affect the outcome. This causes the looping direction of your CA algo to start to affect the state (by making time pass at different rates in different parts of the simulation), which will warp the output over time.


For example, consider 3 crobes moving to a space -- one Rock, one Paper, one Scissors. If their move order is: Rock, Paper, Scissors, what happens?

  1. Rock moves to empty space

  2. Paper moves to space and defeats Rock

  3. Scissors moves to space and defeats Paper

  4. Scissors is the last crobe left. Scissors wins!


But if their order is Scissors, Paper, Rock?

  1. Scissors moves to empty space

  2. Paper moves to space and is defeated by Scissors

  3. Rock moves to space and defeats Scissors

  4. Rock is the last crobe left. Rock wins!


The same three crobes fighting for the same space could have a different winner based on the starting position of those three crobes, if we always loop through the positions in S in the same order. In fact, in Zoologic, I always loop through neighbors of a given cell like Left, Upper Left, Upper Right, Right, Lower Right, Lower Left -- so the algo would have a bias for moving in certain directions. How can we solve this?


A table of planned moves

When thinking of which data structure to use for this issue, what's needed as an output is a list of unique (x,y) spaces where crobes have planned to move -- and for each of those spaces, a list of what the planned moves are. Then that list of planned moves for a space can be used as an input to some function that decides how all those planned moves resolve in a fair way that isn't biased in a particular direction.


For the "list of unique (x, y) spaces" I decided to use a hash. Whenever we store a planned move, we know (x, y) which makes finding the right bucket for the add O(1). Pushing into the bucket is also O(1). Then, to resolve the moves, we can just iterate over the keys of the hash to keep things O(oc) instead of O(x^2). Once a move is resolved, we still have access to the x/y coordinates of the move to write it to S'.


In Zoologic, I resolve the stack of moves for each (x, y) space using a sorting function that treats all neighbors fairly. This helps prevent a bias in any particular direction. This solution allows multiple crobes to try to move to the same cell, fairly compete, and then take control of the cell if they're the strongest. It provides a natural place to transfer resources from one crobe to another as they compete.
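Here's a sketch of what that looks like in practice -- all names here (chooseMove, strength, resources, and so on) are illustrative stand-ins, not the actual Zoologic code:

```javascript
// Bucket planned moves by destination cell, then resolve each bucket in one
// place so the outcome doesn't depend on the order crobes were visited in.
function planMoves(crobes) {
  const planned = {};                          // key: "x,y" of destination
  for (const crobe of crobes) {
    const [x, y] = crobe.chooseMove();         // assumed: crobe picks a neighboring cell
    const key = `${x},${y}`;
    (planned[key] ||= []).push(crobe);         // O(1) bucket insert
  }
  return planned;
}

function pickWinner(contenders) {
  // Illustrative only: decided by a property of the crobes themselves rather
  // than by arrival order, so the same contenders always give the same winner.
  return contenders.slice().sort((a, b) => b.strength - a.strength)[0];
}

function resolveMoves(planned, nextState) {
  for (const key of Object.keys(planned)) {    // work scales with occupied destinations
    const [x, y] = key.split(',').map(Number);
    const contenders = planned[key];
    const winner = pickWinner(contenders);
    for (const loser of contenders) {
      if (loser !== winner) winner.resources += loser.resources;  // winner consumes the losers
    }
    nextState[y][x] = winner;
  }
}
```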


In future work, I might explore using a rollback algorithm (like fighting games use for their netcode) to allow one crobe to "move first" into an empty cell and then allow other crobes to challenge it from their own cells before attempting to move. But the solution with a stack for each occupied location and a winner-take-all competition was simple and performant for this implementation.


 

Feature Design


When I was developing "mono", the Art Blocks "features" requirement wasn't really something I had in mind until the end of development -- like a checkbox that needed to be checked to launch the project. It was a reactive process where I chose some of the key parameters to display. The names for some of the seeds came very late, too.


When developing Zoologic, I tried to be mindful from the start about which features I'd want to display, and what the user experience was going to be for people looking at these features or filtering by them on the AB site. I wanted my features to have a few attributes:


  1. Every feature should have a meaningful effect on the output. There shouldn't be a lot of "what does this even do?"

  2. If a feature isn't applicable, it should be obvious. There are some features that have a value of "n/a" when they simply don't apply to the output.

  3. There should be some fun names for things. With things like preset colors, combinations of features, and seeds, there are lots of opportunities to give different kinds of outputs their own names.

  4. Not a lot of numeric features. For some parameters that fall within a range, I bucketed the range like low: 0-33, medium: 34-66, high: 67-99 and used those as feature names (see the sketch just after this list). I feel like it's better to have them generally grouped rather than focusing on every minute difference.
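Here's a minimal sketch of that bucketing (the function name and thresholds are just the ones mentioned above):

```javascript
// Map a 0-99 parameter onto a named bucket for use as a feature value.
function bucketFeature(value) {
  if (value <= 33) return 'low';
  if (value <= 66) return 'medium';
  return 'high';
}
```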

During the course of development, I would lock one of these named features into place, and then refresh over and over to get a feel for its "identity." I'd tune the parameters until the feature had an identity of its own.



Neural Networks


I'd like to go into some detail about how I got a pre-trained neural network into a form that could be stored on the blockchain without being impossibly expensive.



Constructing the neural network


Since there isn't a library available on AB to deal with neural networks, I needed to implement one in JS. I created a neuron class inspired by the open-source carrot neural network library's Node class, and added methods for creating arbitrarily sized sets of neurons (layers) and sets of layers (networks), connecting the neurons of each layer to the next. The network supports activation with inputs, and back-propagation to train it.
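To give a feel for the structure, here's a simplified sketch with illustrative names -- back-propagation is omitted, and the real implementation follows carrot's Node design more closely:

```javascript
// A neuron sums its weighted inputs plus a bias and applies a sigmoid.
class Neuron {
  constructor() {
    this.bias = Math.random() * 2 - 1;
    this.incoming = [];                       // { from: Neuron, weight: number }
    this.output = 0;
  }
  activate(input) {
    if (input !== undefined) { this.output = input; return this.output; }
    const sum = this.incoming.reduce((s, c) => s + c.from.output * c.weight, this.bias);
    this.output = 1 / (1 + Math.exp(-sum));   // sigmoid activation
    return this.output;
  }
}

// Build fully-connected layers: every neuron in a layer feeds every neuron in the next.
function buildNetwork(sizes) {
  const layers = sizes.map(n => Array.from({ length: n }, () => new Neuron()));
  for (let i = 1; i < layers.length; i++) {
    for (const neuron of layers[i]) {
      for (const prev of layers[i - 1]) {
        neuron.incoming.push({ from: prev, weight: Math.random() * 2 - 1 });
      }
    }
  }
  return layers;
}
```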



Training the network

I created a new network initialized with random weights (as described in Part 1 of this article), and trained it in isolation using a test harness to simulate a large number of different possible scenarios and their desired outcomes. For this, I hand-coded a large set of training data for scenarios like "you see prey, and move towards it" or "you see a predator and move away from it" or "you see prey above and below you and a predator to the right, so you randomly move either up or down." A lot of these scenarios were my best guess at what a somewhat reasonable behavior would be. Good enough to start.


I pulled a small portion of this training data out to use as a test set, and then trained the network on the training set a couple of thousand times. Now, I had a trained neural net in memory that could be placed into the body of a crobe to, theoretically, make it act based on the training. I would need a way to copy it into my crobes, though. For this, my initial approach was to convert the NN to a JSON string to be stored in a variable. That variable could then be parsed into new copies of the network when creating each crobe.


To avoid the need to train the network from scratch every time I worked with the animation, I was able to copy this JSON string for the pre-trained network into a constant in the code, replacing the blank network instantiation and training. From here, the networks were in crobe bodies in the actual simulation. If I could give them the ability to learn from their experiences, they could continue their training in a more realistic way.
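A sketch of that serialize/copy step, reusing the illustrative buildNetwork() from the sketch above (the real export format differs):

```javascript
// Flatten the trained weights and biases to a JSON string once...
function serializeNetwork(layers) {
  return JSON.stringify(layers.map(layer =>
    layer.map(n => ({ bias: n.bias, weights: n.incoming.map(c => c.weight) }))
  ));
}

// ...then give each new crobe its own freshly parsed copy.
function loadNetwork(json, sizes) {
  const layers = buildNetwork(sizes);         // rebuild the topology
  const data = JSON.parse(json);
  layers.forEach((layer, i) => layer.forEach((n, j) => {
    n.bias = data[i][j].bias;
    data[i][j].weights.forEach((w, k) => { n.incoming[k].weight = w; });
  }));
  return layers;
}

// Stored as a constant so the simulation never has to retrain from scratch:
// const PRETRAINED_NET = '...';              // the exported JSON string
// const brain = loadNetwork(PRETRAINED_NET, [numInputs, numHidden, numOutputs]);
```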



Implementing learning in the simulation


In order for the crobes to learn, they would need to be able to train themselves sometime during the draw loop. The first question was "when is an appropriate time for them to learn/train?" It seems like the most reasonable ways to teach crobes are:


  1. Reward them when they do something good

  2. Punish them when they do something bad

The second case wasn't needed, because the "something bad" here is getting eaten. Crobes that get eaten can't learn, because they're gone. That leaves the first case, where "something good" is eating prey. So it seems like a reward should be given when prey is eaten, which happens for the winning crobe when processing the queued movement table. We'll assume the reward for the crobe can simply be learning to make the same decisions that led it to success again in the future.


To actually train the network, training data about the decisions the crobe has made and what resulted are needed. With each step, a crobe remembers a new pair of data: the inputs it saw before moving, and where it moved to as an output. This is the training data we need. But how many past steps should really be used for training? If a crobe wandered through empty space for 100 steps in random directions, and then closed in on prey on steps 101/102, should it learn all 102 pieces of data? Or how many?


In practice, I found that remembering many previous steps introduced a lot of noise during training and degraded AI performance. Experimentation led me to using the last 2 steps for the crobe, which sort of makes sense, since crobes can only see 2 spaces. So crobes learn to repeat the behaviors they used in the last steps leading up to a meal. Here, I was able to see the crobes in the simulation start to learn and change their behavior based on "real-world" experience. Really cool!
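Sketched with the same illustrative names as before (trainNetwork here stands in for whatever back-propagation step the network exposes):

```javascript
const MEMORY_LENGTH = 2;                      // crobes can only see 2 cells away

// Each step, remember the (inputs, move) pair that was just used.
function recordStep(crobe, inputs, move) {
  crobe.memory = crobe.memory || [];
  crobe.memory.push({ inputs, move });
  if (crobe.memory.length > MEMORY_LENGTH) crobe.memory.shift();
}

// When a crobe wins a meal, reinforce the decisions that led up to it.
function rewardWinner(crobe) {
  for (const { inputs, move } of crobe.memory) {
    trainNetwork(crobe.brain, inputs, move);  // assumed back-propagation call
  }
}
```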


Training, Part 2


Armed with a neural network pre-trained in the harness, and the ability to train crobes during the draw loop, I was ready to move on to the next stage of training. For this, I developed the autotrain functionality as a way to let the simulation evolve better neural nets via competition between two crobe populations with different nets (so this process is something you can do in each mint!). I also developed the ability to export the "best" crobe's net. The training process looked something like:


  1. Load a random output starting with a NN stored in a constant

  2. Set the simulation to autotrain mode

  3. Let the simulation run overnight

  4. Output the "best" NN that had evolved

  5. Copy and paste the NN in to replace the NN constant stored in code

  6. Repeat steps 1-5


This allowed me to train the crobes in "real world" scenarios for thousands of generations, across different mints with different parameters for movement and reproduction -- preventing overfitting and allowing them to learn to chase prey and flee predators in a variety of scenarios. In theory, I could put the data for this NN into the program and store it on-chain. In theory. Reality?


Preparing the pre-trained NN to go on-chain


The initial implementation for my NN was around ~52 KB, which is pretty huge (i.e., expensive). I'd have to find some way to reduce its size. Looking at the data, the bulk of the size was in the weights of each connection between neurons. Each weight looked something like -1.234203482342, and there were a ton of them (see the neural net diagram in Part 1 for an illustration of how many connections there are). This seemed like a great opportunity to get the data size down.


I used setPrecision to cut the precision of the weights down to 2 decimal places (turning the previous example into -1.23 -- much shorter). I wasn't convinced the network would still behave correctly if I simply truncated all of the weights that way, so I trained the NN under the truncated-precision conditions so it could learn to work with them. This took my size down to ~17 KB for the network data and NN code -- a huge improvement!


Given the large number of connections, there was still room for further optimization. I wrote some code to strip quotation marks from the JSON format, so that I could store it in a more condensed custom format, and code to turn that format back into JSON. Even with the additional code, there were so many quotation marks that I was able to shave the size down to ~14 KB.
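A rough sketch of those two optimizations together (function names are illustrative, and the real custom format is more involved than just stripping quotes):

```javascript
// Round every weight to 2 decimal places: -1.234203482342 -> -1.23
function setPrecision(weights, digits = 2) {
  return weights.map(w => Number(w.toFixed(digits)));
}

// Strip the quotation marks out of the serialized JSON...
function compact(json) {
  return json.replace(/"/g, '');
}

// ...and re-quote the keys before parsing, which works here because the keys
// are known identifiers and the values are numeric.
function expand(compacted) {
  return compacted.replace(/([A-Za-z_]+):/g, '"$1":');
}
```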


The setPrecision restriction was then removed from the code, so neurons can once again use full-precision weights when learning -- meaning they can become more precise when trained by viewers than they were on-chain to start. The network can re-expand to ~52 KB and take on additional precision as it learns. I think this is an interesting proof of concept for compressing a NN to live on-chain.



Advanced networks


Similar to the way the NN can expand locally with training, the "advanced network" mode allows local training/saving/loading/trading of significantly larger NNs, hundreds of KB in size. More generally, I think the idea that small on-chain programs could serve as bootstraps for extremely large off-chain datasets loaded in at runtime (perhaps even GB in size), achieving significantly expanded performance or functionality, has a lot of potential. "Advanced mode" is a variation of this where a small on-chain program can generate data sets an order of magnitude larger than the program itself, and then ingest them for use.



 


The experience of building Zoologic


Zoologic was built over the course of four months -- one month of sketching, two months of active development, and one month of testing/validation/feature development/writing this article. Surprisingly, the idea to include neural nets to control behavior came to me fairly late in the development process -- about 1 1/2 months in. I've really enjoyed working with them, and look forward to seeing what other uses I can come up with for them in on-chain generative art.


Building this project on this timeline has been an exhausting but rewarding experience. I feel like technically and aesthetically, I've been able to take generative art made with cellular automata to a level of polish I didn't reach with "mono no aware". I've learned a variety of new techniques, and built a framework well-suited for creating additional future works. I feel this is my best work to date, and very much look forward to sharing it! And to battling AIs with people...

