This article covers some of the techniques used to build Implications.
Implications #34, Art Blocks testnet
Techniques
Implications was built using the same vanilla JS "framework" I developed for building Zoologic. It handles the drawing loop, sizing for the screen, DPI settings, drawing to the canvas, framerate, menus, etc. By reusing my generic animation code, I was able to spend more of my time focusing on the art. I took this opportunity to make a couple of enhancements that will be carried forward on future projects.
Web Workers
I made use of web workers -- independent threads that can execute in parallel -- to perform some of the calculations for my animation. Once I developed a method for this, I did a lot of performance testing to see which parts of the algorithm benefited the most from parallelization. Surprisingly, not all parts did, but in the end I was able to achieve a double-digit % performance improvement using workers.
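As a rough illustration of the fan-out/fan-in pattern this enables, here's a minimal sketch -- the worker file name, message shape, and computeSlice function are hypothetical placeholders, not the project's actual code:

```js
// main.js -- split a frame's calculations across a pool of workers.
const workerCount = navigator.hardwareConcurrency || 4;
const workers = Array.from({ length: workerCount }, () => new Worker('regionWorker.js'));

function computeFrame(frame) {
  // Hand each worker one slice of the frame and gather all the results.
  return Promise.all(workers.map((worker, i) => new Promise(resolve => {
    worker.onmessage = e => resolve(e.data);
    worker.postMessage({ frame, slice: i, sliceCount: workerCount });
  })));
}

// regionWorker.js -- runs on its own thread, in parallel with the others.
self.onmessage = e => {
  const { frame, slice, sliceCount } = e.data;
  self.postMessage(computeSlice(frame, slice, sliceCount)); // computeSlice: the expensive per-slice work
};
```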
Improved Framerate Management
My existing animation code from Zoologic used a naive form of frame rate management that would simply call the draw loop at intervals based on the target frame rate (e.g., a target FPS of 10 would call the draw loop every 1000/10=100 ms). Unless the draw loop executed instantly, this meant the actual framerate was sometimes below the selected target.
This system was overhauled to measure the actual FPS being achieved and continuously adjust the draw loop's timing so that the achieved framerate stays as close to the target as possible. The result is a smoother animation that behaves more consistently across devices of varying performance. This should also help ensure that the animation continues to behave as expected as devices become more powerful.
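Here's a minimal sketch of what self-correcting frame timing can look like, assuming an existing draw() function -- this is illustrative, not the project's actual code:

```js
const targetFps = 10;
const targetInterval = 1000 / targetFps; // 100 ms per frame
let lastStart = performance.now() - targetInterval;

function loop() {
  const start = performance.now();
  const measuredInterval = start - lastStart; // actual time since the last frame
  lastStart = start;

  draw(); // assumed: the existing render function

  // Shorten (or lengthen) the next delay by however far the last frame
  // drifted from the target, so the achieved FPS converges on the target.
  const drift = measuredInterval - targetInterval;
  const delay = Math.max(0, targetInterval - (performance.now() - start) - drift);
  setTimeout(loop, delay);
}
loop();
```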
Filtering and Layering
I made liberal use of globalCompositeOperation and opacity to render multiple layered copies of each frame with different filters applied. I found this a really helpful way to add vibrance and texture. A "film grain" effect is drawn on top to give the animation some additional warmth. By using multiple canvases, I was able to have more control over the way frames were blended. As an added bonus, this allowed me to better isolate the messaging UI (seen when using interactive controls) from the animation itself.
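To illustrate the idea, here's a minimal sketch of one compositing pass across two canvases -- the specific blend modes, filter values, and drawGrain function are hypothetical choices, not the ones used in Implications:

```js
const baseCtx = baseCanvas.getContext('2d');   // assumed: two prepared canvases
const grainCtx = grainCanvas.getContext('2d');

function composite(frame) {
  // Pass 1: plain copy of the rendered frame.
  baseCtx.globalCompositeOperation = 'source-over';
  baseCtx.drawImage(frame, 0, 0);

  // Pass 2: a translucent, filtered copy layered on top for extra vibrance.
  baseCtx.globalAlpha = 0.4;
  baseCtx.globalCompositeOperation = 'lighter';
  baseCtx.filter = 'saturate(1.5) blur(1px)';
  baseCtx.drawImage(frame, 0, 0);
  baseCtx.filter = 'none';
  baseCtx.globalAlpha = 1;

  // Pass 3: film grain from its own canvas, blended over everything.
  drawGrain(grainCtx); // hypothetical: fills grainCanvas with noise
  baseCtx.globalCompositeOperation = 'overlay';
  baseCtx.drawImage(grainCanvas, 0, 0);
}
```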
Memoization
I used memoization to store the results of expensive calculations that could be reused. This includes calculating the size, position, and rotation of shapes over time -- and the pixels they contain. The calculations for circles and rotation involve trigonometry, which is expensive to evaluate across N^2 pixels in each frame, so I pre-calculated each shape's possible positions wherever I could.
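As an example of the kind of pre-calculation this enables, here's a minimal sketch using trig lookup tables, assuming angles are quantized to a fixed number of steps (the names and step count are hypothetical):

```js
const ANGLE_STEPS = 360;
const sinTable = new Float64Array(ANGLE_STEPS);
const cosTable = new Float64Array(ANGLE_STEPS);
for (let i = 0; i < ANGLE_STEPS; i++) {
  const theta = (i / ANGLE_STEPS) * 2 * Math.PI;
  sinTable[i] = Math.sin(theta);
  cosTable[i] = Math.cos(theta);
}

// Rotating a point is now two array lookups instead of two trig calls --
// a big win when applied to N^2 pixels every frame.
function rotate(x, y, step) {
  const s = sinTable[step], c = cosTable[step];
  return [x * c - y * s, x * s + y * c];
}
```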
XOR function
I made heavy use of the XOR function to calculate the outcome of several overlapping regions. It gave me the appearance I wanted, but is also really fast to calculate. The commutativity/associativity of XOR also made it possible to parallelize the calculation of different types of regions (circles, squares/triangles, sliders, sweepers) and combine the results afterwards.
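Here's a minimal sketch of the combining step, assuming each region type produces a 0/1 pixel mask as a Uint8Array (the mask representation is an assumption for illustration):

```js
// XOR any number of masks together. Order doesn't matter because XOR is
// commutative and associative, so each mask can come from a different worker.
function xorCombine(masks) {
  const out = new Uint8Array(masks[0].length);
  for (const mask of masks) {
    for (let i = 0; i < mask.length; i++) out[i] ^= mask[i];
  }
  return out;
}

// e.g. const combined = xorCombine([circleMask, squareMask, sliderMask, sweeperMask]);
```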
Curated Constraints
Finding ways to constrain parameters to an algorithm to get outputs with desirable qualities is a central consideration when making generative art. This can be challenging when you have a feature with these attributes:
a large space of possible inputs
outputs vary widely in quality (some may be good, others unusable)
outputs can be categorized/rated, but the inputs are difficult to rate
This problem comes up in a number of different systems, but chaotic systems and NP-hard problems in particular seem prone to it: anything that's difficult to find a solution for, but easy to verify one for. An example is "for a double pendulum with these measurements, where are the places you can drop it from so that the angle between the two pendulums is < 10 degrees after 60 seconds?" It's easy to measure the angle at a point in time, but the chaotic nature of a double pendulum makes this question difficult to answer.
One solution I've landed on is what I'll call, for lack of a better name, curated constraints (if you know a better name for this concept, let me know!). It relies on scaffolding -- code and data built for use during development, but not included in the final product -- to find constraints for the input space. It looks like:
Set up a feature with well defined parameters (e.g. a, b, c). These are likely a small subset of the parameters used for the overall algorithm.
Run the feature with many possible values of a/b/c and examine the outputs.
Make a "good list" and "bad list" of a/b/c values. If possible, categorize them by appearance. Here, you're capturing difficult, subjective evaluations as data
There are a variety of options for what to do with these lists. You can look for patterns in the behavior, and try to constrain your parameters to align with what you're seeing in the "good list" and avoid what you're seeing in the "bad list." If the patterns are difficult to see or codify -- or, like a double pendulum, sensitive to initial conditions -- you can use the "good list" with a random index as the source of your parameters. With a sufficient number of entries, the diversity is still high. This list of known-good parameters is a curated constraint on those parameters.
[Coming back to the double pendulum -- if you want many double pendulums that are straight at 60 seconds, the curated constraint solution might look like generating and running random double pendulums until you've found as many as you need. The starting conditions that produce these double pendulums are the "curated constraints."]
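In code, the shipped side of this approach can be as simple as indexing into the captured list. Here's a minimal sketch, where the entries and the seeded rand() function are hypothetical placeholders:

```js
// The "good list": parameter sets captured during scaffolded exploration.
const goodList = [
  { a: 0.12, b: 3, c: 'waves' },
  { a: 0.87, b: 1, c: 'rings' },
  // ...hundreds more curated entries
];

// rand() is assumed to be the project's seeded PRNG, returning [0, 1).
function pickCuratedParams(rand) {
  return goodList[Math.floor(rand() * goodList.length)];
}
```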
Difference from "hard coding"
To me, this technique differs from "hard coding" in a few ways:
Good solutions may not be immediately obvious without wandering a large space of potential solutions.
It feels less like a prescription or decided "genotype" for a feature, and more like a captured "phenotype" -- an ideal outcome that's been noticed and preserved.
The curated constraints may still take parameters and exhibit unexpected or generative behavior -- but the behavior is constrained.
It's like the difference between genetically engineering an animal to have specific traits, and selectively breeding to find animals with desired traits. The former supposes a deep understanding of genotype->phenotype mappings and chooses genotypes to produce a specific result, whereas the latter searches a space of phenotypes for acceptable ones (but might not consider or control genotypes at all).
Example - curated automata rulesets
The ruleset generator for "cyber" patterns in Implications is capable of generating a little over 250,000 possible combinations -- many of which aren't functional or appealing. I looked at several thousand randomly generated rulesets and classified them by their behavior and suitability for use. Of these, I curated ~100 rulesets that had interesting and diverse behaviors. Each output of Implications selects several of these rulesets to include in a random order, ensuring that each output has a unique character over time. This approach helps me maintain a high level of quality in the behavior of the animation.
[When searching the space of potential rulesets, I excluded known "good" or "bad" rulesets from being generated by the scaffolding, so that I wouldn't spend time reevaluating them. This seems helpful for searching a space where the possible combinations aren't astronomical, but it's hard to find a pattern to the quality of inputs.]
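A minimal sketch of that scaffolding-side search loop -- serialize(), generateRuleset(), and the rated lists are hypothetical names, not the project's actual functions:

```js
// Skip anything already rated so review time isn't wasted re-evaluating it.
const seen = new Set([...goodRulesets, ...badRulesets].map(r => serialize(r)));

function nextCandidate() {
  let ruleset;
  do {
    ruleset = generateRuleset(); // a random point in the ~250,000-combination space
  } while (seen.has(serialize(ruleset)));
  seen.add(serialize(ruleset));
  return ruleset; // present this one for manual classification
}
```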
Example - curated color palettes
The colors in Implications were developed by starting with a few dozen scaffolded "palette templates" -- functions that use several randomized parameters to generate palettes. I viewed thousands of pairs of these palettes (foreground/background), and gradually created a "good list" of palette pairs that worked well together.
The color algorithm for Implications colors each cell based on the total count of neighbors it has, plus the count of its neighbors' neighbors -- a number ranging from 0 to 72. Each of the 72 possible values is mapped to a color in the selected palette. This means that each output has 144 possible colors to work with, but also that different rulesets can have very different coloration when using the same palette, because of the different cell spacing. This made it difficult to narrow the parameter space directly, so a curated constraint was really helpful for ensuring a floor on color quality.
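Here's a minimal sketch of that counting scheme -- the grid representation and the liveNeighbors() helper (assumed to return an array of live neighbor coordinates) are assumptions for illustration:

```js
// Count a cell's live neighbors plus each of those neighbors' live neighbors.
// 8 neighbors + 8 * 8 second-order neighbors yields the 0..72 range above.
function colorIndex(grid, x, y) {
  let count = 0;
  for (const [nx, ny] of liveNeighbors(grid, x, y)) {
    count += 1 + liveNeighbors(grid, nx, ny).length;
  }
  return count;
}

function colorCell(grid, x, y, palette) {
  return palette[colorIndex(grid, x, y)]; // palette maps each count to a color
}
```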
As an interesting side note, I've noticed that videos of Implications seem to lack some of the detail of the live view. It seems like compression algorithms might struggle with the patterns? I'm not sure -- but the live view is definitely the best way to view it!
If you have questions about how something was implemented, please ask and I'll be happy to include some info about it here!