
The Fitness Function Problem

In 2011, I spent weeks staring at my screen, watching transparent polygons slowly assemble into faces. My computer was running a C++ program I had written — a genetic algorithm that tried to reconstruct portraits from nothing but 250 randomly generated, semi-transparent polygons.

The subjects: Stalin, Mao, Hitler, and — because every group project needs someone less famous — myself.

What Genetic Algorithms Actually Do

The principle is deceptively simple. You start with a population of random candidates — in my case, sets of 250 coloured, transparent polygons scattered across a canvas. Each candidate produces an image. A fitness function compares that image to the source photograph, pixel by pixel. The candidates that produce the closest match survive. The best two are “bred” together — their polygon configurations combined, with small random mutations thrown in. The next generation begins.
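The loop is compact enough to sketch in a few lines. The original was C++ and rendered actual polygons; what follows is a simplified, hypothetical Python version that evolves a small grid of grayscale pixels directly (no polygon rasteriser), but the structure — population, fitness, selection, breeding, mutation — is the same:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Stand-in for the source photograph: a small grid of grayscale pixel values.
TARGET = [random.randint(0, 255) for _ in range(64)]

def fitness(candidate):
    """Pixel-by-pixel comparison with the source image: lower is better."""
    return sum(abs(c - t) for c, t in zip(candidate, TARGET))

def breed(a, b):
    """'Breed' two candidates: each pixel inherited from one parent at random."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(candidate, rate=0.05):
    """Small random mutations: occasionally replace a pixel with a random value."""
    return [random.randint(0, 255) if random.random() < rate else p
            for p in candidate]

def evolve(pop_size=20, generations=300):
    population = [[random.randint(0, 255) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)  # selection pressure: closest match first
        best, runner_up = population[0], population[1]
        # Keep the best candidate; refill the rest by breeding the top two.
        population = [best] + [mutate(breed(best, runner_up))
                               for _ in range(pop_size - 1)]
    return min(population, key=fitness)

result = evolve()
print(fitness(result))  # the error shrinks over generations
```

No intelligence anywhere in this code — just variation and selection. Run it longer and the error keeps drifting down, which is all my machine did for weeks, at vastly larger scale.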

It’s evolution, compressed into code. No intelligence, no intent — just selection pressure, reproduction, and randomness. Over thousands of generations, the polygons drift toward something recognizable. A jawline emerges. Eye sockets darken. A face materialises from chaos.

My computer ran this process for weeks. A clever designer could have produced something similar — probably better — in an afternoon. But the point was never efficiency. The point was the process itself.

The genetic algorithm at work — from random polygons to recognisable faces.

The Provocation

Why those three faces next to mine? Because we share the same ingredients.

The 250 polygons that construct my portrait are identical in nature to those forming Hitler’s. Same primitive shapes, same transparency, same colour space. Different output. The algorithm doesn’t know who it’s reconstructing. It just optimises toward whatever the fitness function rewards.

This was my response to genetic determinism — the idea that your DNA is your destiny. We share overwhelming amounts of genetic material with each other. There is no “evil gene.” As the saying goes, civilisation is a thin veneer, easily stripped away. I was born in the right time, in the right country. I didn’t suffer because of these men. But there’s no guarantee it stays that way.

The four 4×4 fine art prints hung next to each other, identical in technique, identical in resolution — four faces built from the same building blocks. The discomfort was the point.


Mao and Stalin — each reconstructed from 250 semi-transparent polygons.

The Fitness Function Problem

Fifteen years later, the work reads differently than I intended.

Genetic algorithms are, in a very real sense, precursors to modern AI. The feedback loop I implemented in 2011 — generate output, evaluate against a target, select the best, iterate — is the same fundamental pattern that drives AI training today.

In Reinforcement Learning from Human Feedback (RLHF), the fitness function is no longer a pixel-by-pixel comparison — it’s human judgment. A model generates responses, people rate them as helpful or harmful, and the model adapts toward what the raters rewarded. The principle is identical to my polygons evolving toward a face, except the target is no longer a photograph — it’s a value judgment made by people hired and instructed by the companies building the models. In agentic coding, an LLM writes code, linters and tests assess the output, and the model adjusts. Even this essay began as a dialogue with an AI, steered by the same kind of feedback loop.

The underlying principle is the same: evolution, RLHF, code review — they’re all fitness functions applied to generated output.
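That shared structure can be made literal. Here is a hedged sketch — not any real training loop — of a generic generate-evaluate-select cycle where everything is fixed except the fitness function. The two targets (π and 10) are arbitrary stand-ins for “whatever the function’s author decided to reward”:

```python
import random

random.seed(0)  # fixed seed for reproducibility

def optimise(start, fitness, mutate, rounds=500):
    """Generate a variation, evaluate it, keep the winner, iterate.
    Only `fitness` defines what 'fit' means."""
    best = start
    for _ in range(rounds):
        challenger = mutate(best)                 # generate
        if fitness(challenger) < fitness(best):   # evaluate
            best = challenger                     # select
    return best

def nudge(x):
    """Random variation: a small perturbation of the current candidate."""
    return x + random.uniform(-1, 1)

# Identical machinery; two different value judgments baked into the fitness function.
toward_pi  = optimise(0.0, lambda x: abs(x - 3.14159), nudge)
toward_ten = optimise(0.0, lambda x: abs(x - 10.0), nudge)
```

Swap in a pixel difference and you have my polygon project; swap in a human rating and you have, in caricature, the RLHF loop. The loop never changes — only the definition of “fit” does.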

And this is where it gets uncomfortable. In my art project, I defined the fitness function. I chose whose faces the algorithm would optimise toward. The tool itself was neutral — polygons don’t care whether they’re forming a portrait of a dictator or of me. The ethics lived entirely in the question: what are we optimising for, and who decides?

Today, that question has scaled enormously — and the answer is less reassuring than ever. Large language models lose their neutrality long before any fitness function kicks in. The selection of training data alone is already a political act. What gets included, what gets excluded, whose language, whose knowledge, whose worldview makes it into the corpus — these decisions shape the model’s reality before a single parameter is tuned. By the time RLHF refines the output, the boundaries of what the model can even think have already been drawn.

And who draws them? A handful of tech billionaires who have made it abundantly clear that ethics is, at best, a PR problem for them. Men who fire safety teams when they slow down product launches. Who court authoritarian governments when it’s profitable. Who speak of “effective acceleration” while accumulating power that would make the monopolists of the Gilded Age blush. What unites them isn’t technical brilliance — it’s a striking lack of empathy, a willingness to treat people as data points in someone else’s optimisation function. Which, come to think of it, is a trait they share with the three dictators in my portraits. The co-inventor of the transistor was a Nobel Prize-winning eugenicist. IBM leased its punch card machines to the Nazis to help organise the Holocaust — from identifying Jews to managing concentration camps. The tools have always been neutral — the people wielding them, rarely.


Hitler and me — same technique, same building blocks, different output.

Same Building Blocks, Still

My 2011 project was about genes. The word “gene” is right there in “genetic algorithm.” But the real subject was always what we do with shared material — how context, power, and intent transform identical ingredients into vastly different outcomes.

We’re having the same conversation now about AI, just at a different scale. The tools are powerful. The feedback loops work. The question that remains is the one my four portraits asked silently from the gallery wall: same ingredients, different output — and who gets to define what “fit” means?

There are no good or bad genes — sorry, Sydney Sweeney — and there are no good or bad algorithms. There is only what we make of them.