After the first benchmark, one thing became hard to ignore. Elliot did not simply get better. He got better by becoming more unstable at the same time. More food. Longer lives. More chaos.

That raised a more precise question.

How much of the improvement came from structure, and how much came from something simpler: that a little randomness made it harder for Elliot to get trapped in his own weak local patterns?

That was the next test.

The Setup

I kept the base version fixed at v3, the version with five neurons per sense. Same world. Same benchmark structure. Same agent. The only thing that changed was loop-randomness.

The tested levels were 0.00, 0.05, 0.10, 0.20, and 0.35. Each level was run under the same benchmark conditions so the question could stay narrow: does added randomness actually help Elliot, and if it does, what does it cost?

That mattered, because otherwise it would have been too easy to confuse improvement with complexity.
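The sweep itself is simple enough to sketch. This is not Elliot's actual harness; `run_benchmark` and the metric names are hypothetical stand-ins, and only the noise levels are taken from the experiment above.

```python
# Hypothetical sweep harness. run_benchmark and the metric names are
# placeholders for whatever the real v3 benchmark records; the noise
# levels are the ones actually tested.
NOISE_LEVELS = [0.00, 0.05, 0.10, 0.20, 0.35]

def run_benchmark(noise):
    # Stand-in for a full benchmark run: same world, same agent,
    # same conditions, only the loop-randomness differs.
    return {"food_rate": 0.0, "lifespan": 0.0, "negative_events": 0}

results = {noise: run_benchmark(noise) for noise in NOISE_LEVELS}

for noise, metrics in results.items():
    print(f"noise={noise:.2f}  {metrics}")
```

The point of the structure is that every run shares everything except the one knob being turned, so any difference in the metrics can only come from the noise level.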

What Happened

At 0.00 noise, Elliot was clean but limited. Food rate was lower. Lives were shorter. Negative events were few. It was the tidiest version of the behavior, but not the strongest.

At 0.05 noise, food rate rose immediately, and Elliot also began surviving longer. A small amount of disturbance was enough to make a real difference.

At 0.10 noise, the pattern became clearer. Elliot found more food. He lived longer. But the cost also started to show. Pain and wall events increased.

At 0.20 noise, the same direction continued. More escape. More success. More damage.

At 0.35 noise, Elliot reached the highest food rate in the sweep. He also lived the longest. But the cost rose with him. More collisions. More negative events. More turbulence in the system.

The important part of the result is not just that noise helped.

The important part is how it helped.

What That Means

The result is not that randomness made Elliot smarter. The result is that randomness made Elliot less likely to stay trapped in weak local behavior.

That is not the same thing.

Across the tested range, noise improved both food-seeking and survival. But the improvement did not come cleanly. Elliot became more effective by becoming more costly in negative events at the same time.

That is exactly the kind of result that makes the system worth continuing. If Elliot had simply improved on every axis, this would have been less interesting. Instead the experiment exposed a tension that feels closer to something real: escape has value, but escape has a price.
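One way to picture the mechanism, as a sketch rather than Elliot's actual code: a fully deterministic policy that has settled into a weak habit repeats it forever, while a small chance of acting at random occasionally knocks the agent onto a different path. The action names and `choose_action` helper here are hypothetical.

```python
import random

# Hypothetical action set for illustration.
ACTIONS = ["forward", "left", "right"]

def choose_action(preferred, noise, rng):
    # With probability `noise`, ignore the learned preference and act
    # at random -- enough to break a repeating weak pattern.
    if rng.random() < noise:
        return rng.choice(ACTIONS)
    return preferred

rng = random.Random(0)

# A policy stuck on one weak action: at zero noise it never varies.
stuck = [choose_action("left", 0.00, rng) for _ in range(20)]

# The same policy with a little noise sometimes does something else,
# and each random action also exposes it to new consequences
# (more food found, but also more walls hit).
noisy = [choose_action("left", 0.10, rng) for _ in range(20)]
```

This is also why the cost shows up alongside the gain: every random action that breaks a bad loop is equally capable of walking the agent into a wall.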

Why It Matters

This is a small model, but it still points at something useful. Better performance is not always the same thing as cleaner design. Sometimes improvement comes from controlled disturbance inside a system that would otherwise be too brittle to break its own patterns.

That does not mean noise is intelligence.

It means some systems need disturbance in order to stop repeating their own mistakes.

That is a smaller claim. It is also a more honest one.

What Comes Next

This does not settle the question. It sharpens it.

The next step is not to celebrate randomness as a universal answer. The next step is to find where the useful range stops, when the cost becomes too high, and whether a different brain structure can produce some of the same gains with less damage.

That is the advantage of keeping Elliot small. The system is simple enough to be pushed, measured, and broken in public.

One of the first clear things Elliot has shown is this:

Randomness helped.

It just did not help for free.

— Dennis Hedegreen