Ethan Mollick said in 2023:

“Today’s AI is the worst AI you will ever use.”

He meant it as a statement about progress.
The models will improve.
Capabilities will expand.
What feels advanced now will look primitive later.

He is probably right.

But I think there is another side to that sentence.

What if today’s AI is also the least understood AI you will ever have access to?

Not because it is simple.
But because almost nobody has had enough time to learn what it already does well.

The Tool Is Ahead Of The User

Most AI discussion is about tomorrow.

What will the next model do?
What benchmark will move?
What job will change?
What new capability will arrive?

Much less attention goes to a more immediate fact:

the tool is already ahead of most of its users.

The models are here.
They write.
They code.
They search.
They translate vague intent into structure at a speed that would have looked absurd a few years ago.

And most people still use them for faster email, faster summaries, and small rewrites.

That is not really a criticism.
It is what early adoption often looks like.

A new tool appears.
People first use it to do old tasks a bit faster.
Only later do they start discovering what the tool changes at the level of workflow, structure, and possibility.

That delay matters.

Because the bottleneck is no longer only the model.

The bottleneck is learning.

We Have Seen This Before

This pattern is not new.

The printing press arrived long before the surrounding culture fully reorganized around what print made possible.

Electricity entered factories decades before factories were redesigned around it: early plants simply swapped steam engines for electric motors and kept the old layout built around a central drive shaft.

Computers existed long before most people understood that they were not just calculators, but general-purpose environments for entirely new forms of work.

The first phase of adoption is usually imitation.
A new tool gets used as a substitute for an older one.

Only later does the deeper shift happen.

Only later do people stop asking:
“How do I use this to do the same thing faster?”

And start asking:
“What becomes possible now that was not possible before?”

That is roughly where AI still is.

Most people use it as acceleration.
Far fewer use it as infrastructure.

The Gap

I do not know how much of current AI has already been seriously explored.

Maybe more than I think.
Maybe less.

But my honest impression is that the explored fraction is still small.

Not what AI might do next year.
What it can already do right now.

That impression is partly historical.
New tools usually outrun cultural understanding.

But it is also practical.

In a very short span of time, using ordinary public tools, I was able to build a research platform, publish heavily, run experiments, structure workflows, and turn scattered ideas into systems.

The important part is not me.

The important part is that none of this required special access, institutional backing, or some secret technical edge.

Same internet.
Same class of tools.
Same moment of general public access.

That should make people uncomfortable in the right way.

Because if even a small fraction of this is already possible for an ordinary person working intensively, then the real limit is clearly not only the tool.

It is what people know how to ask from it.
How deeply they understand it.
How seriously they are willing to reorganize around it.

The Real Point

Mollick is right that the models will improve.

But I think the bigger gap, at least right now, is not between this model and the next one.

It is between what current AI can already do and what most people actually use it for.

That gap is enormous.

And it is not mainly a benchmark problem.

It is a human problem.

A learning problem.
An imagination problem.
A habits problem.

People still treat AI as a chatbot to throw tasks at.

Far fewer treat it as an instrument worth studying.

That difference matters.

Because tools like this do not reveal themselves all at once.
They reveal themselves through use, pressure, iteration, and restructuring.

Not through one prompt.
Not through one demo.
Not through one headline about the next release.

Why This Matters

If AI development stopped tomorrow, the tools we already have would still take years to understand properly.

Probably longer.

That is not because progress is fake.
It is because understanding usually lags far behind invention.

That has happened before.
It is happening again.

So yes:
today’s AI may be the worst AI you will ever use.

But it may also be the least understood.

And that second point matters just as much.

Because the next real unlock may not come from the next model.

It may come from more people finally asking a better question:

What can this tool already do that I still have not learned how to use?

— Dennis Hedegreen, trying to see the structure