Field Theory and The Rise of Ambition Machines

TLDR: I made Field Theory because I felt like copying and pasting a wide variety of context into an agent's input field (logs, screenshots, docs, voice transcriptions, etc.) shouldn't be tedious. It should be easy.

LDR: below are some of the theories behind the project.


Context stacking

I believe "context stacking" is an emergent set of behaviors - voice transcription, model memory, conversation summaries, plan and debug mode, etc. - these are all forms of information stacking. When you get enough of it cobbled together and send it to a model it tends to be a more productive partner.

Before the tool-call and "think-harder" era (the present day), we had prompt engineering. I remember the Dalí-like writing of professional prompt engineers. They were part code, part emotional plea, part ASCII art. And they read more like incantations than English.

This pidgin language that prompt engineers used was short-lived because context windows got bigger, multimodal input became standard, we built better harnesses, and in general the models became pretty good at separating meaning from a dump of half-formed ideas or raw text. Along the way, hallucination seemed to give way to real, productive creativity.

But there has been a major consequence of model improvement. The machine's appetite for more context about a human's project or goal or ambition has become insatiable. The current meta for talking to SOTA models is effectively, "Tell it more and it will make you a better thing." When the output is bad, it's often the human's fault for not contributing enough context or good judgment.

So we copy and paste more stuff in. But when your job includes doing this for hours a day, you quickly realize that context management at inference time (CMIT?) is time-consuming and tedious. I think we need many more tools that make it easy to get both the right amount and the right type of information shuttled to a model on demand. This is the driving thesis behind the application, Field Theory.
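
To make the idea concrete, here is a minimal sketch of what context stacking at inference time amounts to. Everything in it is hypothetical - the ContextBlock type, the stack_context function, and the prompt format are illustrative stand-ins, not Field Theory's actual implementation:

    # A minimal, hypothetical sketch of context stacking at inference time.
    from dataclasses import dataclass

    @dataclass
    class ContextBlock:
        label: str    # e.g. "error log", "voice note", "relevant file"
        content: str  # the raw text the human chose to include

    def stack_context(task: str, blocks: list[ContextBlock]) -> str:
        """Assemble hand-picked context blocks and the task into one prompt."""
        parts = [f"--- {b.label} ---\n{b.content}" for b in blocks]
        parts.append(f"--- task ---\n{task}")
        return "\n\n".join(parts)

    prompt = stack_context(
        task="Why does the build fail on CI but not locally?",
        blocks=[
            ContextBlock("error log", "error TS2307: Cannot find module './config'"),
            ContextBlock("voice note", "It started after I renamed the config folder."),
        ],
    )

The format doesn't matter. What matters is that a human is curating which blocks go in, and that the tooling makes that curation cheap.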

For sure, there will be a great context firehose in the not-too-distant future - one that comes from a federation of technologies you wear or install or keep near you, doing autonomous, continuous context stacking (computer use, always-on microphones, personal cameras, that sort of thing). But until then, we need better, more ergonomic tools to move information around.

And also... in the process of partnering with models to do more - and more important - work, humans might find an enduring purpose.


Elephants on balance beams

Computer use isn't yet a replacement for a human's ability to select what context matters. And as a general principle, today's models do better when given specific, relevant information than when given all of the information. As an extreme example: having a model analyze a continuous video stream of a day's worth of work on a laptop is a far less efficient way to solve a problem than a human saying, "this is the error I'm getting."

A good litmus test for this is the current state of browser use. The frontend of the internet is a native environment for humans but not so much for agents. Browser use is great when you don't really care about latency. That's because asking an agent to interact with a webpage by clicking through a frontend is like putting an elephant on a balance beam. It will do it, and it's awesome to watch, but it needs some coaxing and it's kinda slow. On the other hand, ask the same agent to use a console or a terminal and it's like putting a fish in water. It moves in ways we can't.

Humans, for our part, are pretty good at pointing at things and describing what's in our head. Or explaining what matters. Or ignoring what doesn't. It turns out having a head at all, one that wants things to exist in the world, is one of the attributes that makes us a computer's perfect complement. And we are the only generation of humans that gets to witness the transition. The group that gets to see what happens when you finally combine these two things.

We have will. They have infiniteness. Can there be such a thing as infinite will?

Until now, I don't think the evidence of co-evolution between computers and humans has really looked like co-dependence (in the healthy, cooperative sense, I mean). But it's starting to. As far back as I can remember, I've never felt pride in a gadget. Delight? Or wonder? Definitely. But pride? I don't know. Like, I've never felt proud of my iPhone.

But I have felt something close to it - maybe not exactly pride, but something close - when Opus 4.5 has done genuinely good work on a genuinely hard problem. I'm invested in that model's future. I'm rooting for it.

All this to say, I think the era of extraordinary gadgets (modern life so far) is ending and giving way to a set of experiences with technology that we don't quite have suitable common language for yet.


Ambition machines

I think the transition we're experiencing is an extreme version of the line often attributed to Marshall McLuhan: "We shape our tools and thereafter our tools shape us."

Today, a single, curious human talking to a fast array of wimpy nodes can have a picture in their head and, minutes later, an object in their hand. I think an empowered human like this is often an optimistic one, and that most great achievements in human history are downstream of optimism.

What do you want to contribute to this world? If you say it plainly enough, and if you're the right kind of stubborn, so it will be. It's for this reason that I think we're at the beginning of a once-in-a-civilization transition from being tool makers to becoming ambition machines.

And maybe this is the thing we were always meant to be. Or have always been.

When you build and contribute a useful thing, the exhaust from that creative act is enthusiasm. That means using these models and their descendants to make good and productive things might legitimately be making us better at being human. And everyone, even if only for a few moments, knows what it feels like to be enthusiastic.

It's what kids feel and it's infectious and boundless and something like freedom.

Andrew
AMB-MAC

