From Prompts to Programs to Agents (Part 1)
How Early AI Use Sharpened My Instincts
In 2022, ChatGPT didn’t know the news, forgot conversations, and sometimes hallucinated, yet I still made it one of the most useful tools in my workflow. Using it early and often sharpened my instincts for how to collaborate with AI in practice, not in theory.
I’ve always loved the bleeding edge of technology. Not for novelty’s sake, not to sit on a shelf at CES, but because what others might see as futuristic, I want to put to work. My favorite moments in tech are when one architectural approach quietly eclipses another by doing the same job in a more elegant, efficient, beautiful way.
That’s why, when ChatGPT burst onto the scene in the fall of 2022, I picked it up and began testing its limits immediately! I even gave it a name: Gina.
When ChatGPT Knew Everything and Nothing
In the beginning, Gina didn’t know much about the world. In fact, she knew nothing past September 2021 — the training cutoff at the time. If you asked about current events, she would come up empty. But if you wanted to dive into history, philosophy, or technical concepts, she would happily oblige.
Debating policy without worrying about offense? Gina was there. Clarifying an obscure cuneiform text? She leaned in. Exploring concurrency and memory management across programming languages? She had plenty to say. She could even help debug code problems to an extent. She wasn’t always accurate, but she was always chatty, fun, and useful.
Looking back, I realize that period was defined by deconstruction. Gina excelled at breaking things down with me: dense academic passages, technical white papers, abstract philosophical ideas, and engineering principles. Her output fed into my thinking, and that loop helped me refine my understanding and rebuild ideas in new ways.
For example, once I fed her a translation of a Thomas Aquinas passage and asked her to turn it into a children’s story with forest creatures. Other times, I’d paste in sections from technical papers and ask her to strip them down to first principles. It was play and work at once. This kind of experimentation sharpens your grasp of both the tool and your own thinking.
Learning the Boundaries
The more I talked with Gina, the more I discovered her boundaries, and pushing against them taught me how to prompt better. Through practice, I figured out which phrasings worked best, when to be open-ended and when to be prescriptive, when to stay conversational and when to ask for structured output.
One of my earliest takeaways was that whether conversing or creating, context was key.
I realized that to get better answers, I had to provide better context. At the time, context meant including any assumptions, details, things I tried, or even pasting blocks of text from documents, web pages, or code into the chat. This was how I brought in information from after the training cutoff. It also helped me zoom in on specific problems. Providing raw material allowed Gina to deconstruct it with me, often more effectively.
Conversations themselves sometimes hit a wall. I noticed Gina would forget things we had already discussed. I’d ask, “Don’t you remember I said…?” only to realize that I had bumped into what’s now called the context window.
I learned to work around this by summarizing. I started condensing our conversations, replaying the highlights back to Gina, and sometimes even asking her to summarize for me. For the early models, it often helped to summarize and then start a fresh conversation with that recap. This forced me to clarify my own understanding. The limits pushed me to compact ideas and keep the thread tight.
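The summarize-and-restart habit above can be sketched in code. This is purely an illustration of the workaround, not anything ChatGPT itself did: the function name, the character budget, and the crude first-sentence summarizer are all my own inventions, standing in for whatever recap you (or the model) would actually write.

```python
def compact_history(turns, max_chars=1000, summarize=None):
    """Collapse older turns into a recap once the history exceeds a budget.

    `turns` is a list of strings (alternating user/assistant messages).
    `summarize` is any callable that condenses a block of text; the default
    just keeps the first sentence of each turn as a crude stand-in for a
    real summary.
    """
    if summarize is None:
        summarize = lambda text: " ".join(
            t.split(". ")[0] + "." for t in text.splitlines() if t
        )

    total = sum(len(t) for t in turns)
    if total <= max_chars:
        return turns  # everything still fits; no compaction needed

    # Keep the most recent turns verbatim, up to half the budget.
    keep, kept_chars = [], 0
    for turn in reversed(turns):
        if kept_chars + len(turn) > max_chars // 2:
            break
        keep.append(turn)
        kept_chars += len(turn)
    keep.reverse()

    # Everything older gets condensed into a single recap message that
    # seeds the fresh conversation.
    older = turns[: len(turns) - len(keep)]
    recap = "Recap of earlier conversation: " + summarize("\n".join(older))
    return [recap] + keep
```

The design mirrors the manual routine: recent exchanges survive verbatim because they carry the live thread, while older ones are replayed only as highlights, exactly the compaction the early context window forced on me by hand.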
Another limitation was hallucination. Gina sometimes delivered confident but inaccurate statements. But I never treated her as a final authority. She was a tutor, a coach, a study partner. When I needed definitive answers, I went to primary sources: the original text, the dictionary, the official documentation.
For me, hallucinations weren’t dealbreakers because I wasn’t asking for absolute truth. I was asking for perspective, structure, and possibilities. That distinction mattered.
If I wanted to know the similarities between concurrency in Java and Rust, Gina might not get every detail right, but she gave me enough scaffolding to refine my own thinking. If I wanted to reimagine Aquinas as a woodland tale, she gave me a story to work with. Thinking, creating, and problem solving all became more fun with my new unpretentious partner.
Practicing Deconstruction Developed Instinct
As I said earlier, those early days with ChatGPT were all about deconstruction. Breaking down concepts, reframing ideas, and reconstructing them into new forms. I worked through policy debates, software engineering principles, theological concepts, and more. The point wasn’t the final answer, it was the process.
What others dismissed as novelty, I was putting to serious work.
And that work left an imprint. There’s a difference between knowing about a technology and knowing it from experience. Reading about large language models gave me intellectual understanding, but using them daily rewired my instincts.
I think working with a technology in its infancy imprints something on how you approach it as it grows. Because I had used ChatGPT early and often, I developed a kind of muscle memory around it. I knew its rhythms, its quirks, the way it handled context, the way it faltered. That hands-on experience gave me a foundation I couldn’t have gotten from articles or demos alone.
It meant that as the models evolved, I evolved too. My proficiency grew not because I read more, but because I practiced more.
That’s why I think of those early months not just as the period of deconstruction, but as the time when my instincts for working with AI were formed.
From Prompts to Programs
That was the foundation: using Gina as a conversational partner to help me dismantle and rebuild ideas. Technology will always have shortcomings. Gen AI is no different. Models have context limits, they hallucinate, they miss the headlines of the day. But the measure of a tool isn’t whether it’s flawless. It’s whether you can put it to work today.
I didn’t ask Gina to be perfect. I asked her to be useful. And when she was, I extracted surprising amounts of value for a fraction of what that kind of intellectual companionship would have cost in any other form.
That was the period of deconstruction. Next came something even more exciting: when ChatGPT stopped being just a conversation partner and became a programmable collaborator.
APIs, programmatic workflows, and agents all quickly followed.
