Coding with Agents
AI agents have changed how I approach, deliver, and solve problems at a fundamental level. Over the last year, I have spent a lot of time with agentic tools and frameworks, from the early days of Copilot, to Claude Code, and now Cursor, Codex, and many others. Code, now more than ever, feels replaceable, and how I build and run applications has changed drastically.
Interfaces and Tools
I was not a regular user of Copilot when it launched, but I remember trying it a couple of times. I used to work on Infrastructure Ops (think Ansible, Jenkins, Linux, etc.) back then and didn’t write code regularly. But Copilot came in handy when I had to write a shell script or Ansible playbook. It felt magical at the time.
Claude Code
When Claude Code launched, I received a few hundred dollars in free credits to try it out. I burned through the credits in a month.
This was a completely new approach to coding. The terminal was where I lived (and still is), so having a terminal tab dedicated to answering questions, running ops, and writing new code was a game changer. I would delegate something boring or menial that I didn't want to spend time on to Claude, things like checking whether a new release of a Terraform provider was out, or upgrading dependencies in an application. I remember using it to create manifests to provision a new EKS cluster by looking at what I had already done in another repository. Basically, at a higher level, I was delegating tasks that I already knew how to do.
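For the routine checks, that delegation looked something like the sketch below. This is a rough illustration rather than my actual setup: the repo path and prompt are made up, and I'm assuming Claude Code's non-interactive print mode (`claude -p`).

```sh
# Hypothetical example: ask Claude Code for a quick, read-only check
# without opening a full interactive session.
cd ~/work/infra-live   # made-up repo path
claude -p "Check the pinned version of the hashicorp/aws provider in versions.tf and tell me whether a newer release exists. Do not modify any files."

# For longer tasks, I'd just start an interactive session in a dedicated tab:
claude
```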
It did make mistakes, which was expected. This was my first rodeo with agentic interfaces, and since it was not yet part of my core workflow, I stopped using Claude when the credits ran out and it didn't bother me in the slightest.
Then, while working on a client engagement through my employer, I got access to Claude Code, which meant cost was no longer a concern. I began to use it more liberally. It became more like an assistant, helping with documentation, bug fixes, upgrading dependencies, and writing Terraform modules. These were tasks I knew how to do, but I wasn't always sure which approach I would take to solve the problem; that was something I would figure out by talking to the agent. I also became more comfortable with delegation and often had multiple terminals running Claude sessions in parallel.
Cursor
VS Code, but with wings.
When I joined People Inc, I started working heavily on infrastructure and platform engineering, and I also got access to Cursor. I was building things in record time, deploying something new to the platform every other day: EKS, ArgoCD, Traefik, Cert Manager, External DNS, all of these rolled out quickly. I liked the planning aspect and its understanding of the codebase. I have found that the editor+agent interface is good for quickly walking through a codebase, asking focused questions about specific functions or files, and understanding how different pieces of a large codebase fit together.
One quirk I've found with Cursor is that it's not easy to see or work with it while actively running terminal commands (aws, kubectl, argo, etc.). The UI also does not feel suited for terminal-first operations.
Cursor does have issues. It consumes too much memory and slows my system down when I open multiple windows. It has gotten much better in the last two months, and I have fewer problems with it now. But I still prefer a terminal-native workflow over a code editor.
I now use the Cursor CLI more than I use the editor. It’s faster than the GUI, but it’s not as polished as the other options out there.
Codex
For me, Codex has really been a journey. I've seen the harness evolve over time; in its early days, Codex was just trying to keep up with Claude Code. I like my tools fast, and the Codex harness definitely is, but the models historically have not been. In my experience, OpenAI's models are generally more capable at long-running, focused tasks, but they have been too slow. That has not stopped me from using Codex for most of my day, almost every day.
Codex is the tool I have used the most, and it has gotten a lot better, especially in the last couple of weeks as I write this, with the new gpt-5.3-codex release and other features added to the Codex CLI. The new Codex app is really good as well.
Codex excels at gathering context both locally and remotely (through aws, kubectl, and helm commands). It is also the best at troubleshooting: I often point it at a failing deployment or an issue in AWS. It maps out dependencies well, and its review system is good at catching issues early. I typically ask Codex to look around first and, once it has context, ask it to work on a specific task.
Codex is my agent harness of choice. I have become so comfortable with it that I typically have three or more terminal tabs open with different Codex sessions, all working on different things. I keep a robust AGENTS.md and require Codex to seek approval for commands that mutate state, so I don't have to worry about it accidentally nuking something important.
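To give a rough idea, the rules I keep in AGENTS.md look something like the excerpt below. This is a simplified sketch rather than the actual file, and the specific commands listed are just illustrative examples of read-only versus state-mutating operations.

```markdown
# AGENTS.md (illustrative excerpt)

## Safety rules
- Read-only commands (kubectl get/describe, aws ... describe-*, terraform plan,
  helm list) may be run without asking.
- Ask for explicit approval before running anything that mutates state:
  kubectl apply/delete, terraform apply, helm upgrade/uninstall, or any aws
  command that creates, modifies, or deletes resources.
- Never touch production contexts or accounts unless the prompt says so.
```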
Others
Out of all of these, I like Amp the most. I like what they are doing with the advertisement model. Amp can one-shot most simple tasks, but it is the most expensive of these tools because it is not subsidized via a subscription. I use it from time to time and assign it specific, focused tasks. Over time I have learned which kinds of tasks fit within a $10 budget window and only use it on those. If the task is more complex or time-consuming, I hand it off to Codex.
As for Cline and Droid: Cline has the same problems Cursor does, and it is far less capable. It is also the slowest of the bunch, at least for the things I have tested it on. It's one of my least-used tools.
I used the Droid CLI for two weeks. It definitely had an edge over Codex and the Cursor CLI in the past, but the other two have caught up. I don't use Droid because I already pay $20 for Claude Code. I use the Cursor agent when I need models not supported by the Codex CLI.
Writing Code
Writing code was not my forte, but that has changed. Now that I don't have to dig into every single line of code, I find myself using agents to write more and more of it.
At work, I use agents to quickly bootstrap things I need. Recently, I used Codex to build a documentation site on top of Starlight that aggregates docs for all of our internal tools. It uses Agent Skills and Argo Workflows under the hood to generate documentation for our internal platform. I probably wouldn't have taken on this project if I had to write all of the code by hand, and it was implemented in a few hours.
On the personal front, I have been building tools and automations. A few weeks ago, I built F1Recap, a website for Formula 1 recaps and schedules. I have also built an invoice generator for my dad's business, and even a CLI to place orders for water cans.
The pace at which I build things, both at work and in my personal life, has increased manyfold. I have been reading various blog posts that express the idea that code is replaceable, and I think I agree. But these are just the early days, and I'm excited to see what lies ahead.
Learning New Skills
Last week, I started learning about Crossplane and used it to automate Grafana provisioning at work. Crossplane was completely new to me, and my approach to learning it has been different from how I learned tools in the past.
Previously, I would find YouTube videos and tutorials, and maybe watch a demo of how the tool works and what its capabilities are.
This time around, I asked Codex about it. It showed me examples and use cases, and it created a sample codebase for me to explore. I feel this approach is going to stick: having an agent in the terminal that can run the same commands I do feels like a game changer.
Terminal-based access to AI agents has become a fundamental part of my workflow, something I cannot imagine working without. I'm excited to see what's ahead in this space.
I did read this interesting article by Anthropic on how using agents impacts our skills. It does make me worry a little, but I have yet to think deeply about the topic.
Exciting times are ahead of us. We are witnessing a shift in the software paradigm right before our eyes. I am glad I am able to take part in it in some way, and I am excited to see what lies ahead.
What the future holds, I don't really know. I eagerly await new releases, get new ideas and try to build something, and keep refining my workflow around agents. I'm constantly learning from the many others who have shared similar experiences.