Agentic coding blows my mind
Blog: 13-Feb-2026
So my last blog entry looked at some easy newbie vibe coding. I mean, how difficult could it be to vibe code an image converter? The answer turned out to be: not very. So I thought, why not move up to the next level in my AI voyage of discovery and do a full agentic code of a website application? But what kind of website app to build, and how to start?
Then fate gave me a break. On the 2nd of February 2026, the UK government released the brand-new Fuel Finder service to provide the latest retail fuel prices and forecourt details across the UK. I first read about this on the 4th of February and thought: bingo! I had found my web app. What really appealed was that no LLM could possibly have any knowledge of this, so it would be a good test of an agent's "intelligence". I also knew that I wanted to mix it with some Ordnance Survey location code, which I had a lot of experience with, but which was also somewhat niche and hence probably below the radar for LLMs. I reckoned this would be a good test case to see how my LLM dealt with subjects it had little or no knowledge of.
So I decided to use the hot new Kimi 2.5 model (launched only a week or so ago) to build Fuelseeker.net that evening. It took maybe 4 hours of collaborative coding in Zed to get a working build. I then spent a couple more evenings fixing various minor odds and ends. But all told it was done and dusted in about 6 hours of agentic coding, plus another few hours sorting out the automated data update (more on that later). Moonshot (Kimi's parent company and creator) charged me $7.60 for the whole experience.
It's hard to explain to anyone who doesn't code just how much the latest AI tooling has kicked software developers in the nuts. I can sum it up like this: adapt or die, it's that simple. I think the worst hit will be developers who are overly smug about their craft and who have become change-resistant over time. That's a very human condition to suffer from. I know, because I've been there. I felt that when I changed career aged 43. Change is hard. Your ego takes a thumping, and you find yourself floating uncomfortably in a sea of uncertainty. Your hard-won knowledge suddenly feels like yesterday's news. But you can't turn back the clock. This stuff is here, now, and it isn't going away. My advice is to swallow your pride, get stuck in, and learn to ride the wave. After a short while you'll start to get a joyous feeling of reinvention and rejuvenation, especially once you realise that it's your craft and experience which makes agentic coding really fly.
I keep on reading on Hacker News about how agents generate garbage, blah blah stochastic parrot blah, non-deterministic blah blah, can't be trusted blah, ad nauseam. Maybe so in some fields. But software generation is an exceptionally wide field. How can a language dev, a compiler dev, an OS dev, an embedded dev, a business app dev, a fintech dev, a data science dev, a website dev or any other kind of dev have the exact same developer experience? Hence the wide disparity of opinions. Well, I can categorically state that for what I do these days (web dev and small CLI applications) agentic coding is absolutely world changing.
I reckon Fuelseeker.net would have taken me a week to build by hand to get to the point that Kimi 2.5 and I reached together in 4 hours. That's roughly a 10x reduction in dev time. Note that this is relative: I've no doubt that some speedy young hero could do it manually in way less than a week. But (a) I'm retired, (b) I'm in no rush, and (c) I'm a distinctly average developer. So let's dive into why it was so good for my use case, and how I got the best out of it.
Rule 0
Get this into your head before starting: you're supposed to be in charge of the agent, not the other way round.
A man with a plan
An initial firm hand is key. I got pretty much a one-shot result with Fuelseeker.net by really speccing out
what I wanted in detail. You can read it
on GitHub if
you're curious. The point is: tell the agent the technologies you want to use along with any limitations and design
decisions, plus anything else that's not up for discussion. You're harnessing the agent, not giving
it free rein. Many agents (including Kimi) now support an AGENTS.MD file, but I didn't want to go
there just yet. My plan was to get the site pretty much done using my own docs, then ask the agent to write out
its own AGENTS.MD as part of the final tidy-up. The reasoning here was to get a clean file for when
I re-visit the site for maintenance or to add features, possibly days or weeks after the initial context is long gone.
A future agent could then hopefully get up to speed quickly from a known-accurate and clean AGENTS.MD file.
Next step: dealing with external APIs. I knew I needed these:
- the gov.uk Fuel Finder API
- the Ordnance Survey (OS) Names API
So I downloaded the Fuel Finder docs and made local .md copies of them, including all the example code.
That's because the gov.uk site requires a login and I didn't want to risk exposing login details by asking the agent
to go to the site directly. Then I added links to these local copies in the master design doc mentioned above, along with
instructions pointing the agent at the code of one of my other sites (held locally) so it could see working demo code
for accessing the OS API.
Now I was ready to talk to the agent. I pointed it at my design doc and off it went. A few minutes later the site was mostly built and operational. I had very few issues on the first run: the agent wrote the auth code and extracted the data right off the bat. Very impressive. The rest of the site soon fell into place, with me telling Kimi what I wanted and Kimi doing it. I let Kimi do all the coding, even minor CSS changes and appearance tweaks. It's amazing how much time you can waste tweaking.
What the agent did by itself
Kimi went straight to SQLite to hold a local, fast-rendering copy of the fuel data rather than working with the live API. This was a good decision because pulling down all the UK fuel data currently takes up to a couple of minutes. It also chose Leaflet as the mapping tool rather than the OpenLayers code I had pointed it towards. Fair enough, so long as it works. The JS for the site is clean, very readable and well-commented. Purists might faint at the occasional use of innerHTML, but there's no risk of an XSS attack because there's nothing to steal, so innerHTML is fine here. All the OAuth stuff to get API tokens was done in PHP, and Kimi did a good job of hiding credentials by putting them into a .env file which the PHP code reads and loads into environment variables. Kimi made it very clear that the .env file should NOT be uploaded to GitHub, and even created an env.example file suitably annotated with instructions and warnings. Nice touch.
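For anyone who hasn't seen the pattern, here's a minimal sketch of the idea, not Kimi's actual code: parse simple KEY=VALUE lines from .env and push them into environment variables. The key name at the end is made up.

```php
<?php
// Minimal sketch of a .env loader: illustrative, not the Fuelseeker code.
function loadEnv(string $path): void
{
    if (!is_readable($path)) {
        throw new RuntimeException("Cannot read env file: $path");
    }
    foreach (file($path, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        $line = trim($line);
        // Skip blank lines, comments and anything that isn't KEY=VALUE
        if ($line === '' || str_starts_with($line, '#') || !str_contains($line, '=')) {
            continue;
        }
        [$key, $value] = array_map('trim', explode('=', $line, 2));
        putenv("$key=$value");   // make it visible via getenv()
        $_ENV[$key] = $value;
    }
}

loadEnv(__DIR__ . '/.env');
$clientSecret = getenv('FUEL_API_CLIENT_SECRET'); // illustrative key name
```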
Actually, this nearly bit me. Kimi put the .env in the same directory as the other PHP scripts. That would
be fine if I were using a typical Apache or Nginx setup to serve the site, because those are commonly configured to
follow the Unix convention for dot files, i.e. not to serve them. But I'm using Caddy these days, and during testing I discovered, to my utter
amazement, that it serves out dot files anywhere under whichever path you specify as the site fileserver root! This is
the last thing anyone familiar with the other web servers would expect. You have to explicitly hide dot files
in the Caddy config file! Bonkers! To me this is a really poor design decision in an otherwise excellent web server
(principle of least astonishment and all that). I know I'm not the first person to be caught out by this as there's
some talk about it online. But there must surely be a non-zero number of folk moving to Caddy from the other web servers,
using dot files on their sites and not realising that they are exposing them to the world.
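The fix, once you know you need it, is only a couple of lines in the Caddyfile: a named matcher for dot-file paths plus a 404 response. Here's a sketch of the sort of site block I mean; the domain, root path and PHP socket are illustrative rather than my actual config.

```
fuelseeker.net {
	# illustrative root; the matcher below is the important part
	root * /var/www/fuelseeker
	# refuse any request whose path starts with, or contains, a dot file
	@dotfiles path /.* */.*
	respond @dotfiles 404
	php_fastcgi unix//run/php/php-fpm.sock
	file_server
}
```

Rant over, back to the story...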
"But it works on my machine!"
Then the classic dev-versus-live issues came out to bite me. First off, my live server (a Hetzner cloud server in Finland) refused to update the fuel data. The exact same code worked fine on my dev box here in the UK. When asked, Kimi wrote a load of test logging code and soon found exactly where the code was failing. I hadn't yet told it that the dev and live machines were in different countries, but the failure point made it obvious that there must be some geo-blocking going on. So I told Kimi about the physical locations and my suspicions about geo-blocking, and it concurred that this was the issue. It suggested several workarounds:
- rent a cheap UK-located server just for the updates (it even included example costs)
- do the updates on my dev Mac via a cron job and transfer the SQLite db over to live
- run the update script through a VPN connection to the UK
It gave well-argued reasons for and against all of them. In the end, I opted for the VPN approach (I have a NordVPN account), so Kimi told me there was a NordVPN CLI client for my Linux server and gave me instructions on how to install it. Then it amended the update code to launch the VPN, choose a UK server, do the update, then quit the VPN. It worked on the first run. I'm not losing any sleep over sneaking around the geo-block because I'm offering a UK service with UK data for UK users. Let's face it, no-one other than people in the UK will GAF about UK fuel prices.
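The shape of the update run ended up roughly like the sketch below. This isn't the real script, just the idea: a small CLI wrapper (PHP here, to match the rest of the back end) that brings the VPN up, runs the update, and always tears it down again, even if the update fails. The nordvpn commands are the real CLI; the update script name is made up.

```php
<?php
// Sketch of the update flow, run from cron. Illustrative, not the actual script.
function run(string $cmd): void
{
    passthru($cmd, $status);
    if ($status !== 0) {
        throw new RuntimeException("Command failed ($status): $cmd");
    }
}

try {
    run('nordvpn connect United_Kingdom'); // tunnel out via a UK endpoint
    run('php update_fuel_data.php');       // hypothetical: pull the API data into SQLite
} finally {
    run('nordvpn disconnect');             // always drop the VPN, even on failure
}
```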
Big issue No 2 was the live server hanging during the two-minute SQLite update and refusing to serve any of the three sites I host there. It even kicked me off my SSH connection, so clearly there was a resource hogging issue of some kind. Kimi was amazing here, suggesting numerous work-arounds, most of which I had never heard of, both in SQLite and deep inside the Linux system settings for throttling CPU use and the like. Sadly, none of it worked—my virtual server is just too puny. But we finally agreed (after much informed discussion) to stick to doing updates once a day in the quiet period in the middle of the UK night. Then the updating only blocks the server for a few seconds, which I can live with for a hobby site.
But I just couldn't leave the problem of the server being slow during the update. More
investigation revealed that the apparent server hang happened the instant the VPN fired up, so I asked Kimi if the
VPN could have been the issue all along. Was it blocking ports or something? Maybe a network engineer would have guessed
this straight away, but I was stumped. Kimi ran a load of tests and concluded that when NordVPN connects, it automatically
adds iptables rules that drop all incoming IPv4 traffic on eth0 except for explicitly whitelisted
ports, which meant that:
- only SSH on port 22 was whitelisted
- incoming HTTP requests on ports 80 & 443 were being dropped
- and NordVPN entirely disables IPv6 routing when connected!
So no wonder my sites were down whenever the VPN was running! Kimi amended the fuel data update script to handle all of
this by adding 80 & 443 to the NordVPN whitelist and doing some sysctl magic to alter routing and
re-enable IPv6 after the VPN was closed. Now the sites all run fine during the VPN window (although
IPv6 is down for a minute or so during the update, which I can live with), so I can now update my database as often as
I like. Code problems I can eventually fix myself, but this networking stuff makes my brain ache. No worries, though, because I now
have my own 24/7 network engineer standing by to help me out!
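For the record, the final version of that flow is roughly the earlier sketch with a few extra commands wrapped around it: whitelist the web ports before connecting, and switch IPv6 back on once the VPN has gone. Again this is a sketch rather than the actual script; the sysctl calls need root, and newer versions of the NordVPN CLI call the whitelist an "allowlist".

```php
<?php
// Sketch of the firewall and IPv6 handling around the VPN run. Illustrative only.
function run(string $cmd): void
{
    passthru($cmd, $status);
    if ($status !== 0) {
        throw new RuntimeException("Command failed ($status): $cmd");
    }
}

run('nordvpn whitelist add port 80');    // keep plain HTTP reachable while the VPN is up
run('nordvpn whitelist add port 443');   // and HTTPS
try {
    run('nordvpn connect United_Kingdom');
    run('php update_fuel_data.php');     // hypothetical update script
} finally {
    run('nordvpn disconnect');
    // NordVPN disables IPv6 while connected; switch it back on afterwards (needs root).
    run('sysctl -w net.ipv6.conf.all.disable_ipv6=0');
    run('sysctl -w net.ipv6.conf.default.disable_ipv6=0');
}
```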
Conclusions
I had expected rapid coding, but I hadn't expected to also get plain old-fashioned advice. This is where a good agent shines, particularly for a solo developer. You don't just get an informed pair programmer sitting alongside you, you also get a listening ear for pretty much any issue you're wrestling with.
I think the biggest thing an agent does for me is to reduce that daunting feeling you get before kicking off a new project. An agent makes your coding wish-list suddenly seem smaller and more doable, achievable in a fraction of the time it would have taken only a year ago. You can have your fun playing with the design decisions and let the agent do the grunt work. I can no longer imagine coding without an agent. My game has changed for good.