Hacker Newsnew | past | comments | ask | show | jobs | submitlogin
Ask HN: Do you have any evidence that agentic coding works?
134 points by terabytest 16 hours ago | hide | past | favorite | 126 comments
I've been trying to get agentic coding to work, but the dissonance between what I'm seeing online and what I'm able to achieve is doing my head in.

Is there real evidence, beyond hype, that agentic coding produces net-positive results? If any of you have actually got it to work, could you share (in detail) how you did it?

By "getting it to work" I mean: * creating more value than technical debt, and * producing code that’s structurally sound enough for someone responsible for the architecture to sign off on.

Lately I’ve seen a push toward minimal or nonexistent code review, with the claim that we should move from “validating architecture” to “validating behavior.” In practice, this seems to mean: don’t look at the code; if tests and CI pass, ship it. I can’t see how this holds up long-term. My expectation is that you end up with "spaghetti" code that works on the happy path but accumulates subtle, hard-to-debug failures over time.

When I tried using Codex on my existing codebases, with or without guardrails, half of my time went into fixing the subtle mistakes it made or the duplication it introduced.

Last weekend I tried building an iOS app for pet feeding reminders from scratch. I instructed Codex to research and propose an architectural blueprint for SwiftUI first. Then, I worked with it to write a spec describing what should be implemented and how.

The first implementation pass was surprisingly good, although it had a number of bugs. Things went downhill fast, however. I spent the rest of my weekend getting Codex to make things work, fix bugs without introducing new ones, and research best practices instead of making stuff up. Although I made it record new guidelines and guardrails as I found them, things didn't improve. In the end I just gave up.

I personally can't accept shipping unreviewed code. It feels wrong. The product has to work, but the code must also be high-quality.





I use Augment with Claud Opus 4.5 every day at my job. I barely ever write code by hand anymore. I don't blindly accept the code that it writes, I iterate with it. We review code at my work. I have absolutely found a lot of benefit from my tools.

I've implemented several medium-scale projects that I anticipate would have taken 1-2 weeks manually, and took a day or so using agentic tools.

A few very concrete advantages I've found:

* I can spin up several agents in parallel and cycle between them. Reviewing the output of one while the others crank away.

* It's greatly improved my ability in languages I'm not expert in. For example, I wrote a Chrome extension which I've maintained for a decade or so. I'm quite weak in Javascript. I pointed Antigravity at it and gave it a very open-ended prompt (basically, "improve this extension") and in about five minutes in vastly improved the quality of the extension (better UI, performance, removed dependencies). The improvements may have been easy for someone expert in JS, but I'm not.

Here's the approach I follow that works pretty well:

1. Tell the agent your spec, as clearly as possible. Tell the agent to analyze the code and make a plan based on your spec. Tell the agent to not make any changes without consulting you.

2. Iterate on the plan with the agent until you think it's a good idea.

3. Have the agent implement your plan step by step. Tell the agent to pause and get your input between each step.

4. Between each step, look at what the agent did and tell it to make any corrections or modifications to the plan you notice. (I find that it helps to remind them what the overall plan is because sometimes they forget...).

5. Once the code is completed (or even between each step), I like to run a code-cleanup subagent that maintains the logic but improves style (factors out magic constants, helper functions, etc.)

This works quite well for me. Since these are text-based interfaces, I find that clarity of prose makes a big difference. Being very careful and explicit about the spec you provide to the agent is crucial.


Great advice.

> Tell the agent your spec, as clearly as possible.

I have recently added a step before that when beginning a project with Claude Code: invoke the AskUserQuestionTool and have it ask me questions about what I want to do and what approaches I prefer. It helps to clarify my thinking, and the specs it then produces are much better than if I had written them myself.

I should note, though, that I am a pure vibe coder. I don't understand any programming language well enough to identify problems in code by looking at it. When I want to check whether working code produced by Claude might still contain bugs, I have Gemini and Codex check it as well. They always find problems, which I then ask Claude to fix.

None of what I produce this way is mission-critical or for commercial use. My current hobby project, still in progress, is a Japanese-English dictionary:

https://github.com/tkgally/je-dict-1

https://www.tkgje.jp/


A principal engineer at Google posted on Twitter that Claude Code did in an hour what the team couldn’t do in a year.

Two days later, after people freaked out, context was added. The team built multiple versions in that year, each had its trade offs. All that context was given to the AI and it was able to produce a “toy” version. I can only assume it had similar trade offs.

https://xcancel.com/rakyll/status/2007659740126761033#m

My experience has been similar to yours, and I think a lot of the hype is from people like this Google engineer who play into the hype and leave out the context. This sets expectations way out of line from reality and leads to frustration and disappointment.


Humans regularly design entire Uber, google, youtube, twitter, whatsapp etc in 45 mins in system design interviews. So AI designing some toy version is meh.

You fundamentally misunderstand AI assisted coding if you think it does the work for you, or that it gets it right, or that it can be trusted to complete a job.

It is an assistant not a team mate.

If you think that getting it wrong, or bugs, or misunderstandings, or lost code, or misdirections, are AI "failing", then yes you will fail to understand or see the value.

The point is that a good AI assisted developer steers through these things and has the skill to make great software from the chaotic ingredients that AI brings to the table.

And this is why articles like this one "just don't get it", because they are expecting the AI to do their job for them and holding it to the standards of a team mate. It does not work that way.


"I bought a subscription to Claude and it didn't write a perfectly coded application for me while I watched a game of baseball. AI sucks!"

The only approach I've tried that seems to work reasonably well, and consistently, was the following:

Make a commit.

Give Claude a task that's not particularly open ended, the closer to pure "monkey work" boilerplate nonsense the task is, the better (which is also the sort of code I don't want do deal with myself).

Preferably it should be something that only touches a file or two in the codebase unless it is a trivial refactor (like changing the same method call all over the place)

Make sure it is set to planning mode and let it come up with a plan.

Review the plan.

Let it implement the plan.

If it works, great, move on to review. I've seen it one-shot some pretty annoying tasks like porting code from one platform to another.

If there are obvious mistakes (program doesn't build, tests don't pass, etc.) then a few more iterations usually fix the issue.

If there are subtle mistakes, make a branch and have it try again. If it fails, then this is beyond what it can do, abort the branch and solve the issue myself.

Review and cleanup the code it wrote, it's usually a lot messier than it needs to be. This also allows me to take ownership of the code. I now know what it does and how it works.

I don't bother giving it guidelines or guardrails or anything of the sort, it can't follow them reliably. Even something as simple as "This project uses CMake, build it like this" was repeatedly ignored as it kept trying to invoke the makefile directly and in the wrong folder.

This doesn't save me all that much time since the review and cleanup can take long, but it serves a great unblocker.

I also use it as a rubber duck that can talk back and documentation source. It's pretty good for that.

This idea of having an army of agents all working together on the codebase is hilarious to me. Replace "agents" with "juniors I hired on fiverr with anterograde amnesia" and it's about how well it goes.


TBH I think the greatest benefit is on the documentation/analysis side. The "write the code" part is fine when it sits in the envelope of things that are 100% conventional boilerplate. Like, as a frontend to ffmpeg you can get a ton of value out of LLMs. As soon as things go open-ended and design-centric, brace yourself.

I get the sense that the application of armies of agents is actually a scaled-up Lisp curse - Gas Town's entire premise is coding wizardry, the emphasis on abstract goals and values, complete with cute, impenetrable naming schemes. There's some corollary with "programs are for humans to read and computers to incidentally execute" here. Ultimately the program has to be a person addressing another person, or nature, and as such it has to evolve within the whole.


That's the way.

I have the same experience despite using claude every day. As an funny anecdote:

Someone I know wrote the code and the unit tests for a new feature with an agent. The code was subtly wrong, fine, it happens, but worse the 30 or so tests they added added 10 minutes to the test run time and they all essentially amounted to `expect(true).to.be(true)` because the LLM had worked around the code not working in the tests


There was an article on HN last week (?) which described this exact behaviour in the newer models.

Older, less "capable", models would fail to accomplish a task. Newer models would cheat, and provide a worthless but apparently functional solution.

Hopefully someone with a larger context window than myself can recall the article in question.


I think that article was basically wrong. They asked the agent not to provide any commentary, then gave an unsolvable task, and wanted the agent to state that the task was impossible. So they were basically testing which instructions the agent would refuse to follow.

Purely anecdotally, I've found agents have gotten much better at asking clarifying questions, stating that two requirements are incompatible and asking which one to change, and so on.

https://spectrum.ieee.org/ai-coding-degrades


From my experience: TDD helps here - write (or have AI write) tests first, review them as the spec, then let it implement.

But when I use Claude code, I also supervise it somewhat closely. I don't let it go wild, and if it starts to make changes to existing tests it better have a damn good reason or it gets the hose again.

The failure mode here is letting the AI manage both the implementation and the testing. May as well ask high schoolers to grade their own exams. Everyone got an A+, how surprising!


> TDD helps here - write (or have AI write) tests first, review them as the spec

I agree, although I think the problem usually comes in writing the spec in the first place. If you can write detailed enough specs the agent will usually give you exactly what you asked for. If you're spec is vague, it's hard to eyeball if the tests or even the implementation of the tests matches what you're looking for.


This happens with me every time I try to get claude to write tests. I've given up on it. Instead I will write the tests if I really care enough to have tests.

> they all essentially amounted to `expect(true).to.be(true)` because the LLM had worked around the code not working in the tests

A very human solution


I wonder if Volkswagen would've blamed AI if they got caught with Dieselgate nowadays...

In PR-lese: "To improve quality and reduce costs, we used AI to program some test code. Unfortunately the test code the AI generated fell below our standards, and it was missed during QA.".

Then again they got their supplier Bosch to program the "defeat device" and lied to them that "Oh don't worry, it's just for testing, we won't deploy it to production". (The "device" (probably just an algorithm) detects whether the steering wheel was being moved or not as the throttle is pushed, and if not, it assumes the car was undergoing emissions testing, and it runs the engine in the environmentally friendlier mode).


Learning how to drive the models is a legit skill - and I don't mean "prompt engineering". There are absolutely techniques that help and because things are moving fast there is little established practice to draw from. But it's also been interesting seeing experienced coders struggle - I've found my time as a manager has been more help to me than my time as a coder. How to keep people on task and focused etc is very similar to managing humans. I suspect much of the next 5 years will be people rediscovering existing human and project management techniques and rebranding them as AI something.

Some techniques I've found useful recently:

- If the agent struggled on something once it's done I'll ask it "you were struggling here, think about what happened and if there are is anything you learned. Put this into a learnings document and reference it in agents.md so we don't get stuck next time"

- Plans are a must. Chat to the agent back and forth to build up a common understanding of the problem you want solved. Make sure to say "ask me any follow up questions you think are necessary". This chat is often the longest part of the project - don't skimp on it. You are building the requirements and if you've ever done any dev work you understand how important having good requirements are to the success of the work. Then ask the model to write up the plan into an implementation document with steps. Review this thoroughly. Then use a new agent to start work on it. "Implement steps 1-2 of this doc". Having the work broken down into steps helps to be able to do work more pieces (new context windows). This part is the more mindless part and where you get to catch up on reading HN :)

- The GitHub Copilot chat agent is great. I don't get the TUI folks at all. The Pro+ plan is a reasonable price and can do a lot with it (Sonnet, Codex, etc all available). Being able to see the diffs as it works is helpful (but not necessary) to catch problems earlier.


+1 for generating plans and then clearing context. I typically have a skill and an agent. I use the skill to generate an initial plan for an atomic unit of work, clear context and then use the agent to review said plan. Finally clear context and use the skill to implement the plan phase by phase, ensuring to review each phase for consistency with the next phase and the overall plan. I've had moderate success with this.

I used Claude Opus 4.5 inside Cursor to write RISC-V Vector/SIMD code. Specifically Depthwise Convolution and normal Convolution layers for a CNN.

I started out by letting it write a naive C version without intrinsic, and validated it against the PyTorch version.

Then I asked it (and two other models, Gemini 3.0 and GPT 5.1) to come up with some ideas on how to make it faster using SIMD vector instructions and write those down as markdown files.

Finally, I started the agent loop by giving Cursor those three markdown files, the naive C code and some more information on how to compile the code, and also an SSH command where it can upload the program and test it.

It then tested a few different variants, ran it on the target (RISC-V SBC, OrangePI RV2) to check if it improves runtime, and then continue from there. It did this 10 times, until it arrived at the final version.

The final code is very readable, and faster than any other library or compiler that I have found so far. I think the clear guardrails (output has to match exactly the reference output from PyTorch, performance must be better than before) makes this work very well.


I am really surprised by this. While I know it can generate correct SIMD code, getting a performant version is non trivial, especially for RVV, where the instruction choices and the underlying micro architecture would significantly impact the performance.

IIRC, Depthwise is memory bound so the bar might be lower. Perhaps you can try some thing with higher compute intensity like a matrix multiply. I have observed, it trips up with the columnar accesses for SIMD.


can you share the code?

Sure, here are my own examples:

* I came up with a list of 9 performance improvement ideas for an expensive pipeline. Most of these were really boring and tedious to implement (basically a lot of special cases) and I wasn't sure which would work, so I had Claude try them all. It made prototypes that had bad code quality but tested the core ideas. One approach cut the time down by 50%, I rewrote it with better code and it's saved about $6,000/month for my company.

* My wife and I had a really complicated spreadsheet for tracking how much we owed our babysitter – it was just complex enough to not really fit into a spreadsheet easily. I vibecoded a command line tool that's made it a lot easier.

* When AWS RDS costs spiked one month, I set Claude Code to investigate and it found the reason was a misconfigured backup setting

* I'll use Claude to throw together a bunch of visualizations for some data to help me investigate

* I'll often give Claude the type signature for a function, and ask it to write the function. It generally gets this about 85% right


How did you give Clause access to AWS?

Just awscli

It does ok with using the AWS cli

>My wife and I had a really complicated spreadsheet for tracking how much we owed our babysitter – it was just complex enough to not really fit into a spreadsheet easily. I vibecoded a command line tool that's made it a lot easier.

Ok, please help me understand. Or is this more of a nanny?


Not technically a nanny, but not dissimilar. In this case, they do several types of work (house cleaning, watching 1-3 kids, daytime and overnights, taking kids out.) They are very competent – by far the best we've found in 3 years – and charge different rates for the different types of work. We also need to track mileage etc. for reimbursement.

They had a spreadsheet for tracking but I found it moderately annoying – it was taking 5-10 minutes a week, so normally I wouldn't have bothered to write a different tool, but with vibe coding it was fairly trivial.


Why is your babysitting bill so complicated?

There are several different types of work they can do, each one of which has a different hourly rate. The time of day affects the rate as well, and so can things like overtime.

It's definitely a bit of an unusual situation. It's not extremely complicated, but it was enough to be annoying.


Jesus, are you ok? Can’t you just, like, give em a 20 when you get home?

I find it quite funny you’ve invented this overly complex payment structure for your babysitter and then find it annoying. Now you’ve got a CLI tool for it.


I didn't choose the payment structure, and the point is that a CLI is not a high bar. Something that we used to spend ~10 minutes a week on with spreadsheets is now ~1 minute/week.

Why didn’t you work out a more manageable billing structure with them?! Or to put it another way: if it took you 10 minutes a week with spreadsheets to even figure out what their bill is, how on earth did they verify your invoices were even correct? And if they couldn’t—or if it took more than 10 minutes each week—why wouldn’t they prefer a billing system they could verify they were being paid correctly?

why assume the billing model is being imposed by the customer rather than the service provider?

GP has provided an anecdote with no supporting evidence, nor any code examples. So it is as fair to assume the story is a fabrication as much as it is to assume it has any truth to it

I am really shocked at the response this trivial anecdote has gotten.

I could state it much more generically: we had an annoying Excel sheet that took ~10 minutes a week, I vibe coded a command line tool that brought it down to ~1 minute a week. I don't think this is unusual or hard to believe in any way.


Yes! You should absolutely always assume a random stranger on HN is outright lying about a trivial anecdote to farm meaningless karma.

Or instigating conflict?

What...what conflict do you think I'm instigating, exactly? Whether the command line is a better interface than Excel?

Coding agent is a perfect simulation of a junior developer working under you. Developer that will tell you - “yes I can do that” about any language and any problem and will never ask you any questions trying very hard to appear competent.

Your job is to put them in constraints and give granular and clear tasks. Be aware that junior developer has very basic knowledge about architecture.

The good is that it does not simulate that part when developer tries shift blame or pin it on you. Because you’re to blame at all times.


it also doesnt simulate the part where the junior actually learns and is less clueless 6 months from now, unfortunately

When you first began learning how to program were you building and shipping apps the next day? No.

Agentic programming is a skill-set and a muscle you need to develop just like you did with coding in the past.

Things didn’t just suddenly go downhill after an arbitrary tipping point - what happened is you hit a knowledge gap in the tooling and gave up.

Reflect on what went wrong and use that knowledge next time you work with the agent.

For example, investing the time in building a strong test suite and testing strategy ahead of time which both you and the agent can rely on.

Being able to manage the agent and getting quality results on a large, complex codebase is a skill in itself, it won’t happen over night.

It takes practice and repetition with these tools to level-up, just like any thing else.


Your point is fair, but it rests on a major assumption I'd question: that the only limit lies with the user, and the tooling itself has none. What if it’s more like “you can’t squeeze blood from a stone”? That is, agentic coding may simply have no greater potential than what I've already tried. To be fair I haven't gone all the way in trying to make it work but, even if some minor workarounds exist, the full promise being hyped might not be realistically attainable.

How can one judge potential without fully understanding or having used it to its full potential?

I don’t think agentic programming is some promised land of instant code without bugs.

It’s just a force multiplier for what you can do.


Hang in there. Yes it is possible; I do it every day. I also do iOS and my current setup is: Cursor + Claude Opus 4.5.

You still need to think about how you would solve the problem as an engineer and break down the task into a right-sized chunk of work. i.e. If 4 things need to change, start with the most fundamental change which has no other dependencies.

Also it is important to manage the context window. For a new task, start a new "chat" (new agent). Stay on topic. You'll be limited to about five back-and-forths before performance starts to suffer. (cursor shows a visual indicator of this in the for of the circle/wheel icon)

For larger tasks, tap the Plan button first, and guide it to the correct architecture you are looking for. Then hit build. Review what it did. If a section of code isn't high-quality, tell Claude how to change it. If it fails, then reject the change.

It's a tool that can make you 2 - 10x more productive if you learn to use it well.


A loop I've found that works pretty well for bugs is this:

- Ask Claude to look at my current in-progress task (from Github/Jira/whatever) and repro the bug using the Chrome MCP.

- Ask it to fix it

- Review the code manually, usually it's pretty self-contained and easy to ensure it does what I want

- If I'm feeling cautious, ask it to run "manual" tests on related components (this is a huge time-saver!)

- Ask it to help me prepare the PR: This refers to instructions I put in CLAUDE.md so it gives me a branch name, commit message and PR description based on our internal processes.

- I do the commit operations, PR and stuff myself, often tweaking the messages / description.

- Clear context / start a new conversation for the next bug.

On a personal project where I'm less concerned about code quality, I'll often do the plan->implementation approach. Getting pretty in-depth about your requirements ovbiously leads to a much better plan. For fixing bugs it really helps to tell the model to check its assumptions, because that's often where it gets stuck and create new bugs while fixing others.

All in all, I think it's working for me. I'll tackle 2-3 day refactors in an afternoon. But obviously there's a learning curve and having the technical skills to know what you want will give you much better results.


> When I tried using Codex on my existing codebases, with or without guardrails, half of my time went into fixing the subtle mistakes it made or the duplication it introduced.

If you want to get good at this, when it makes subtle mistakes or duplicates code or whatever, revert the changes and update your AGENTS.md or your prompt and try again. Do that until it gets it right. That will take longer than writing it yourself. It's time invested in learning how to use these and getting a good setup in your codebase for them.

If you can't get it to get it right, you may legitimately have something it sucks at. Although as you iterate might also have some other insights into why it keeps getting it wrong and can maybe change something more substantial about your setup to make it able to get it right.

For example I have a custom xml/css UI solution that draws inspiration both from XML and SwiftUI, and it does an OK job of making UIs for it. But sometimes it gets stuck in ways it wouldn't if it was using HTML or some known (and probably higher quality/less buggy) UI library. I noticed it keeps trying things, adding redundant markup to both the xml and css, using unsupported attributes that it thinks should exist (because they do in HTML/CSS), and never cleans up on the way.

Some amount of fixing up its context made it noticeably better at this but it still gets stuck and makes a mess when it does. So I made it write a linter and now it uses the linter constantly which keeps it closer to on the rails.

Your pet feeding app isn't in this category. You can get a substantial app pretty far these days without running into a brick wall. Hitting a wall that quickly just means you're early on the learning curve. You may have needed to give it more technical guidance from the start, and have it write tests for everything, make sure it makes the app observable to itself in some way so it can see bugs itself and fix them, stuff like that.


Track Mitsuhiko's work and blog posts:

https://news.ycombinator.com/user?id=the_mitsuhiko

https://lucumr.pocoo.org/about/

He has an extensive and impressive body of work in Python and Rust pre LLMs. He's now working on his own startup and doing much of it with AI and documenting his journey. I trust his opinions even though I don't use LLMs as much as he does.


I have written a software which I wanted to do so for past 7-8 years within past 3 months. I have over 6000 pages of conversations between me and chatgpt Claude Gemini. And hoping to get patent soon. It consists of over to 260k loc, works well it is architected to support many different industries without much changes but configuration and has very good headed and headless qa coverage. I have spent about 16-18 hours a day because I am so bought into the idea and the outcome I am getting. My patent lawyer suggested getting provisional patent on my work. So for me it works

6000 pages have converted to 1 millions+ lines of product specs and granularly broken down work in phases and tasks. All tracked in the repo.

What does it do?

1. Start with a plan. Get AI to help you make it, and edit.

2. Part of the plan should be automated tests. AI can make these for you too, but you should spot check for reasonable behavior.

3. Use Claude 4.5 Opus

4. Use Git, get the AI to check in its work in meaningful chunks, on its own git branch.

5. Ask the AI to keep am append-only developer log as a markdown file, and to update it whenever its state significantly changes, or it makes a large discovery, or it is "surprised" by anything.


> Use Claude 4.5 Opus

In my org we are experimenting with agentic flows, and we've noticed that model choice matters especially for autonomy.

GPT-5.2 performed much better for long-running tasks. It stayed focused, followed instructions, and completed work more reliably.

Opus 4.5 tended to stop earlier and take shortcuts to hand control back sooner.


a ralph loop can make claude go til the end, or to a rate limit at least.

opus closes the task and ralph opens it right back up again.

i imagine there's something to the harness for that, too


Interesting! Was kinda disappointed with Codex last time I tried it ~2m ago, but things change fast.

As far as I can tell, there are exactly 3 use cases that have demonstrably worked with AI, in the sense that their stakeholders (not the AI companies, the users) swear it works.

1. training a RAG on support questions for chat or documentation, w/good material

2. people doing GTM work in marketing, for things like email automation

3. people using a combination of expensive tools - Claude + Cursor + something else (maybe n8n, maybe a custom coding service) - to make greenfield apps


I've been having good results lately/finally with Opus 4.5 in Cursor. It still isn't one-shotting my entire task, but the 90% of the way it gets me is pretty close to what I wanted, which is better than in the past. I feel more confident in telling it to change things without it making it worse. I only use it at work so I can't share anything, but I can say I write less code by hand now that it's producing something acceptable.

For sysops stuff I have found it extremely useful, once it has MCP's into all relevant services, I use it as the first place I go to ask what is happening with something specific on the backend.


For me its a major change for personal projects. That said, since about 3 months ago VS Code Github Copilot is remarkably stable in working with existing code base and I could implement changes to those projects that would have taken me a substantially longer time. So at least for this use-case its there. Hidden game changers are Gradio/Streamlit for easy UI.

My experience is the same. In short, agents cannot plan ahead, or plan at a high level. This means they have a blindspot for design. Since they cannot design properly, it limits the kind of projects that are viable to smaller scopes (not sure exactly how small but in my experience, extremely small and simple). Anything that exceeds this abstract threshold has a good chance of being a net negative, with most of the code being unmantainable, unextensible, and unreliable.

Anyone who claims AI is great is not building a large or complex enough app, and when it works for their small project, they extrapolate to all possibilities. So because their example was generated from a prompt, it's incorrectly assumed that any prompt will also work. That doesn't necessarily follow.

The reality is that programming is widely underestimated. The perception is that it's just syntax on a text file, but it's really more like a giant abstract machine with moving parts. If you don't see the giant machine with moving parts, chances are you are not going to build good software. For AI to do this, it would require strong reasoning capabilities, that lets it derive logical structures, along with long term planning and simulation of this abstract machine. I predict that if AI can do this then it will be able to do every single other job, including physical jobs as it would be able to reason within a robotic body in the physical world.

To summarize, people are underestimating programming, using their simple projects to incorrectly extrapolate to any possible prompt, and missing the hard part of programming which involves building abstract machines that work on first principles and mathematical logic.


>Anyone who claims AI is great is not building a large or complex enough app

I can't speak for everyone, but lots of us fully understand that the AI tooling has limitations and realize there's a LOT of work that can be done within those limitations. Also, those limitations are expanding, so it's good to experiment to find out where they are.

Conversely, it seems like a lot of people are saying that AI is worthless because it can't build arbitrarily large apps.

I've recently used the AI tooling to make a docusign-like service and it did a fairly good job of it, requiring about a days worth of my attention. That's not an amazingly complex app, but it's not nothing either. Ditto for a calorie tracking web app. Not the most complex app, but companies are making legit money off them, if you want a tangible measure of "worth".


Right, it has a lot of uses. As a tool it has been transformative on many levels. The question is whether it can actually multiply productivity across the board for any domain and at production level quality. I think that's what people are betting on, and it's not clear to me yet that it can. So far that level looks more like a tradeoff. You can spend time orchestrating agents, gaining some speedup at the cost of quality, or you can use it more like a tool and write things "manually" which is a lot higher quality.

> Anyone who claims AI is great is not building a large or complex enough app

That might be true for agentic coding (caveat below), but AI in the hands of expert users can be very useful - "great" - in building large and complex apps. It's just that it has to be guided and reviewed by the human expert.

As for agentic coding, it may depend on the app. For example, Steve Yegge's "beads" system is over a quarter million lines of allegedly vibe-coded Go code. But developing a CLI like that may be a sweet spot for LLMs, it doesn't have all the messiness of typical business system requirements.


Anything above a simple app and it becomes a tradeoff that needs to be carefully tuned so that you get the most out of it and it doesn't end up being a waste of time. For many use cases and domain combinations this is a net positive, but it's not yet consistent across everything.

From my experience it's better at some domains than others, and also better at certain kinds of app types. It's not nearly as universal as it's being made out to be.


> For example, Steve Yegge's "beads" system is over a quarter million lines of allegedly vibe-coded Go code. But developing a CLI like that may be a sweet spot

Is that really a success? I was just reading an article talking about how sloppy and poorly implemented it is: https://lucumr.pocoo.org/2026/1/18/agent-psychosis/

I guess it depends on what you’re looking to get out of it.


I haven't looked into it deeply, but I've seen people claiming to find it useful, which is one metric of success.

Agentic vibe coding maximalists essentially claim that code quality doesn't matter if you get the desired functionality out of it. Which is not that different from what a lot of "move fast and break things" startups also claim, about code that's written by humans under time, cost, and demand pressure. [Edit: and I've seen some very "sloppy and poorly implemented" code in those contexts, as well as outside software companies, in companies of all sizes. Not all code is artisanally handcrafted by connoisseurs such as us :]

I'm not planning to explore the bleeding edge of this at the moment, but I don't think it can be discounted entirely, and of course it's constantly improving.


> The product has to work, but the code must also be high-quality.

I understand and admire your commitment to code quality. I share similar ideals.

But it's 2026 and you're asking for evidence that agentic coding works. You're already behind. I don't think you're going to make it. Your competitors are going to outship you.

In most cases, your customers don't care about your code. They only want something that works right.


I still think it's useful, but you have to make a heavy use of the 'plan' mode. I still ask the new hires to avoid doing more than just the plan (or at most generating tests cases), so they can understand the codebase before generating new code inside.

Basically my point of view is that if you don't feel comfortable reviewing your coworkers code, you shouldn't generate code with AI, because you will review it badly and then I will have to catch the bugs and fix it (happened 24 hours ago). If you generate code, you better understand where it can generate side effects.


Yes, agentic coding works and has massive value. No, you can't just deploy code unreviewed.

Still takes much less time for me to review the plan and output than write the code myself.


> much less time for me to review the plan and output

So typing was a bottleneck for you? I’ve only found this true when I’m a novice in an area. Once I’m experienced, typing is an inconsequential amount of time. Understanding the theory of mind that composes the system is easily the largest time sink in my day to day.


I don't need to understand the theory of mind, I just tell it what to compose. Writing the actual lines after that takes longer than not writing them!

What do you mean you don’t need to understand? So what do you do when there’s a bug that an LLM can’t fix?

If your bottleneck is typing the code, you must be a junior programmer.


My main rule is never to commit code you don’t understand because it’ll get away from you.

I employ a few tricks:

1- I avoid auto-complete and always try to read what it does before committing. When it is doing something I don’t want, I course correct before it continues

2- I ask the LLM questions about the changes it is making and why. I even ask it to make me HTML schema diagrams of the changes.

3- I use my existing expertise. So I am an expert Swift developer, and I use my Swift knowledge to articulate the style of what I want to see in TypeScript, a language I have never worked in professionally.

4- I add the right testing and build infrastructure to put guardrails on its work.

5- I have an extensive library of good code for it to follow.


When you have a hammer, everything looks like a nail. Ad nauseam.

AI has made it possible for me to build several one-off personal tools in the matter of a couple of hours and has improved my non-tech life as a result. Before, I wouldn't even have considered such small projects because of the effort needed. It's been relieving not to have to even look at code, assuming you can describe your needs in a good prompt. On the other hand, I've seen vibe coded codebases with excessive layers of abstraction and performance issues that came from a possibly lax engineering culture of not doing enough design work upfront before jumping into implementation. It's a classic mistake, that is amplified by AI.

Yes, average code itself has become cheap, but good code still costs, and amazing code, well, you might still have an edge there for now, but eventually, accept that you will have to move up the abstraction stack to remain valuable when pitted against an AI.

What does this mean? Focus on core software engineering principles, design patterns, and understanding what computer is doing at a low level. Just because you're writing TypeScript doesn't mean you shouldn't know what's happening at the CPU level.

I predict the rise in AI slop cleanup consultancies, but they'll be competing with smarter AIs who will clean up after themselves.


I have had similar questions, and am still evaluating here. However, I've been increasingly frustrated with the sheer volume of anecdotal evidence from yay and naysayers of LLM-assisted coding. I have personally felt increased productivity at times with it, and frustrations at others.

In order to better research, I built (ironically, mostly vibe coded) a tool to run structured "self-experiments" on my own usage of AI. The idea is I've init a bunch of hypotheses I have around my own productivity/fulfillment/results with AI-assisted coding. The tool lets me establish those then run "blocks" where I test a particular strategy for a time period (default 2 weeks). So for example, I might have a "no AI" block followed by a "some AI" block followed by a "full agent all-in AI block".

The tool is there to make doing check-ins easier, basically a tiny CLI wrapper around journaling that stays out of my way. It also does some static analysis on commit frequency, code produced, etc. but I haven't fleshed out that part of it much and have been doing manual analysis at the end of blocks.

For me this kind of self-tracking has been more helpful than hearsay, since I can directly point to periods where it was working well and try to figure out why or what I was working on. It's not fool-proof, obviously, but for me the intentionality has helped me get clearer answers.

Whether those results translate beyond a single engineer isn't a question I'm interested in answering and feels like a variant of developer metrics-black-hole, but maybe we'll get more rigorous experiments in time.

The tool open source here (may be bugs, only been using it a few weeks): https://github.com/wellwright-labs/devex


try other harnesses than codex.

ive had more success with review tools, rather than the agent getting the code quality right the first time.

current workflow

1. specs/requirements/design, outputting tasks 2. implementation, outputting code and tests 3. run review scripts/debug loops, outputting tasks 4. implement tasks 5. go back to 3

the quality of specs, tasks, and review scripts make a big difference

one of the biggest things that gets the results better is if you can get a feedback loop in from what the app actually does back to the agent. good logs, being able to interact/take screenshots a la playwright etc

guidelines and guardrails are best if theyre tools that the agent runs, or that run automatically to give feedback.


Yep, it works. Like anything getting the most out of these tools is its own (human) skill.

With that in mind, a couple of comments - think of the coding agents as personalities with blind spots. A code review by all of them and a synthesis step is a good idea. In fact currently popular is the “rule of 5” which suggests you need the LLM to review five times, and to vary the level of review, e.g. bugs, architecture, structure, etc. Anecdotally, I find this is extremely effective.

Right now, Claude is in my opinion the best coding agent out there. With Claude code, the best harnesses are starting to automate the review / PR process a bit, but the hand holding around bugs is real.

I also really like Yegge’s beads for LLMs keeping state and track of what they’re doing — upshot, I suggest you install beads, load Claude, run ‘!bd prime’ and say “Give me a full, thorough code review for all sorts of bugs, architecture, incorrect tests, specification, usability, code bugs, plus anything else you see, and write out beads based on your findings.” Then you could have Claude (or codex) work through them. But you’ll probably find a fresh eye will save time, e.g. give Claude a try for a day.

Your ‘duplicated code’ complaint is likely an artifact of how codex interacts with your codebase - codex in particular likes to load smaller chunks of code in to do work, and sometimes it can get too little context. You can always just cat the relevant files right into the context, which can be helpful.

Finally, iOS is a tough target — I’d expect a few more bumps. The vast bulk of iOS apps are not up on GitHub, so there’s less facility in the coding models.

And any front end work doesn’t really have good native visual harnesses set up, (although Claude has the Claude chrome extension for web UIs). So there’s going to be more back and forth.

Anyway - if you’re a career engineer, I’d tell you - learn this stuff. It’s going to be how you work in very short order. If you’re a hobbyist, have a good time and do whatever you want.


I still don't get what beads needs a daemon for, or a db. After a while of using 'bd --no-daemon --no-db' I was sick of it and switched to beans and my agents seem to be able to make use of it much better, on the one hand its directly editable by them as its just markdown, on the other hand the CLI still gives them structure and makes the thing queryable

Since we are on this topic, how would I make an agent that does this job:

I am writing an automation software that interfaces with a legacy windows CAD program. Depending on the automation, I just need a picture of the part. Sometimes I need part thickness. Sometimes I need to delete parts. Etc... Its very much interacting with the CAD system and checking the CAD file or output for desired results.

I was considering something that would take screenshots and send it back for checks. Not sure what platforms can do this. I am stumped how Visual Studio works with this, there are a bunch of pieces like servers, agents, etc...

Even a how-to link would work for me. I imagine this would be extremely custom.


No joke you should ask one of the latest thinking models to plan this out with you.

What controls the legacy CAD app? Are you using AutoLISP? or VB scripting? Or something else?

I'm using VB.net with visual studio.

Depending on the risk profile of the project, it absolutely works with amazing productivity gains. And the agent of today is the worst agent if will ever be because tomorrow its going to be even better. I am finding amazing results with the ideate -> explore -> plan -> code -> test loop.

The way I see it, is that for non-trivial things you have to build your method piece by piece. Then things start to improve. It's a process of... developing a process.

Write a good AGENTS.md (or CLAUDE.md) and you'll see that code is more idiomatic. Ask it to keep a changelog. Have the LLM write a plan before starting code. Ask it to ask you questions. Write abstraction layers it (along with the fellow humans of course) can use without messing with the low-level detail every time.

In a way you have to develop a framework to guide the LLM behavior. It takes time.


If you're building something new, stick with languages/problems/projects that have plenty of analogues in the opensource world and keep your context windows small, with small changes.

One-shotting an application that is very bespoke and niche is not going to go well, and same goes for working on an existing codebase without a pile of background work on helping the model understand it piece by piece, and then restricting it to small changes in well-defined areas.

It's like teaching an intern.


This is 1/3 response to a short prompt about implementation options for GitHub Runner form broken Server to Github Enterprise Cloud: # EC2-Based GitHub Actions Self-Hosted Runners - Complete Implementation

## Architecture Overview

This solution deploys auto-scaling GitHub Actions runners on EC2 instances that can trigger your existing AWS CodeBuild pipelines. Runners are managed via Auto Scaling Groups with automatic registration and health monitoring.

## Prerequisites

- AWS CLI configured with appropriate credentials - GitHub Enterprise Cloud organization admin access - Existing CodeBuild project(s) - VPC with public/private subnets

## Solution Components

### 1. CloudFormation Template### 2. GitHub Workflow for CodeBuild Integration## Deployment Steps

### Step 1: Create GitHub Personal Access Token

1. Navigate to GitHub → Settings → Developer settings → Personal access tokens → Fine-grained tokens 2. Create token with these permissions: - *Repository permissions:* - Actions: Read and write - Metadata: Read - *Organization permissions:* - Self-hosted runners: Read and write

```bash # Store token securely export GITHUB_PAT="ghp_xxxxxxxxxxxxxxxxxxxx" export GITHUB_ORG="your-org-name" ```

### Step 2: Deploy CloudFormation Stack

```bash # Set variables export AWS_REGION=us-east-1 export STACK_NAME=github-runner-ec2 export VPC_ID=vpc-xxxxxxxx export SUBNET_IDS="subnet-xxxxxxxx,subnet-yyyyyyyy"

# Deploy stack aws cloudformation create-stack \ --stack-name $STACK_NAME \ --template-body file://github-runner-ec2-asg.yaml \ --parameters \ ParameterKey=VpcId,ParameterValue=$VPC_ID \ ParameterKey=PrivateSubnetIds,ParameterValue=\"$SUBNET_IDS\" \ ParameterKey=GitHubOrganization,ParameterValue=$GITHUB_ORG \ ParameterKey=GitHubPAT,ParameterValue=$GITHUB_PAT \ ParameterKey=InstanceType,ParameterValue=t3.medium \ ParameterKey=MinSize,ParameterValue=2 \ ParameterKey=MaxSize,ParameterValue=10 \ ParameterKey=DesiredCapacity,ParameterValue=2 \ ParameterKey=RunnerLabels,ParameterValue="self-hosted,linux,x64,ec2,aws,codebuild" \ ParameterKey=CodeBuildProjectNames,ParameterValue="" \ --capabilities CAPABILITY_NAMED_IAM \ --region $AWS_REGION

# Wait for completion (5-10 minutes) aws cloudformation wait stack-create-complete \ --stack-name $STACK_NAME \ --region $AWS_REGION

# Get stack outputs aws cloudformation describe-stacks \ --stack-name $STACK_NAME \ --query 'Stacks[0].Outputs' \ --region $AWS_REGION ```

### Step 3: Verify Runners

```bash # Check Auto Scaling Group ASG_NAME=$(aws cloudformation describe-stacks \ --stack-name $STACK_NAME \ --query 'Stacks[0].Outputs[?OutputKey==`AutoScalingGroupName`].OutputValue' \ --output text)

aws autoscaling describe-auto-scaling-groups \ --auto-scaling-group-names $ASG_NAME \ --region $AWS_REGION

# List running instances aws ec2 describe-instances \ --filters "Name=tag:aws:autoscaling:groupName,Values=$ASG_NAME" \ --query 'Reservations[].Instances[].[InstanceId,State.Name,PrivateIpAddress]' \ --output table

# Check CloudWatch logs aws logs tail /github-runner/instances --follow ```

### Step 4: Verify in GitHub

Navigate to: `https://github.com/organizations/YOUR_ORG/settings/actions/r...`

You should see your EC2 runners listed as "Idle" with labels: `self-hosted, linux, x64, ec2, aws, codebuild`

## Using One Runner for Multiple Repos & Pipelines

### Organization-Level Runners (Recommended)

EC2 runners registered at the organization level can serve all repositories automatically.

*Benefits:* - Centralized management - Cost-efficient resource sharing - Simplified scaling - Single point of monitoring

*Configuration in CloudFormation:* The template already configures organization-level runners via the UserData script: ```bash ./config.sh --url "https://github.com/${GitHubOrganization}" ... ```

### Multi-Repository Workflow Examples### Advanced: Runner Groups for Access Control### Label-Based Runner Selection Strategy

*Create different runner pools with specific labels:*

```bash # Production runners RunnerLabels: "self-hosted,linux,ec2,production,high-performance"

# Development runners RunnerLabels: "self-hosted,linux,ec2,development,general"

# Team-specific runners RunnerLabels: "self-hosted,linux,ec2,team-platform,specialized" ```

*Use in workflows:*

```yaml jobs: prod-deploy: runs-on: [self-hosted, linux, ec2, production]

  dev-test:
    runs-on: [self-hosted, linux, ec2, development]
  
  platform-build:
    runs-on: [self-hosted, linux, ec2, team-platform]
```

## Monitoring and Maintenance

### Monitor Runner Health

```bash # Check Auto Scaling Group health aws autoscaling describe-auto-scaling-groups \ --auto-scaling-group-names $ASG_NAME \ --query 'AutoScalingGroups[0].[DesiredCapacity,MinSize,MaxSize,Instances[].[InstanceId,HealthStatus,LifecycleState]]'

# View instance system logs INSTANCE_ID=$(aws autoscaling describe-auto-scaling-groups \ --auto-scaling-group-names $ASG_NAME \ --query 'AutoScalingGroups[0].Instances[0].InstanceId' \ --output text)

aws ec2 get-console-output --instance-id $INSTANCE_ID

# Check CloudWatch logs aws logs get-log-events \ --log-group-name /github-runner/instances \ --log-stream-name $INSTANCE_ID/runner \ --limit 50 ```

### Connect to Runner Instance (via SSM)

```bash # List instances aws autoscaling describe-auto-scaling-groups \ --auto-scaling-group-names $ASG_NAME \ --query 'AutoScalingGroups[0].Instances[].[InstanceId,HealthStatus]' \ --output table

# Connect via Session Manager (no SSH key needed) aws ssm start-session --target $INSTANCE_ID

# Once connected, check runner status sudo systemctl status actions.runner. sudo journalctl -u actions.runner.* -f ```

### Troubleshooting Common Issues## Advanced Scaling Configuration

### Lambda-Based Dynamic Scaling

For more sophisticated scaling based on GitHub Actions queue depth:### Deploy Scaling Lambda

```bash # Create Lambda function zip function.zip github-queue-scaler.py

aws lambda create-function \ --function-name github-runner-scaler \ --runtime python3.11 \ --role arn:aws:iam::ACCOUNT_ID:role/lambda-execution-role \ --handler github-queue-scaler.lambda_handler \ --zip-file fileb://function.zip \ --timeout 30 \ --environment Variables="{ ASG_NAME=$ASG_NAME, GITHUB_ORG=$GITHUB_ORG, GITHUB_TOKEN=$GITHUB_PAT, MAX_RUNNERS=10, MIN_RUNNERS=2 }"

# Create CloudWatch Events rule to trigger every 2 minutes aws events put-rule \ --name github-runner-scaling \ --schedule-expression 'rate(2 minutes)'

aws events put-targets \ --rule github-runner-scaling \ --targets "Id"="1","Arn"="arn:aws:lambda:REGION:ACCOUNT:function:github-runner-scaler" ```

## Cost Optimization

### 1. Use Spot Instances

Add to Launch Template in CloudFormation:

```yaml LaunchTemplateData: InstanceMarketOptions: MarketType: spot SpotOptions: MaxPrice: "0.05" # Set max price SpotInstanceType: one-time ```

### 2. Scheduled Scaling

Scale down during off-hours:

```bash # Scale down at night (9 PM) aws autoscaling put-scheduled-action \ --auto-scaling-group-name $ASG_NAME \ --scheduled-action-name scale-down-night \ --recurrence "0 21 * * " \ --desired-capacity 1

# Scale up in morning (7 AM) aws autoscaling put-scheduled-action \ --auto-scaling-group-name $ASG_NAME \ --scheduled-action-name scale-up-morning \ --recurrence "0 7 * MON-FRI" \ --desired-capacity 3 ```

### 3. Instance Type Mix

Use multiple instance types for better availability and cost:

```yaml MixedInstancesPolicy: InstancesDistribution: OnDemandBaseCapacity: 1 OnDemandPercentageAboveBaseCapacity: 25 SpotAllocationStrategy: price-capacity-optimized LaunchTemplate: Overrides: - InstanceType: t3.medium - InstanceType: t3a.medium - InstanceType: t2.medium ```

## Security Best Practices

1. *No hardcoded credentials* - Using Secrets Manager for GitHub PAT 2. *IMDSv2 enforced* - Prevents SSRF attacks 3. *Minimal IAM permissions* - Scoped to specific CodeBuild projects 4. *Private subnets* - Runners not directly accessible from internet 5. *SSM for access* - No SSH keys needed 6. *Encrypted secrets* - Secrets Manager encryption at rest 7. *CloudWatch logging* - All runner activity logged

## References

- [GitHub Self-hosted Runners Documentation](https://docs.github.com/en/actions/hosting-your-own-runners/...) - [GitHub Runner Registration API](https://docs.github.com/en/rest/actions/self-hosted-runners) - [AWS Auto Scaling Documentation](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-i...) - [AWS CodeBuild API Reference](https://docs.aws.amazon.com/codebuild/latest/APIReference/We...) - [GitHub Actions Runner Releases](https://github.com/actions/runner/releases) - [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide...)

This solution provides a production-ready, cost-effective EC2-based runner infrastructure with automatic scaling, comprehensive monitoring, and multi-repository support for triggering CodeBuild pipelines.


Care to share the pet feeder's code and what the bugs are and how it went off the rails? Seems like a perfect scenario for us to see how much is prompting skill, how much is a setup, how much is just patience for the thing, and how much is hype/lies.

> Last weekend I tried building an iOS app for pet feeding reminders from scratch.

Just start smaller. I'm not sure why people try to jump immediately to creating an entire app when they haven't even gotten any net-positive results at all yet. Just start using it for small time saving activities and then you will naturally figure out how to gradually expand the scope of what you can use it for.


I’ve heard coding agents best described as a fleet of junior developers available to you 24/7 and I think that’s about right. With the added downside that they don’t really learn as they go so they will forever be junior developers (until models get better).

There are projects where throwing a dozen junior developers at the problem can work but they’re very basic CRUD type things.


Don't use it myself. But I have a client who uses it. The bugs it creates are pretty funny. Constantly replacing parts of code with broken or completely incorrect things. Making things that previously worked broken. Deleting random things.

I think of coding agents more like "typing assistants" than programmers. If you know exactly what and how to do what you want, you can ask them to do it with clear instructions and save yourself the trouble of typing the code out.

Otherwise, they are bad.


I have a small-ish vertical SaaS that is used heavily by ~700 retail stores. I have enabled our customer success team to fix bugs using GitHub copilot. I approve the PRs, but they have fixed a surprising number of issues.

Works pretty great for me, especially Spec-driven development using OpenSpec

- Cleaner code - Easily 5x speed minimum - Better docs, designs - Focus more on the product than than the mechanics - More time for family


Really interested in your workflow using OpenSpec. How do you start off a project with it? And what does a typical code change look like?

Honestly, I only use coding agents when I feel too lazy to type lots of boilerplate code.

As in "Please write just this one for me". Even still, I take care to review each line produced. The key is making small changes at a time.

Otherwise, I type out and think about everything being done when in ‘Flow State’. I don't like the feeling of vibe coding for long periods. It completely changes the way work is done, it takes away agency.

On a bit of a tangent, I can't get in Flow State when using agents. At least not as we usually define it.


I did the same experiment as you, and this is what I learned:

https://www.linkedin.com/pulse/concrete-vibe-coding-jorge-va...

The bottom line is this:

* The developer stop been a developer, and becomes a product designer with high technical skills.

  * This is a different set of skills than than a developer or a product owner currently have. It is a mix of both, and the expectations of how agentic development works need to be adjusted.
* Agents will behave like junior developers, they can type very fast, and produce something that has a high probability to work. They priority will be to make it work, not maintainability, scalability, etc. Agents can achieve that if you detail how to produce it.

  * The working with an agent feels more like mentoring the AI than ask and receive.
* When I start to work on a product that will be vibe coded, I need to have clear in my head all the user stories, code architecture, the whole system, then I can start to tell the agent what to build, and correct and annotate in the md files the code quality decisions so it remembers them.

* Use TDD, ask the agent to create the tests, and then code to the test. Don't correct the bugs, make the agent correct them and explain why that is a bug, specially with code design decisions. Store those in AGENTS.md file at the root of the project.

There are more things that can be done to guide the agent, but I need to have clear in an articulable way the direction of the coding. On the other side, I don't worry about implementation details like how to use libraries and APIs that I am not familiar with, the agent just writes and I test.

Currently I am working on a product and I can tell you, working no more than 10 hours a week (2 hours here, 3 there, leave the agent working while I am having dinner with family) I am progressing at I would say 5 to 10 times faster than without it. So, yeah it works, but I had to adjust how I do my job.


> Scaling long-running autonomous coding https://news.ycombinator.com/item?id=46624541

This is exactly the issue I have with what I'm seeing around: lots of "here's something impressive we did" but nearly nothing in terms of how it was actually achieved in clear, reproducible detail.

I'm not sure OP is looking for evidence like this. There are many optimistic articles from people or organizations who are selling AI products, AI courses, or AI newsletters.

claude response to a query to give options for GitHub runner ... it haas generated 3 more files to review test and make it work: # EC2-Based GitHub Actions Self-Hosted Runners - Complete Implementation

## Architecture Overview

This solution deploys auto-scaling GitHub Actions runners on EC2 instances that can trigger your existing AWS CodeBuild pipelines. Runners are managed via Auto Scaling Groups with automatic registration and health monitoring.

## Prerequisites

- AWS CLI configured with appropriate credentials - GitHub Enterprise Cloud organization admin access - Existing CodeBuild project(s) - VPC with public/private subnets

## Solution Components

### 1. CloudFormation Template### 2. GitHub Workflow for CodeBuild Integration## Deployment Steps

### Step 1: Create GitHub Personal Access Token

1. Navigate to GitHub → Settings → Developer settings → Personal access tokens → Fine-grained tokens 2. Create token with these permissions: - *Repository permissions:* - Actions: Read and write - Metadata: Read - *Organization permissions:* - Self-hosted runners: Read and write

```bash # Store token securely export GITHUB_PAT="ghp_xxxxxxxxxxxxxxxxxxxx" export GITHUB_ORG="your-org-name" ```

### Step 2: Deploy CloudFormation Stack

```bash # Set variables export AWS_REGION=us-east-1 export STACK_NAME=github-runner-ec2 export VPC_ID=vpc-xxxxxxxx export SUBNET_IDS="subnet-xxxxxxxx,subnet-yyyyyyyy"

# Deploy stack aws cloudformation create-stack \ --stack-name $STACK_NAME \ --template-body file://github-runner-ec2-asg.yaml \ --parameters \ ParameterKey=VpcId,ParameterValue=$VPC_ID \ ParameterKey=PrivateSubnetIds,ParameterValue=\"$SUBNET_IDS\" \ ParameterKey=GitHubOrganization,ParameterValue=$GITHUB_ORG \ ParameterKey=GitHubPAT,ParameterValue=$GITHUB_PAT \ ParameterKey=InstanceType,ParameterValue=t3.medium \ ParameterKey=MinSize,ParameterValue=2 \ ParameterKey=MaxSize,ParameterValue=10 \ ParameterKey=DesiredCapacity,ParameterValue=2 \ ParameterKey=RunnerLabels,ParameterValue="self-hosted,linux,x64,ec2,aws,codebuild" \ ParameterKey=CodeBuildProjectNames,ParameterValue="" \ --capabilities CAPABILITY_NAMED_IAM \ --region $AWS_REGION

# Wait for completion (5-10 minutes) aws cloudformation wait stack-create-complete \ --stack-name $STACK_NAME \ --region $AWS_REGION

# Get stack outputs aws cloudformation describe-stacks \ --stack-name $STACK_NAME \ --query 'Stacks[0].Outputs' \ --region $AWS_REGION ```

### Step 3: Verify Runners

```bash # Check Auto Scaling Group ASG_NAME=$(aws cloudformation describe-stacks \ --stack-name $STACK_NAME \ --query 'Stacks[0].Outputs[?OutputKey==`AutoScalingGroupName`].OutputValue' \ --output text)

aws autoscaling describe-auto-scaling-groups \ --auto-scaling-group-names $ASG_NAME \ --region $AWS_REGION

# List running instances aws ec2 describe-instances \ --filters "Name=tag:aws:autoscaling:groupName,Values=$ASG_NAME" \ --query 'Reservations[].Instances[].[InstanceId,State.Name,PrivateIpAddress]' \ --output table

# Check CloudWatch logs aws logs tail /github-runner/instances --follow ```

### Step 4: Verify in GitHub

Navigate to: `https://github.com/organizations/YOUR_ORG/settings/actions/r...`

You should see your EC2 runners listed as "Idle" with labels: `self-hosted, linux, x64, ec2, aws, codebuild`

## Using One Runner for Multiple Repos & Pipelines

### Organization-Level Runners (Recommended)

EC2 runners registered at the organization level can serve all repositories automatically.

*Benefits:* - Centralized management - Cost-efficient resource sharing - Simplified scaling - Single point of monitoring

*Configuration in CloudFormation:* The template already configures organization-level runners via the UserData script: ```bash ./config.sh --url "https://github.com/${GitHubOrganization}" ... ```

### Multi-Repository Workflow Examples### Advanced: Runner Groups for Access Control### Label-Based Runner Selection Strategy

*Create different runner pools with specific labels:*

```bash # Production runners RunnerLabels: "self-hosted,linux,ec2,production,high-performance"

# Development runners RunnerLabels: "self-hosted,linux,ec2,development,general"

# Team-specific runners RunnerLabels: "self-hosted,linux,ec2,team-platform,specialized" ```

*Use in workflows:*

```yaml jobs: prod-deploy: runs-on: [self-hosted, linux, ec2, production]

  dev-test:
    runs-on: [self-hosted, linux, ec2, development]
  
  platform-build:
    runs-on: [self-hosted, linux, ec2, team-platform]
```

## Monitoring and Maintenance

### Monitor Runner Health

```bash # Check Auto Scaling Group health aws autoscaling describe-auto-scaling-groups \ --auto-scaling-group-names $ASG_NAME \ --query 'AutoScalingGroups[0].[DesiredCapacity,MinSize,MaxSize,Instances[].[InstanceId,HealthStatus,LifecycleState]]'

# View instance system logs INSTANCE_ID=$(aws autoscaling describe-auto-scaling-groups \ --auto-scaling-group-names $ASG_NAME \ --query 'AutoScalingGroups[0].Instances[0].InstanceId' \ --output text)

aws ec2 get-console-output --instance-id $INSTANCE_ID

# Check CloudWatch logs aws logs get-log-events \ --log-group-name /github-runner/instances \ --log-stream-name $INSTANCE_ID/runner \ --limit 50 ```

### Connect to Runner Instance (via SSM)

```bash # List instances aws autoscaling describe-auto-scaling-groups \ --auto-scaling-group-names $ASG_NAME \ --query 'AutoScalingGroups[0].Instances[].[InstanceId,HealthStatus]' \ --output table

# Connect via Session Manager (no SSH key needed) aws ssm start-session --target $INSTANCE_ID

# Once connected, check runner status sudo systemctl status actions.runner. sudo journalctl -u actions.runner.* -f ```

### Troubleshooting Common Issues## Advanced Scaling Configuration

### Lambda-Based Dynamic Scaling

For more sophisticated scaling based on GitHub Actions queue depth:### Deploy Scaling Lambda

```bash # Create Lambda function zip function.zip github-queue-scaler.py

aws lambda create-function \ --function-name github-runner-scaler \ --runtime python3.11 \ --role arn:aws:iam::ACCOUNT_ID:role/lambda-execution-role \ --handler github-queue-scaler.lambda_handler \ --zip-file fileb://function.zip \ --timeout 30 \ --environment Variables="{ ASG_NAME=$ASG_NAME, GITHUB_ORG=$GITHUB_ORG, GITHUB_TOKEN=$GITHUB_PAT, MAX_RUNNERS=10, MIN_RUNNERS=2 }"

# Create CloudWatch Events rule to trigger every 2 minutes aws events put-rule \ --name github-runner-scaling \ --schedule-expression 'rate(2 minutes)'

aws events put-targets \ --rule github-runner-scaling \ --targets "Id"="1","Arn"="arn:aws:lambda:REGION:ACCOUNT:function:github-runner-scaler" ```

## Cost Optimization

### 1. Use Spot Instances

Add to Launch Template in CloudFormation:

```yaml LaunchTemplateData: InstanceMarketOptions: MarketType: spot SpotOptions: MaxPrice: "0.05" # Set max price SpotInstanceType: one-time ```

### 2. Scheduled Scaling

Scale down during off-hours:

```bash # Scale down at night (9 PM) aws autoscaling put-scheduled-action \ --auto-scaling-group-name $ASG_NAME \ --scheduled-action-name scale-down-night \ --recurrence "0 21 * * " \ --desired-capacity 1

# Scale up in morning (7 AM) aws autoscaling put-scheduled-action \ --auto-scaling-group-name $ASG_NAME \ --scheduled-action-name scale-up-morning \ --recurrence "0 7 * MON-FRI" \ --desired-capacity 3 ```

### 3. Instance Type Mix

Use multiple instance types for better availability and cost:

```yaml MixedInstancesPolicy: InstancesDistribution: OnDemandBaseCapacity: 1 OnDemandPercentageAboveBaseCapacity: 25 SpotAllocationStrategy: price-capacity-optimized LaunchTemplate: Overrides: - InstanceType: t3.medium - InstanceType: t3a.medium - InstanceType: t2.medium ```

## Security Best Practices

1. *No hardcoded credentials* - Using Secrets Manager for GitHub PAT 2. *IMDSv2 enforced* - Prevents SSRF attacks 3. *Minimal IAM permissions* - Scoped to specific CodeBuild projects 4. *Private subnets* - Runners not directly accessible from internet 5. *SSM for access* - No SSH keys needed 6. *Encrypted secrets* - Secrets Manager encryption at rest 7. *CloudWatch logging* - All runner activity logged

## References

- [GitHub Self-hosted Runners Documentation](https://docs.github.com/en/actions/hosting-your-own-runners/...) - [GitHub Runner Registration API](https://docs.github.com/en/rest/actions/self-hosted-runners) - [AWS Auto Scaling Documentation](https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-i...) - [AWS CodeBuild API Reference](https://docs.aws.amazon.com/codebuild/latest/APIReference/We...) - [GitHub Actions Runner Releases](https://github.com/actions/runner/releases) - [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide...)

This solution provides a production-ready, cost-effective EC2-based runner infrastructure with automatic scaling, comprehensive monitoring, and multi-repository support for triggering CodeBuild pipelines.


I am in the same boat as you.

The only positive antigenic coding experience I had was using it as a "translator" from some old unmaintained shell + C code to Go.

I gave it the old code, told it to translate to Go. I pre-installed a compiled C binary and told it to validate its work using interop tests.

It took about four hours of what the vibecoding lovers call "prompt engineering" but at the end I have to admit it did give me a pretty decent "translation".

However for everything else I have tried (and yes, vibecoders, "tried" means very tightly defined tasks) all I have ever got is over-engineered vibecoding slop.

The worst part of of it is that because the typical cut-off window is anywhere between 6–18 months prior, you get slop that is full of deprecated code because there is almost always a newer/more efficient way to do things. Even in languages like Go. The difference between an AI-slop answer for Go 1.20 and a human coded Go 1.24/1.25 one can be substantial.


You are asking two very different questions here.

i.e. You are asking a question about whether using agents to write code is net-positive, and then you go on about not reviewing the code agents produce.

I suspect agents are often net-positive AND one has to review their code. Just like most people's code.


It seems that people feel code review is a cost, but time spent writing code is not a cost because it feels productive.

I don't think that's quite it - review is a recurring cost which you pay on every new PR, whereas writing code is a cost you pay once.

If you are continually accumulating technical debt due to an over-enthusiastic junior developer (or agent) churning out a lot of poorly-conceived code, then the recurring costs will sink you in the long run


"review is a recurring cost which you pay on every new PR, whereas writing code is a cost you pay once."

Huh ? Every new PR is new code which is a new cost ?


> Every new PR is new code which is a new cost ?

Every new PR interacts with existing code, and the complexity of those interactions increases steadily over time


Treat it as a pair programmer. Ask it questions like "How do I?", "When I do X, Y happens, why is that?", "I think Z, prove me wrong" or "I want to do P, how do you think we should do it?"

Feed it little tasks (30 s-5 min) and if you don't like this or that about the code it gives you either tell it something like

   Rewrite the selection so it uses const, ? and :
or edit something yourself and say

   I edited what you wrote to make it my own,  what do you think about my changes?
If you want to use it as a junior dev who gets sent off to do tickets and comes back with a patch three days later that will fail code review be my guest, but I greatly enjoy working with a tight feedback loop.

Yes, constantly.

I don’t know what I do differently, but I can get Cursor to do exactly what I want all the time.

Maybe it’s because it takes more time and effort, and I don’t connect to GitHub or actual databases, nor do I allow it to run terminal commands 99% of the time.

I have instructions for it to write up readme files of everything I need to know about what it has done. I’ve provided instructions and created an allow list of commands so it creates local backups of files before it touches them, and I always proceed through a plan process for any task that is slightly more complicated, followed by plan cleanup, and execution. I’m super specific about my tech stack and coding expectations too. Tests can be hard to prompt, I’ll sometimes just write those up by hand.

Also, I’ve never had to pay over my $60 a month pro plan price tag. I can’t figure out how others are even doing this.

At any rate, I think the problem appears to be the blind commands of “make this thing, make it good, no bugs” and “this broke. Fix!” I kid you not, I see this all the time with devs. Not at all saying this is what you do, just saying it’s out there.

And “high quality code” doesn’t actually mean anything. You have to define what that means to you. Good code to me may be slop to you, but who knows unless it is defined.


I haven't (yet) tried Claude but have good experiences with Codex CLI the last few weeks.

Previously I tried to use Aider and openAI about 6 or 7 months ago and it was terrible mess. I went back to pasting snippets in the browser chat window until a few weeks ago and thought agents were mostly hype (was wrong).

I keep a browser chat window open to talk about the project at a higher level. I'll post command line output like `ls` and `cat` to the higher level chat and use Codex strictly for coding. I haven't tried to one shot anything. I just give it a smallish piece of work at a time and check as it goes in a separate terminal window. I make the commits and delete files (if needed) and anything administrative. I don't have any special agent instructions. I do give Codex good hints on where to look or how to handle things.

It's probably a bit slower than what some people are doing but it's still very fast and so far has worked well. I'm a bit cautious because of my previous experience with Aider which was like roller skating drunk while juggling open straight razors and which did nothing but make a huge mess (to be fair I didn't spend much time trying to tame it).

I'm not sold on Codex or openAI compared to other models and will likely try other agents later, but so far it's been good.


> The product has to work, but the code must also be high-quality.

I think in most cases the speed at which AI can produce code outweighs technical debt, etc.


But the thing with debt is that it has to be paid eventually.

project also have to be paid off financially. We have been there before - startup used to go fast and break things so that once MVP is validated they slow down and fix things or even rewrite to new tech/architecture. No you can validate idea even faster with AI. And probably there is a lot of code that you write for one time or throw away internal tools etc.

not if you get acquired

Is your argument that it's now someone else's problem? That it must be paid, just by someone else? Thanks, I hate it.

You will probably be able to just keep throwing AI at it in the coming years, as memory systems improve, if not already.

You need to perturb the token distribution by overlaying multiple passes. Any strategy that does this would work.

Googler opinions are my own.

If agentic coding worked as well as people claimed on large codebases I would be seeing a massive shift at my Job... Im really not seeing it.

We have access to pretty much all the latest and greatest internally at no cost and it still seems the majority of code is still written and reviewed by people.

AI assisted coding has been a huge help to everyone but straight up agentic coding seems like it does not scale to these very large codebases. You need to keep it on the rails ALL THE TIME.


Yup, same experience here at a much smaller company. Despite management pushing AI coding really hard for at least 6 months and having unlimited access to every popular model and tool, most code still seems to be produced and reviewed by humans.

I still mostly write my own code and I’ve seen our claude code usage and me just asking it questions and generating occasional boilerplate and one-off scripts puts me in the top quartile of users. There are some people who are all in and have it write everything for them but it doesn’t seem like there’s any evidence they’re more productive.


as a second annectdote, at amazon last summer things swapped from nobody using llms to almost everyone using them in ~2months after a fantastic tech talk and a bunch of agent scripts being put together

said scripts are kinda available in kiro now, see https://github.com/ghuntley/amazon-kiro.kiro-agent-source-co... - specifically the specs, requirements, design, and exec tasks scripts

that plus serena mcp to replace all of gemini cli's agent tools actually gets it to work pretty well.

maybe google's choice of a super monorepo is worse for agentic dev than amazon's billions of tiny highly patterned packages?


I would think this is reasonable. My general understanding at Amazon is that things are expected to work via API boundaries (not quite the case at Google).

Do not blame the tools? Given a clear description (overall design, various methods to add, inputs, outputs), Google Antigravity often writes better zero shot code than an average human engineer - consistent checks for special cases, local optimizations, extensive comments, thorough text coverage. Now in terms of reviews, the real focus is reviewing your own code no matter which tools you used to write it, vi or agentic AI IDE, not someone else reviewing your code. The later is a safety/mentorship tool in the best circumstances and all too often just an excuse for senior architects to assert their dominance and justify their own existence at the expense of causing unnecessary stress and delaying getting things shipped.

Now in terms of using AI, the key is to view yourself as a technical lead, not a people manager. You don't stop coding completely or treat underlying frameworks as a black box, you just do less of it. But at some point fixing a bug yourself is faster than writing a page of text explaining exactly how you want it fixed. Although when you don't know the programming language, giving pseudocode or sample code in another language can be super handy.


It works in the sense that there are lots of professional (as in they earn money from software engineering) developers out there who do the work of exactly same quality. I would even bet they are the majority (or at least were prior to late 2024).

Claude Cowork was apparently completely written by Claude Code. So this appears to yet again be a skill issue.

> apparently completely written by Claude Code

https://www.promptarmor.com/resources/claude-cowork-exfiltra...

> Claude Cowork Exfiltrates files

That explains it


1. give it toy assignment which is a simplified subcomponent of your actual task

2. wait

3. post on LinkedIn about how amazing AI now is

4. throw away the slop and write proper code

5. go home, to repeat this again tomorrow


I’m honestly kind of amazed that more people aren’t seeing the value, because my experience has been almost the opposite of what you’re describing.

I agree with a lot of your instincts. Shipping unreviewed code is wrong. “Validate behavior not architecture” as a blanket rule is reckless. Tests passing is not the same thing as having a system you can reason about six months later. On that we’re aligned.

Where I diverge is the conclusion that agentic coding doesn’t produce net-positive results. For me it very clearly does, but perhaps it's very situation or condition dependent?

For me, I don’t treat the agent as a junior engineer I can hand work to and walk away from. I treat it more like an extremely fast, extremely literal staff member who will happily do exactly what you asked, including the wrong thing, unless you actively steer it. I sit there and watch it work (usually have 2-3 agents working at the same time, ideally on different codebases but sometimes they overlap). I interrupt it. I redirect it. I tell it when it is about to do something dumb. I almost never write code anymore, but I am constantly making architectural calls.

Second, tooling and context quality matter enormously. I’m using Claude Code. The MCP tools I have installed make a huge different: laravel-boost, context7, and figma (which in particular feels borderline magical at converting designs into code!).

I often have to tell the agent to visit GitHub READMEs and official docs instead of letting it hallucinate “best practices”, the agent will oftentimes guess and get stack, so if it's doing that, you’ve already lost.

Third, I wonder if perhaps starting from scratch is actually harder than migrating something real. Right now I’m migrating a backend from Java to Laravel and rebuilding native apps into KMP and Compose Multiplatform. So the domain and data is real and I can validate against a previous (if buggy) implimentation). In that environment, the agent is phenomenal. It understands patterns, ports logic faithfully, flags inconsistencies, and does a frankly ridiculous amount of correct work per hour.

Does it make mistakes? Of course. But they’re few and far between, and they’re usually obvious at the architectural or semantic level, not subtle landmines buried in the code. When something is wrong, it’s wrong in a way that’s easy to spot if you’re paying attention.

That’s the part I think gets missed. If you ask the agent to design, implement, review, and validate itself, then yes, you’re going to get spaghetti with a test suite that lies to you. If instead you keep architecture and taste firmly in human hands and use the agent as an execution engine, the leverage is enormous.

My strong suspicion is that a lot of the negative experiences come from a mismatch between expectations and operating model. If you expect the agent to be autonomous, it will disappoint you. If you expect it to be an amplifier for someone who already knows what “good” looks like, it’s transformative.

So while I guess plenty of hype exists, for me at least, they hype is justified. I’m shipping way (WAY!) more, with better consistency, and with less cognitive exhaustion than ever before in my 20+ years of doing dev work.


lol no. it is all fomo clown marketing. they make outlandish claims and all fall short of producing anything more than noise.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: