Hacker News

Thanks for sharing this — I appreciate your motivation in the README.

One suggestion, which I have been trying to do myself, is to include a PROMPTS.md file. Since your purpose is sharing and educating, it helps others see what approaches an experienced developer is using, even if you are just figuring it out.

One can use a Claude hook to maintain this deterministically. I instruct in AGENTS.md that they can read but not write it. It’s also been helpful for jumping between LLMs, to give them some background on what you’ve been doing.


In this case, instead of a prompt I wrote a specification, but later I had to steer the models for hours. So the prompt is really the sum of all those interactions: incredibly hard to reconstruct into something meaningful.

This steering is the main "source code" of the program that you wrote, isn't it? Why throw it away? It's like deleting the .c file once you have obtained the .exe.

It's more noise than signal because it's disorganized, and hard to glean value from it (speaking from experience).

I wasn’t exactly suggesting this. The source code (including SVG or DOCX or HTML+JS for document work) is the primary ground truth which the LLM modifies. Humans might modify it too. This ground truth is then rendered (compiled, visualized) into the end product.

The PROMPTS.md is communication metadata. Indeed, if you fed the same series of prompts freshly, the resultant ground truths might not make sense because of the stochastic nature of LLMs.

Maybe “ground truth” isn’t exactly the right word, but it is the consistent, settled basis that formed from past work and will evolve with future work.


> because of the stochastic nature of LLMs.

But is this "stochastic nature" inherent to the LLM? Can't you make the outputs deterministic by specifying a version of the weights and a seed for the random number generator?

Your vibe coding log (i.e. your source code) may start like this:

    fix weights as of 18-1-2026
    set rng seed to 42

    write a program that prints hello world
Notice that the first two lines may be added automatically by the system and you don't need to write or even see them.
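As a toy illustration of the seed argument (plain Python's `random` module, not an LLM): giving the sampler a private RNG with a fixed seed makes the sampled sequence reproducible across runs.

```python
import random

def sample_tokens(seed, vocab=("foo", "bar", "baz"), n=5):
    """Sample n tokens from a tiny vocabulary with a private, seeded RNG."""
    rng = random.Random(seed)  # same seed -> same stream of choices
    return [rng.choice(vocab) for _ in range(n)]

# Replaying with the same seed reproduces the run exactly.
assert sample_tokens(42) == sample_tokens(42)
```

Whether this carries over to a full LLM serving stack is exactly the question raised below: a fixed seed pins the sampling, but it does not by itself pin the numerics.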

> But is this "stochastic nature" inherent to the LLM?

At any kind of reasonable scale, yes. CUDA accelerators, like most distributed systems, are nondeterministic even at zero temperature (which you don't want) with a fixed seed.


I see what you are saying, and perhaps we are zeroing in on the importance of ground truths (even if it is not code but rather PLANs or other docs).

For what you're saying to work, the LLM must adhere consistently to that initial prompt. Different LLMs, and the same LLM across runs, might adhere differently, and how does it evolve from there? That is, at playback of prompt #33, will the ground truth be the same, and the next result the same as in the first attempt?

If this is a local LLM and we control all the context, then we can control that LLM's seeds and thus get consistent output. So I think your idea would work well there.

I've not started keeping thinking traces, as I'm mostly interested in how humans are using this tech. But those could be included here as well, helping other LLMs understand what happened with a project up to a given state.


I've only just started using it but the ralph wiggum / ralph loop plugin seems like it could be useful here.

If the spec and/or tests are sufficiently detailed maybe you can step back and let it churn until it satisfies the spec.


Isn't the "steering" in the form of prompts? You note "Even if the code was generated using AI, my help in steering towards the right design, implementation choices, and correctness has been vital during the development." You are a master of this, let others see how you cook, not just taste the sauce!

I only say this as it seems one of your motivations is education. I'm also noting it for others to consider. Much appreciation either way, thanks for sharing what you did.


Doesn’t Claude Code let you just dump entire conversations, with everything that happened in them?

All sessions are located in the `~/.claude/projects/foldername` subdirectory.
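A minimal sketch of pulling prompts out of those session files (the records are JSONL; the field names below are assumptions about the format, so verify them against your own files):

```python
import json

def extract_prompts(jsonl_lines):
    """Collect user prompts from Claude Code session transcript lines."""
    prompts = []
    for line in jsonl_lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed or partial lines
        if rec.get("type") == "user":
            content = rec.get("message", {}).get("content", "")
            if isinstance(content, str) and content.strip():
                prompts.append(content.strip())
    return prompts

sample = [
    '{"type": "user", "message": {"role": "user", "content": "fix the failing test"}}',
    '{"type": "assistant", "message": {"role": "assistant", "content": "Done."}}',
]
print(extract_prompts(sample))  # -> ['fix the failing test']
```

In practice you would feed it `path.read_text().splitlines()` for each session file in that subdirectory.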

Doesn't it lose prompts prior to the latest compaction?

I’ve sent Claude back to look at the transcript file from before compaction. It was pretty bad at it but did eventually recover the prompt and solution from the jsonl file.

It loses them from the current context (say, 200k tokens), not from its SQLite history db (limited only by your local storage).

I did not know it was SQLite, thanks for noting. That gives me the idea to make an MCP server, Skill, or classical script that can slurp those and produce a PROMPTS.md, or answer other questions via SQL. Will try that this week.
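A sketch of that slurp-and-query idea using an in-memory database (the table layout is made up for illustration; the real records would come from the session files mentioned upthread):

```python
import sqlite3

# Hypothetical records, as if parsed from a session transcript.
records = [
    {"type": "user", "content": "write a parser"},
    {"type": "assistant", "content": "Here is a parser."},
    {"type": "user", "content": "now add tests"},
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (type TEXT, content TEXT)")
db.executemany("INSERT INTO messages VALUES (?, ?)",
               [(r["type"], r["content"]) for r in records])

# Any question about the history becomes a SQL query.
user_prompts = [row[0] for row in db.execute(
    "SELECT content FROM messages WHERE type = 'user' ORDER BY rowid")]
print(user_prompts)  # -> ['write a parser', 'now add tests']
```

From there, writing the `user_prompts` list out as a PROMPTS.md is a few lines more.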

It doesn't lose the prompts; they slowly drain out of context. Use the PreCompact hook to write a summary.
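For reference, registering such a hook in `.claude/settings.json` looks roughly like this (the `matcher` value and the summarizer script are assumptions; check the current hooks documentation for the exact schema):

```json
{
  "hooks": {
    "PreCompact": [
      {
        "matcher": "auto",
        "hooks": [
          { "type": "command", "command": "python summarize_transcript.py" }
        ]
      }
    ]
  }
}
```

The hook's stdin payload should include the path to the pre-compaction transcript, so a script can read it and write a summary before the context is compacted away.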

aider keeps a log of this, which is incredibly useful.


