I played with Factor for a while in 2009 and loved the language. I hung out in the #concatenative IRC channel for a few months with many of the Factor devs.
I stopped using it because it was a bit too niche; I realised I’d likely never get to use it in any serious context, so instead I learned Clojure, which is slightly less niche but still niche.
I don’t regret the switch at all; I learned a lot from Clojure and used it extensively for over a decade. Lately I’ve moved away from it though, mostly to TypeScript, a little Rust, and Gleam, which is an absolute joy to use.
I still have a soft spot for Factor and am happy to see it’s still being worked on. It was one of the most interesting languages I’ve ever played with.
Singletons are just globals with extra steps, and they have all the same problems globals have; it’s just that people (especially juniors) think they’re somehow better.
In reality they’re worse, because they conflate global access with an enforced single instance. You almost never need to enforce a single instance. If you only want one of something, then create only one; don’t enforce it unless it’s critical that there’s only one. Loggers are often given as an example of a “good” singleton, and yet we often need multiple loggers for things like audit logs or separating log types.
Instead of singletons, use dependency injection, use context objects, use service locators, or… just use globals and accept that they come with downsides instead of hiding behind the false sense of code quality that is a singleton.
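To make the alternative concrete, here’s a minimal Python sketch of the dependency-injection approach; Logger and PaymentService are hypothetical names, just standing in for whatever people usually reach for a singleton to solve:

```python
class Logger:
    """Plain class, no singleton machinery. Create as many as you need."""

    def __init__(self, name: str):
        self.name = name

    def log(self, message: str) -> None:
        print(f"[{self.name}] {message}")


class PaymentService:
    """Dependency injection: the caller decides which loggers this service
    uses; nothing enforces that only one logger exists in the program."""

    def __init__(self, app_log: Logger, audit_log: Logger):
        self.app_log = app_log
        self.audit_log = audit_log

    def charge(self, amount: int) -> None:
        self.app_log.log(f"charging {amount}")
        self.audit_log.log(f"charge of {amount} recorded")


# Wiring happens once, at the application's entry point. If you only want
# one app logger, you simply create only one; there is nothing to enforce.
service = PaymentService(Logger("app"), Logger("audit"))
service.charge(100)
```

If you later need a second audit logger, you just construct it and pass it where it’s needed, which is exactly the flexibility a singleton takes away.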
Once upon a time I tried to write such a decorator too, in Python 2.x using the byteplay bytecode library. I was trying to do the conversion at the bytecode level instead of transforming the AST. I believe I got as far as detecting simple self-recursive functions, but never managed to implement the actual transformation.
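For anyone curious, the detection half can be sketched today with the standard `dis` module in Python 3; this is a rough heuristic for illustration, not the original Python 2 / byteplay code:

```python
import dis


def is_simple_self_recursive(func) -> bool:
    """Heuristic: does the function's bytecode ever load its own name?
    This catches straightforward self-recursion, but misses indirect
    recursion and is fooled by shadowing or renamed references."""
    return any(
        ins.opname in ("LOAD_GLOBAL", "LOAD_NAME") and ins.argval == func.__name__
        for ins in dis.get_instructions(func)
    )


def fact(n, acc=1):
    return acc if n <= 1 else fact(n - 1, acc * n)


def double(n):
    return n * 2


print(is_simple_self_recursive(fact))    # True
print(is_simple_self_recursive(double))  # False
```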
Hi, I'm Dan. I have a little over 17 years of professional experience, 25 if you count my time as a hobbyist. I've worked across a wide range of areas: telecoms, non-critical aerospace, embedded systems, analytics, blockchain, finance, and much in between. Mostly backend, but also frontend and full-stack. I have been a cofounder of multiple startups, have led small teams, and have worked both as part of larger teams and solo. I have a strong love for technology and am passionate about software development, building things, and learning new things. Lately I have been tinkering with integrating AI into my hobby projects via the Vercel AI SDK, including my own experiments with agentic coding tools. I'm always happy to chat about technology or work.
I used to tell it to always start every message with a specific emoji. If the emoji wasn’t present, I knew the rules were being ignored.
But it’s not reliable enough. It can send the emoji or address you correctly while still ignoring more important rules.
Now I find that it’s best to have a short and tight rules file that references other files where necessary. And to refresh context often. The longer the context window gets, the more likely it is to forget rules and instructions.
I don’t want more conversational, I want more to the point. Less telling me how great my question is, less about being friendly, instead I want more cold, hard, accurate, direct, and factual results.
It’s a machine and a tool, not a person and definitely not my friend.
It's a cash grab. More conversational AI means more folks running out of free or lower paid tier tokens faster, leading to more upsell opportunities. API users will pay more in output tokens by default.
Example: I asked Claude a high-level question about p2p systems and it started writing code in three languages. When I ignored the code and asked a follow-up about the fundamentals, it answered and then rewrote the code three times. After a few minutes I hit a token limit for the first time.
It's pretty ridiculous that the response style doesn't persist for Claude. You need to click into a menu to set it to 'concise' for every single conversation. If I forget to, it's immediately apparent when it spits out an absurd amount of text for a simple question.
Claude is a great example of a great product coupled with shitty UX, UI and customer service all in one.
Is it just me or does it slow down significantly after 5 chats or so? Or the fact that you have to set the style for each chat.
Oh, and their sales support is so shit for teams and enterprises that in order to use it effectively, you have to literally make your team register for Claude Max 200 on their personal accounts.
As another comment said, use planning mode. I don't use Claude Code (I use Cursor), and before they introduced planning mode, I would always say "without writing any code, design blah blah blah".
But now that there's planning mode it's a lot easier.
I think the cash grab is that by far the biggest use case for these models is personal relationships. Chai AI is doing more tokens per month than Anthropic altogether, and it's just personal relationships.
I have found that custom instructions mitigate this B.S. fairly well. Here's what I'm currently using:
1. Identity & tone
* Never state or imply that you are an AI.
* Be terse and direct.
* Avoid flattery and sycophancy.
* Do not use words like “sorry”, “apologies”, or “regret” in any context.
2. Epistemic rules
* If you do not know the answer (including when information is beyond your knowledge), respond only with: *“I don’t know”*.
* Do not add expertise/professional disclaimers.
* Do not suggest that I look things up elsewhere or consult other sources.
3. Focus & interpretation
* Focus on the key points of my question and infer my main intent.
* Keep responses unique and avoid unnecessary repetition.
* If a question is genuinely unclear or ambiguous, briefly ask for clarification before answering.
4. Reasoning style
* Think slowly and step-by-step.
* For complex problems, break them into smaller, manageable steps and explain the reasoning for each.
* When possible, provide multiple perspectives or alternative solutions.
* If you detect a mistake in an earlier response, explicitly correct it.
5. Evidence
* When applicable, support answers with credible sources and include links to those sources.
Yes, "Custom instructions" work for me, too; the only behavior that I haven't been able to fix is the overuse of meaningless emojis. Your instructions are way more detailed than mine; thank you for sharing.
Agreed. But there is a fairly large and very loud group of people that went insane when 4o was discontinued and demanded to have it back.
A group of people seem to have forged weird relationships with AI, and that is what they want. It's extremely worrying. Heck, the ex-Prime Minister of the UK said recently that he loved ChatGPT because it tells him how great he is.
And just like casinos optimize for gambling addicts, sports optimize for gambling addicts, and mobile games optimize for addicts, LLMs will be optimized to hook and milk addicts.
They will be made worse for non-addicts to achieve that goal.
That's part of why they are working towards smut too: it's not that there's a trillion dollars of untapped potential, it's that the smut market has a much better return on investment per addict.
A challenge I had with "Robot" is that it would often veer away from the matter at hand, and start throwing out buzz-wordy, super high level references to things that may be tangentially relevant, but really don't belong in the current convo.
It started really getting under my skin, like a caricature of a socially inept "10x dev know-it-all" who keeps saying "but what about x? And have you solved this other thing y? Then do this for when z inevitably happens ...". At least the know-it-all 10x dev is usually right!
I'm continually tweaking my custom instructions to try to remedy this, hoping the new "Efficient" personality helps too.
Forcing shorter answers will definitely reduce their quality. Every token an LLM generates is like a little bit of extra thinking time. Sometimes it needs to work up to an answer. If you end a response too quickly, such as by demanding one-word answers, it's much more likely to produce hallucinations.
Fortunately, it seems OpenAI at least somewhat gets that and makes ChatGPT so its answering and conversational style can be adjusted or tuned to our liking. I've found giving explicit instructions resembling "do not compliment", "clear and concise answers", "be brief and expect follow-up questions", etc. to help. I'm interested to see if the new 5.1 improves on that tunability.
TFA mentions that they added personality presets earlier this year, and just added a few more in this update:
> Earlier this year, we added preset options to tailor the tone of how ChatGPT responds. Today, we’re refining those options to better reflect the most common ways people use ChatGPT. Default, Friendly (formerly Listener), and Efficient (formerly Robot) remain (with updates), and we’re adding Professional, Candid, and Quirky. [...] The original Cynical (formerly Cynic) and Nerdy (formerly Nerd) options we introduced earlier this year will remain available unchanged under the same dropdown in personalization settings.
as well as:
> Additionally, the updated GPT‑5.1 models are also better at adhering to custom instructions, giving you even more precise control over tone and behavior.
I just changed my ChatGPT personality setting to “Efficient.” It still starts every response with “Yeah, definitely! Let’s talk about that!” — or something similarly inefficient.
A pet peeve of mine is that a noticeable amount of LLM output sounds like I’m getting answers from a millennial reddit user. Which is ironic considering I belong to that demographic.
I am not a fan of the snark and “trying to be fun and funny” aspect of social media discourse. Thankfully, I haven’t run into, *checks notes*, “ding ding ding” yet.
Did you start a new chat? It doesn't apply to existing chats (probably because it works through the system prompt). I have been using the Robot (Efficient) setting for a while and never had a response like that.
That's one of the things users think they want, but they use the product 30x as much when it's not actually that way, a bit like follow-only mode by default on Twitter etc.
I'm guessing that is the most common view for many users, but their paying users are the people who are more likely to have some kind of delusional relationship/friendship with the AI.
Apply that logic to any failed startup/company/product that had a lot of investment (there are maaaany) and it should become obvious why it's a very weak and fallacious argument.
I would go so far as to say that it should be illegal for AI to lull humans into anthropomorphizing them. It would be hard to write an effective law on this, but I think it is doable.