I've dialed my LLM usage down a lot as well. The answers I got for my queries were too often plain wrong.
I instead started asking where I might look something up - in what man page, or in which documentation. Then I go read that.
This helps me build a better mental map of where information is found (e.g., in what man page), decreasing my reliance on both search engines and LLMs in the long run.
LLMs have their uses, but they are just a tool, and an imprecise one at that.
> Some of you may wonder why ZubanLS isn’t open source, unlike Jedi. The honest answer is that open source never worked out for me financially. Beyond some small donations and a small recurring compensation from Tidelift, I was never able to make a living from it. I'm no longer a student, and with a family to support, it became clear I needed a sustainable path.
> Joining a company like Astral (makers of Ruff) could have been an option—but I’ve also grown skeptical of venture capital as a model. VC-backed companies often need to aim for massive success or risk disappearing within a decade. I want to build something that lasts. I’ve already spent more than ten years in this space, and I plan to continue.
I'm working at XWiki SAS [1] on tools to migrate from Confluence to XWiki [2], a powerful open source wiki that was born at about the same time as Confluence.
We have all this. We offer support and consulting, including help with your migration. Our migration tools try to keep as much of the content and its features as possible, and we work on compatibility macros for this.
I think this misses the biggest need (which one might not consider if coming from Confluence):
- consolidation of the wiki together with your code's READMEs and generated docs (e.g. Sphinx, MkDocs, Swagger - anything that outputs documentation from your codebase)
Note on the "integrated diagram editor", this brings up another feature (though less critical than above):
- by standardizing on a docs-as-code abstraction like mermaid or kroki, you can then leverage (a) diffable diagrams as code, and (b) quite a few relevant OSS editors.
See VSCode extensions for a few different implementations. That said, if you pick mermaid, the same diagrams work in the wiki tool, on GitHub, and in local-first open-content-format tools like Foam, Dendron, or Obsidian.md, which is nice.
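To make the "diffable diagrams as code" point concrete, here is a minimal mermaid sketch (the node names are made-up examples, not from any real system); the same plain-text source renders in GitHub READMEs, in wikis with mermaid support, and in tools like Obsidian:

    graph TD
      API[API gateway] --> Auth[Auth service]
      API --> Billing[Billing service]

Adding or rerouting an edge is a one-line text change, so diagram edits show up in normal code review diffs.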
Diagrams as code is not one-size-fits-all. I'd use mermaid for some technical things, but it falls short for communication/presentation purposes: you need the ability to do arbitrary placement, annotations, and flow. Mermaid is all fun until you want to connect two previously distant boxes and everything explodes. For long-term documentation purposes that may not even matter, but if I'm about to show it to anyone, I'm going to Excalidraw.
I work on the Mermaid Chart product team and your comment here is extremely relevant to a big project I'm working on that will solve a lot of the problems you mentioned! Would you be willing to tell me more and give your thoughts on our plan for solutions?
If you're open to it, please send me an email at dominic@mermaidchart.com!
I'd be happy to exchange an Amazon gift card for your time too!
Bookstack is quite nice, but last I saw it doesn't support multi-user collaboration, which this tool seems to include as a core feature. That can be a big differentiator for a lot of people.
2. Diagrams will come too. MermaidJs is next in line. Other diagram providers like Draw.io and Excalidraw will come once I figure out an efficient way to handle storing and retrieving their raw data.
3. There is support for page history. No diff comparison yet though.
Can anyone explain the `wrmsr -a 0xc0011029 $(($(rdmsr -c 0xc0011029) | (1<<9)))`? It seems to help on my system, but I don't understand what it does, and I don't know how to unset it.
CPU designers know that some features are risky. Much like how web apps often have "feature flags" that operators can flip on and off in case a feature goes wrong, CPUs have "chicken bits" that control various performance-enhancing tricks and exotic instructions. By flipping that bit you disable the optimization.
An MSR is a "model-specific register"; a chicken bit in an MSR can configure CPU features.
These settings don't persist across a reboot, so you can't break anything permanently. You can undo what you just did without a reboot: just use `... & ~(1 << 9)` instead (unset the bit instead of setting it).
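For example, assuming the same MSR address as in the question above, the full undo command would be `wrmsr -a 0xc0011029 $(($(rdmsr -c 0xc0011029) & ~(1<<9)))`, which reads the current value, clears bit 9, and writes it back.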
I think their proxy could have been written from scratch, and some of the management, billing, API, etc. too, but under the hood it's all standard open source stuff like KVM, Firecracker, and such?
As long as there is creativity in programming, and I think there is a fair bit of that, AI is just going to be a tool.
GPT-4 is great at sourcing human knowledge, but I think it can't really walk unbeaten paths. This is where humans shine.
Case in point: I tried to ask the AI to come up with a new World War Z chapter set in Switzerland, and it was unable to produce anything original. I had to keep feeding it ideas, so it could add something new and interesting.