Terminal.app also closes the window when the shell exits if you change a setting: Settings -> Profiles -> Shell -> When the shell exits -> Close the window
Software that reads text input should interpret Ctrl-D (end-of-file) as the end of that input.
Shells decide that end of input means it's time to exit. Terminals usually decide that once the shell exits there's nothing left to do, and close the window.
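A quick way to see that EOF, not any special keystroke, is what ends a program: in a pipeline, the writer closing the pipe plays the same role as Ctrl-D at a terminal. A minimal shell sketch:

```shell
# wc reads stdin until it hits end-of-file, then prints its count and exits.
# The pipe closing here is the non-interactive equivalent of pressing Ctrl-D.
count=$(printf 'one\ntwo\n' | wc -l | tr -d ' ')
echo "$count"    # prints 2
```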
macOS Terminal.app instead prints "Process exited", which I can't quite fathom the value of. I guess it's marginally less confusing than making the window disappear. :)
(Note though -- I can't find it in Terminal.app settings right now, but there must be a way to change the behaviour to close the window instead. Mine is configured that way, but it's not the default)
"Process exited" is somewhat useful if you want to look at the results, but even then I think the window closes automatically if you opened it by double-clicking a script.
I do know the Ctrl-D trick, but pressing Ctrl with a finger puts too much strain on my wrist. It takes longer, but typing exit is way easier.
Alternatively I use the intersection of my palm and left pinky to press CTRL.
Teams has really poor code blocks compared to Slack or any other tool. You can feel the arrogance of the Microsoft PM every time you paste code, or paste text that randomly gets rendered as HTML. Somehow Slack still has a better text input than Teams.
Hard disagree here. GitHub does encourage this sort of thing, but even there, for my PRs to be easily reviewable I like to keep my commits organized, with good messages explaining things. That way the reviewer can walk the commits and see why each thing was done.
I also like it for myself, when I’m going over my own PRs before asking for a review - I will often amend commits to ensure the work is broken down correctly, each thing that should go together, does.
In a way, stacked PRs are just a higher-level abstraction of this too - same idea, keep work that goes together in the same place.
Fully agree with you here. Blunt squashing is a bandaid to the problem of lazy commits. Commits should IMHO be specific and atomic. Like fixing one bug or implementing one feature. Obviously there are cases where this ideal isn't practical, but the answer is still not squash everything, it's to think for 10 more seconds about the commit and do your best.
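For what it's worth, the "think for 10 more seconds" workflow has good tooling support. A runnable sketch (in a throwaway repo) of folding a follow-up fix into the commit it belongs to, using git's fixup/autosquash machinery:

```shell
# Throwaway repo so the demo is self-contained.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email dev@example.com && git config user.name dev

echo 'v1' > parser.txt
git add parser.txt && git commit -qm 'feat: add parser'

# A later tweak that logically belongs to the commit above:
echo 'v2' > parser.txt
git add parser.txt && git commit -q --fixup=HEAD

# Replay history non-interactively, squashing the fixup into its target:
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root
git log --oneline    # a single "feat: add parser" commit
```

Interactively, `git add -p` is the other half of this: it lets you stage hunks selectively so each commit contains exactly one logical change.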
Yeah, I think overuse of GitHub, which seems to encourage squash-merging, has led to a lot of people treating a PR as essentially one commit - because it ends up being one in the end.
If you keep your PRs small I guess the end result is the same, but even then I like things in individual commits for ease of review.
I want to see detailed atomic commits during PR review, and once it's reviewed I'm happy to have it squashed. If the PR produces so much code/changes that main branch needs detailed atomic commits for future reference, then the PR was too large to begin with, imo.
I do agree that this is a good compromise. For me, if I do a git blame and can eventually find the PR that led to the change, and it has nice clean commits, that's good enough.
It's not an "if" - it's necessary for the sake of the people reviewing your code.
Unless you work alone on a pet project and always push to master, you never work alone.
Helix bills itself as a no-config tool, but you have to find an LSP server, install it, and edit a TOML file to activate it. I don't think we can call that "config-less".
Success would be something like LazyVim but without the nagging of updates each time you open it.
I think most people in the target audience for either editor wouldn't mind a bit of tinkering. The difference is in basically building your own setup from the ground up with plugins and config files vs starting with a reasonably featureful setup OOTB that you can then tweak to your liking.
I use LazyVim and it works pretty well. But whenever I want to change some small aspect of its behaviour it takes a while to get familiar with the right part of the configuration. And if an update were to break it tomorrow I'm not sure how easily I could put it back together.
I am wholly off the Vim (& friends) trend, but my 2c:
The Helix situation is still miles better for getting up and running ASAP than dancing with files/Lua in LazyVim. Just having to refer to the docs to install a plugin, write sane remaps, etc. eats up time. If you can really speedrun all of that in under an hour, good for you. But for the rest of us, an LSP server is one package-manager install away (even on Windows, Scoop seems to have become the de facto choice), and editing a TOML file is much easier than fiddling with the Lua API/Vimscript "just" to set some variables.
(Not a Helix user, though I have tried Vim, Neovim and Helix.)
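For reference, a typical Helix override in `languages.toml` is short - the server name and settings here are just an example, and for many languages Helix already ships a default config and only needs the server binary on your PATH:

```toml
# ~/.config/helix/languages.toml (example entries)
[language-server.pyright]
command = "pyright-langserver"
args = ["--stdio"]

[[language]]
name = "python"
language-servers = ["pyright"]
```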
The only problem for me was that the keybindings work well until my Vim instincts kick in and I slow down. The other was the lack of plugins.
I agree with all you said. It's an improvement over the nvim situation. I still think that for common languages like Python and Markdown the LSP should be set up by default. I'm not sure I'm willing to forget all the muscle memory I have just yet.
Also I miss being able to ZZ to exit my file
I don't know about LazyVim, but my LSP configuration in Neovim is really simple:
Use Mason to install the LSP server (just type :LspInstall or use the Mason UI); it will then activate automatically and reuse an existing configuration from nvim-lspconfig.
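A minimal sketch of that setup in `init.lua`, assuming mason.nvim, mason-lspconfig and nvim-lspconfig are installed via your plugin manager (the server name is just an example):

```lua
require("mason").setup()
require("mason-lspconfig").setup({
  -- Servers listed here are installed automatically; with recent
  -- mason-lspconfig versions they are also enabled automatically.
  ensure_installed = { "pyright" },
})
```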
getClaims() uses a multi-level cache + WebCrypto API to verify JWTs signed with an asymmetric key locally.
The cache works like this:
1. Origin server is always Supabase Auth, which like all auth servers is difficult to distribute globally. It serves /auth/v1/.well-known/jwks.json with a 10 minute cache-control header.
2. Supabase's Edge caches this response closest to where it was requested. Re-requesting within 10 minutes here can be as fast as 10ms but usually around 20ms. This latency comes from peering latency between whoever is hosting the server requesting the resource, and the edge.
3. This response is further cached in memory in the client library for 10 minutes.
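That 10-minute figure comes straight from the origin's Cache-Control header, which both the edge and the in-memory layer honour. You can inspect it yourself (hypothetical project URL - substitute your own):

```shell
# Print only the response headers for the JWKS endpoint; the max-age
# (600 seconds = 10 minutes) is what drives the downstream caches.
curl -sD - -o /dev/null \
  "https://YOUR-PROJECT-REF.supabase.co/auth/v1/.well-known/jwks.json" \
  | grep -i '^cache-control'
```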
Now when it's pulled from the memory cache, the latency is really the speed of the WebCrypto API which is super fast and done in microseconds (not milliseconds!).
Depending where you use getClaims(), the memory cache may not actually be used. For instance Vercel's Fluid compute has persistent RAM between requests so you're in for a super nice treat for most requests.
If you're not using Fluid compute, memory isn't shared between requests, so only the Edge cache applies. This means the values are cached close to (but not inside) Vercel's network, so you'd see a consistent 10-20ms (give or take, very approximate numbers) here.
Anyway, if 10-20ms is still not acceptable, you can pass an option to getClaims() with a static JSON Web Key Set configuration. No cache is used now and it all depends on the WebCrypto API -- so microseconds.
This isn't recommended (unless you absolutely know what you're doing) as key revocation will be difficult for you in the future. The client library does its best, but if the signing key has leaked you must manually revoke by updating your backends.
Brampton is a suburb of Toronto with a large Indian immigrant population. This is leading to tensions within Brampton as different Indian political factions attempt to influence Indian politics from Canada[1][2][3].