
Depends on the country. People living in richer European countries buy more new cars on average, while people in poorer European countries give a second life to those same cars once those "rich people" decide to sell them and fail to find any buyer domestically.

This is still a net positive even in poorer countries. If you can't afford a new car, you buy as close to a new car as you can afford. The newer the car, the stricter the EURO emissions standard it had to meet when it was sold brand-new, so the end result is still less pollution.

I live in one of those poorer ones where most people can't afford new cars, but even if you can, the percentage of brand-new ICE cars that are even available for purchase has been dropping pretty fast in recent years. So those better off are slowly being pushed towards EVs (or at least hybrids), and the vast majority of others still rely on importing ~15-year-old second-hand cars (EURO 5 standard) to replace their 25-year-old cars (EURO 3 standard). In the capital, cars below EURO 4 are even banned when air pollution gets really bad, but the vast majority don't even realise this rule exists because their cars are now EURO 4 or above.


According to their latest annual report, Windows earns them less than half as much as Office and less than a quarter of what they make by selling server products.

They haven't made their money from selling Windows for a very long time; these types of mistakes are gonna have precisely 0 impact on their stock price.


That is true in the short term, but most of the rest of their business rests on the fact that Windows is the default OS.

If Windows ever gets so bad that people actually do defect to macOS/Linux en masse, that absolutely will affect their stock price, but so far it hasn't happened.


Corporate users are the only users they even pretend to care about, and they know they have pretty good lock-in with Windows and Office.

Also obviously this is someone else's problem some other quarter.. so.. like who cares?


It's definitely on a very long fuse, but if they lose control of the Windows codebase to the point where bugs that cause issues for corporate IT departments are regularly getting shipped to production, and an increasing number of employees use macOS or Linux at home and need training at work to learn how to use Windows, it could change.

Short term no but long term these rotations do happen, otherwise we'd all still be using IBM


Oh trust me, it's not like their server offerings are any better at being bug-free. I can't go into the specifics, but here's how Microsoft truly makes their money:

I'm currently stuck in some sort of an infinite loop: a bug in Microsoft's server offerings causes us to waste some money each month, my management keeps pushing me to re-create the same ticket with Microsoft's support in hopes of getting rid of those extra costs, and Microsoft's support partners waste my time by telling me to check the same 5 things I've already checked, then close the ticket due to "inactivity" once (heaven forbid) some other task on my plate deserves my attention and I fail to re-check those same 5 things fast enough.


So they treat their corporate customers the same way they treat devs or consumers on their forums? Lmao. Shifting responsibility is real. "Actually it is YOUR problem that we broke something."

I don't disagree with you, and in fact I wish there were quicker ramifications. Any company that forgets its customers and assumes such an arrogant, self-serving stance should get a proverbial slap in the face sooner rather than later. Unfortunately, our mechanisms for delivering said slap are rather limited, and as a single consumer (or even a single enterprise), delivering it mostly means slapping ourselves in the face by inconveniencing ourselves, given the lack of viable drop-in alternatives. This is why we need regulation to keep corporate greed in check.

You're also right that incentives are misaligned - Satya might well be fully aware that he's running the company into the ground but he doesn't care.

He'll be gone in a few years with all his bonuses and RSUs intact and there'll be absolutely no consequences for him if his actions cause MS to fall apart in 2035


I think that transition, from OS/Desktop company to a Cloud Services Provider, is where the rot comes from.

The financial incentives are to upsell incompetent IT departments onto forever subscriptions. The poor products lead to fat over-engineering in the cloud and huge running bills that are very hard to undo. Sloppy LLM integrations, and sloppy LLM advice about IT needs, would seem to feed into that same strategy.


Purchase a domain and point the MX record(s) to any provider you'd like. If you end up not liking that provider, you simply point the domain to a different one.

That's the main advantage of not using the provider's domain as a part of your email address (like @gmail.com, @outlook.com or whatever).
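
If you want to sanity-check where your domain's mail is routed after such a switch, a quick query of the MX records is enough. A minimal sketch using the dnspython library (the domain below is just a placeholder):

    import dns.resolver  # pip install dnspython

    # Placeholder domain; substitute your own.
    domain = "example.com"

    # Print each MX record's priority and target mail server.
    for record in dns.resolver.resolve(domain, "MX"):
        print(record.preference, record.exchange)

Switching providers is then just a matter of updating those records at your registrar/DNS host and waiting for the TTL to expire.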


Correct! This is just Meta doing malicious compliance by being "compatible" with companies with no actual product, a three-month-old waitlist, no actual users within the EU, and nobody to push back on WhatsApp's definition of interoperability. Then when some real product tries to actually become interoperable down the line, Meta's gonna be like "well these two did it just fine according to this backwards implementation, why can't you?"

They're both b2b products that are gonna try to find their first users by pitching the idea that you can use their products to spam WhatsApp users.

Haiket doesn't even try to hide its connection to Meta. All you have to do is to go to their website, click on press, and see in the only press release they've ever posted that its CEO holds patents in use by Meta. Here, let me save you a click: https://haiket.com/press/release-nov11.html

> Alex holds over 10 patents in voice and communication technologies, assigned to and used by Google and Facebook.


> Haiket doesn't even try to hide its connection to Meta. All you have to do is to go to their website, click on press, and see in the only press release they've ever posted that its CEO holds patents in use by Meta. […] Alex holds over 10 patents in voice and communication technologies, assigned to and used by Google and Facebook.

How does this imply he has any connection to Meta? Companies license patents all the time.


Okay, what about three sentences above that one?

> Before Haiket, Alex founded a number of technology start-ups and helped develop innovative voice solutions for Facebook and Google.

At the very least, I think it's safe to say he has some connections within Meta that he utilised for this purpose. He's definitely not a complete outsider whose startup (with no actual product) just happened to be picked by Meta.


> what about three sentences above that one?

My bad. I searched for “Meta” instead of “Facebook.” Quite a few other red flags in that press release.

> Haiket is launching the Beta trial from today, with a pipeline of future innovation for early adopters, including a pioneering silencing technology that will allow users to speak privately in public, with voice communication that only your device can hear.


>> including a pioneering silencing technology that will allow users to speak privately in public, with voice communication that only your device can hear.

Does anyone else think this sounds beyond ridiculous?


> voice communication that only your device can hear.

This is fairly straightforward - you have the device spew out noise with similar characteristics to human speech (i.e. random overlapping syllables in the speaker's voice). Take a recording, then subtract the random syllables.

Only your device can do the subtraction, because only your device knows the waveform it transmitted.

Obviously in a room with lots of reverb this will be a bit harder, since you will also need to subtract the reflection of what was transmitted with a room profile and deal with the phone moving in the room, but it sounds far from impossible.
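
A minimal sketch of that subtraction idea, assuming (purely for illustration) white noise stands in for both the speech and the masking signal, and ignoring reverb and alignment:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 16_000                             # one second at 16 kHz

    speech = rng.standard_normal(n)        # stand-in for the speaker's voice
    mask = rng.standard_normal(n)          # masking noise the device plays out

    recorded = speech + mask               # what any nearby microphone picks up

    # Only the device knows `mask`, so only it can undo the masking.
    recovered = recorded - mask
    print(np.allclose(recovered, speech))  # True

    # An eavesdropper without `mask` just hears speech buried in noise; in a
    # real room you'd also have to subtract reflections and track movement.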


Countermeasure: set up four microphones some distance apart, use autocorrelation to pinpoint the sound sources, and then isolate them, recovering the "masked" speech. The countercountermeasure would be to fully surround your mouth and vocal tract with an active noise cancelling system and then produce noise (to push whatever little sound gets through far below the noise floor: the signal is unpredictable enough that you can't use averaging techniques to recover it). The countercountercountermeasure would be to use a camera in the radio band to look at the vocal tract directly, using the phone as a light source, and recover the phonemes that way. The countercountercountercountermeasure would be to construct an isolated box… at which point you're no longer having a voice call in public: you have a portable privacy booth.
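
For what it's worth, a rough sketch of the mic-array countermeasure, using cross-correlation between a pair of microphones (the usual building block; a real system would use more mics and something like GCC-PHAT) to estimate the arrival-time difference and hence the bearing to the source. The signals are synthetic placeholders:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4_000
    source = rng.standard_normal(n)

    true_delay = 37                        # samples: source arrives later at mic 2
    mic1 = source
    mic2 = np.concatenate([np.zeros(true_delay), source])[:n]

    # The peak of the cross-correlation gives the time-difference-of-arrival.
    corr = np.correlate(mic2, mic1, mode="full")
    estimated_delay = corr.argmax() - (n - 1)
    print(estimated_delay)                 # ~37 -> direction of the source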

> you have the device spew out noise with similar characteristics to human speech

Surely this only works if you're using the phone as a speakerphone (and are therefore almost certainly being an arsehole in public[0])?

[0] Because if it was an actual speakerphone situation, hiding your voice would be stupid.


I see a second round of legislation might be needed. They'll get it right eventually.

Eh, there's no specific definition of interoperability written in the Digital Markets Act. It's decided on a case-by-case basis and I'm sure that the legislators in charge of this case will push back on this piss-poor implementation in like a year from now.

By the time this back-and-forth reaches its end, these two will find some shady b2b customers and are gonna be touted as "successful European startups".


They never got cookie popups right. What makes you so confident?

They got cookie pop-ups right, current rules:

- the default choice needs to be "strictly necessary cookies"

- with other less prominent buttons for "allow all" and "deny all"

- a site is not allowed to force you to press a bunch of buttons or select a bunch of things to deny most/all cookies

The problem lies in enforcement. Unless you are a huge player, there is almost nil chance you're gonna get fined.

I think about the only thing missing is that they should have RFC'd a standard akin to Do Not Track, except this one would have communicated to sites whether your default is "strictly necessary", "allow all" or "deny all". With it being set to "strictly necessary" by default.
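
As a purely hypothetical sketch of what that could look like on the server side (the header name and its values below are invented, since no such standard exists):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Hypothetical browser-declared preference; fall back to the
            # strictest setting when the header is absent.
            consent = self.headers.get("Cookie-Consent", "strictly-necessary")
            self.send_response(200)
            if consent == "allow-all":
                # Only now is the site allowed to set non-essential cookies.
                self.send_header("Set-Cookie", "analytics=1; SameSite=Lax")
            self.end_headers()
            self.wfile.write(f"consent preference: {consent}\n".encode())

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()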


> The problem lies in enforcement. Unless you are a huge player, there is almost nil chance you're gonna get fined.

I am curious: why is that difficult? Define the fine as a percentage of the revenue of the company, have users report links, and pay someone to check the link and send the fine.

Sounds like easy money... I mean it's very profitable to pay people to check parking lots and fine drivers who don't follow the regulations. This should be even more profitable?


If I am a business outside Europe, why would I send Europe what my revenue is?

If you don't care about GDPR at all, then you don't have a cookie banner. So if you have the abusive cookie banner, I think it's fair to say that you care about Europe.

I don't know — why do businesses outside Europe care about GDPR compliance at all? They could just track Europeans all they want to, without any cookie banners.

Tbh most do exactly that. Complying makes sense only for big companies with a multinational presence.

But admitting you are subject to the laws of a country/entity is one thing; sending them your books (when your company is not based there) is kind of on a different level.


Optimistic. They've got sideloading done, browser and search choice done, ad transparency done, more choice for payments done, many dark patterns banned.

The gears are turning slowly, but they're doing really useful work.


In the reality called the European Union, where phones already have to ask you to pick the default browser during device setup, and Android specifically has to also ask you to pick a default search engine (Apple doesn't have to because they don't own one). AirDrop and iMessage (and WhatsApp) have already been legally ruled against and forced to open up, but that's not the reality as of yet. It will be in a few years.

> What I think the author is referring to are the minor concessions Apple has made in some territories, mainly the EU. And even there, they're using every dirty trick at their disposal to do the absolute bare minimum.

It's not Apple specifically, not even a little bit. All of this is a consequence of one piece of legislation called the Digital Markets Act, and it applies to everyone defined as a "digital gatekeeper" under that legislation, but the exact steps they need to take are not written in the law and are decided on a case-by-case basis. Such malicious compliance tricks are normal on a short timescale, but on a long enough timescale they get ironed out and we all get to live in a less monopolistic world as a consequence.

You can join that reality too! One properly thought out piece of legislation can turn the whole thing around.

> Anti-competitive moats are still alive and well, and growing larger. It's curious that the author is positive about "AI", when that is the ultimate moat builder right now. Nobody can basically touch the largest players, since they have the most resources and access to mind-bogglingly large datacenters.

If they become large enough to matter, they will also be designated as "digital gatekeepers", and then the steps they need to take to open up will be decided. They are not that large (within the European Union) as of yet.


The EU is clearly leading the way in consumer protection, but it still leaves a lot to be desired, and in some ways it's regressing.

The GDPR was well intentioned, but poorly specified, so companies resorted to all sorts of loopholes. It also wasn't enforced well or harshly enough, so fines just became the cost of doing business for companies. Now it's being rolled back to meet "growth" demands and appease "AI" companies.

The Chat Control regulation is on the horizon, and bound to be passed in some form soon. I suppose we must sacrifice privacy to protect the children.

So it's good that some effort is being made to protect consumers, but the pressure from tech companies and the desire to not be left behind in tech innovation by the US and China will likely continue to be higher priorities. Along with some puzzling self-sabotaging decisions and increasingly right-leaning influence, all of this is undermining most of that work.


I've read this same comment for every model since GPT-3.

So have I, and this time I wrote it. This broken clock only has to be right once, forever.

It's beneficial to check it.


That's because it's been true for every major new model release since GPT-3.

HN is full of people who tried the free version of ChatGPT a couple of years ago, got a load of random hallucinated slop, and concluded it was all a bunch of useless hype. They enjoy parroting a lot of obsolete stuff they read once about stochastic parrots, without the slightest sense of irony.

When I was growing up, my old man recalled engineers who reacted the same way to transistors, which sucked even more than early-generation LLMs when they first came out. Most of them got over vacuum tubes eventually, though. So will most of the HN'ers, likely including you.


I don't know whose thoughts you're trying to attribute to me, but I've been pretty up-to-date myself (while still keeping myself to a $20/month budget that I switch between providers every few months).

Every generation has at best been a 5% improvement, and nothing revolutionary compared to the last generation. Absolutely no significant improvements between me selecting say Claude 3.7 Sonnet, Claude Sonnet 4, or Claude Sonnet 4.5. Still the same somewhat useful tool for specific purposes, still not good enough to be let loose on production, still not better than what I can do myself with 10 or so years of experience.

Genuinely useful from time to time? Sure, I agree completely. Anywhere near being revolutionary enough as people here insist on gaslighting themselves to be? Absolutely not. Not even remotely.


I'll grant that progress in coding hasn't been as impressive, although if you don't see a difference between Claude 3.7 Sonnet and 4.5 Opus (which is the actual SotA for Claude) I'd suspect a skill issue. I haven't seen miraculous progress in baseline code generation, but there has been a lot of progress in tool use. I can tell Claude CLI to install a complex package, for instance, and it will handle all the Python versioning and dependency-hell issues for me, then test and evaluate the results. 3.7 Sonnet couldn't do that, at least not reliably.

What's really different now is that LLMs have become useful research tools. Three or four years ago, models couldn't cite their sources at all. Two or three years ago, they started to gain the capability, but a large proportion of the citations would either be hallucinated or irrelevant. About a year ago, significant improvements started to emerge. At this point, I can give Gemini 3 Pro or GPT 5.2 Pro a multilayered research task, and end up with a report indistinguishable in accuracy and bibliographic quality from what a good human might produce in a couple of weeks.

It might take a half hour to get the answer, and I'm not sure if you could get the same result at the $20/month level, but the hype and promises we started hearing a couple of years ago are starting to bear fruit. The research models are now capable of performing at grad-student level. Not all the time, and not without making stuff up on occasion... but to argue that no progress at all has been made is nothing more or less than moon-landing denial.


> (which is the actual SotA for Claude)

Cool, how much of it can I use with a $20/month subscription? I'd reach the monthly limit within hours while gaining nothing. It's not that I never tried it, it's that I did and the difference is nowhere near worth 10x my money (nor even the wait time needed to get results).

> I'd suspect a skill issue.

There's 0 skill involved with asking a magic 8 ball to solve your problem for you. There's some skill involved with making it better for yourself consistently by cramming as much useful info as possible into the context window by writing very specific custom agents / skills / MCP servers / whatever, but I have yet to find a client that would pay me to spend an ungodly amount of time fucking around with my toolbelt instead of delivering results.

> Three or four years ago, models couldn't cite their sources at all. Two or three years ago, they started to gain the capability, but a large proportion of the citations would either be hallucinated or irrelevant. About a year ago, significant improvements started to emerge.

Let's translate that: You don't understand what they're doing under the hood when they "do that", you don't understand where that "improvement" is coming from, and I'm not interested in spending my time teaching you for free. For as little as €50/h, I'd be happy to! contact at my username.

> but the hype and promises we started hearing a couple of years ago are starting to bear fruit.

You're gaslighting yourself again in order to justify the price you're currently paying. Downgrade yourself back to $20 subscription, stick to it for one whole month, learn its limits properly, and then if you feel like you need to upgrade to a higher tier again, do so! Spoiler alert: it's gonna appear "significantly crappier" for about a week, and then after that week, you will get used to it and realise you've wasted a fuck ton of money on nothing.

> but to argue that no progress at all has been made is nothing more or less than moon-landing denial.

Where's that coming from? You're putting words into my mouth again. I specifically acknowledged small improvements, but I dismissed drastic improvements. Zero of what you said convinced me otherwise. For the love of all holy, use your own brain a little bit to actually understand the tools you're advocating for. Saying that you drank too much Kool-Aid would've been a severe understatement.


> but I have yet to find a client that would pay me to spend an ungodly amount of time fucking around with my toolbelt instead of delivering results.

Right, yeah, I got started by being able to expense the $200 plan, and now I don't expense it but there's no going back, mostly because this is extremely undervalued and it's unclear how long this deal will be around.

Claude's $200/mo Max x20 plan is the best deal in the game; it's equivalent to about $2600 worth of API credits, and gives you 4x the capacity of the $100 plan (Max x5).

All roads lead to not really... caring... about why you're resistant to this. It's going to be too late when Anthropic raises the prices and all the reasons behind your reservations grow, unless someone else undercuts them just as well.


This is so true, and it's been genuinely painful for me to realize how fucking lazy and close-minded many in our industry actually are when you get down to it.

They sold AI giants enterprise downloads in order for them not to hammer Wikimedia's infrastructure by downloading bulk data the usual way available to everyone else. You really have to twist the truth to turn it into something bad for any of the sides.

> Those subreddits label content wrong all the time.

Intentionally, if I might add. Reddit users aren't particularly interested in providing feedback that will inevitably be used to make AI tools more convincing in the future, nobody's really moderating those subs, and that makes them the perfect target for poisoning via shitposting in the comments.


Also, most Reddit users are AI.

Yes, /s, they're advocating for it to be more of a work-in-progress document, and not considered something final in its current state.

The last pull request got accepted into main in 1992 after being stuck in the peer review stage for no less than 202 years. The latest one out of the 4 that still remain open ("no child labour") celebrated its 100th anniversary 18 months ago, because for some reason 15 states refused to approve it and 2 of the states haven't even bothered to address it. 12 of the 28 that gave their approval also rejected it initially but then changed their opinion down the line.


That 2021 event was very much not about EVs in general, but about supporting the UAW as a union at a time when a lot of their jobs were about to be disrupted, because Biden had just signed an EO (14037) that pushed traditional automakers towards making more EVs (the target being 40-50% by 2030, though it was non-binding).

So, why were Ford, GM and Stellantis there but Tesla wasn't? Because Tesla was already making EVs only, because none of its workforce is a part of the UAW (due to Tesla being anti-union), and because this EO had no impact on Tesla's workforce whatsoever. Elon being butthurt about it doesn't change the fact that it would've made zero sense to have Tesla there.

You don't have to take my word for it, Jen Psaki directly addressed this at a press briefing:

> Asked if Tesla being a nonunion company was the reason it wasn’t included Thursday, Psaki replied, “Well, these are the three largest employers of the United Auto Workers, so I’ll let you draw your own conclusions.”

https://edition.cnn.com/2021/08/05/business/tesla-snub-white...


Thanks; actually, your background context is a lot more informative than the quote. It makes a lot more sense that they were excluded: it was an event targeted at the union(s), for whom the administration had just made things difficult, so it wouldn't have made sense for Tesla to be there.

(The quote sounds simply like "we excluded Tesla because they are anti-union"...)

