
It's crazy how many security vulnerabilities are just people pinging HTTP endpoints in ways they didn't expect. You would think in order to "hack" a system in 2025 you would need to be doing some crazy computer science wizardry, but it really is just lazy engineers. Like, how do you ship an API and have no rate limiting? It literally takes a line to implement in Nginx.
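For reference, a minimal sketch of that Nginx setup (the zone name, memory size, and rate are illustrative, not a recommendation):

```nginx
# Shared-memory zone keyed by client IP (goes in the http block).
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    location /api/ {
        # Apply the limit; allow short bursts without queueing delay.
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://backend;
    }
}
```

Strictly it's two directives rather than one line, but the point stands: the mechanism is built in and takes minutes to enable.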


> It literally takes a line to implement in Nginx.

"Yeah but it wasn't in the docker tutorial I skimmed so I have no idea what it means."


Soon to be... "Yeah, it was the AI, I have no idea how any of this works."


Though once s hits the fan, you can just tell the AI "I have no idea how any of this works and I don't really even care, but I need rate limiting, so do what you must, I trust you".


Except the vibe coders aren't going to know to even ask about rate limiting.


On the flip side, code scanning tools are getting increasingly good. We finally moved to GitHub at work and it scanned the whole repo and pointed out tons of concerning security issues in the code. Not sure if it's powered by AI in any way (I assume not, since they would scream it from the rooftops if it was), but it's pretty useful.


for sure, code scanning tools are indispensable, just like linting and testing.

They are likely a bit of both, increasingly more so going forward.

- some checks are straightforward and it would be dumb to use AI for them

- some checks require AI


Obviously software development in general has become more sophisticated (by some metrics) over the past few decades, but very little of that growth has involved secure development principles. Often the primary goal is efficiency and scalability with as little friction for the customer as possible. The priority is enabling commerce, not protecting user data (slightly more so for company data, but not by much). I speak to devs every week who are unfamiliar with things like JavaScript injection and SSRF, things that can be exploited by virtually complete beginners. From their perspective they were just building a neat feature; that it could be used to render external scripts or fetch internal file paths literally did not occur to them. This isn't a judgement of them, I appreciate the chance to help them, but just to say development has unfortunately always had other priorities.
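To make the SSRF point concrete, here's a hedged sketch (function and host names are illustrative, not from any real codebase) of the missing check in a "fetch this URL for the user" feature. A naive version would fetch the user-supplied URL unconditionally; the vulnerability is that internal addresses are reachable too:

```python
import ipaddress
from urllib.parse import urlparse

def is_ssrf_risky(url: str) -> bool:
    """Return True if the URL points at a private/internal target.

    A real implementation would also resolve DNS and re-check the
    resolved IP; this sketch only inspects the URL itself.
    """
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Not a literal IP; block a few obviously internal hostnames.
        return host in ("localhost", "metadata.google.internal")
    return ip.is_private or ip.is_loopback or ip.is_link_local

# A naive handler would do requests.get(url) with no such check,
# letting a user point the server at e.g. a cloud metadata endpoint.
```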


> It literally takes a line to implement in Nginx.

Lots of things are really simple. But you have to know about them first.


I would hardly consider someone that doesn't even know what rate limiting is to be a "developer."


> You would think in order to "hack" a system in 2025 you would need to be doing some crazy computer science wizardry

Never heard of the wrench technique? It's always gonna work out great. Way cheaper and easier than "wizardry" too.


I once went to a B-Sides talk by a person who paid off their mortgage via API-related bounties. You would've confused their presentation with a Postman 101 video if you were only half listening.


for quite a while I thought many of those dumb "internal network scanning automated pentests" were pretty pointless

but after having seen people IRL accidentally overlooking very basic things, I now (since a few years back) think using them is essential, even though they often suck (1).

(1): Like false positives, wrong severity classifications, wrong reasoning about why something is a problem, and in general not doing anything application-specific, etc.

I mean, who would be so dumb as to accidentally expose some RCE-prone internal testing helper, only used for local integration tests, on their local network? (Turns out: anyone who uses docker/docker-compose with a port mapping that doesn't explicitly define the interface, i.e. anyone following 99% of docker tutorials...) Or there's no way you'd forget to set content security policies; I mean, it's a ticket in the initial project setup, or already done in the project template (but then a careless git conflict resolution removed them). Etc.
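The docker-compose footgun mentioned above, sketched (service and image names are hypothetical): a bare `"8080:8080"` mapping binds on 0.0.0.0, so the port is reachable from the whole local network, while prefixing the loopback address keeps it host-only.

```yaml
services:
  test-helper:
    image: internal/test-helper   # hypothetical image name
    ports:
      # The form most tutorials show; binds on all interfaces:
      # - "8080:8080"
      # Explicitly binding loopback keeps it local to the host:
      - "127.0.0.1:8080:8080"
```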


Rate limiting doesn't solve anything; you can just parallelize your queries across IP addresses.


The whole "defense in depth" principle disagrees. Having a layered defense not only buys defenders time, but can downgrade an attack from 100% data exfiltration to under 10%.


Increasing the barrier to entry from "trivial" to "less trivial" is always a good start.


Yup. This is some of the stuff that gets missed when people think about security.

Ultimately, you're just buying time, generating tamper evidence in the moment, and putting a price-tag on what it takes to break in. There's no "perfectly secure", only "good enough" to the tune of "too much trouble to bother for X payout."


or like, are people going to wonder why we dropped the ball so hard, or are they going to be impressed by what the attackers pulled off?




