
The analogy is that AI is supposed to be able to do _what humans do_ but better.

But you also want AI to be more secure. To make it more secure, you'll have to prevent the user from doing things _they already do_.

Which is impossible. The current LLM AI/agent race is non-deterministic GIGO and will never be secure, because it's fundamentally about mimicking humans, who are absolutely not secure.




