
I just wanted to second the previous comment, and this applies even to adjacent fields. I'm also an AI/ML PhD grad, and so many of us are out of work at the moment that we'll happily settle for prompt engineering roles, let alone RAG work, just to keep up appearances on our CVs and stay eligible for possible future roles.


Kinda surprised by that, actually. Sure, I get that research interest in any of the "traditional" ML methods (SVMs, Markov models, decision trees, that kind of stuff) is probably essentially dead right now, but I had thought interest in neural networks and in "understanding" what LLMs do internally would be ballooning.

I could imagine even those "ancient" techniques making a comeback some day. They're far inferior to LLMs in expressive power, but they also require literally orders of magnitude less memory and compute. So when the hype dies down, interest in solutions that don't require millions in hardware costs, or making your entire business dependent on what Sam Altman and Donald Trump had for breakfast, might see a resurgence. Interestingly enough, LLMs could even help in this area: most of those old techniques require an abundance of labeled training data, which was always hard to come by in practice, but LLMs are great at labeling existing data or generating new synthetic data for those systems to train on.
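To make that last point concrete, here's a minimal Python sketch of the weak-labeling idea. The llm_label() helper is hypothetical, a stand-in for whatever LLM API you'd actually call; the downstream classifier is an ordinary scikit-learn linear SVM:

    # Weak-supervision sketch: let an LLM label raw text once, then train a
    # cheap classical model that needs no LLM (and no GPU) at inference time.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    def llm_label(text: str) -> str:
        # Hypothetical stand-in for a real LLM call (e.g. a chat-completions
        # request asking "label this ticket as 'billing' or 'technical'").
        return "billing" if "invoice" in text.lower() else "technical"

    # Plentiful unlabeled data; in practice this is your real raw corpus.
    raw_texts = [
        "My invoice shows a duplicate charge",
        "The app crashes when I open settings",
        "Please resend last month's invoice",
        "Error 500 after the latest update",
    ]

    # The LLM does the expensive labeling exactly once, offline.
    labels = [llm_label(t) for t in raw_texts]

    # The classical model then trains and predicts in milliseconds on a CPU.
    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(raw_texts, labels)
    print(clf.predict(["I was billed twice on this invoice"]))  # likely ['billing']

Once trained, the SVM costs essentially nothing to serve; the LLM is only needed again when you want to relabel or expand the training set.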



