Hacker News | tvirosi's comments

I agree, they're awful. But on the other hand the horror market exists so somehow people enjoy seeing things like this (in certain moods).


Yeah maybe there is something useful here for the gaming industry?

A typical horror movie only has a few seconds or minutes of footage of the actual gross horrifying stuff.

Games on the other hand… I bet someone would be willing to pay for an endless stream of unique gross monsters to kill.


I admit that I never really understood this speech. Could you explain why it was significant to you and what it means?


https://fs.blog/2012/04/david-foster-wallace-this-is-water/

Wallace says it himself in the speech: "Twenty years after my own graduation, I have come gradually to understand that the liberal arts cliché about teaching you how to think is actually shorthand for a much deeper, more serious idea: learning how to think really means learning how to exercise some control over how and what you think. It means being conscious and aware enough to choose what you pay attention to and to choose how you construct meaning from experience. Because if you cannot exercise this kind of choice in adult life, you will be totally hosed. Think of the old cliché about 'the mind being an excellent servant but a terrible master.'"

I second this nomination. This is one of the most insightful essays I've ever read. Wallace's central points of consciously choosing A) what you give your attention to and B) how you construct meaning from experience REALLY DO lie at the heart of learning how to think and live a meaningful life.

("Water" is the noise in life that surrounds us every day which we've learned to ignore. But it's still the lens through which an untrained eye sees everything, that shapes who we are and how we live, since most of us don't act deliberately so much as react to life. Being oblivious to water is about remaining clueless to this noise, just going with the flow, unquestioningly and passively allowing your view of the world and your role in life to be shaped by mostly meaningless turbulence.)


Agree. The gifs are so distracting, I could barely get through it.


Or the 47k no-fly number is just a lie.


It's pretty easy to check, but I'm guessing it's just far easier to get yourself on the watch list.


Must be really annoying when your terrorist cousin comes over and uses your wifi on the holidays.


47k vs 1.9M, meaning it's apparently about 40x easier to get on the watch list than the no-fly list.


Probably the model weight files


Yes, if the model files are deleted then the software attempting to scan files would obviously have to error out.


ONNX has really poor support from Microsoft. I suspect they've basically abandoned it internally (their onnxjs variant is orders of magnitude slower than tfjs[1]). It's a good 'neutral standard' for the moment, but we should all probably move away from it in the long term.

[1] https://github.com/microsoft/onnxjs/issues/304


ONNX != onnxjs

ONNX is a representation format for ML models (mostly neural networks). onnxjs is just a browser runtime for ONNX models. While it may be true that onnxjs is neglected, please note that the 'main' runtime, onnxruntime, is under heavy active development[1].

Moreover, Microsoft is not the sole steward of the ONNX ecosystem. They are one of many contributors, alongside companies like Facebook, Amazon, Nvidia, and many others [2].

I don't think ONNX is going away anytime soon. Not so sure about the TF ecosystem though.

[1] https://github.com/microsoft/onnxruntime/releases

[2] https://onnx.ai/about.html


I've tried inference with the Python version of onnx, and it usually varies from hitting an OOM limit (where TF works fine) to being an order of magnitude slower. Even if the codebase is still being changed, I don't see much reason for people to use it other than as a convenient distribution format.


Interesting, I did not encounter such discrepancies in my work with these tools.

There could be multiple reasons for the degraded performance:

- Are we comparing apples to apples here (heh), e.g. ResNet-50 vs ResNet-50?

- Was the ONNX model ported from TF? There are known issues with that path (https://onnxruntime.ai/docs/how-to/tune-performance.html#my-...)

- Have you tried tuning an execution provider for your specific target platform? (https://onnxruntime.ai/docs/reference/execution-providers/#s...)


Or for criminals to generate perceptually similar illegal images that are no longer triggered as a 'bad' hash.


Really disgusting idea: I wonder if it's possible for someone to use this as a 'discriminator' in a GAN to configure a generator to recreate the CP this is trying to avoid distributing in the first place.


Not really; there's not enough information in the NeuralHashes. You'd get pictures like this,[0] (from [1]) instead.

[0]: https://user-images.githubusercontent.com/1328/129860810-f41...

[1]: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue...


That is assuming that adding plausibility constraints wouldn't fix this issue. I don't know if this is feasible though.


That's not really correct. That's a forcibly created colliding image, not the output of the NeuralHash. Also, as reported elsewhere, it's absolutely possible to do so.


No, it’s not possible.

If you think there is a credible mechanism, please link to it.


It might have to do with the output possibly being a probability vector rather than a binary hash. The whole thing is thus differentiable and optimizable (if a dog image was incorrectly placed in the 'bad' hash bucket, it might only be on the border of it, while the real CP corresponding to the hash sits at the probabilistic maximum of the bucket). Just guessing.
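To illustrate the general point (this is a generic toy sketch with a random linear "hash" and a sigmoid relaxation, not Apple's actual NeuralHash), if the bucket assignment is differentiable, gradient descent can steer an input into any target bucket:

```python
import numpy as np

# Toy "hash": the sign of a random linear projection of the input.
# The sigmoid relaxation makes bucket membership differentiable.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))          # 16 hash bits from a 64-dim "image"
target = np.sign(rng.standard_normal(16))  # the bucket we want to land in (+/-1)

x = np.zeros(64)                           # start from a blank input
for _ in range(300):
    z = W @ x
    # gradient of the logistic loss sum(log(1 + exp(-target * z))) w.r.t. x
    grad = W.T @ (-target * (1.0 / (1.0 + np.exp(target * z))))
    x -= 0.01 * grad

hash_bits = np.sign(W @ x)
print(bool((hash_bits == target).all()))   # the crafted input hits the target bucket
```

Of course, landing in the right bucket says nothing about the optimized input looking like the original image that defined the bucket, which is the parent's objection.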


That isn’t correct, nor is it credible.

See: https://www.apple.com/child-safety/pdf/Security_Threat_Model...


Where am I supposed to look in that pdf to understand that it isn't correct or credible? It is certainly true that the model has differentiable and thus optimizable outputs.


Sorry - I posted that link in the wrong place.

Either way, if the claim is that it’s possible to reverse engineer CSAM from the hashes, proof is needed, and nobody has provided even a proof of concept.

The person I responded to was claiming it had been demonstrated. I asked for a link to evidence. You just made a hypothesis about how it might work. That’s not helpful.


Are you trying to stop abuse of children, or enforce a standard that the idea of images of children is bad?

If you’re actually trying to stop abuse, having the computer create fake CP seems like an ideal outcome, since it would avoid the need for abuse of children.

Flooding the market with fakes and then directing consumers of the fakes to whatever mental health resources are available seems like it would fit the claimed problem far better than what apple is currently trying.


This would be bizarre - wouldn't this mean that Apple are essentially shipping illegal images with their OS? (Subject to some as yet unknown decoder)


With the right algorithm you can turn any given string of bits into any other given string of bits. So is the image in the data, or is it really in the algorithm?
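A minimal sketch of that point in Python: with a stored XOR key, any byte string of the right length "contains" any other, which is exactly why "where does the image live" gets murky.

```python
# Any byte string can be mapped to any other of equal length by XOR with
# a stored key. So which artifact "contains" the second string: the
# innocuous input, or the key/algorithm?
a = b"innocent picture bytes"
b = b"some other bytes here."
assert len(a) == len(b)

key = bytes(x ^ y for x, y in zip(a, b))        # key = a XOR b
decoded = bytes(x ^ k for x, k in zip(a, key))  # a XOR key = b

print(decoded == b)  # → True
```

Here the "key" carries all the content of the second string, so a court would presumably treat the key, not the innocuous input, as the illegal artifact.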


If the decoder was "trained" on and only works with predictable data, then it might be the algorithm that's illegal, but if a completely new illegal image is created, hashed, fed into the decoder and the decoder produces a valid illegal image, then the illegal data must be in the input, not the algorithm.

This is basically rule 1 of testing neural networks: if the testing data is different from the training data and the results are still correct, your network is "reading" the data correctly and not just memorising a list of known values. I guess this means you'd also need to prove that the decoder doesn't turn most hashes of non-illegal images into illegal images, but if you also did that, you'd have a pretty strong case that the illegal data is in the hash.


Did Apple use the bad images to train the neural network? If yes, I suppose that makes this possibility more realistic.


> Did Apple use the bad images to train the neural network?

NCMEC did, certainly, but I don't think Apple ever got the actual images themselves; just the resultant hashes.


Makes me wonder if there's a possibility of e.g. faces on fbi's most wanted being snuck into the dataset somewhere in the chain.


> if there's a possibility of e.g. faces on fbi's most wanted being snuck into the dataset

Sure, it's possible, but that doesn't seem to have happened in the past decade of PhotoDNA scanning cloud photos to match hashes provided by NCMEC - why would it suddenly start happening now?


> Sure, it's possible, but that doesn't seem to have happened in the past decade of PhotoDNA scanning cloud photos to match hashes provided by NCMEC

If it's happened, it's unlikely the public would know about it.


You really don't understand the difference in scale, distributed-sensor-network-wise, between the two capabilities, do you?

Server-centric is the primitive that gives you periodic batch scanning. Client-resident lets you build up a real-time detection network.

Also, as they say in the financial world: past performance is not indicative of future results. No one would have thought to do it because this step hadn't been taken yet. Now that it has, it's an easier prospect to sell. This is how the slippery slope works.


> periodic batch [] real-time detection network

What's the realistic difference here between "my phone scans the photo on upload to iCloud Photos" and "iCloud Photos scans the photo when it's uploaded"?

Latency of upload doesn't come into play here because the scan results are part of the uploaded photo metadata; they're not submitted distinctly according to Apple's technical description.

(And given the threshold needed before you can decrypt any of the tagged photos with the client side system, the server side scanning would be much more "real-time" in this case, no?)


This might totally work, and it's kind of impressive if it does. I'm still biased towards extreme skepticism about all of this, since the trustworthiness of demos like this is thoroughly corrupted at this point by cherry-picking and other deceptive tricks.


If you got an invite for GPT-3, give it a shot. I discounted it at first, but then I gave it a few tries and was actually a bit creeped out. Even though it is "randomly" making things up as it goes, it shows what seems like intelligence, just from the sheer amount of data it was trained on.

One thing I was amazed by: GPT-3 could be a great autocompletion engine for any programming language or configuration schema. Things like a GRUB configuration file or an xkb file could be intuitively completed by GPT-3. And even more: GPT-3 could build basic "concepts" and apply them to that domain knowledge. This seems to emerge naturally rather than being something pre-planned by OpenAI. After all, I don't think OpenAI planned for GPT-3 to understand xkb keyboard layouts.


Keep in mind that it's somewhere between "random" and "intelligent". It's more or less very complicated fuzzy pattern matching.

I do like the idea of generating configuration files, at least as a starting point for users in applications with big complicated configuration set ups. As with all things fuzzy, the output probably won't be perfect, but it might help users save time in getting set up.


Very complicated multilayer fuzzy pattern matching.

If you look at it that way, it's not dissimilar to the brain.


Not really: the brain can verify the correctness of the pattern matching and use that to infer other possibly correct patterns. Also, this model can't really infer intentionality, or discern between variants and weigh pros and cons. That being said, I think we're not far from AGI; we just need a few more pieces.


It can sort of discern between variants. And intentionality and pros/cons are just another kind of pattern. What it cannot do is any kind of recursive, reflective reasoning (except by unrolling).


If only we knew how the brain worked…


It's the same with GPT-3, though. All demos show only where it works well. Only when you try it yourself do you get to explore the many areas where it fails.


The comment you're replying to literally says the opposite thing.


You make up things randomly as you go. You never thought ahead of any thought. Every thought you’ve ever had is essentially a procedurally generated prompt based on your biased models.


> You make up things randomly as you go. You never thought ahead of any thought.

I don't think that's true.

Usually you need to think a lot about something before coming up with "the right" thought(s).

> Every thought you’ve ever had is essentially a procedurally generated prompt based on your biased models.

That may be. But the interesting part is that those models change as you use them just by using them.


> I don't think that's true.

How long did you have to think to produce that thought? Or did it just pop into your head instantly?

The point is, you cannot think of an upcoming thought, before you have it in your head. Otherwise you would be seeing into the future.

What you are talking about in your comment is reaching a conclusion based on previous thoughts. Yes, often we link our thoughts together into a narrative or a conclusion after we've had the thoughts, but the thoughts themselves? Those seem to come out of nowhere.


The mind does a lot of unconscious work before coming up with some conscious results.

That's true even for very simple things like motion. You can measure things in the brain before they become conscious thoughts. (Those experiments, by the way, caused a lot of fuss about whether we have free will or are completely predestined in all we do; but that's another topic.)

The consciousness only observes a small portion of the thought process. So, for it, a lot of thoughts seem to come out of nowhere. But the unconscious parts of thinking are very important to the whole process and its outcomes. I think nobody disputes this by now.


I love The Darkness that Comes Before, where this observation is explored and exploited, in case you have not read it.


I have used GPT-3, and it works most of the time. But it fails some of the time too, and that's the problem for use cases like programming or generating config files. Because if you can't trust the output 100%, you end up reading the output every time.

So the only time saved is that GPT-3 makes you type less.

In any case, I don't type much anyway nowadays. It's mostly copy-pasting from Stack Overflow, updating parameters, etc.

GPT-3 will be useful, maybe a year from now.


I can't help but think of this scene in Westworld (spoiler S1) whenever GPT 3 (or earlier text prediction models) and this topic come up together: https://www.youtube.com/watch?v=ZnxJRYit44k


That's a pretty bold model for human cognition. It's not something you can just assume.


Sure, but we can think behind our thoughts. And we can sound things out before we say them. And we have mutable long-term memory.

There's not much between us and GPT, but there is some distance still.


The skepticism is warranted for any bleeding-edge technology. I wonder if there's another version of the Turing test, where a technology can be considered sufficiently advanced when it's indistinguishable from a fake version you've seen in sci-fi. E.g., the Boston Dynamics dancing robot video (https://www.youtube.com/watch?v=fn3KWM1kuAw) still looks fake to me because it's at the level I would expect from Hollywood CGI rather than a real tech demo. If I saw the video anywhere but on the BD page, I would have enjoyed it and forgotten about it, since it's an average CGI video.


I genuinely don't understand your position. Are you saying a tech demo is only impressive if it can do things that can't be simulated? What can't be shown via simulation or CGI with enough time and money today? If we're limiting ourselves to video there's no interactive component.

Even though that dancing video likely had hundreds of takes, the part that makes it impressive is that it's real. I swear I'm not trying to be disagreeable here - I honestly don't understand your perspective.


I think what the author is trying to say is that if a technology is sufficiently advanced it seems like it can’t be real, meaning it’s something only possible with CGI. So we see these dancing robots, think “just more CGI”, then are astounded when we find out it’s real


Exactly. CGI is just movie magic. And now some real world tech demos are sufficiently advanced to be indistinguishable from CGI/magic.


Hrm. Is uncanny locomotion to modern robotics what uncanny valley is to CGI?

Fun to ponder.


I had to try a few times to get the prompt right, but that's the limit of the cherrypicking. You're correct that it doesn't work nearly as well on more complex, less temporally stable sites like Reddit.


I don't buy this perspective. I think most people are sick of it (and would live healthier lives if they could focus more attention on their immediate surroundings), but it keeps being refueled by profit-desperate news corporations.


I think they are sick of it, but not because the grandparent post is wrong. They're sick of it because the emergency is "supposed" to be something you can overcome, feel triumphant about, and move on to the next thing.

But the emergencies in the media don't work that way. It doesn't matter how much you recycle, the media will keep screaming about catastrophe. It doesn't matter how many solar cells adorn your roof, the media will keep screaming. It doesn't matter how careful you are with your children, the media will keep screaming about child abductions. It doesn't matter if you pour hundreds of hours into organizing your neighborhood and fighting local crime, the media will keep screaming about crime.

It's the nature of this particular beast.

Consequently, you're never able to escape the "crisis" atmosphere, and stay stressed full time. That's not normal, especially in the absence of real, proximal crisis of that degree.


Yeah the trick is to turn off and tune out the media altogether. You really don't need to be hearing about everything that happens in the world and a non-stop partisan narrative interpretation of it. It's really not good for you, or anyone else. Turn it off. Go outside, go hike, go fish, play video games, whatever, just turn off the news. Remove the news feed from your phone. Add extensions that block news feeds on social media. Delete social media apps with feeds. Cancel your cable subscription. Go outside, enjoy your short life.


What I gather from your comment is that the media thrusts big societal issues (sustainability, climate change, child abuse, crime) upon the individual. When the individual tries to address these issues with their own actions or purchases, it has zero impact on the reporting of the crisis.

I can conclude that the only sane thing for an individual to do is not to counter these issues with individual actions, but instead to organize for a societal response strong enough to change the media narratives, or to switch off the media completely and live life as best one can.


> switch off the media completely and live life as best as one can

This is what I do. And I am confident that no matter what choices I make individually, they have no impact whatsoever on a global scale.


That's partly true, but it's also true that young people look for adventure and meaning. They've been fed the story that they can delay starting a family until well into their 30s. Dating apps don't help. They're overeducated and underemployed.

So you have young people in their 20s with no sense of purpose. Why not LARP on twitter and pretend you're saving the world by trying to ruin the lives of people you don't agree with?


> So you have young people in their 20s with no sense of purpose. Why not LARP on twitter and pretend you're saving the world by trying to ruin the lives of people you don't agree with?

I’m not sure if you’re suggesting this, but I think it’s more about attention and feelings of social acceptance, than it is about purpose.


Maybe. The reason I think it's more about purpose than attention and social acceptance is that Twitter is probably the platform with the most vocal social activism, and it's relatively pseudonymous. So your Twitter profile doesn't really carry any weight in real-life social interactions, which makes it less about signaling than something like Instagram. And Twitter is pretty ephemeral as well, so attention doesn't stay. I've seen someone with 15 followers dunk on some popular figure on Twitter, get mentioned in the NY Times (which is pretty weird), and gain maybe 5 or 10 followers. I'm not even exaggerating. So it's not even 15 minutes of fame. It's nothing.

People find purpose in dunking on others and they do it almost as a job. They even give up other social obligations and post regularly between certain hours.


People like this are easy to manipulate in the extreme. More than anything, I fear we're in the midst of a pandemic of sociopaths practicing their art. There's no end in sight either.


If they were sick of it, they wouldn’t lap it up. People like drama. If they didn’t we wouldn’t have reality TV


> If they were sick of it, they wouldn’t lap it up

The evidence does not support that.

People will often ignore their own plights (which they are tired of and feel trapped or impotent about) to engage in remote problems with a feeling of authority and power. Voting for a federal office is the ultimate diversion. Rather than argue and campaign for local changes, where they understand exactly what impact they can make (or not make), they invest in a far-away problem that they feel has a different performance profile.

People are sick of some crisis mongering, they just like a change of pace.


Yeah, it reminds me of the anecdotal phenomenon I’ve observed, that the kind of people to say “god, i just haaate getting involved in drama in general” are the ones to be most likely to stir up that drama in the first place.


Hmm, I think there's a difference between heavily fictionalized external personal drama and the diffuse, anxiety-provoking miasma of social media and mainstream news headlines feeds.

Even if there isn't a huge difference, I feel like the relationship is more like that of an alcoholic or other addict. Do addicts really "like" their drug? Surely most of them know at some level that it's really unhealthy, and there's diminishing pleasurable returns even in the short term, but they still crave it as a release from their short term anxieties and problems.


I agree. How much fear-mongering content is there that's not-for-profit, non-state, and non-clout-chasing? It does exist, but the scale is small.


Honestly, I think there is quite a lot of non-fearmongering media out there. The Economist strikes me as a good example. And in general, I think if you read any "respectable" newspaper (Economist, NYT, WSJ, WaPo, Le Monde, etc.) as a whole on a day-to-day basis, you wouldn't come away from the experience especially scared or fearful. The problem is that it all gets dumped into the social media slurry, and what a lot of people consume (in practice) is a sort of curated selection of the most sensationalist individual stories across the entire media ecosystem.


The Economist does a fair bit of fearmongering when it comes to China and inflation.


I think one dimension of crisis mongering is how drastic the stakes are portrayed to be. It's one thing to discuss discernible trends or potential problems, and another to portray every problem as a struggle between good and evil on which the balance of the universe hangs. The Economist tends to stick to the former end of the spectrum in my experience.


That’s fair


If most people were truly sick of it, there would be no market for news corporations to exploit.

People are also “allowed” to focus their attention on whatever they want. It seems like you’re saying we are powerless to ignore the news, although I am curious if you mean something else.


People are "allowed" to ignore drugs and nicotine too, but it's pretty hard, and culture and laws help constrain the minimum willpower you need in order to live a decent life. I'm not necessarily saying drug and news addiction are equally strong or destructive, but there should probably be some "let's act in a way that's good for the public" ideas floating around, rather than pure "the free hand means it's moral" motivated greed.

