I mean, there might be a secret AI technology so advanced that it can mimic a real human, make posts and comments that look like they're written by a human, and even intentionally make speling mistakes to simulate human errors. How do we know that such an AI hasn't already infiltrated the internet and that everything you see is posted by this AI? If such an AI actually exists, it's probably so advanced that it almost never fails, barring rare situations where there is an unexpected errrrrrrrrrorrrrrrrrrrr…
[Error: The program “Human_Simulation_AI” is unresponsive]
Ah, the dead internet “theory”? Ultimately, it doesn’t matter.
Let’s pretend that you’re the last human on the internet, and everyone else (including me) is a bot. This means that at least some bots pass the Turing test with flying colours; they’re indistinguishable from human beings and do the exact same sort of smart and dumb shit that humans do. Is there any real difference between “this is a human being, I’ll treat them as such” vs. “this is a bot, but it behaves like a human being, so I need to treat it as a human being”?
The Turing test isn’t any good at discerning a human from a bot, since many real people wouldn’t pass it.
We can simply treat those “real people” as bots, problem solved. :-)
But seriously now: the point is that if it quacks like a duck and walks like a duck, then you treat it like a duck. Or in this case, like a human.
I was making the observation that the Turing test is too flawed a tool to be reliable. If you want to find out who is who, you need something better, more like the Voight-Kampff…
Sure. The specific test doesn’t matter that much, contextually speaking; what matters is that you have some way to distinguish humans from bots, and yet the internet would still be filled with bots that pass as humans.
I guess the RL equivalent of the Voight-Kampff would be trolling? We have no access to respiration or heart rate across the internet (and if we had, it could be counterfeited), but humans would react differently to being trolled than bots would. Unless the bots are so advanced that they react to trolling the same way we do, and respond with angry words.
Interesting. I hadn’t thought about it, but pushing the right buttons and observing the reactions might indeed be a good foundation for some “humanity” test.
You may be onto something, man. Good job! 👏
Well, it would definitely matter at least for practical purposes, like if you wanted to meet up with somebody.
This is a good answer, because it prevents the dehumanization trap that these theories fall into:
Basically, the belief that some beings don’t have “souls” and therefore don’t have to be treated with any moral consideration.
The “we are in a simulation” conspiracy fans toy with an idea of NPCs that is horrifying: that some humans are just acting like humans very convincingly, but are really just thin shells that don’t feel pain or happiness. Whatever you do to them can’t be morally wrong.
It is also similar to how some religions hold that people can have their souls taken by Satan, becoming mere vessels of demonic possession here to corrupt us. They behave very much like humans, but do not be tricked!
Europeans used to think Africans had no souls; that they were just animals that were very good at imitating human behavior.
These ideas are all extremely powerful tools for any fascist movement needing some vague excuse to commit atrocities against their opponents and scapegoats.