- cross-posted to:
- ghazi@lemmy.blahaj.zone
and subsequent update to the headline, which reads like kind of a backpedal from the CEO:
Update: Tinybuild CEO Alex Nichiporchik says a recent talk that indicated the publisher uses AI to monitor staff was “hypothetical.”
Update (07/14/23): In a separate response sent directly to Why Now Gaming, Nichiporchik said the HR portion of his presentation was purely a “hypothetical” and that Tinybuild doesn’t use AI tools to monitor staff.
“The HR part of my presentation was a hypothetical, hence the Black Mirror reference. I could’ve made it more clear for when viewing out of context,” reads the statement. “We do not monitor employees or use AI to identify problematic ones. The presentation explored how AI tools can be used, and some get into creepy territory. I wanted to explore how they can be used for good.”
This is a Pandora's box situation. When the potential for malicious uses of AI outweighs the good on the preponderance of evidence, one has to conclude that it is necessary to ban it for the purpose of monitoring. This has an immense impact on disabled workers, for instance.