For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be categorized as violative child sexual abuse materials (CSAM) in the US.
It’s really the least of the issues in this case, but I despise how these things talk like a human, saying it feels sorry for the CSAM it made. It doesn’t feel anything; it’s not a sentient being. Stop making it speak as though those statements mean anything.
It’s honestly the most pathetic thing that people buy into this yes-man buddy bullshit that LLM chatbots are programmed to use.
If I were ever to use some kind of AI research assistant, I’d want it to deliver information and information only. I hate this chummy bullshit where it sucks up to me with compliments while delivering some made-up shit that merely sounds like a plausible response to what I said or asked for.
All I want is a voice-activated search engine, if that. I already didn’t use voice-to-text for searches before all this. But they can’t even just do that; instead they’ve made their previously strong search functionality worse because of this faux digital friend bullshit.
We needed the Star Trek Computer but we got the sycophantic talking doors from The Hitchhiker’s Guide to the Galaxy.
Roddenberry vs. Adams
One painted a sci-fi utopia. The other showed us ourselves.