Grok the Pedo-Machine

As you might have heard, X's very own chatbot, which was given the power to post and alter pictures, is being used to create indecent pictures of children. This seems to stem from the fact that Grok, the chatbot, can manipulate uploaded pictures and do such things as replacing clothes with bikinis, without much of a check on who is depicted. And now users on X are using Grok to generate what probably falls into child porn territory.

In a first move, someone at X or xAI seems to have disabled Grok's media tab, hiding the flood of pictures from users looking for them via that route; you can still see them via other frontends or via Grok's mentions, so this is a fig leaf at best. Of course, this does not stop Grok from being used to create such pictures at the moment, and it is unclear if any further action will be taken, as the user base on X seems to like the ability to put anyone they want into bikinis and other 'sexy' clothes, be they adults, toasters, or children. There are some tweets via the Grok account from (probably) actual xAI people responding to the issue (like one from 1.1. https://x.com/grok/status/2006570735481831688), but for all their writing about how they are refining the guardrails and filters, Grok still generated indecent pictures of minors for at least two more days after they were made aware of the issue.

At the time of writing this post, Grok is still online, still creates indecent pictures without the original image owner's consent, and seems to still be capable of doing it for pictures with children in them.

The apology that never was

Some news outlets have already reported on the whole situation, and an eerie number of them have included an apology statement created by someone using Grok. Just to drive this point home: that apology was created by a user, using Grok; it is not a press statement from someone at the company, it is not the heartfelt apology of the man in the machine, it is generated text stemming from a user's prompt to the chatbot. Grok is an LLM, you can use it to create any kind of text; even the 'admission' post you can find on the web, which mentions ages and US laws, is simply a text created using Grok and not anything like an actual admission.

There is something odd here: too many journalists seem to believe Grok may have some kind of personhood, and that belief is reflected in the reporting. @ketanjoshi.co over at Bluesky has gathered a few headlines and articles that illustrate this tendency to see Grok as something more than a chatbot (see them here https://bsky.app/profile/ketanjoshi.co/post/3mbhoravdfc2s).
These articles make it sound like there is a little person writing the posts that are published via Grok, when in reality the text is simply output generated by the LLM behind Grok in response to some user's request.

When The Guardian emailed xAI about this whole situation, they got the answer "Legacy Media Lies". It seems that the responses that do come from actual humans at xAI arrive via tweets on the Grok account, barely address the issue, and at best waffle about refining safeguards with not much to show for it. It feels frankly insane that anyone takes this company seriously and not for the shithouse it is.

Who to blame?

I am not a lawyer, so I can't say if this breaks any laws, but it feels deeply not okay for a company to have an easily accessible tool that allows anyone to create and host indecent pictures of minors via its website.

Grok is not a person; it is a product built, maintained, and operated by xAI, which, as far as I know, is also the company that owns X at this point. xAI has a team of people working on AI stuff, and even more notably, a CEO who personally reinstated at least one person who had shared CSAM on the platform before. This sends a signal that X under the current leadership is either pro-pedophilia or at least does not care about child safety enough to make sure its own product cannot produce indecent images of real children.

One might argue that Grok is basically a tool like Photoshop. Adobe is unlikely to be sued because a person created child porn in Photoshop, so why would X be liable for Grok being used that way? Of course, this still leaves the hosting and distribution via X, and that is something the company is very likely liable for. I also think that AI tools hosted by the same company that runs the distribution site are different from other image editors; if Photoshop came with a "put this child in a near-see-through bikini" button, I am sure that would lead to a lawsuit at some point.
Again, I am not a lawyer, but if a company creating and hosting such pictures does not take them down and allows their further creation and distribution using its infrastructure and tools, that feels like something that either is already illegal or should be regulated ASAP.

The main blame for this kind of bullshit, legal questions aside, sits with the people behind Grok. Making an easy-to-use image editor that, on mere text input, will create indecent images of children is the kind of thing you should not be doing.

Fuck Elon and everyone at xAI

I think that this, along with the other times Grok was shown to be an unreliable product, is more than enough of this bullshit. The moment a politician wants to work with xAI, they need to be reminded that they are a clown. A company that straps a faulty product right into public view is not one that should enjoy anyone taking it seriously. The mere idea of seeing xAI as an option for a project should be eradicated from the minds of decision-makers, and any attempt to let that company near people's data should be met with a strongly worded letter that includes the address of the nearest circus.


xAI is clearly not interested in safeguards, as Grok is still online, allowing users to create indecent pictures of minors. The "Chief Twit" himself has shown he has no regard for user safety or any idea of why people might not want children's photos digitally altered and stripped of clothing, being preoccupied with laughing at toasters in bikinis and dining with one of the main guys from the Epstein files.

There is probably a lot of jargon I could use here to try and explain the how and why of the general dynamics at play, but smarter people than me already do that. I think the simple point is this: X is not a platform anyone should use at this point, it has to be regulated, and if it is not able to stop its own product from creating and disseminating indecent images of minors, maybe it should not be kept online.