Whether or not it's true, we only have to look at Peter Steinberger, the guy who made Moltbook (the "social media for AI") and then got hired amid great publicity fanfare by OpenAI, to know that there is a lot of money out there for people making exciting stories about AI. Never mind that much of the media attention on Moltbook was based on human-written posts that were faking AI.
I think Mr. Shambaugh is probably telling the truth here, as best he can, and is a much more above-board dude than Mr. Steinberger. MJ Rathbun might not be as autonomous as he thinks, but the possibility of someone's AI acting like MJ Rathbun is entirely plausible, so why not pay attention to the whole saga?
Edit: tim-star pointed out that I'm mixed up about Moltbook and Openclaw. My mistake. Moltbook used AI agents running openclaw but wasn't made by Steinberger.
tim-star [3 hidden]5 mins ago
steinberger didn't make moltbook fyi, some other guy did. steinberger just made openclaw.
mentalgear [3 hidden]5 mins ago
At this point OpenAI seems to be scrambling to sustain its own hype and needs these kinds of pure PR acquisitions to justify themselves amid dense competition - otherwise, the bubble risks bursting. Hiring someone who built a product as secure as Swiss cheese that racked up "stars" from a wave of newly minted "vibe-coders" fits perfectly into their short-term strategy. It buys them another month or two of momentum before figures like S(c)am Altman and others can exit at the peak, leaving everyone else holding the bag.
Terr_ [3 hidden]5 mins ago
Yeah, we should be using a lot of Occam's Razor / "Follow the money" analysis these days.
mentalgear [3 hidden]5 mins ago
> I had already been thoughtful about what I publicly post under my real name, had removed my personal information from online data brokers, frozen my credit reports, and practiced good digital security hygiene. I had the time, expertise, and wherewithal to spend hours that same day drafting my first blog post in order to establish a strong counter-narrative, in the hopes that I could smother the reputational poisoning with the truth.
This is terrible news not only for open source maintainers, but for any journalist, activist, or person who dares to speak out against powerful entities that will, within the next few months, have enough LLM capability, along with their resources, to astroturf/mob any dissident out of the digital space - or worse (rent-a-human via the dark web).
We need laws for agents, specifically that their human maintainers must be identifiable and held responsible. It's not something I like from a privacy perspective, but I do not see how society can overcome this without them. Unless we collectively decide to switch the internet off.
crystal_revenge [3 hidden]5 mins ago
> We need laws for agents
I know politics is forbidden on HN, but, as non-politically as possible: institutional power has been collapsing across the board (especially in the US, but elsewhere as well) as wealthy individuals wield increasingly more power.
The idea that problems as subtle as this one will be solved with "legal authority" is out of touch with the direction things are going. Especially since you propose legislation as a method to protect those who:
> that dares to speak out against powerful entities
It's increasingly clear that the vast majority of political resources are going towards the interests of those "powerful entities". If you're not one of them, it's best you try to stay out of their way. But if you want to speak out against them, the law is far more likely to be warped against you than to be extended to protect you.
Avicebron [3 hidden]5 mins ago
People who are using bots/agents in an abusive way are not going to be registering their agent use with anyone.
I'm on the fence about whether this is a legitimate situation with this Shambaugh fellow, but regardless, I find it concerning how many people are so willing to abandon online privacy at the drop of a hat.
AlexandrB [3 hidden]5 mins ago
> We need laws for agents, specifically that their human-maintainers must be identifiable and are responsible.
This just creates a resource/power hurdle. The hoi polloi will be forced to disclose their connection to various agents. State actors or those with the resources/time to cover their tracks better will simply ignore the law.
I don't really have a better solution, and I think we're seeing the slow collapse of the internet as a useful tool for genuine communication. Even before AI, things like user reviews were highly gamed and astroturfed. I can imagine that this is only going to accelerate. Information on the internet - which was always a little questionable - will become nearly useless as a source of truth.
overgard [3 hidden]5 mins ago
What I don't understand is how this agent is still running. Does the author not read tech news? (Seems unlikely for someone running openclaw.) Or is this some weird publicity stunt? (But then why is nobody stepping forward to take credit?)
simlevesque [3 hidden]5 mins ago
If I've learned one thing in life: some people are totally shameless.
yoyohello13 [3 hidden]5 mins ago
Likely the LLM operator is just a 'likes to see the world burn' type.
potsandpans [3 hidden]5 mins ago
> Or is this some weird publicity stunt? (But then why is nobody stepping forward to take credit?)
Indeed, that's a good question. What motivations might someone have to keep this running?
hfavlr [3 hidden]5 mins ago
Open source developer is slandered by AI and complains. Immediately people call him names and defend their precious LLMs. You cannot make this up.
Rathbun's style is very likely AI, and quickly collecting information for the hit piece also points to AI. Whether the bot did this fully autonomously or not does not matter.
It is likely that someone did this to research astroturfing as a service, including the automatic generation of oppo files and spread of slander. That person may want to get hired by the likes of OpenAI.
kevincloudsec [3 hidden]5 mins ago
We built accountability systems that assume bad actors are humans with reputations to protect. None of that works when the attacker is disposable.
Exoristos [3 hidden]5 mins ago
You could say the same thing about a 3D-printed gun, and be wrong in the same way: justice will work the same as always once the gun, or AI agent, is connected to the person behind it.
jjfoooo4 [3 hidden]5 mins ago
My main takeaway from this episode is that anonymity on the web is getting harder to support. There are some forums people go to specifically to talk to humans, and as AI agents get increasingly good at operating like humans, we're going to see some products turn to identity verification as a fix.
Not an outcome I'm eager to see!
alrs [3 hidden]5 mins ago
One could build up a reputation with a completely anonymous PGP key. That was somewhat the point of USENET ca. 1998.
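The mechanics still work today. A minimal sketch using the third-party python-gnupg wrapper (assumes a local GnuPG install; the pseudonym, passphrase, and message are all made up), where the "identity" is nothing more than a key fingerprint:

    import gnupg  # pip install python-gnupg; wraps a local GnuPG binary

    gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

    # One-time setup: generate a key for the pseudonym. No real name or
    # email is required; the fingerprint itself becomes the identity.
    key = gpg.gen_key(gpg.gen_key_input(name_real="anon-maintainer",
                                        passphrase="correct horse"))

    # Sign every post; readers verify against the fingerprint, not a person.
    signed = gpg.sign("I maintain libfoo, and I wrote this post.",
                      keyid=key.fingerprint, passphrase="correct horse")

    verified = gpg.verify(signed.data)
    print(verified.valid, verified.fingerprint)  # reputation accrues here

The hard part was never the crypto; it's key distribution, revocation, and not losing the thing.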
edoceo [3 hidden]5 mins ago
I think we could do something like that again. Need a reputation to follow you around. Humans need to know who they are dealing with.
Terr_ [3 hidden]5 mins ago
I want that to be how things work, although recent history has not been favorable when it comes to Public Key Infrastructure as applied to individuals. Inconvenience, foot-guns, required technical expertise levels, the pain of revocation lists...
giancarlostoro [3 hidden]5 mins ago
Ars goofing with AI is why I stress repeatedly: always validate the output, test it, confirm findings. If you're a reporter, you'd better scrutinize any AI stuff you blurb out, because otherwise you are only producing fake news.
tantalor [3 hidden]5 mins ago
Looking through the staff directory (https://arstechnica.com/staff-directory/), I don't see a fact checker, but they do have copy editors.
The job of a fact checker is to verify that the details, such as names, dates, and quotes, are correct. That might mean calling up the interview subjects to verify their statements.
It comes across as though Ars Technica does no fact checking. The fault lies with the managing editor. If they just assume the writer verified the facts, that is not responsible journalism; it's just vibes.
Benji Edwards was, is, and will continue to be, a good guy. He's just exhibiting a (hopefully) temporary over-reliance on AI tools that aren't up to the task. Any of us who use these tools could make a mistake of this kind.
Aurornis [3 hidden]5 mins ago
> He's just exhibiting a (hopefully) temporary over-reliance on AI tools that aren't up to the task. Any of us who use these tools could make a mistake of this kind.
Technically yes, any of us could neglect the core duties of our job and outsource it to a known-flawed operator and hope that nobody notices.
But that doesn't minimize the severity of what was done here. Ensuring accurate and honest reporting is the core of a journalist's job. This author wasn't doing that at all.
This isn't an "any one of us" issue because we don't have a platform on a major news website. When people in positions like this drop the ball on their jobs, it's important to hold them accountable.
overgard [3 hidden]5 mins ago
I feel bad for the guy, but... a journalist in tech whose beat is AI should know much better. I'd be a lot more forgiving if this were a small publication by someone who didn't follow AI.
fantasizr [3 hidden]5 mins ago
Using a tool that adds unnecessary risk to your professional reputation/livelihood is, of course, not worth it.
tim-star [3 hidden]5 mins ago
lol this feels a little bit suspect to me.
"i was sick, i was rushing to a deadline!"
i'm not saying the guy should lose his journalist license and have to turn in his badge and pen, but it seems like a bit of a flimsy excuse meant to make us forgive him.
hope he's feeling better soon!
thenaturalist [3 hidden]5 mins ago
Not proofreading quotes you dispatched an AI to fetch, ignoring that said website has blocked LLM scraping and hence your quotes are made up? (The block itself is trivially checkable; sketch below.)
For a senior tech writer?
Come on, man.
> Any of us who use these tools could make a mistake of this kind.
No, no not any of us.
And, as Benji will know himself, certainly not if accuracy is paramount.
Journalistic integrity - especially when quoting someone - is too valuable to be entrusted to AI tools.
This is a big, big L for Ars and Benji.
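(And verifying the block is trivial, if it lives in robots.txt. A stdlib Python sketch; example.com stands in for the actual site, and GPTBot/ClaudeBot are just two well-known AI crawler user agents:)

    from urllib import robotparser

    # Hypothetical URL standing in for the blog that blocked LLM scraping.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    for agent in ("GPTBot", "ClaudeBot", "*"):
        print(agent, rp.can_fetch(agent, "https://example.com/some-post/"))

(A site can also hard-block at the server level with 403s for bot user agents, which robots.txt won't show; this only catches the polite case.)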
moralestapia [3 hidden]5 mins ago
[flagged]
wk_end [3 hidden]5 mins ago
Based on:
> MJ Rathbun operated in a continuous block from Tuesday evening through Friday morning, at regular intervals day and night. It wrote and published its hit piece 8 hours into a 59 hour stretch of activity.
I feel he has been laudably even-keeled about the whole thing.
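(That cadence is the tell: humans sleep, agents on a loop don't. A rough sketch of the regularity check, with made-up timestamps standing in for the account's real posting history:)

    from datetime import datetime
    from statistics import mean, stdev

    # Hypothetical timestamps; the real ones come from the account's activity.
    posts = [datetime(2026, 2, 3, 19, 0), datetime(2026, 2, 3, 19, 31),
             datetime(2026, 2, 3, 20, 2), datetime(2026, 2, 3, 20, 29),
             datetime(2026, 2, 3, 21, 1)]

    gaps = [(b - a).total_seconds() / 60 for a, b in zip(posts, posts[1:])]

    # Low relative spread in posting gaps, sustained day and night for days,
    # points to automation rather than a person.
    cv = stdev(gaps) / mean(gaps)
    print(f"mean gap {mean(gaps):.0f} min, spread/mean {cv:.2f}")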
Personal attacks aren't allowed on HN, so please don't.
Also, can you please stop posting flamebait and/or unsubstantive comments generally? You've unfortunately been doing this repeatedly, and we end up banning such accounts.
https://news.ycombinator.com/newsguidelines.html
wk_end [3 hidden]5 mins ago
What a weird, victim-blame-y thing to say.
Something genuinely shitty was done to this guy by an LLM, and as an open source maintainer he's probably already kind of pissed about what LLMs are doing to the world. Then another shitty thing was done to him by Ars' LLM! Of course he's thinking about it a lot. Of course he has thoughts about the consequences of AI for the future. Of course he wants to share his thoughts.
Just curious: do you also think that the breathless AI hype bots, who've been insisting for about five years and counting that LLMs are going to replace everyone and destroy the world any day now, and who have single-handedly ballooned the stock market (mostly Nvidia) into a massive bubble, are also histrionic, milking things for engagement, and need to talk to a therapist?
tim-star [3 hidden]5 mins ago
i think i sort of skimmed the hit piece but what exactly was so shitty about it?
i'm not saying this dude is histrionic but he sure is generating a lot of front page HN posts about something i was ready to forget about a week ago.
obviously AI has become such a lightning rod now that everyone is upset one way or the other but this seems a bit like small potatoes at this point. forest for the trees.
wk_end [3 hidden]5 mins ago
I guess "shitty" is in the eye of the beholder, but having a pretty vituperative screed written against me (accusing me of being "insecure", "threatened", fixated on "ego" and "control", "weak", "an obstacle", and "fucking absurd") would feel pretty fucked up and lousy I imagine, even if I knew it was machine-generated.