HN.zip

Be Worried

79 points by theli0nheart - 27 comments
mbgerring 5 mins ago
Before LLMs were mainstream, rationalists and EA types would come on Hacker News to convince people that worrying about how "weak" AI would be used was a waste of time, because the real problem was the risk of "strong" AI.

Those arguments looked incredibly weak and stupid when they were making them, and they look even stupider now.

And this isn't even their biggest error, which, in my opinion, was classifying AI as a bigger existential risk than climate change.

An entire generation of putatively intelligent people lost in their own nightmares, who, through their work, have given birth to chaos.

kqr 5 mins ago
I think the solution is to not aim to go online to "consume content". Instead, go online to learn new techniques and investigate well-reasoned opinions.

Generic "content" is that which fills out the space between the advertisements. That's never been good for you, whether written by humans or matrix multiplication.

highwaylights 5 mins ago
The author needn't regret not publishing this two years ago, it's a thought that had occurred to pretty much everyone long before then. It's just not clear that anything can be done to stop the snowball from gathering speed.
roxolotl 5 mins ago
I think it’s more that there’s no will to do anything about it. As a piece earlier this week pointed out, nothing about tech is genuinely inevitable[0]. There are humans making decisions to keep the snowball gathering speed.

0: https://deviantabstraction.com/2025/09/29/against-the-tech-i...

dabockster 5 mins ago
> nothing about tech is genuinely inevitable

This reminds me of when everyone was saying that "everything on the internet is written in ink" - especially during the height of social media in the 2010s. So imagine my surprise in the first half of the 2020s when tons of content started getting effectively deleted from the internet - either through actual deletion or through things like link rot. Heck, I literally just said "the height of social media" - even that has pulled back.

So yeah, remember that tech ultimately serves people. And it only happens so long as people are willing to enable it to happen.

trhway 5 mins ago
We have had more than half a century of sci-fi literature, for example, describing that future, and over all those decades nobody has come up with even a half-good, plausible, feasible idea of how to deal with it. That suggests stopping that snowball is beyond the capabilities of human intelligence. Personally, I read a lot of sci-fi in my youth and I'm prepared to accept such a fate for myself and for us, even happily working where I can to speed it up (the faster the changes, the faster the human evolution, or at least adaptation, and what could be more exciting than that?).

Current humans can't even deal with the comparatively simple and obvious issue of global warming. It therefore seems very unreasonable to expect any effective handling of significantly more complex issues.

zetanor 5 mins ago
Television and the commercial Internet are optimized to consume as much life as possible so that part of the captured attention can be auctioned to advertisers and propagandists for pennies a minute. Returning to doing the same thing but Certified With No AI™ doesn't substantially reduce the badness of the thing.
fny 5 mins ago
In the long run, the internet will be so riddled with trash that no one will trust it. Instead people will turn to authorities they trust for the truth the same way they did with encyclopedias and local papers. Information provenance will be a massive market.

The return to that world will be very painful and chaotic, however.

fullshark 5 mins ago
Nah, they will just find the content to confirm their bias and not seek truth. This is essentially already the state of affairs for the internet.
jplusequalt 5 mins ago
>Instead people will turn to authorities they trust for the truth the same way they did with encyclopedias and local papers

I think a large portion of the population actively distrust experts.

fny 5 mins ago
Which is why I said "authorities they trust."
drdaeman 5 mins ago
I have an issue with the "inherently superior ... by dopamine output" part. It's the foundation of the whole article/worry, but it's not supported by anything (The Matrix quotes don't count), leaving the whole article hanging on a dubious premise of impending doom that is not shown to exist in reality.
jackphilson 5 mins ago
At the risk of being overly utilitarian here: the real issue is not that people are manipulable, but that they are manipulated into doing the wrong things (consumerism, political divide-and-conquer strategies).

And rejecting manipulation from a deontological stance reduces agency and output for doing good in the real world.

Manipulation = campaigns = advertisements = psyops (all the same thing, different connotations).

randallsquared 5 mins ago
> distracting us from a scarier notion

A more immediate notion, perhaps, but definitely not scarier than human extinction.

LostMyLogin 5 mins ago
It's my opinion that there is going to be a market, from a consumer perspective, for wrangling AI content to help maintain human-to-human knowledge transfer. I just have no idea what that looks like.
api 5 mins ago
> Therefore, increasing proportions of people consuming text online will be unwittingly mind-controlled by LLMs and their handlers.

The "and their handlers" part is the part I find frightening. I would actually be less concerned if the AIs were autonomous.

I'm not really afraid of AI. I'm afraid of humans using AI to mass-manipulate other humans. Humanity is its own greatest adversary.

Bolwin 5 mins ago
Honestly, the biggest way in which LLMs have changed society is the desperate, almost pathetic way every business leader, career influencer, and advice guru insists that they must use AI, that you should "learn" AI, that AI is taking over.

Anyway, in terms of cultural change, I think the emerging image and video models will be a lot more disruptive. Text has been easy to fake for a while now, and it barely gets people's attention anymore.

TrainedMonkey 5 mins ago
I think there is a big difference from other fads here; listing some from memory: SL, Cloud Computing, Web 3.0, NFC, Big Data, Blockchain, 3D printing, IoT, VR, the metaverse, NFTs.

If we plot all of these on a scale of how much each impacted the day-to-day experience of an average user, there is something highly unusual about AI. The slop is everywhere; every single person who interacts with digital media is affected. I don't really know what this means, but it's pretty unusual compared with the other fads.

codr7 5 mins ago
Or stand together and demand that the madness stop, rather than pretending there's nothing to be done about it; that could actually help improve the situation.

The people involved in making these decisions deserve to be locked up for life, and I'm sure they will be eventually.

AstroBen 5 mins ago
The genie's out of the bottle. I think it's better that everyone have access to it, and are fully aware of its capabilities, rather than it being unknown to everyone and under the control of specific entities
jplusequalt 5 mins ago
>The genie's out of the bottle. I think it's better that everyone have access to it, and are fully aware of its capabilities, rather than it being unknown to everyone and under the control of specific entities

The majority of people only have access to proprietary models, whose weights and training are closed source. The prospect of a populace that outsources all its thinking to Google's LLM is horrifying.

jplusequalt 5 mins ago
Technological inevitability is a plague. There was a good article shared on HN about this the other day.
graydot 5 mins ago
Are there any grassroots(?) organizations doing activism in the AI space, like the FSF and ACLU, with local chapters? If not, it might be time for something like that, though with all the money flooding into LLMs (to say nothing of LLMs' manipulative power, if put to that use), we probably don't stand a chance.
joshgree8859 5 mins ago
What new human madness has ever been stopped?
ttctciyf 5 mins ago
At risk of Godwinisation, there's a very obvious example.
ryandrake 5 mins ago
As we are recently seeing, it was only paused temporarily.
MaxfordAndSons 5 mins ago
Wouldn't exactly call that a grassroots effort, though...