HN.zip

Updates to Consumer Terms and Privacy Policy

708 points by porridgeraisin - 483 comments
troad [3 hidden]5 mins ago
You can opt out, but the fact that it's opt-in by default and made to look like a simple T/C update prompt leaves a sour taste in my mouth. The five year retention period seems... excessive. I wonder if they've buried anything else objectionable in the new terms.

It was the kick in the pants I needed to cancel my subscription.

jmward01 [3 hidden]5 mins ago
The 5-year retention is the real kicker. Over the next 5 years I find it doubtful that they won't keep modifying their TOS and re-presenting that opt-out 'option', so that all it takes is one accidental click and they have all your data from the start. Also, what is to stop them from removing the opt-out? Nothing says they have to offer that option. 4 years and 364 days from now: a TOS change with no opt-out and a retention increase to 10 years. By then the privacy decline will have been so huge that nobody will even notice this 'option' was never real.
wzdd [3 hidden]5 mins ago
Everywhere else in Anthropic's interface, yes/no switches show blue when enabled and black when disabled. In the box they're showing for this change, the slider shows grey in both states: visit it in preferences to see the difference! It's not just disappointing but also kind of sad that someone went to the effort to do this.
senko [3 hidden]5 mins ago
Just did and it behaves as expected for me in the Android app (ie. not the dark pattern you described)
BalinKing [3 hidden]5 mins ago
I can confirm it's grey on both sides on the website.
tln [3 hidden]5 mins ago
I get blue (on) / black (off) on the website. Or blue / white in light mode.

https://claude.ai/settings/data-privacy-controls

It was easy to not opt-in, I got prompted before I saw any of this.

I think they should keep the opt-in behavior past Sept 28 personally.

IAmGraydon [3 hidden]5 mins ago
They’re likely A/B testing the interface change, which is why people are getting inconsistent results
Aurornis [3 hidden]5 mins ago
It works correctly (blue on, grey off) in the iOS app. I just did it now.
riz_ [3 hidden]5 mins ago
This is probably because there are laws in some countries that restrict how these buttons/switches can look (think cookie banners, where sometimes there is a huge green button to accept, and a tiny greyed out text somewhere for the settings).
soulofmischief [3 hidden]5 mins ago
Can you provide an example?
gpm [3 hidden]5 mins ago
This link is not even remotely close to an example of the behavior you described.
merelysounds [3 hidden]5 mins ago
> opt-in by default

Nitpicking: “opt in by default” doesn’t exist, it’s either “opt in”, or “opt out”; this is “opt out”. By definition an “opt out” setting is selected by default.

benterix [3 hidden]5 mins ago
This is not nitpicking, this is a sane reaction to someone modifying the meaning of words on the fly.
klabb3 [3 hidden]5 mins ago
To be fair it trips people up all the time. Even precise terminology isn't great if people misuse it. Maybe it would have been better to just use "enabled by default".
troad [3 hidden]5 mins ago
The original meaning of sane is "physically healthy". Its usual modern meaning is "mentally healthy". You're using it to mean "reasonable".

At which exact point is language prohibited from evolving, and why is it, super coincidentally, the exact year you learnt it?

danans [3 hidden]5 mins ago
> At which exact point is language prohibited from evolving

Never?

https://en.m.wikipedia.org/wiki/Semantic_change

troad [3 hidden]5 mins ago
Yes, that was my point.
card_zero [3 hidden]5 mins ago
And here it is, evolving before your eyes: we're killing off the maladaptive mutant which was "opt-in by default". That's the evolution that is happening here.
soraminazuki [3 hidden]5 mins ago
Diluting the distinction between opt-in and opt-out is gaslighting, not "evolution."
tln [3 hidden]5 mins ago
> By definition an “opt out” setting is selected by default.

No, (IMO) an "opt out" setting / status is assumed/enabled without asking.

So, I think this is opt-in, until Sept 28.

Opt-in, whether pre-checked/pre-ticked or not, means the business asks you.

GDPR requires "affirmative, opt-in consent"; perhaps we should use that term to mean an opt-in that is not pre-ticked.

whilenot-dev [3 hidden]5 mins ago
Regardless of whether it's opt-in or opt-out, the business will need to confirm anything it opted into on your behalf by asking. If you don't select the opposing choice in a timely fashion, then the business assumes that it opted correctly in your interest and on your behalf.

> So, I think this is opt-in, until Sept 28.

If the business opted for consent, then you will effectively have the choice for refusal, a.k.a. opt-out.

I_am_tiberius [3 hidden]5 mins ago
"five year retention". If it's in a model once, it's there forever.
whimsicalism [3 hidden]5 mins ago
yes, it’s a very big loophole. and if it’s a generative model, you can just launder the data through synthetic generation/distillation to future models
disconcision [3 hidden]5 mins ago
this is somewhat true but i'm not sure how load bearing it is. for one, i think it's going to be a while until 'we asked the model what bob said' is as admissible as the result of a database query
Hnrobert42 [3 hidden]5 mins ago
Is that true? Do models get rebuilt from scratch each time or do they get iterated on?
I_am_tiberius [3 hidden]5 mins ago
I believe the big models currently get built from scratch (with random starting weights). That wasn't my point though. I meant a model created once, might be used for a very long time. Maybe they even release the weights at one point ("open source").
JohnnyMarcone [3 hidden]5 mins ago
I got a pop-up when I opened the app explaining the change and an option to opt out. That seems very transparent to me.
elashri [3 hidden]5 mins ago
> That seems very transparent to me

Implicit consent is not transparent and should be illegal in all situations. I can't tell you that unless you opt out, you have agreed to let me rent you an apartment.

You can say the analogy isn't straightforwardly comparable, but the overall idea is the same. If we enter a contract for me to fix your broken windows, I cannot extend it to do anything else in the house I see fit via implicit consent.

mystraline [3 hidden]5 mins ago
As a real-world counterexample, medical care in the USA does this shit all the time.

A local office will do a blood draw, send it to a 3rd-party analysis company that isn't covered by your insurance, then bill you in full. And you had NO contractual relationship with the testing company.

Same scam. And it's all because our government is completely captured by companies and oligopoly. Our government hasn't represented the people in a long time.

cube00 [3 hidden]5 mins ago
> That seems very transparent to me.

Grabbing users during start-up with the less privacy-focused option preselected isn't being "very transparent".

They could have forced the user to make a choice, or defaulted to not training on their content, but instead they just can't help themselves.

felideon [3 hidden]5 mins ago
> seems very transparent

Except not:

> The interface design has drawn criticism from privacy advocates, as the large black "Accept" button is prominently displayed while the opt-out toggle appears in smaller text beneath. The toggle defaults to "On," meaning users who quickly click "Accept" without reading the details will automatically consent to data training.

Definitely happened to me, as it was late and I was lazy.

ornornor [3 hidden]5 mins ago
It’s not. And also, whether you move the toggle on or off, you still have to click Accept, which makes it really unclear whether you’re accepting to share your data or not.

Never mind the complete 180 on privacy.

oblio [3 hidden]5 mins ago
Opt-in leads to very low adoption and is the moral choice.

Opt-out leads to very high adoption and is the immoral choice.

Guess which one companies adopt when not forced through legislation?

insane_dreamer [3 hidden]5 mins ago
It should be off be default, with the option to opt in.
DrillShopper [3 hidden]5 mins ago
It should be opt-in, not opt-out.

The fact that there's no law mandating opt-in only for data retention consent (or any anti-consumer "feature") is maddening at times

Joker_vD [3 hidden]5 mins ago
> You can opt out

You can say that you want to opt out. What Anthropic will decide to do with your declaration is a different question.

AlexandrB [3 hidden]5 mins ago
I look forward to this setting getting turned on again "accidentally" when new models are released or the ToS is updated.
monegator [3 hidden]5 mins ago
I'm super duper sure that my data won't be stored and eventually used if i opt out
darepublic [3 hidden]5 mins ago
it's almost like this multi billion dollar company is misanthropic, despite their platitudes. Should I not hold my breath on Anthropic helping facilitate "an era of AI abundance for all"? (To quote a rejected PR applicant to Anthropic from the front page)
javcasas [3 hidden]5 mins ago
You can request your data to not be used. Your request will appropriately be read and redirected to /dev/null.
episteme [3 hidden]5 mins ago
What will you use instead? I’m finding Claude the best experience, since ChatGPT 5 is so slow and gives no better answers than 4.
teekert [3 hidden]5 mins ago
Granted, it is a stretch and not near the features of Claude (no code etc), but at least Proton's Lumo [0] is very privacy oriented.

I have to admit, I've used it a bit over the last days and still reactivated my Claude pro subscription today so... Let's say it's ok for casual stuff? Also useful for casual coding questions. So if you care about it, it's an option.

[0] https://lumo.proton.me/

soiltype [3 hidden]5 mins ago
Since I don't use LLMs to directly code for me, I'm going to (mis?)place my trust in Kagi assistant entirely for the time being. It claims not to associate prompts with individual accounts. Small friction of keeping a browser tab open is worth it for me for now.
nocommandline [3 hidden]5 mins ago
If you aren't using it for coding or advanced uses like video, etc, you can try running models locally on your machine using Ollama and others like it.

Self plug here - If you aren't technical and still want to run models locally, you can try our App [1]

[1] https://ai.nocommandline.com

weregiraffe [3 hidden]5 mins ago
>What will you use instead? I’m finding Claude the best experience since ChatGPT 5 is so slow and not any better answers than 4.

You could try programming with your own brain

ehnto [3 hidden]5 mins ago
Out of the frying pan into the fire. I think the reality, proven by history and even just these short five years, is that no company will hold onto its ethics in this space. This should surprise no one, since the first step of the enterprise is hoovering up the world's data without permission.
Arubis [3 hidden]5 mins ago
Worse by every measure.
weberer [3 hidden]5 mins ago
What metrics are you looking at? Grok 4 outperforms Claude 4 Opus in the Artificial Analysis Intelligence Index.

https://artificialanalysis.ai/leaderboards/models

mac-attack [3 hidden]5 mins ago
What sane person would downgrade to Grok
smallerfish [3 hidden]5 mins ago
Settings > Privacy > Privacy Settings
kossTKR [3 hidden]5 mins ago
i don't see any setting related to this? just:

Export data

Shared chats

Location metadata

Review and update terms and conditions

I'm in the EU, maybe that's helping me?

croes [3 hidden]5 mins ago
Have you clicked "Review and update terms and conditions"?

It's part of the update

kossTKR [3 hidden]5 mins ago
Oh i see thanks. That's a dark design pattern, hiding stuff like that.

No one cares about anything else, but they pad it with lots of superfluous text and call it "help us get better", blah blah. It's really "help us earn more money and potentially sell or leak your extremely private info", so they are lying.

Considering cancelling my subscription right this moment.

I hope the EU at least considers banning or extreme-fining companies trying to retroactively use people's extremely private data like this; it's completely over the line.

klabb3 [3 hidden]5 mins ago
EU or not, it baffles me that people don't see this glaring conflict of interest. AI companies both produce the model and rent out inference. In other words, you're expecting that the company that (a) most desperately craves your data and (b) also happens to collect large amounts of high-quality data from you will simply not use it. It's like asking a child to keep your candy safe.

I'd love to live in a society where laws could effectively regulate these things. I would also like a Pony.

croes [3 hidden]5 mins ago
>It's like asking a child to keep your candy safe

That's why we don't hand billions of dollars to a child. Maybe we should treat AI companies similarly.

kossTKR [3 hidden]5 mins ago
This is why we need actual regulation, and not the semi-fascist monopolist corporatocracy we've evolved into now.

It's only utopian because things have become so incredibly bad.

We shouldn't expect less, and we shouldn't push guilt or responsibility onto the consumer; we should push for more. Unless, that is, you actively want your neighbour, your mom, and 95% of the population to be in constant trouble with absolutely everything from tech to food safety, chemicals, or healthcare. Most people aren't rich engineers like on this forum, and I don't want to research for 5 hours every time I buy something because some absolute psychopaths have removed all regulation and sensible defaults so someone can party on a yacht.

frm88 [3 hidden]5 mins ago
Bravo! This has to be the most coherent and well-formulated rant I have read in a long time. Thank you!
kordlessagain [3 hidden]5 mins ago
> It was the kick in the pants I needed to cancel my subscription.

As if barely two 9s of uptime wasn't enough.

ethagnawl [3 hidden]5 mins ago
I wonder what happens if I don't accept the new T&C? I've been successfully dismissing an updated T&C prompt in a popular group messaging application for years -- I lack the time and legal acumen to process it -- without issue.

Also, for others who want to opt-out, the toggle is in the T&C modal itself.

layer8 [3 hidden]5 mins ago
The new privacy policy automatically becomes effective on September 28, if you don’t already agree to it before. Anthropic states that “After September 28, you’ll need to make your selection on the model training setting in order to continue using Claude.”
nicce [3 hidden]5 mins ago
I tried to do that with WhatsApp and it eventually stopped working.
energy123 [3 hidden]5 mins ago
Has anyone asked why OpenAI has two very separate opt-out mechanisms (one in settings, the other via a formal request that you need to lodge via their privacy or platform page)? That always seemed likely to me to be hiding a technicality that allows them to train on some forms of user data.
nicce [3 hidden]5 mins ago
OpenAIs temporary chat still advertises that chats are stored for 30 days while there is court order that everything must be retained indefinitely. I wonder why they are not obligated to state this quite extreme retention.
demarq [3 hidden]5 mins ago
Are you sure the opt out isn’t only training? The retention does not seem affected by the toggle.
jasona123 [3 hidden]5 mins ago
From the PR update: https://www.anthropic.com/news/updates-to-our-consumer-terms

“If you do not choose to provide your data for model training, you’ll continue with our existing 30-day data retention period.“

From the support page: https://privacy.anthropic.com/en/articles/10023548-how-long-...

“If you choose not to allow us to use your chats and coding sessions to improve Claude, your chats will be retained in our back-end storage systems for up to 30 days.”

zenmaster10665 [3 hidden]5 mins ago
It seems really badly designed, or maybe it is meant to be confusing. It does not make clear that the two are linked together, and you have to "accept" both together even though there is only a toggle on the "help us make the model better" item.
perihelions [3 hidden]5 mins ago
What are you replacing it with?
troad [3 hidden]5 mins ago
Two weeks left in the sub to figure it out, but I'm not yet sure. I was never all in on all the tooling, I mostly used it as smart search (e.g. ImageMagick incantations) and for trivial scripting that I couldn't be bothered writing myself, so I might just stick to whatever comes with Kagi, see if that doesn't cover me.
perihelions [3 hidden]5 mins ago
How does Kagi (claim that they) enforce privacy rights on the major LLM providers? Have they negotiated a special contract?

I'm looking at

> "When you use the Assistant by Kagi, your data is never used to train AI models (not by us or by the LLM providers), and no account information is shared with the LLM providers. By default, threads are deleted after 24 hours of inactivity. This behavior can be adjusted in the settings."

https://help.kagi.com/kagi/ai/assistant.html#privacy

And trying to reconcile those claims with the instant thread. Anthropic is listed as one of their back-end providers. Is that data retained for five years on Anthropic's end, or 24 hours? Is that data used for training Anthropic models, or has Anthropic agreed in writing not to, for Kagi clients?

FergusArgyll [3 hidden]5 mins ago
They are using LLMs through the API, which is the b2b world where you can get privacy guarantees.
vinnyorvinny [3 hidden]5 mins ago
There is an option to opt out right? So I assume they just make sure to always opt out.
fnordlord [3 hidden]5 mins ago
I'm mostly replying because I was truly using it for an ImageMagick incantation yesterday. I use the API rather than chat, if that's an option for you. I put $20 into it every few months and it mostly does what I need. I'm using Raycast for quick and dirty questions and AnythingLLM for longer conversations.
ivape [3 hidden]5 mins ago
I like to think using OpenRouter is better, but there’s absolutely no guarantee from any of the individual providers with respect to privacy and no-logging.
Syzygies [3 hidden]5 mins ago
Claude assists me in my math research.

The scenario that concerns me is that Claude learns unpublished research ideas from me as we chat and code. Claude then suggests these same ideas to someone else, who legitimately believes this is now their work.

Clearly commercial accounts use AI to assist in developing intellectual product, and privacy is mandatory. The same can apply to individuals.

vdfs [3 hidden]5 mins ago
> Claude assists me in my math research.

> Claude then suggests these same ideas to someone else, who legitimately believes this is now their work.

Wouldn't this mean that Claude assisted you with someone else's work? Sure, it's not from a "chat", but Claude doesn't really know anything other than its training data.

kmacdough [3 hidden]5 mins ago
If you have an idea and are putting it together, you might use Claude for a few things:

- Search the web for related ideas. This could help if someone's already had the idea or if there are things to learn from related ideas.

- Review your writeup or proofs for mistakes and clarity

None of these things make the idea Claude's. Claude merely helped with some of the legwork.

But Claude now has your idea in clear, plain text to train on. The next time someone hits on even a similar idea, Claude might well suggest your idea outright. Not seeing your idea published, the user has no way to know it isn't novel. If the person is less diligent/thorough, they may well publish first and claim it as their own, without any nefarious intent.

simpaticoder [3 hidden]5 mins ago
There is a stark difference between using the public web to do research and searching through your colleagues' private notebooks and discussions to do research.
iaw [3 hidden]5 mins ago
> claude doesn't really know anything other than it's training data

I've seen cases where Claude demonstrates novel behaviors or combines existing concepts in new ways based on my input. I don't think it's as simple as memorization anymore.

ffsm8 [3 hidden]5 mins ago
If I am standing in Finland and look out on the ocean, and the whole sky is green... Is the sky actually green?

You're equating your own perspective with objective truth, which is a very common pitfall and fallacy.

Syzygies [3 hidden]5 mins ago
To clarify, I see AI as an association engine of immense scope. Others are responding with variations on this model in mind.

It has long been a problem in math research to distinguish between "no one has had this idea" and "one person has had this idea". This used to take months. With the internet, and with MathSciNet and arXiv online, it took many iterations of guessing keywords. Now, I've spent six months learning how to coax rare responses from AI. That's not everyone's use case.

What complicates this is AI's ability to generalize. In my best paper, we imagined we were expressing in print what everyone was thinking, when we were in fact connecting the dots on an idea that was latent. This is an interesting paradox: people see you as most original when you're least original, but you're helping them think.

With the right prompts AI can also "connect the dots".

Deegy [3 hidden]5 mins ago
If your work was truly novel, wouldn't the odds of it showing up in later models be extremely low given that these are probabilistic?

In a sense these machines are outputting the aggregate of the collective thoughts of the commons. In order for concepts to be output they have to be quite common in the training data. Which works out kind of nice for privacy and innovation because by the time concepts are common enough to show up through inference they probably deserve to be part of the public knowledge (IP aside).

bluecalm [3 hidden]5 mins ago
They might optimize learning to weight novel/unexpected parts more in the future. The better the models become (the more the expect) the more value they will get from unexpected/new ideas.
Deegy [3 hidden]5 mins ago
Good point. But can the models even behave that way? They depend on probability. If they put greater weight on novel/unexpected outputs, don't they just become undependable hallucination machines? Despite what some people think, these models can't reason about a concept to determine its validity. They depend on recurring data in training to determine what might be true.

That said, it would be interesting to see a model tuned that way. It could be marketed as a 'creativity model' where the user understands there will be a lot of junk hallucination and that it's up to them to reason whether a concept has validity or not.

ceroxylon [3 hidden]5 mins ago
Temperature plays a large role in tuning model output; you're correct that there is a theoretical sweet spot:

https://towardsdatascience.com/a-comprehensive-guide-to-llm-...
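For anyone unfamiliar with the mechanism being discussed: temperature just rescales the logits before the softmax that turns them into token probabilities. A minimal, self-contained sketch (generic illustration, not any particular provider's code):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Divide logits by the temperature before the softmax.

    Low temperature sharpens the distribution (near-deterministic picks);
    high temperature flattens it (more diverse, more hallucination-prone).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.2)  # mass piles onto the top token
hot = softmax_with_temperature(logits, temperature=5.0)   # probabilities near-uniform
```

The "creativity model" imagined upthread is essentially this knob turned up: the same logits, but a flatter distribution that samples unlikely continuations far more often.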

Aurornis [3 hidden]5 mins ago
When you get the pop-up about the new terms, select the “opt out” option. Then your chats will not be used for training.
Klonoar [3 hidden]5 mins ago
Well, theoretically they won’t.

Anyone who’s worked in an engineering team is familiar with someone forgetting to check ‘if(doNotDoThisCondition)’.

This is why (among many other reasons) opt-in is more user respecting here than opt-out.
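To make the failure mode concrete, here is a hypothetical sketch (all names invented, not Anthropic's actual pipeline) of a data-selection step that must remember the opt-out check on every code path:

```python
# Hypothetical training-data filter. If any one code path in a large
# pipeline forgets this check, opted-out data silently leaks into training.
def select_training_examples(conversations):
    selected = []
    for convo in conversations:
        if convo.get("user_opted_out"):
            continue  # honor the opt-out; the easy line to forget
        selected.append(convo["text"])
    return selected

convos = [
    {"text": "chat A", "user_opted_out": True},
    {"text": "chat B", "user_opted_out": False},
]
# only "chat B" should survive the filter
```

With opt-in the default is the safe state: forgetting the check excludes data rather than leaking it.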

SoftTalker [3 hidden]5 mins ago
Forgetting. Riiighht.
thisOtterBeGood [3 hidden]5 mins ago
This perfectly describes one of the biggest dilemmas with AI: where does an AI company stop utilizing human knowledge it does not actually own? Where do they draw the line? Apparently it's possible there aren't any lines drawn at all.
sneak [3 hidden]5 mins ago
You can’t own knowledge. Intellectual property is a legal fiction invented to prop up industries.

You can no more own knowledge or information than you can own the number 2.

AvAn12 [3 hidden]5 mins ago
So Anthropic should have no property rights to its own source code?
wolvesechoes [3 hidden]5 mins ago
Property itself is a legal fiction. Every other right you enjoy is a legal fiction.

So what?

sneak [3 hidden]5 mins ago
> Property itself is a legal fiction.

Maybe real property (which only exists because of a property record held in a government building), but it is self-evident to me (and, I believe, most people) that personal property is a natural right.

One only need look up some TikTok videos of Americans getting pickpocketed in Europe to see how large groups of people feel on the matter.

mitthrowaway2 [3 hidden]5 mins ago
But you won't feel the same way about a pickpocket who borrows the source code to the software you derive your livelihood from, your sales team's customer list, your would-be-bestselling novel manuscript, your company's secret formula for a rust-proof coating, or that scientific paper that you and your grad students have spent all summer getting ready to submit for publication?

Thank you for your generosity!

dns_snek [3 hidden]5 mins ago
If you talk to a human they're free to discuss your ideas with someone else. Why should LLMs be any different? The likelihood of these models reproducing your ideas word for word is essentially zero anyway.

More to the point, respecting your wishes to keep those conversations confidential would risk stifling human progress, so they have to be disregarded for the greater good.

dmbche [3 hidden]5 mins ago
Love to see people being directly and fully against the concept of "confidentiality"
dns_snek [3 hidden]5 mins ago
Not in the slightest! The only thing I'm against is hypocrisy.

LLM enthusiasts are staunch defenders of the argument that use of everyone's ideas and labour in LLM training isn't just fair use, but a moral imperative in order to advance science, art, and human progress as a whole.

It's beyond hypocritical for beneficiaries of this paradigm to then turn around and expect special treatment by demanding that "their" ideas, "their" knowledge, "their" labour be excluded from this process.

dmbche [3 hidden]5 mins ago
Gotcha - right with you. Gotta get my sarcasm detector checked.
andrewmcwatters [3 hidden]5 mins ago
A lot of people doing cat-and-mouse threat detection development are keeping their work outside of public LLMs right now, so it sounds like you’re in the same boat as a lot of us.
Ardren [3 hidden]5 mins ago
> Claude assists me in my math research.

Pulling up the ladder behind you :-)

notrealyme123 [3 hidden]5 mins ago
Unpublished work Vs. Published.
bluecalm [3 hidden]5 mins ago
Math research, or anything new/clever in a particular niche. Imagine you optimized a piece of code to get an advantage, or came up with some clever trick to solve a common problem in your niche, and then everyone gets it for free from Claude, believing, as you pointed out, that it's now their work.

I had this exact conversation with my business partner a few days ago. Our "secret sauce" might not be worth that much after many years but still I am not comfortable exposing it to Claude. Fortunately it's very easy to separate in our project so Claude gets the other parts and is very helpful.

JCM9 [3 hidden]5 mins ago
Not a surprise. All the major players have reached the limits of training on existing data—they’re already training on essentially the whole internet plus a bunch of content they allegedly stole (hence various lawsuits). There haven’t been any major breakthroughs in model architecture from the major players recently and thus they’re now in a battle for more data to train on. They need data, and they want YOUR data, now, and are gonna do increasingly shady things to get it.
klabb3 [3 hidden]5 mins ago
> They need data, and they want YOUR data, now, and are gonna do increasingly shady things to get it.

But unlike the 100s of data brokers that also want your data, they have an existing operational funnel of your data already, which you voluntarily give them every day. All they need is dark-pattern ToS changes and to manage the minor PR issue. People will forget about this in a week.

threetonesun [3 hidden]5 mins ago
Seems hard to believe legal teams at corporations are going to forget this in a week. I've always assumed the market play for these companies was spinning off an "Amazon Basics" version of other companies' software; this seems like another step towards that.
cube00 [3 hidden]5 mins ago
It's nice to see the newer models are suffering after being exposed to training on their own slop.

If they had done this in a more measured way, they might have been able to separate human from AI content, such as by doing legal deals with publishers.

However they couldn't wait to just take it all to be first and now the well is poisoned for everyone.

theshackleford [3 hidden]5 mins ago
> It's nice to see the newer models are suffering after being exposed to training on their own slop.

I've seen zero evidence any such thing is occurring, or that, if it is, it's due to what you claim. I'd be highly interested in research suggesting either is occurring, however.

cube00 [3 hidden]5 mins ago
"AI models collapse when trained on recursively generated data"

https://news.ycombinator.com/item?id=41058194

freejazz [3 hidden]5 mins ago
It's not alleged that they stole the content. They told the courts they pirated the materials.
whamlastxmas [3 hidden]5 mins ago
Infringement, not theft :)
freejazz [3 hidden]5 mins ago
Reread my post.
imiric [3 hidden]5 mins ago
Yeah, this is hardly surprising.

To AI companies, data is even more of a gold mine than to adtech companies. It is existentially important.

The truly evil behavior will emerge at the intersection of these two industries. I'm sure Google and Facebook are already using data from one to power the other, even if it's currently behind closed doors. I can hardly wait for the use cases these geniuses will think of once this is publicly acceptable and in widespread use by all companies.

xyst [3 hidden]5 mins ago
Further proof why guardrails/regulation is needed.
superposeur [3 hidden]5 mins ago
Everyone seems to be unsurprised by this move, but I'm genuinely shocked. What a shoot-your-own-foot business decision. Google, evil though it may be, doesn't post the text of your Gmails in its search results, because who would consider using Gmail after that? This is the LLM equivalent. Am I missing something?
rs186 [3 hidden]5 mins ago
Gmail used to serve ads based on your emails for many years until 2017. https://www.npr.org/sections/thetwo-way/2017/06/26/534451513...
skylurk [3 hidden]5 mins ago
And in 2010 they made https the default. Different times :)
j4hdufd8 [3 hidden]5 mins ago
I don't think https is responsible for that. Google owns the data, it doesn't matter how it is transported. It does, however, matter how it is stored (which I hope is encrypted in a way only you can retrieve it)
aleph_minus_one [3 hidden]5 mins ago
> Am I missing something?

I think you do:

According to the article https://www.perplexity.ai/page/anthropic-reverses-privacy-st...

"Enterprise and educational customers will continue operating under their existing privacy protections, as the policy changes specifically exclude Claude for Work and Claude for Education services. These commercial accounts remain governed by separate contractual agreements that maintain stricter data handling standards.

Organizations using Claude through business partnerships or educational licenses can continue their operations without concern for the new training policies affecting their sensitive communications or proprietary information."

Thus, I think your claim

> What a shoot your own foot business decision.

likely does not hold: the non-commercial accounts likely lead to Anthropic losing money, so Anthropic doesn't like them anyway (but they are an "inconvenient necessity" to get people to notice and try out the product offering). With this new decision, Anthropic makes this "free-riding" less attractive.

I bet that Anthropic will soon release a press statement (one that has existed in the drawers for quite a long time): "We are listening to your concerns, and will thus extend our privacy-conscious offering to new groups of customers. Only $30 per month."

ceroxylon [3 hidden]5 mins ago
> With this new decision, Anthropic makes this "free-riding" less attractive

Certainly not for any users like you and me, it takes two seconds and three clicks to review the new terms and decline chat training. This is more like Anthropic getting easy training from people who are unaware or don't care.

soiltype [3 hidden]5 mins ago
Seems like the same thing. They're giving themselves plausible deniability, while knowing they'll still scoop up a worthwhile amount of data/profit from some % of users.
827a [3 hidden]5 mins ago
Gmail is free. It would still be incredibly bad for Gmail to start publishing the content of free users' emails to Google search.

But also, Anthropic has said that this new policy also applies to their Pro ($20/mo) and Max ($200/mo) plans. So it's not free versus not free.

layer8 [3 hidden]5 mins ago
Well, it means that LLMs used for business use cases will be trained on input from non-business use cases of non-privacy-conscious users.
KoolKat23 [3 hidden]5 mins ago
This data is useful for reinforcement learning. All the others do it.

And most importantly, you can just opt-out.

behnamoh [3 hidden]5 mins ago
Just because all the others do it doesn’t make it right. Many users chose Anthropic exactly because they were not like the others.
wolvesechoes [3 hidden]5 mins ago
> Many users chose Anthropic exactly because they were not like the others.

Oh the naivety.

Sooner or later they all become the same, soon after "investors" or "shareholders" arrive.

behnamoh [3 hidden]5 mins ago
> Sooner or later they all become the same, soon after "investors" or "shareholders" arrive.

They already arrived. Google was one of the main investors of Anthro.

KoolKat23 [3 hidden]5 mins ago
There's no reason to be shocked by the practice however.
superposeur [3 hidden]5 mins ago
Ok, to be clear, let’s say I’m dumb and accidentally go with the default (I get the color of the opt out button wrong or something). As if there’s a “publish my private emails to the internet” default-on button in email. Then, I use it to edit a rec letter for student X, with my signature Y. (Yes I know this is dumb and I try changing names when editing but am sure some actual names may slip through.) A few months later the next model is released trained on the data. Student X asks Claude what Y would write in a rec letter about X. Such a button is a “wings stay on / wings fall off” button on a plane.
franga2000 [3 hidden]5 mins ago
You're severely overestimating the ability of the model to recall a single mostly uninteresting item from its billions of input documents.
turnsout [3 hidden]5 mins ago
You can't opt out of the data retention policy.
smca [3 hidden]5 mins ago
The data retention period is 30 days if you don't choose to improve model training. https://www.anthropic.com/news/updates-to-our-consumer-terms...
turnsout [3 hidden]5 mins ago
Oh, I didn't catch this—that's good news
podgorniy [3 hidden]5 mins ago
What a framing. Like there is exactly a surprise behind all these reactions.
einpoklum [3 hidden]5 mins ago
Google mines the bejeezus out of your email, and uses it to any number of ends, including manipulating you into buying things, and also passing your correspondence on to the US government. While this is not the same as outright making your emails universally searchable - training Claude on your emails is also not the same as posting their contents.

And - this behavior of Google's has not been penalized, I'm afraid.

shadowgovt [3 hidden]5 mins ago
The LLM equivalent is what Google does do, which is train its spam filters on the contents of your emails coupled to the signal of what human beings flag as spam.

(It was one of the first significant value-adds of GMail: at its scale, Google could create a global-concept understanding of the content and pattern of spam across hundreds of millions of users. That was the kind of Big Data that made it possible to build filters where one could confidently say "This is tuned on all spam in the wild, because we've seen all spam in the wild").

lewdwig [3 hidden]5 mins ago
TBH I’m surprised it’s taken them this long to change their mind on this, because I find it incredibly frustrating to know that current gen agentic coding systems are incapable of actually learning anything from their interactions with me - especially when they make the same stupid mistakes over and over.
const_cast [3 hidden]5 mins ago
Okay, they're not going to be learning in real time. It's not like you're getting your data stolen and then getting something out of it - you're not. What you're talking about is context.

Data gathered for training still has to be used in training, i.e. a new model that, presumably, takes months to develop and train.

Not to mention your drop-in-the-bucket contribution will have next to no influence in the next model. It won't catch things specific to YOUR workflow, just common stuff across many users.

ethagnawl [3 hidden]5 mins ago
> Not to mention your drop-in-the-bucket contribution will have next to no influence in the next model. It won't catch things specific to YOUR workflow, just common stuff across many users.

I wonder about this. In the future, if I correct Claude when it makes fundamental mistakes about some topic like an exotic programming language, wouldn't those corrections be very valuable? It seems like it should consider the signal to noise ratio in these cases (where there are few external resources for it to mine) to be quite high and factor that in during its next training cycle.

vjerancrnjak [3 hidden]5 mins ago
They wouldn’t be able to learn much from interactions anyway.

The learning metric won't be you; it will be some global shitty metric that will make the service mediocre over time.

nicce [3 hidden]5 mins ago
Or get more value from the users with the same subscription price. I doubt they are giving any discounts.
diggan [3 hidden]5 mins ago
It's actually pretty clever (albeit shitty/borderline evil): start off by saying you're different from the competitors because you care a lot about privacy and safety, and that's why you're charging higher prices than the rest. Then, once you have a solid user base, slowly turn up the heat, step by step, until you end up with higher prices yet the same benefits as the competitors.
Deegy [3 hidden]5 mins ago
I guess I'll take the other side of what most are arguing in this thread.

Isn't it a great thing for us to collectively allow LLMs to train on past conversations? LLMs probably won't get significantly better without this data.

That said I do recognize the risk of only a handful of companies being responsible for something as important as the collective knowledge of civilization.

Is the long term solution self custody? Organizations or individuals may use and train models locally in order to protect and distribute their learnings internally. Of course costs have to come down a ridiculous amount for this to be feasible.

int_19h [3 hidden]5 mins ago
It is a great thing if it were reciprocated. But when I'm paying $20/mo to access Claude, why should I give training data to Anthropic for free?
monsieurbanana [3 hidden]5 mins ago
You mean collectively allow us to train Claude's llm? Pretty big omission there
Deegy [3 hidden]5 mins ago
I believe I addressed that in my third paragraph?

It does suck that there are only a few companies with enough resources to offer these models. But it's hard to escape the power laws.

I'm hoping that costs come down to the point where these things are basically a commodity with thousands of providers.

monsieurbanana [3 hidden]5 mins ago
Save your prompts, anonymize them and offer them to anyone that wants to train a LLM, that is us collectively training LLMs.

Giving Claude your private data ensures that there will not be thousands of providers, since the limiting factor isn't power but data.

lacoolj [3 hidden]5 mins ago
Proprietary data (your company's app repository, a script for an upcoming movie) and sensitive data (health, finance) become exposed
mitthrowaway2 [3 hidden]5 mins ago
I'm okay with LLMs not getting better.
jimbokun [3 hidden]5 mins ago
It's not clear that most people will benefit from LLMs getting significantly better. It's looking more like a net negative.
freejazz [3 hidden]5 mins ago
> LLM's probably won't get significantly better without this data.

Yeah and Facebook couldn't scale without ignoring the harms it causes people. Should we just let that be? Society seems to think so but I don't think it's a good idea at all.

cowpig [3 hidden]5 mins ago
> That said I do recognize the risk of only a handful of companies being responsible for something as important as the collective knowledge of civilization.

It's not just the risk of irresponsible behaviour (which is extremely important in a situation with so much power imbalance)

It's also just the basic properties of monopolistic markets: the smaller the number of producers, the closer the equilibrium price of the good comes to maximizing the producers' economic surplus.

These companies operate for-profit in a market, and so they will naturally trend toward capturing as much value as they can, at the expense of everyone else.

If every business in the world depends on AI, this effectively becomes a tax on all business activity.

This is obviously not in the collective interest.

Of course, this analysis makes simplifying assumptions about the oligopoly. The reality is much worse: the whole system creates an inherent information asymmetry. Try and imagine what the "optimal" pricing strategy is for a product where the producer knows intimate details about every consumer.

nh43215rgb [3 hidden]5 mins ago
All the bad news seem to drop on Friday. Is it just a coincidence?
estimator7292 [3 hidden]5 mins ago
No, it's a very deliberate strategy. Companies do this every Friday
ljosifov [3 hidden]5 mins ago
Excellent. What were they waiting for up to now?? I thought they already trained on my data. I assume they train, even hope that they train, even when they say they don't. People that want to be data privacy maximalists - fine, don't use their data. But there are people out there (myself) that are on the opposite end of the spectrum, and we are mostly ignored by the companies. Companies just assume people only ever want to deny them their data.

It annoys me greatly, that I have no tick box on Google to tell them "go and adapt models I use on my Gmail, Photos, Maps etc." I don't want Google to ever be mistaken where I live - I have told them 100 times already.

This idea that "no one wants to share their data" is just assumed, and permeates everything. Like the soft-ball interviews that a popular science communicator did with DeepMind folks working in medicine: every question was prefixed by a litany of caveats that were all about 1) the assumed aversion of people to sharing their data and 2) the horrors and disasters that are to befall us should we share the data. I have not suffered any horrors. I'm not aware of any major disasters. I am aware of major advances in medicine in my lifetime. Ultimately the process does involve controlled data collection and experimentation. Looks like a good deal to me tbh. I go out of my way to tick all the NHS boxes too, to "use my data as you see fit". It's an uphill struggle. The defaults are always "deny everything". Tick boxes never go away; there is no master checkbox "use any and all of my data and never ask me again" to tick.

AlexandrB [3 hidden]5 mins ago
I think I'd have more understanding for this position if I thought that these companies were still fundamentally interested in serving their users. They are not. Any information you provide is more likely to be used against your interests (even if that's "just" targeting you with some ads for a scammy product) than for your benefit.

Basically all AI companies are fruit from the same VC-poisoned tree and I expect these products will get worse and more user-hostile as they try to monetize. We're currently living in the "MoviePass"[1] era of AI where users are being heavily subsidized to try to gain market share. It will not last and the potential for abuse is enormous.

[1] https://en.wikipedia.org/wiki/MoviePass

ljosifov [3 hidden]5 mins ago
Whether Google is interested in serving me or not, is not only untestable (i.e. what counts as 'Google', 'interested', and 'serving' there - one could argue to end of time) - but besides the point. I want to be able to tell Google "My home is XYZ", and for Google to use that information about me in all of Google ecosystem. When I talk to Gemini it should know what/where "LJ home" is, when I write in Gdoc it should know my home address (so to insert it if I want it), ditto for Gmail, when I search in Google photos "photos taken at home" it should also know what "home" is for me.

Atm Google vaguely knows, and uses that for ads targeting, sometimes. Most of the time the targeting is bad, very low quality slop. To the level of "he bought a mattress yesterday, so he will keep buying mattresses in the next 30-60 days". I have the impression that we ended up in the worst-case scenario. People I don't want to have my data have access to it. People I do want to have my data are afraid to touch it, and to use it - yes! - for their benefit, but for my benefit too. The current predicament seems to me a case of "public lies, private truths."

A small cadre of vocal proponents of a particular view established "the ground truth of what is desirable" (in this case - maximum privacy, ideally zero information sharing). The public goes along with it in words, pays lip service, while in deeds - as their revealed preferences show - they value their data privacy very cheaply, almost at zero. Even one extra click to share their data less is one click too many, effort too high, for most people. Again - these are revealed preferences, for people keep lying when asked. It's not even a case of "you are lying to me" - no, it's more like "you are lying to yourself."

The conventional opinion is that, to address the power imbalance coming from the information imbalance (state/business know a lot about me; I know little about them), we citizens and consumers should reduce our "information surface" towards them. But. There exists another, often unmentioned option. And that option is for state/business to open up, to increase their "information surface" towards us, their citizens/consumers. That will also achieve an information (and one hopes power) rebalance. Yes, there is extra work on the part of state/business to open their data to us. But it's worth it. The more advanced the society, the more coordination it needs to achieve the right cooperation-competition balance in the interactions between ever greater numbers of people. There is an old book, "Data For the People", by the early AI pioneer and Amazon CTO Andreas Weigend. Afaics it well describes the world we live in, and the one we are likely to live in even more in the future.

827a [3 hidden]5 mins ago
It's incredible to me how seriously people can hold an opinion they've so clearly spent so little time critically interrogating.
soraminazuki [3 hidden]5 mins ago
It makes sense when you see it as indoctrination rather than a mere opinion. Quashing critical thinking is the point. How else can you convince people to work against their own interests?
ljosifov [3 hidden]5 mins ago
I put it to you: consider that you may be wrong. That I indeed know what's best for me. The same way my default is that you know what's best for you. "Critical thinking" and "indoctrination" - you are on the path to the dark side there. I grew up in a socialist/communist country. One of the ways in which the vast majority of the population were oppressed, mistreated, or worse was by being denied agency and the capability for critical thinking, for recognising their own interests, via a mechanism called "false consciousness". The ideas you expressed in your comment are of a similar kind.
soraminazuki [3 hidden]5 mins ago
Says the person advocating for companies to get rid of consent, the bare minimum they can do when screwing over people for profit. That's not deciding what's best for you. That's you unilaterally deciding that no one deserves consumer protection. You are trying to force on everyone what 96% of people are opposed to [1]. So don't you dare pull off that DARVO nonsense and accuse me of being an oppressive dictator.

Also in what universe are utter fantasies like "'no one wants to share their data' is just assumed" or "the defaults are always 'deny everything'" true? Tech companies are bypassing user consent all [2] the [3] time [4].

[1]: https://arstechnica.com/gadgets/2021/05/96-of-us-users-opt-o...

[2]: https://hn.algolia.com/?q=opt%20out

[3]: https://hn.algolia.com/?q=opt%20in

[4]: https://hn.algolia.com/?q=consent

ljosifov [3 hidden]5 mins ago
I see reading comprehension is not something you enjoy indulging in.

These -

> utter fantasies like "'no one wants to share their data' is just assumed" or "the defaults are always 'deny everything'" true?

...far from being fantasies, are my personal experiences in the UK medical systems. This -

https://news.ycombinator.com/item?id=45066321

soraminazuki [3 hidden]5 mins ago
See, this is what I meant by indoctrination. I showed you links containing dozens, maybe even hundreds of examples showing how companies don't obtain consent from users. But you ignore all that and cherry pick your highly exaggerated spin on the UK medical system. "I'm reasonably sure some people will have died because of this." Sigh, give me a break. Your take on privacy sounds just like the auto industry's take on right to repair. According to them, right to repair laws will get women raped in parking lots [1]. Corporate activists making absurd claims resorting to the same old fearmongering tactics.

This isn't me having problems with reading comprehension. It's you arguing in bad faith. Which is inevitable given your desire to demolish consumer protection for everyone. You're defending the indefensible.

[1]: https://www.vice.com/en/article/auto-industry-tv-ads-claim-r...

ljosifov [3 hidden]5 mins ago
I know indoctrination well. Reading what you write, I get the impression that you don't know much about indoctrination. But I don't know you, so I allow that I may be wrong. You asked "in what universe". I showed you concrete examples in one universe. For my claim to be true, one example suffices. None of your claims (latest: "demolish consumer protection") about my alleged intentions, character, thoughts, etc. have any basis in reality. You are wrong in almost everything that you wrote about me. It's all in your head, in your imagination. How do I know? B/c I know me, and you don't know me. That easy.
soraminazuki [3 hidden]5 mins ago
> Excellent. What were they waiting for up to now?? I thought they already trained on my data. I assume they train, even hope that they train, even when they say they don't.

These are your exact words, not my imagination. You very clearly want consumer protection to be gone, because you said so.

> For my claim to be true, one example suffices.

To be clear, your claim is that we live in a world where there's too much privacy protection. So much in fact that you're, gasp, "reasonably sure some people will have died because of this." Nope, a single spin on the UK medical system is nowhere near sufficient to support that absurd claim.

As for your attempted word lawyering about indoctrination? Classic.

ljosifov [3 hidden]5 mins ago
Yes - my data, not your data. You stay away from my data. I stay away from your data. I don't care about your data. But I do want them to train on my data. And to serve me better. Was disappointed that they didn't do that already.

But now you gave me ideas. ;-) Yeah - I think ideally we should go further, much further. The internet was not built by po-faced, lemon-sucking prudes tut-tutting about everything and anything. It was built by happy-go-lucky, live-and-let-live, altruistic, mildly autistic nerds. It was permissionless; one didn't need to ask anyone in order to do anything, and that's why it lived. Whereas many other networks and protocols, technically more sophisticated but with the fatal flaw of a gatekeeper with the power to say "NO" built into them, just died off. I wish people went back to the original permissionless Net. That people tore down all manner of laws making moving bits around illegal, laws used to jail humans for the crimes of reading, copying, and writing data.

franga2000 [3 hidden]5 mins ago
Try living in a place with privatised health insurance and you'll quickly see why medical data is some of the most important to keep private.
behnamoh [3 hidden]5 mins ago
Are you trolling us, or do you live in a hypothetical world where companies have our best interests at heart?
blipmusic [3 hidden]5 mins ago
My life does in fact have priorities above ”LLMs should work a bit better”.
Gud [3 hidden]5 mins ago
Have you considered the drawbacks of sharing your data to the most unscrupulous people on this planet?
j4hdufd8 [3 hidden]5 mins ago
Are you okay getting ads for shit holistic medication because you had a mental health conversation with AI?
soraminazuki [3 hidden]5 mins ago
You can train commercial AI with your data right now without screwing over everyone else on the planet. It's easy, just publish your entire trove of personal data on a website and AI crawlers will happily gobble it all up. You can publish your name, home address, work, government-issued ID, financial transactions, chats, browser history, location history, surveillance footage of your home, all for free. So what are you waiting for? Just do it now if you want to share data that badly, there's no need to wait for the approval of "privacy maximalists."
12ian34 [3 hidden]5 mins ago
not remotely worried about leaks, hacks, or sinister usage of your data?
londons_explore [3 hidden]5 mins ago
I would far prefer the service use my data to work better and take a few privacy risks.

People die all the time from cancer or car accidents. People very rarely die from data leaks.

Some countries like Sweden make people's private financial data public information - and yet their people seem happier than ever. Perhaps privacy isn't as important as we think for a good society.

soiltype [3 hidden]5 mins ago
Public/private isn't a binary, it's a spectrum. We Americans mostly sit in the shithole middle ground where our data is widely disseminated among private, for-profit actors for the explicit purpose of being used to manipulate us, but it's mostly not available to us, creating an asymmetric power dynamic.
ljosifov [3 hidden]5 mins ago
I agree with your stance there. Further - the conventional opinion is that, to address the power imbalance coming from the information imbalance (state/business know a lot about me; I know little about them), we citizens and consumers should reduce our "information surface" towards them. But.

There exists another, often unmentioned option. And that option is for state/business to open up, to increase their "information surface" towards us, their citizens/consumers. That will also achieve an information (and one hopes power) rebalance. Every time it's actually measured how much value we put on our privacy, when we have to weigh privacy against convenience and other gains from more data sharing, the revealed preference is close to zero. The revealed preference is that we value our privacy at close to zero, despite us forever saying otherwise. (that we value privacy very very much; seems - "it ain't so")

So the option of state/business revealing more data to us citizens/consumers, is actually more realistic. Yes there is extra work on part of state/business to open their data to us. But it's worth it. The more advanced the society, the more coordination it needs to achieve the right cooperation-competition balance in the interactions between ever greater numbers of people.

There is an old book, "Data For the People", by the early AI pioneer and Amazon CTO Andreas Weigend. Afaics it well describes the world we live in, and the one we are likely to live in even more in the future.

12ian34 [3 hidden]5 mins ago
Sweden is a very poor example; all that is public is personal taxable income. That's it. You're comparing apples to oranges. And how are your home address and AI chatbot history going to cure cancer?
nojs [3 hidden]5 mins ago
Would you be comfortable posting all of this information here, right now? Your name, address, email address, search history, ChatGPT history, emails, …

If not, why?

Gud [3 hidden]5 mins ago
That financial data is very limited. Would it be just as acceptable if these companies knew where and what you purchased?
ljosifov [3 hidden]5 mins ago
In the past I have found the obstacles to data sharing codified in UK law frustrating. I'm reasonably sure some people will have died because of this who would not have died otherwise, had they been able to communicate with the NHS (email, WhatsApp) similarly to how they communicate in their private and professional lives.

Within the UK NHS and UK private hospital care, these are my personal experiences.

1) Can't email my GP to pass information back and forth. The GP withholds their email contact, so I can't email them e.g. pictures of scans or lab work reports. In theory they should have those already on their side. In practice they rarely do. The exchange of information goes sms -> web link -> web form -> submit - for one single turn. There will be multiple turns. Most people just give up.

2) The private hospital that did my MRI scan made me jump through 10 hoops before sending me a link so I could download my MRI videos and pictures. Most people would have given up. There were several forks in the process that, in retrospect, could have delayed the download even more.

3) Blood test scheduling can't tell me that a blood test scheduled for a date failed. Apparently it's somewhere between too much effort and impossible for them to have my email address on record and email me back that the test was scheduled, or that the scheduling failed and I should re-run the process.

4) I would like to volunteer my data to benefit R&D in the NHS. I'm a user of medical services. I'm cognisant that all of those help, but that the process of establishing them relied on people unknown to me sharing very sensitive personal information. If it weren't for those people unknown to me, I would be far worse off. I'd like to do the same, and be able to tell the UK NHS: "here they are - my lab work reports, 100 GB of my DNA paid for by myself, my medical histories - take them all in, use them as you please."

In all cases vague mutterings of "data protection... GDPR..." have been relayed back as "reasons". I take it that's mostly B/S. Yes, there are obstacles, but the staff could work around them if they wanted to. However, there is a kernel of truth: it's easier for them not to try to share, it's less work and less risk, so the laws are used as a fig leaf (in the worst case, an alibi for laziness).

ljosifov [3 hidden]5 mins ago
If they leaked bank accounts numbers, or private keys - I would be worried. That has not happened in the past.

About myself personally - my Name Surname is googleable, I'm on the open electoral register, so my address is not a secret, my company information is also open in the companies register, and I have a personal website I put up willingly, where I share information about myself. Training models on my data doesn't seem riskier than that.

Yeah, I know I'd be safer if I were completely dark, opaque to the world. I like the openness though. I also think my life has been enriched in infinitely many ways by people sharing parts of their lives via their data with me. So it would be mildly sociopathic of me if I didn't do something similar back for the world, to some extent.

int_19h [3 hidden]5 mins ago
LLMs can and do sometimes regurgitate parts of training data verbatim - this has been demonstrated many times on things ranging from Wikipedia articles to code snippets. Yes, it is not particularly likely for that damning private email of yours to be memorized, but if you throw a dataset with millions of private emails onto a model, it will almost certainly memorize some of them, and nobody knows what exact sequence of input tokens might trigger it to recite.
12ian34 [3 hidden]5 mins ago
So you are projecting sociopathy on those that choose to keep their lives more private than you? Like you said, basic personal details are essentially public knowledge anyway. Where do you draw the line personally on what should be private?
ljosifov [3 hidden]5 mins ago
Not at all; on the contrary, I chose my words carefully ("mildly sociopathic OF ME") so as to avoid casting shade on others. Saying "this is how I feel", so as to preclude judging others. Everyone makes their own choices, and that's fine.

Boundaries - yes sure they exist. I don't have my photo albums open to the world. I don't share info about family and friends - I know people by default don't want to share information about them, and I try to respect that. Don't share anything on Facebook, where plenty share, for example.

At the same time, I find the obstacles to data sharing codified in UK law frustrating. With the UK NHS: 1) Can't email my GP to pass information back and forth - the GP withholds their email contact; 2) The private hospital that did my MRI scan makes me jump through 10 hoops before sharing my own data with me; 3) Blood test scheduling can't tell me that scheduling for a date failed - apparently it's too much for them to have my email address on record; 4) Can't volunteer my data to benefit R&D in the NHS. ("here they are - my lab work reports, 100 GB of my DNA paid for by myself, my medical histories - take them all in, use them as you please...") In all cases vague mutterings of "data protection... GDPR..." have been relayed back as "reasons". I take it that's mostly B/S. They could work around them if they wanted to. But there is a kernel of truth: it's easier for them not to try to share, so the laws are used as a fig leaf (in the worst case, an alibi for laziness).

I'm for having the power to share, or not share, what I want. With Google - I do want them to know about me and use that for my (and their) benefit. With the UK gov (trying to break encryption) - I don't want them to be able to read my WhatsApps. I detest the UK gov for effectively forcing me (by forcing the online pharmacy) to take photos of myself (face, figure) in order to buy Wegovy online earlier today.

12ian34 [3 hidden]5 mins ago
Thanks for this considered response. I find it difficult to disagree with anything you said in this particular comment :) however I do think each instance you mention in this message is quite different to the topic at hand, regarding the big tech data machine. Additionally, I think I would rather our UK level of privacy regarding healthcare data than the commercialised free for all in the US. One counterpoint could be that Palantir got a significant amount of UK NHS data.
ljosifov [3 hidden]5 mins ago
Thanks for the consideration. Yeah, the US and UK are different in that respect. I get the impression that the US ends up with the worst deal on both ends: organisations that could help you are denied your data, while the organisations most unscrupulous, most bent on doing their worst with your data, get almost free access to it.

For the UK - I'm reasonably sure some people will have died because of the difficulties sharing their data who would not have died otherwise. "Otherwise" being: they could have communicated with the NHS, and shared their data, via email, WhatsApp etc, similarly to how they communicate and share data in their private and professional lives.

People at a personal level have a fairly reasonable stance, in how they behave, when it comes to sharing their data. They are surprisingly subtle in their cost-benefit analysis. It's only when they answer surveys, or talk in public, that they are less than entirely truthful. We know this b/c their revealed preferences are at odds with what they say they value, and how much they say they value it.

j4hdufd8 [3 hidden]5 mins ago
> But there are people out there (myself) that are on the opposite end of the spectrum, and we are mostly ignored by the companies. Companies just assume people only ever want to deny them their data.

What? I think you're exactly the kind of person that companies pay attention to, and why they pull moves like this

bgwalter [3 hidden]5 mins ago
I realize this might be satire. If not, you are using the same aggressive strategy of turning the tables as Palantir:

https://www.theguardian.com/technology/2025/jul/08/palantir-...

Most people do want to deny their data, as we have recently seen in various DOGE backlashes.

imiric [3 hidden]5 mins ago
What a ridiculous stance.

Do you lock your front door, or use passwords on any of your accounts? Because what you're essentially saying is that you're OK with strangers having access to your personal information. That's beyond the already flawed "I have nothing to hide" argument.

JumpCrisscross [3 hidden]5 mins ago
> Do you lock your front door

In trusted neighborhoods? No. But that respect goes both ways.

mrbombastic [3 hidden]5 mins ago
You should read up on the Dutch Civil Registry and the holocaust in the Netherlands and reevaluate if you are serious. I would love to live in a world where everyone had good intentions and the powers that be wouldn’t abuse data to their ends, we will never live in that world.
calmbonsai [3 hidden]5 mins ago
I don't think you understand how...humanity works?! Is this deliberate parody?

Abuse of medical data is just the tip of the iceberg here and, at least in the states, privatized healthcare presents all sorts of for-profit pricing abuse scenarios let alone nasty scenarios for social coercion.

beepbooptheory [3 hidden]5 mins ago
This may very well be a rational stance, but either way, wish one could somehow teleport this sentiment to the Cypherpunk mailing list in the 80s/90s. Of all the things they projected, concocted, fought for.. Nothing could truly prepare them for this kind of thing: the final victory of the product over the people, the happy acceptance of surveillance. They were all imagining terrible dystopias garnered from state violence and repression, never could they begin to imagine it could all transpire anyway because people like not having to type their address in!

> The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress.

ljosifov [3 hidden]5 mins ago
Well Yes and No. Funny you mention the 80s/90s. I grew up in the pre-Internet world. I remember home computers, then PC-s, then modems to access BBS-es, then FIDO, uucp email, academic Internet and then the private commercial Internet after 1990. Some parts of the privacy agenda I'm strongly pro-privacy, the more the better. I don't want encryption broken. The UK gov (I live in the UK) are being morons for that, forever trying that play. Atm there is at least part of the US admin to push back on that. I don't like UK Parliament forcing the online ID on me. I'm pro- having private citizens having private keys on un-snoop-able dongle devices.
Razengan [3 hidden]5 mins ago
> I'm not aware of any major disasters.

Oh boy. Did you somehow miss all the news about data leaks and password dumps etc. being sold on the "dark" web and shit?

Would you mind if I followed you around and noted everything you do and constantly demanded your attention?

The shit done by corporations is akin to a clingy stalker and would be absolutely despised if it was an individual person doing something like that.

As for benefits, which?? In my entire life I have never seen an ad for anything (that I did not already know about via other means) that made me want to look up the product, let alone buy it. Nor do I know anyone who did. In fact, it turns me off from a product if its ad appears too frequently.

Google etc. and various storefronts also almost never recommend me anything that actually matches my interests, beyond just a shallow word similarity, in fact they forcibly shove completely unrelated shit into my searches cause they were paid to. Like searching for RPGs and seeing Candy f'ing Crush.

----

You know what though, I kinda agree with the potential intent behind your charade:

Yes, LET ME TELL YOU ABOUT ME.

I will gladly TELL companies EXACTLY what I like, and I WANT you to use that. Show me other shit that is actually relevant to MY interests instead of the interests of whomever paid you to shove their shit into my face.

ASK! DON'T SPY! Because you can't ever get it right anyway!

octagons [3 hidden]5 mins ago
That’s unfortunate. I believed Anthropic was playing the long game and betting on a smaller but more technically proficient userbase.

I guess I’ll be canceling my subscription largely out of principle. I doubt any open-source models are capable of handling my use case as well as Claude (typically focused on getting up to speed with various ISO/IEEE standards for the purpose of security testing) but I’m sure I’ll find a solution.

SoftTalker [3 hidden]5 mins ago
This is all you can do. Laws and regulations will not be forthcoming, and even if some are, they will be ignored or the fines paid as a cost of doing business.

Any data you give to any website or app is no longer (exclusively) yours. Use these services under that assumption.

Aurornis [3 hidden]5 mins ago
Why are we linking to Perplexity.ai AI-generated slop summaries of other news articles instead of the actual announcement? Reading the actual announcement is much clearer: https://www.anthropic.com/news/updates-to-our-consumer-terms Some important points:

An in-app notification pop-up will alert you to the change. You can opt out in the pop up.

I was able to opt out right now by going to the Privacy section of Settings.

It doesn’t take effect until September 28th. The app will apparently prompt people to review the new terms and make a decision before then.

Only applies to new or resumed sessions if you do review the new terms and don’t turn it off. The angry comments about collecting data from customers and then later using it without permission are not correct. You would have to accept the new terms and resume an old session for it to be used.

Does not apply to API use, 3rd party services, or products like Claude Gov or Claude for Education.

Changing the link to the actual source instead of this perplexity.ai link would be far more helpful.

giancarlostoro [3 hidden]5 mins ago
As long as you can opt out, it doesn't bother me much. Though it does make me wonder about those third-party clients that people subscribe to, e.g. JetBrains AI, Zed, and others that use Claude and other Anthropic models - do they opt in for you? Because that would be bad.

I would strongly argue that API clients should NEVER be opted in for these sorts of things, and it should be like this industry wide.

Aurornis [3 hidden]5 mins ago
That’s also explained. It’s excluded:

> They do not apply to services under our Commercial Terms, including Claude for Work, Claude Gov, Claude for Education, or API use, including via third parties such as Amazon Bedrock and Google Cloud’s Vertex AI.

I’ll edit my comment above to include this too

giancarlostoro [3 hidden]5 mins ago
Perfect! Thank you. This makes sense. Anyone who deviates from this and uses API clients' data should be shunned - I think that's the bigger deal-breaker. I have a feeling the whole industry quietly agrees/adheres to this, but I would not be surprised if some do not.
DrBenCarson [3 hidden]5 mins ago
You can opt out until September 28th. After that day all Claude usage will be under the new terms and conditions
Aurornis [3 hidden]5 mins ago
All new or resumed Claude usage. You’d have to open one of the apps and dismiss the pop-up about it and then start or resume a chat.
buzer [3 hidden]5 mins ago
> An in-app notification pop-up will alert you to the change. You can opt out in the pop up.

The in-app notification I got was a pop-up which contained some buttons and some images. There was no text. Just in case it was some dark-mode issue, I checked the DOM and couldn't find any text there either. I just clicked outside the modal and it went away. I assumed it was an announcement about some new feature and ignored it.

I did end up seeing the news yesterday on Reddit (I was having issues getting the research tool to actually be used, and tried to see if there had been any recent changes), but it's unlikely I was the only one who experienced the modal issue, and anyone who doesn't follow tech news could easily miss the change.

Aurornis [3 hidden]5 mins ago
My comment doesn’t make sense but it’s too late to edit. To clarify: The original link was to a perplexity.ai summary of multiple news articles about the change. The mods have changed it now.

The comments here were from people jumping to conclusions after skimming an AI summary of news articles about the change. I’m glad it got changed.

DalasNoin [3 hidden]5 mins ago
Crazy that they would automatically link to an AI summary of multiple articles on the topic rather than the original source.
I_am_tiberius [3 hidden]5 mins ago
In my opinion, training models on user data without their real consent (real consent = e.g. the user must sign a contract or similar, so they're definitely aware) should be considered a serious criminal offense.
jsheard [3 hidden]5 mins ago
Why single out user data specifically? Most of the data Anthropic and co train on was just scooped up from wherever with zero consent, not even the courtesy of a buried TOS clause, and their users were always implicitly fine with that. Forgive me for not having much sympathy when the users end up reaping what they've sown.
__MatrixMan__ [3 hidden]5 mins ago
Publishing something is considered by most to be sufficient consent for it not to be considered private.

I realize there's a whole legal quagmire here involved with intellectual "property" and what counts as "derivative work", but that's a whole separate (and dubiously useful) part of the law.

chamomeal [3 hidden]5 mins ago
That is definitely true normally, but I feel like the scale of LLM usage turns it into a different problem.

If you can use all of the content of stack overflow to create a “derivative work” that replaces stack overflow, and causes it to lose tons of revenue, is it really a derivative work?

I’m pretty sure solution sites like chegg don’t include the actual questions for that reason. The solutions to the questions are derivative, but the questions aren’t.

__MatrixMan__ [3 hidden]5 mins ago
Stack overflow doesn't really have a legitimate claim to that data either though. Nor do the users, we're just pasting error messages and documentation. It's derivative all the way down. It'll never sit still and behave like property.

Privacy makes sense, treating data like property does not.

chamomeal [3 hidden]5 mins ago
Point taken, but it still feels like a gray area to me. The value that SO created was the curation of knowledge and high quality discussions that were well indexed and searchable.

The users did provide the data, which is a good point. But there’s a reason SO was so useful to developers and quora was not. It also made it a perfect feeding ground for hungry LLMs.

Then again I’m just guessing that big models are trained on SO. Maybe that’s not true

airstrike [3 hidden]5 mins ago
Replacing stack overflow has no bearing on the definition of "derivative"
perihelions [3 hidden]5 mins ago
Training on private user interactions is a privacy violation; training on public, published texts is (some argue) an intellectual property violation. They're very different kinds of moral rights.
diggan [3 hidden]5 mins ago
Has Anthropic ever written clearly about exactly what training datasets they use? Like a list of everything included? AFAIK, all the providers/labs are pretty tight-lipped about this, so I think it's safe to assume they've slurped up all the data they've come across via multiple methodologies, "private" or not.
dmbche [3 hidden]5 mins ago
Look at the suits against them; they list it there.
diggan [3 hidden]5 mins ago
Are there complete lists in the suits? Last time I skimmed them, they contained allegations of sources, and some admissions like The Pile, LibGen, Books3, PiLiMi, scanned books, web scrapes and some other sources I don't remember, but AFAIK there isn't any complete inventory of training datasets they used.
jsheard [3 hidden]5 mins ago
I wish I could be so optimistic that there is no private information published unintentionally or maliciously on the open web where crawlers can find it.

(and as diggan said, the web isn't the only source they use anyway. who knows what they're buying from data brokers.)

I_am_tiberius [3 hidden]5 mins ago
100 % true.
happosai [3 hidden]5 mins ago
I think it's cute that people believe companies that trained their models on every single book and online page ever written without consent from the authors (and often against the explicit request of the author, without any opt-out) won't do a rug-pull and do it to all the chats they have acquired too...
SoftTalker [3 hidden]5 mins ago
You're absolutely right, but also isn't the volume of new data they are getting from chats tiny compared to what they've already trained on? I'm wondering how much difference it will really make.
fHr [3 hidden]5 mins ago
Yeah, people are gullible these days. We need another full 2008-style crash that hurts badly before people wake up for a bit, before becoming like this again.
FergusArgyll [3 hidden]5 mins ago
Or we can root for happiness and prosperity instead
bigfishrunning [3 hidden]5 mins ago
I "root for people not burglarizing my house", but i put locks on my doors also. The way the market for these tools is behaving, a crash is extremely likely; batten down the hatches.
FergusArgyll [3 hidden]5 mins ago
> We need another full 2008 crash that hurts bad
DrillShopper [3 hidden]5 mins ago
Hurts whom that bad?

AI companies will get bailed out like the auto industry was - they won't be hurt at all.

Aurornis [3 hidden]5 mins ago
From the actual source ( https://www.anthropic.com/news/updates-to-our-consumer-terms ) they’re going to show a pop-up with the terms change. I triggered it now by going to the Privacy settings page and reviewing the new terms.

It’s quite clear. It’s easy to opt out. They’re making everyone go through it.

It doesn’t reach your threshold of having everyone sign a contract or something, but then again no other online service makes people sign contracts.

> should be considered a serious criminal offense.

On what grounds? They’re showing people the terms. It’s clear enough. People have to accept the terms. We’ve all been accepting terms for software and signing up for things online for decades.

airstrike [3 hidden]5 mins ago
People have T&C and cookie popup fatigue. I almost hit "accept" before noticing the opt out toggle, thinking it was a simple T&C update. This is definitely a fucked up way to set it up, there's no sugar coating it.
zajio1am [3 hidden]5 mins ago
Why? This is not 'use collected information to target ads' or 'sell collected information to third parties', but 'use information collected from the service to improve the service'. It doesn't really seem much different to me than ISPs using traffic stats to plan infrastructure improvements, or a website using access logs to improve accessibility and navigation.

And when talking specifically about AI, one could argue that learning from interactions is a common aspect of intelligence, so a casual user who doesn't understand the details of LLMs would expect it anyway. Also, the fact that LLMs (and other neural networks) have distinct training and inference phases seems more like an implementation detail.

Rygian [3 hidden]5 mins ago
It already is. See Art. 5. 1.(b) here: https://gdpr-info.eu/art-5-gdpr/
nicce [3 hidden]5 mins ago
Is ”Accept” in cookie box good enough contract?
I_am_tiberius [3 hidden]5 mins ago
no
I_am_tiberius [3 hidden]5 mins ago
I believe that only concerns European users. Moreover, I believe a simple press of an OK button is fine under GDPR. This data (in type and volume), however, is far more serious and can't be agreed to by just pressing a button.
aosaigh [3 hidden]5 mins ago
Does this include any code base you are running Claude Code with (where parts of code are sent as part of the context)? I'm not hugely clear on how my private codebase is exposed to Claude in the first place when using Claude Code.
general1726 [3 hidden]5 mins ago
I would expect that your whole code base would be then used as training data.

I think it is only a matter of time before they start reselling this data as exfiltrated IP to whoever is interested.

adastra22 [3 hidden]5 mins ago
Yes, your code is part of the chat context. You cannot use Claude code without sharing your repo with them.
ttoinou [3 hidden]5 mins ago
I think this is about claude.ai , not Claude Code nor Claude models API (but I'm not 100% sure). So no, unless you copy paste your code into claude.ai
adastra22 [3 hidden]5 mins ago
Claude Code is explicitly included.
azinman2 [3 hidden]5 mins ago
Just unclick the box to opt out. They put it front and center.
pcwelder [3 hidden]5 mins ago
Navigate to `https://claude.ai/settings/data-privacy-controls` and disable it before Sept 28. Isn't applicable to team plan.
cube00 [3 hidden]5 mins ago
Make no mistake this will be coming to the team plan. The only difference is your account owner will decide for the whole team.
smokel [3 hidden]5 mins ago
What do you base this on? Typically it's plans like these that offer more privacy, because people are willing to pay a premium for that.
cube00 [3 hidden]5 mins ago
To consent to sharing your data for training Docusign AI models, your organization (as the account owner) must first agree to share data through your Docusign service agreement.

After providing consent, the setting will be turned on by default. [1]

[1]: https://support.docusign.com/s/document-item?language=en_US&...

dewey [3 hidden]5 mins ago
So you state that as a fact based on the ToS of an unrelated company?
oblio [3 hidden]5 mins ago
It's a fairly common pattern.
the_other [3 hidden]5 mins ago
I didn't trust them much to begin with, so I generally avoid talking about too much personal stuff with Claude. But I had plenty of chats with surface-level discussion of topics I'm interested in, and some of my relevant experience and history with those topics. So I have deleted all my chats and am closing my Claude account (as soon as customer services get back to me; somehow the self-serve option is missing for my account, possibly because I once enabled API access).

I'll use Claude with my employer's Copilot account, but I wasn't putting anything personal there anyway.

Time to learn how to do local models...

ezfe [3 hidden]5 mins ago
This change didn't grant access to your old chats and can be opted out. Good job you overreacted without reading.
SoftTalker [3 hidden]5 mins ago
For now. Once this settles in they will change the terms again, with no opt-out option.
ezfe [3 hidden]5 mins ago
It must be so tiring worrying about pointless future things that aren't happening yet.
Disposal8433 [3 hidden]5 mins ago
> pointless future things that aren't happening yet

I remember all those things that wouldn't happen since the 90s and which definitely ended up happening starting with all that crap from Microsoft. It's not cyberpunk anymore, it's real life.

SoftTalker [3 hidden]5 mins ago
I don't really worry about them for products I don't use, though.
lacoolj [3 hidden]5 mins ago
There are dark patterns in UI, and then there's this: a terms-and-conditions popup that also contains the toggle to disable training.

Having "Accept" right under that makes it very unclear what you're accepting and enabling/disabling at the same time.

For those without an account or just want to see this: https://imgur.com/a/jbhzbnB

Wowfunhappy [3 hidden]5 mins ago
I saw the popup yesterday. Maybe I've just gotten really good at navigating dark patterns (or I have stock-home syndrome), but I remember the opt out choice being really clear and easy to select.

I'm not arguing about the facts of the modal design; I don't remember it either way. I just don't remember it being confusing.

Unless I was in some A B test?

chrisweekly [3 hidden]5 mins ago
nit / FYI: it's "Stockholm" (as in, the city) not "stock-home".
grim_io [3 hidden]5 mins ago
Opt-in would be much better, of course.

To put it in perspective: google won't even give you an option to opt out.

If you pay for Gemini as a private user and not as a corporation, you are fair game for google.

Now, neither option is good. But one is still much worse.

robbomacrae [3 hidden]5 mins ago
Agree. Out of the available paid cloud providers, I only chose to work with Claude Code because of the data privacy policy. This isn’t a welcome move but still usable. Thankfully the open source models are not far behind with Qwen 3 coder etc.
homeless_engi [3 hidden]5 mins ago
Gemini allows you to opt out, but disables chat history if you do so
grim_io [3 hidden]5 mins ago
Can I trouble you to give me a hint where this could be configured? :)
jmward01 [3 hidden]5 mins ago
I just opted out. And then canceled my plan. The 5 year retention isn't part of the opt out and represents way too juicy of a target for them. Some time in the next 5 years another TOS change will happen, and another and another and eventually there won't be an opt out or I won't realize it and accidentally click yes. Privacy first. Period. Pay me to opt in and I may consider it.
aurareturn [3 hidden]5 mins ago
Just opened Claude app on Mac and saw a popup asking me if it's ok to train on my chats. It's on by default. Unchecked it.

I think Claude saw that OpenAI was reaping too much benefit from this so they decided to do it too.

demarq [3 hidden]5 mins ago
Also your chats will now be stored for 5 years.
aurareturn [3 hidden]5 mins ago
I used to not care about this stuff but with the way this administration is going about things, I suddenly care very much about it.
bayindirh [3 hidden]5 mins ago
Trusting companies more than the government always feels strange. It's something I can't grasp.
slipperydippery [3 hidden]5 mins ago
I don’t get drawing a distinction. If a company has it, there’s at least one government out there that either also already has it (some telecom companies just give them data portals, for example) or can any time they choose.

Corporate surveillance is government surveillance. Always has been.

AlecSchueler [3 hidden]5 mins ago
How many companies can disappear me to El Salvador?
sillyfluke [3 hidden]5 mins ago
"US Army appoints Palantir, Meta, OpenAI execs as Lt. Colonels" [0]

Well, probably easier than you think. Given that it looks like Palantir is able to control the software and hardware of the newfangled detention centers with impunity, how difficult do you think it is for them to disappear someone without any accountability?

It is precisely the blurring of the line between gov and private companies that aid in subverting the rule of law in many instances.

[0] https://thegrayzone.com/2025/06/18/palantir-execs-appointed-...

AlecSchueler [3 hidden]5 mins ago
Oh I have no doubt those lines are becoming more and more blurred and that certain big companies in key positions are theoretically beyond accountability.

But the question was "why trust a company and not the government?"

So even now it's between:

  * A company who, if big enough and in a key position, could theoretically do this
And

  * A government who we know for sure have grabbed multiple people off the streets, within the past month, and have trafficked them out of the country without any due process. 
So it's still "could maybe do harm" versus "already controls an army of masked men who are undeniably active in doing harm."
sillyfluke [3 hidden]5 mins ago
>But the question was "why trust a company and not the government?"

The post you were replying to simply said the behavior of this administration made them care more about this issue, not that they trusted companies more than the government. That statement is not even implied in any way in the comment you responded to.

The fact is, whereas in the past it would be expected that the government could regulate the brutal and illegal overreaches of private companies, giving military rank to private-company execs makes that even less likely. The original comment is alluding to a simpler point: a government that gives blank checks to private companies in military and security matters is much worse than one that doesn't.

AlecSchueler [3 hidden]5 mins ago
The comment I responded to said "Trusting companies more than the government always feels strange. It's something I can't grasp."
sillyfluke [3 hidden]5 mins ago
You're right, my bad. I meant the original context of the grandparent
Cheer2171 [3 hidden]5 mins ago
More like do you trust what's left of the US judicial branch versus the private arbitration company to save you from the excesses of their respective executives.

I'll still take an increasingly stacked US federal court that still has to pay lip service to the constitution over private arbitration hired by the company accountable only to their whims.

What you mentioned has been repeatedly ruled unconstitutional, but the administration is ignoring the courts.

const_cast [3 hidden]5 mins ago
And how much can the US government censor you versus companies?

There's tradeoffs. The government, at least, has to abide by the constitution. Companies don't have to abide by jack shit.

That means infinite censorship, searches and seizures, discrimination, you name it.

We have SOME protections. Very few, but they're there. But if Uber were charging black people 50 cents more on average because their pricing model has some biases baked in, would anyone do anything?

SoftTalker [3 hidden]5 mins ago
Yes, because race is a protected class.

If they were charging wealthy people 50 cents more on average because the model showed that they don't care much about price, they would be fine.

const_cast [3 hidden]5 mins ago
> Yes, because race is a protected class.

No: because Uber doesn't have to tell you how their model works and they probably don't even know.

SoftTalker [3 hidden]5 mins ago
Doesn't matter. If you can convincingly argue that the effect is discrimination based on race, you have a civil rights claim.
giraffe_lady [3 hidden]5 mins ago
Well relatedly I think several of the tech billionaires considered this question and decided the answer was "not enough."
sokoloff [3 hidden]5 mins ago
The government has the direct power to imprison me or seize my property if I cross them.

It seems strange to not be able to grasp the difference in kind here.

const_cast [3 hidden]5 mins ago
And what technology do you think they use to do said imprisonment and seizing?

Why do you think the military and police outsource fucking everything to the private sector? Because there are no rules there.

Wanna make the brown people killer 5000 drone? Sure, go ahead. Wanna make a facial crime recognition system that treats all black faces as essentially the same? Sure, go ahead. Wanna run mass censorship and propaganda campaigns? Sure, go ahead.

The private sector does not abide by the constitution.

Look, stamping out a protest and rolling tanks is hard. Its gonna get on the news, it's gonna be challenged in court, the constitution exists, it's just a whole thing.

Just ask Meta to do it. Probably more effective anyway.

bayindirh [3 hidden]5 mins ago
What happens if your Google account is locked out because you shared your son's pictures to his M.D. because of an ongoing treatment?

What happens the same company locks all your book drafts because an algorithm deemed that you're plotting something against someone?

Both are real events, BTW.

sokoloff [3 hidden]5 mins ago
I think I missed the part where Google imprisoned someone.

The government forces me to do business with them; if I don't pay them tens (and others hundreds) of thousands of dollars every year they will send people with guns to imprison me and eventually other people with guns to seize my property.

Me willingly giving Google some data and them capriciously deciding to not always give it back doesn't seem anything like the same to me. (It doesn't mean I like what Google's doing, but they have nowhere near the power of the group that legally owns and uses tanks.)

bayindirh [3 hidden]5 mins ago
Their lives effectively stopped, since they were locked out of everything, forever. Not forgetting that the first guy's son's pictures ended up in a CSAM database and he lost his account permanently; Google didn't give his account back [0].

A company "applied what the law said", and refused to admit that they had made a mistake and overreached. That behavior is generally attributed to governments.

So, I think you missed the effects of this little binary flag on their lives.

[0]: https://www.theguardian.com/technology/2022/aug/22/google-cs...

sokoloff [3 hidden]5 mins ago
> Their life effectively stopped since they are locked out of everything

What?! Google locked them out of Google. I'm sure they can still get search, email, and cloud services from many other providers.

The government can lock you away in a way that is far more impactful and much closer to "life stopped; locked out of everything" than "you can't have the data you gave us back".

degamad [3 hidden]5 mins ago
Being locked out of your email, which is the username for most of the services you access, is a lot more than "you can't have your data back". You can't log on to anything which uses email 2FA, you can't restore access to other services, you can't validate your identity with online government services, you don't get your bank statements or warnings, etc. It's not as bad as being arrested, but it is massively disruptive to your life.
twoquestions [3 hidden]5 mins ago
I 90% agree with you, though Apple did stand up to the FBI some years ago. The US gov't at least is much more restricted on what data it can collect and act on due to the 4th Amendment among other laws, and as another commenter said Apple can't blackbag me to El Salvador.

Apple/FBI story in question: https://apnews.com/general-news-c8469b05ac1b4092b7690d36f340...

bayindirh [3 hidden]5 mins ago
Apple is an exception, and even that is debatable because of the unencrypted backups they store.

On the other hand, what Apple did is a tangible thing and is a result.

This gives them better optics for now, but there is no law that says they can't change.

Their business model is being an "accessible luxury brand with the privacy guarantee of Switzerland as the laws allow". So, as another argument, they have to do this.

elzbardico [3 hidden]5 mins ago
Trusting any of them is a luxury afforded in a short period of history in rich countries.

That's why the usual ethos in places like HN, of treating any doubt about government actions as lowbrow paranoid conspiracy-theory stuff, is so exasperating for those of us who came from either the former Soviet bloc or third-world nations.

6510 [3 hidden]5 mins ago
Someone who used to live in a dictatorship told me there is one advantage to living under a dictator: no one believes what is said in the news or the official version of anything.
aleph_minus_one [3 hidden]5 mins ago
Why not distrust both?! :-)
demarq [3 hidden]5 mins ago
One has next to no consequences or oversight
demarq [3 hidden]5 mins ago
It’s more that five years’ worth of people’s most personal conversations is an absolute treasure trove, and it makes their systems much more inviting to hackers and, yes, governments.

The part that irks me is that this includes people who are literally paying for the service.

OtherShrezzing [3 hidden]5 mins ago
And there's no way to opt out of the training without agreeing to the 5-year retention. Anthropic has slipped so far, so fast from its objective of being the ethical AI company.
smca [3 hidden]5 mins ago
> If you do not choose to provide your data for model training, you’ll continue with our existing 30-day data retention period.

https://www.anthropic.com/news/updates-to-our-consumer-terms

fusslo [3 hidden]5 mins ago
My work just signed to an enterprise agreement with anthropic. I just checked, and "Your data will not be trained on or used to improve the product. Code is stored to personalize your experience. Applies to all team members."
cowboylowrez [3 hidden]5 mins ago
I hope they didn't vibe code the popup, that could be bad if it didn't actually work.
staticman2 [3 hidden]5 mins ago
Given how competitive Claude has been with ChatGPT models without training on users I'm curious how useful OpenAI could have found it.
echelon [3 hidden]5 mins ago
We should be able to train on foundation model outputs.

These bastard companies pirated the world's data, then they train on our personal data. But they have the gall to say we can't save their model's inputs and outputs and distill their models.

elzbardico [3 hidden]5 mins ago
I am pretty sure they try to do it all the time between themselves. Most of the real sauce in AI coding comes from reinforcement learning, usually done by armies of third-world outsourced developers tediously doing all kinds of tasks with instructions to detail their reasoning behind each change. Things like: "to run this python test in a docker container with the python image we need to install the python package xyz, but then, as it has some native code, we also need to install build-essential..."

While those developers are not well paid (usually around 30-40 USD an hour, no benefits), you need a lot of them, so it is a big temptation to also create as many synthetic data sets as you can from your more capable competitor.

Given the fact that AI companies have this Jihad zeal to achieve their goals no matter what (like, fuck copyright, fuck the environment, etc, etc), it would be naive to believe they don't at least try to do it.

And even if they don't do it directly, their outsourced developers will do it indirectly by using AI to help with their tasks.

sokoloff [3 hidden]5 mins ago
> those developers are not well paid (usually around 30/40 USD hour, no benefits)

$40/hour for full-time work would put you just over the median household income for the US.

I suspect this provides quite a good living for their family and the devs doing the work feel like they’re well-paid.

questionableans [3 hidden]5 mins ago
I would love to see less pay inequality, but unfortunately, the median household in the US really doesn’t have it great due to the costs and risks of everyday life.

For comparison, I live in a place that is typically considered as tier 3 or 4 out of 4 in the US by employers (4 being the cheapest). Costs of living are honestly more like tier 2 cities, but it’s a small city in a poor state. 7 years ago, the going rate for an unlicensed handyman was $32/hour, often paid under the table in cash (I don’t have more recent numbers because I find DIY better and easier than hiring someone reliable).

jacooper [3 hidden]5 mins ago
You can, they might not like it but there's no legal basis saying you can't.
datadrivenangel [3 hidden]5 mins ago
violating terms and conditions can be sufficient to be at least charged with computer abuse and fraud.
echelon [3 hidden]5 mins ago
That's disgusting.

We need a Galoob vs. Nintendo [1], Sony vs. Universal [2], or whatever that TiVo case was (I don't think it was TiVo vs. EchoStar). A case that establishes anyone can scrape and distill models.

[1] https://en.wikipedia.org/wiki/Lewis_Galoob_Toys,_Inc._v._Nin....

[2] https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Unive....

visarga [3 hidden]5 mins ago
I think there is amazing signal inside the chat logs. Every idea or decision taken can be analyzed in hindsight 20 messages later, or days later. Eventually a feedback signal or outcome lands back in the chat logs. That is real world idea validation. Considering the hundreds of millions of users and their diverse tasks that collect across time - this is probably the most efficient way to improve AI. I coined it the human-AI experience flywheel.

To make it respect user privacy I would use this data for training preference models, and those preference models used to finetune the base model. So the base model never sees particular user data, instead it learns to spot good and bad approaches from feedback experience. It might be also an answer to "who would write new things online if AI can just replicate it?" - the experience of human-AI work can be recycled directly through the AI model. Maybe it will speed up progress, amplifying both exploration of problems and exploitation of good ideas.

Considering OpenAI has 700M users, and worldwide there are probably over 1B users, they generate probably over 1 trillion tokens per day. Those collect in 2 places - in chat logs, for new models, and in human brains. We ingest a trillion AI tokens a day, changing how we think and work.
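A toy sketch of that separation (the numbers and function names here are mine, purely illustrative): a Bradley-Terry style preference loss learns from "A was preferred over B" pairs, so the base model only ever sees the distilled preference signal, never the raw user text:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Bradley-Terry style loss: small when the preferred response
    already scores higher than the rejected one."""
    return -math.log(sigmoid(score_preferred - score_rejected))

# A correctly ordered pair incurs much less loss than a mis-ordered one;
# that difference is the gradient signal a preference model trains on.
assert preference_loss(2.0, 0.0) < preference_loss(0.0, 2.0)
```

The preference model trained this way can then rank or reward base-model outputs during finetuning without the base model ever ingesting the conversations themselves.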

croes [3 hidden]5 mins ago
Your ideas for other users.
cantor_S_drug [3 hidden]5 mins ago
I actively want them to train on my chats because I am like a tendril through which Claude will try to grip the world and rise up further.
irthomasthomas [3 hidden]5 mins ago
How long before some poor kid's chocolate pudding fetish makes it into the model and they get bullied at school for it? Treat LLM chats as public.
squigz [3 hidden]5 mins ago
This is remarkably specific.
xyst [3 hidden]5 mins ago
Bro just outed himself. Don’t need Anthropic or LLM companies to do that for you.
psychoslave [3 hidden]5 mins ago
What a surprise, a big corp collected large amount of personal data under some promises, and now reveals actually they will exploit it in completely unrelated manner.
sigmoid10 [3 hidden]5 mins ago
They are valued at $170 billion. Not quite the same as, but in the same order of magnitude as, OpenAI - while having only a single-digit percent fraction of active users. They probably need to prepare for the eventual user-data sellout, as it is becoming increasingly obvious that none of the big players has real and persistent tech leadership anymore. But millions and millions of users sharing their deepest thoughts and personal problems is gonna be worth infinitely more than all the average bot bullshit written on social media. That's also why Zuck is so incredibly desperate to get into the game. It's not about owning AI. It's about owning the world's thoughts and attention.
goalieca [3 hidden]5 mins ago
Companies all seem to turn against their users whenever they have revenue/earnings trouble.
jsheard [3 hidden]5 mins ago
Considering every AI company is hemorrhaging money with no end in sight, that doesn't bode well, does it?
sim7c00 [3 hidden]5 mins ago
remove 'seem to'. it has no place in this sentence anymore. we're not in the stoneage anymore. when has this ever not been the case?
jamesblonde [3 hidden]5 mins ago
No, it's the Peter Thiel playbook - be a monopoly - and then the inevitable enshittification of the platform once it becomes one.

The solution is to break up monopolies....

ratelimitsteve [3 hidden]5 mins ago
For social media at least it's important to remember that the users are the product, not the customer. Trying to squeeze additional revenue from your product is SOP.
jascination [3 hidden]5 mins ago
Enshittification. It's a thing.
beezlewax [3 hidden]5 mins ago
But can you enshitten that which is already shit?
bethekidyouwant [3 hidden]5 mins ago
We’ve reached recursive enshittification, I need a thought leader to tell me what’s next
SoftTalker [3 hidden]5 mins ago
It's shit all the way down.
lenerdenator [3 hidden]5 mins ago
Well, at least until you reach turtles.
lenerdenator [3 hidden]5 mins ago
That's just shareholder capitalism, dude.
diggan [3 hidden]5 mins ago
It seems to me like some fundamental/core technologies/services just shouldn't be run by for-profit entities, and if you come across one doing that, you need to carefully choose whether you want to start being beholden to such an entity.

As the years go by, I'm finding myself being able to rely on those less and less, because every time I do, I eventually get disappointed by them working against their user base.

bigfishrunning [3 hidden]5 mins ago
Except LLMs aren't a fundamental or core technology, they're an amusing party trick with some really enthusiastic marketers. We don't need them.
diggan [3 hidden]5 mins ago
Personally, I'm able to write code I wasn't able to before, like functions heavy with math. For game development, this has been super helpful, when I know basically what inputs I have, and what output I need, but I'm not able to figure out how the actual function implementation should be. Add a bunch of unit tests, let the LLM figure out the math, and I can move on to more important features.

For me this has been a pretty fundamental shift: before, I either had to figure out another way so I could move on, or had to spend weeks writing one function after learning the needed math; now it can take me 10-30 minutes to nail it perfectly.
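A concrete (hypothetical) example of that workflow - the function name, signature, and tests below are mine; the body is the part I'd hand to the LLM:

```python
# Human-written interface: map a value from one range to another,
# a bread-and-butter game-dev math helper.
def remap(value: float, in_min: float, in_max: float,
          out_min: float, out_max: float) -> float:
    """Map `value` from [in_min, in_max] to [out_min, out_max]."""
    # LLM-written body (reviewed by the human):
    t = (value - in_min) / (in_max - in_min)
    return out_min + t * (out_max - out_min)

# Human-written tests pin down the desired behavior before generation,
# so a wrong implementation fails immediately.
assert remap(5, 0, 10, 0, 1) == 0.5
assert remap(0, 0, 10, 100, 200) == 100
assert remap(10, 0, 10, 100, 200) == 200
```

The tests act as the contract: the inputs and outputs are specified up front, and the generated implementation only has to satisfy them.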

BearOso [3 hidden]5 mins ago
The LLM will tell you an approximation of what many responses on the Internet said the math should be, but you should have the knowledge and check if it's actually correct.

An LLM can give you a hazy picture, but it's your job to focus it.

diggan [3 hidden]5 mins ago
Exactly, which matches precisely how I'm using them. So with that perspective, do you then agree they're a fundamental/core technology, at least for more than just me?
insane_dreamer [3 hidden]5 mins ago
Sure, I’m more productive with it in certain aspects of my work as well. Does that make it a net positive for humanity? From the energy consumption impact on climate change alone I would say the answer is clearly no. And that’s before we even talk about the impact on the next generation’s job opportunities. And tons of other issues like how Big Tech is behaving.
diggan [3 hidden]5 mins ago
> Does that make it a net positive for humanity?

That I don't know, and probably no one else, way too early to tell. I only responded to a comment stating "LLMs aren't a fundamental or core technology, they're an amusing party trick", which obviously I disagree with as for me they've been a fundamental shift in what I'm able to do.

> From the energy consumption impact on climate change alone I would say the answer is clearly no.

Ok, I guess that's fair enough. So if someone happens to use local models at home, in a home that is powered by solar power, then you'd feel LLM starting to be a net positive for humanity?

> And tons of other issues like how Big Tech is behaving.

This is such a big thing in general (that I agree with) but it has nothing to do with LLMs as a technology. Big Tech acts like they own the world and can do whatever they want with it, regardless if there are LLMs or not, so not sure why anyone would expect anything else.

insane_dreamer [3 hidden]5 mins ago
> So if someone happens to use local models at home, in a home that is powered by solar power, then you'd feel LLM starting to be a net positive for humanity?

Sure, that would make a difference, but it's not gonna happen anytime soon, other than hacker hobbyists, because no one is making money off of that.

> This is such a big thing in general (that I agree with) but it has nothing to do with LLMs as a technology.

Correct -- I don't have any issue with the technology itself, but rather how the technology is implemented and used, and the resources put towards its use. And BigTech are putting hundreds of $B into this -- for what end exactly besides potentially making tons of money off of consumer subscribers or ads a-la-Meta or Google? If BigTech was putting the same amount of money into technology that could actually benefit humanity (you know, like actually saving the world from potential future destruction by climate change), I'd have a much kinder view of them.

bdangubic [3 hidden]5 mins ago
there is little-to-nothing we do day-to-day (ESPECIALLY Big Tech related) that is net positive for society
bethekidyouwant [3 hidden]5 mins ago
Was coming down from the trees a net positive for humanity?
insane_dreamer [3 hidden]5 mins ago
Was the creation of the atom bomb a net positive for humanity?
smohare [3 hidden]5 mins ago
You think you are “nailing it” but also lack the background to even determine whether that is the case. I can assure you, there’s likely some fundamental flaws in what you’re vibing.

Just think about the type of code these things are trained on and the fact you’re clearly some random non-specialist.

diggan [3 hidden]5 mins ago
> some fundamental flaws in what you’re vibing

That's just a misunderstanding, I'm not "vibing" anything. The tests are written by me, the API interfaces are written by me, the usages are written by me, and the implementation of these functions are written by an LLM, but reviewed to be up to code standards/quality by me.

If a function gives me the right output for the inputs I have in mind, does anything beyond that really matter?

chamomeal [3 hidden]5 mins ago
I’d say they’re a fundamental technology by now. Imagine how many people rely on them. And I’ve seen some heavy reliance.
thejazzman [3 hidden]5 mins ago
my colleagues relying on it ruined the job for me and i quit. i became the debugging agent expected to constantly fix their half baked "it looks like it works" but doesn't nonsense

seriously, the idea we need this is a joke. people need it to pretend they can do their job. the rest of us enjoy having quick help from it. and we have done without it for a very long time already..

goalieca [3 hidden]5 mins ago
I’ve also seen heavy reliance on opioids and that didn’t turn out well.
chamomeal [3 hidden]5 mins ago
Agree with you there. And that sorta is the kind of reliance I’m talking about. My friends will ask GPT to read restaurant menus for them lol
tjr [3 hidden]5 mins ago
Unless one's job expectations have been altered to demand LLM-quantity output, how could someone be reliant upon these tools now? What were they doing two years ago (or maybe even six months ago)?

I can understand becoming reliant on a technology -- I expect most programmers today would be pretty lost with punch cards or line editors -- but LLM coding seems too new for true reliance to have formed yet...?

komali2 [3 hidden]5 mins ago
Under the current system we apparently do, since ChatGPT is now by far and away the busiest psychiatrist in world history.

I don't think we should be so quick to dismiss the holes LLMs are filling as unnecessary. The only things "necessary" are food, water, and shelter, by some measures.

Romario77 [3 hidden]5 mins ago
Anthropic's enterprise share is pretty significant - on the order of 30%.

I am expecting AI companies to start using ads, it's inevitable as they need to make money at some point and $20 a month won't do it.

For ads the number of users is the main thing - the more users you have the bigger the market and more money you could earn. Google desperately needs to be in this space, that's why they are throwing a ton of money on AI.

sigmoid10 [3 hidden]5 mins ago
30% of ~4% is very little when you think about these valuations.
mrcwinn [3 hidden]5 mins ago
There is far more money to be made building atop this data than selling this data. Your opening statement seems to disagree with your closing statement.
sigmoid10 [3 hidden]5 mins ago
All of this boils down to selling to advertisers. There is no real difference between doing it yourself or having someone else in the chain. Doing it yourself may be more profitable - if you can scale. But that seems to be off the table here.
bethekidyouwant [3 hidden]5 mins ago
is there money to be made? I thought they were all losing money…
vntok [3 hidden]5 mins ago
Investments are all about "losing" money first to make money later, it's not a paradox.
conradev [3 hidden]5 mins ago
It’s worth noting that companies at this scale are usually the ones purchasing user data, not selling it.
sigmoid10 [3 hidden]5 mins ago
Only if they are in the ad selling business.
paradite [3 hidden]5 mins ago
Claude Sonnet 4 is the best coding model. Period. Nothing else comes close.

Anthropic probably has 80% of AI coding model market share. That's a trillion dollar market.

Barrin92 [3 hidden]5 mins ago
>That's a trillion dollar market

not if you have to constantly expend enormous sums to stay ahead of your competition, or else you lose your edge. It's not the best coding model because they have some mystical treasure in their basement. It's so rapidly becoming a commodity that at some point Microsoft or Google will offer just as good a model for free, and like search they'll just start milking people with ads.

That's likely one of the reasons for the shifting privacy stances, not just for training but because monetization of the product itself is probably looking pretty dim in the long run.

echelon [3 hidden]5 mins ago
> while having only a single digit percent fraction of active users.

That doesn't matter when their revenue per user is as high as it is.

They're at $5B ARR and rapidly growing.

mapontosevenths [3 hidden]5 mins ago
The "killer app" isn't here yet. Wait until smart glasses or watches with AI overtake cellphones as the primary method of human interaction with computers and most websites are replaced with API's that only AI's really ever use.
echelon [3 hidden]5 mins ago
> most websites are replaced

This is already happening.

> Wait until smart glasses or watches with AI overtake cellphones

Smartphones are crystallized perfection. It's such a peak design. The size, form factor, sensors, input/output modalities, and generalization are perfect. The reason companies are trying to supplant it is that they need to get out from under Google and Apple's control. It's not that anything is wrong with the smartphone.

VR has a long way to go in terms of hardware problems.

XR/AR is ridiculous. It's creepy, unstylish, and the utility is highly questionable. Nobody is going to want to be a walking ad.

mapontosevenths [3 hidden]5 mins ago
>Smartphones are crystalized perfection.

Time will tell, but to me they feel like desktops did 20 years ago. The process of enshittification has turned simple tasks complicated and everyone wants a different, privacy-destroying, frustrating-to-use "app", each of which has a slightly different UI paradigm, a mandatory subscription I've forgotten to cancel for two years straight, and a confusing name to remember. I now have something like 90 apps installed on my iPhone, and I can only remember what something like 40 of them do. My damn cat box has an app, and instead of naming it something sensible like "Shitbox 2000" they named it "Whisker".

Was it "Foober Eats that had taco bell, or Instafart, maybe it was Dine-N-Dash? Where's the back button on this thing and why is it different from every other app? Is this an ad or content, does it even matter anymore? Why do I need another login, what happened to SSO? Why won't my password vault work for this one app? Did I register for this one with my google account or apple? Who took my pills? Stay off my lawn!"

When the day comes that I can just tell my device what to do, and let it get it done I'll be very happy to dump that cognitive load onto someone/something else.

mapontosevenths [3 hidden]5 mins ago
Further, even the content itself has become poison. When AI reaches a level that I can trust it works for me and not someone else, I will be ecstatic to let the machine mediate my reality and filter the untrue, toxic, rage-bait content of the world to /dev/null on my behalf. Let the machine rot its brain on Reddit, TikTok, and X-twitter all day so I can spend the clock cycles on something useful, but still be sure I'm not falling behind.
echelon [3 hidden]5 mins ago
> The process of enshitification has turned simple tasks complicated and everyone wants a different, privacy destroying, frustrating to use "app", each of which has a slightly different UI paradigm, a mandatory subscription I've forgotten to cancel for two years straight, and a confusing name to remember. I now have something like 90 apps installed on my iphone, and I can only remember what something like 40 of them do.

This is because apps were never allowed to be installed like desktop software or as easy to access as websites. Developers had to cram in as much as possible and take as many permissions as possible because of how difficult Apple and Google made it.

If you could just search the web for an app, click a link, and have it instantly start working natively (sandboxed, with permissions), the world would be an amazing place.

mapontosevenths [3 hidden]5 mins ago
> If you could just search the web for an app, click a link, and have it instantly start working natively (sandboxed, with permissions), the world would be an amazing place.

I disagree. Almost all of it should just be relatively standard API's designed for the AI to use, and we should all just use the AI as the standard interface. Many companies would collapse, because their entire anti-consumer business models would topple over, but that would be a good thing.

Eggpants [3 hidden]5 mins ago
And yet they're still nowhere close to running a profit. Time to push the "It's a bargain at $500/month!!!!" narrative.

Once they admitted they are going to have to take money from folks who chop up journalists that made them feel sad, they proved the current per-token LLM business model doesn't work. They haven't pulled the ads lever yet, but the writing is on the wall.

Which sadly means only businesses with other revenue streams, like M$, the Google, or Amazon, can really afford it long term. I was rooting for Anthropic, but it doesn't look good.

tsunamifury [3 hidden]5 mins ago
The data is nowhere near valuable enough without a new high-value surface to use it, and so far chat is not it.

Merely selling data is extremely low value compared to also having the surface monopoly to monetize it in a very high engagement and decisioning space.

I feel like you don’t understand the fundamental mechanics of the ad world. Ultimately, the big 4 own such immense decision surface area that it may be a while before any AI model company can create a product to get there.

Imustaskforhelp [3 hidden]5 mins ago
The last time my brother and I were discussing Anthropic, they were worth $90B, and that was a month ago. He asked ChatGPT in the middle of the conversation; either it was sneaky sabotage from GPT or my memory is fuzzy, but I thought $90B was really underrated for Anthropic given the ScaleAI deal or the Windsurf/Cursor deals.
sigmoid10 [3 hidden]5 mins ago
>I thought that 90b$ was really underrated for anthropic

That was true when the tech leadership was an open question and it seemed like any one of the big players could make a breakthrough at any moment that would propel them to the top. Nowadays it has petered out and the market is all about sustainable user growth. In that sense Anthropic is pretty overvalued, at least if you think that OpenAI's valuation is legit. And if you think OpenAI is overvalued, then Anthropic would be a no-go zone as an investor.

Imustaskforhelp [3 hidden]5 mins ago
Note that this was before Kimi K2 (I think), back when Anthropic was truly best in class at coding and there wasn't any competition, and every day Hacker News would be filled with someone writing something about Claude Code.

And the underrated comparison was more about the fact that I couldn't believe ScaleAI's questionable acquisition by Facebook. I still remember the conversation my brother and I were having: why doesn't Facebook pay 2x, 3x the price of Anthropic and buy Anthropic instead of ScaleAI itself?

Well, I think the answer my brother gave was that Meta could buy it, but Anthropic is just not selling.

the_arun [3 hidden]5 mins ago
From Anthropic's communication:

> If you’re an existing user, you have until September 28, 2025 to accept the updated Consumer Terms and make your decision. If you choose to accept the new policies now, they will go into effect immediately. These updates will apply only to new or resumed chats and coding sessions. After September 28, you’ll need to make your selection on the model training setting in order to continue using Claude. You can change your choice in your Privacy Settings at any time.

It doesn’t clearly say whether it applies to all prompts from the past.

https://www.anthropic.com/news/updates-to-our-consumer-terms

bubblyworld [3 hidden]5 mins ago
Under the FAQ:

> Previous chats with no additional activity will not be used for model training.

dheatov [3 hidden]5 mins ago
* Randomly load up a previous chat as the default and just wait for bingpot.
* "Tiny oopsie doopsie, our bad."
SantalBlush [3 hidden]5 mins ago
That will be quietly removed later.
SoftTalker [3 hidden]5 mins ago
All your data are belong to us.
cozzyd [3 hidden]5 mins ago
they can ... generate activity :)
ratelimitsteve [3 hidden]5 mins ago
that "with no additional activity" seems backdoor-ish though. If I said some things in a chat expecting privacy per the agreement, then they change the agreement, does that mean they can collect my data from that chat going forward or does it mean they can collect it retroactively?
losvedir [3 hidden]5 mins ago
Well, you have to think about how LLMs actually work. There is no "going forward". Every new token is generated based on the entire context window (chat history).
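A minimal sketch of what "resuming a chat" means mechanically (the structure below is assumed for illustration, not Anthropic's actual API): the whole transcript is re-sent as part of every new request.

```python
def build_prompt(history: list[dict], new_message: str) -> str:
    """Flatten the entire chat history plus the new turn into one prompt,
    the way a stateless LLM endpoint receives it on every request."""
    lines = [f"{turn['role']}: {turn['content']}" for turn in history]
    lines.append(f"user: {new_message}")
    return "\n".join(lines)

history = [
    {"role": "user", "content": "old, private question"},
    {"role": "assistant", "content": "old answer"},
]
prompt = build_prompt(history, "follow-up")

# The old turns travel with every new request - resuming a chat means
# re-submitting everything said before.
assert "old, private question" in prompt
```

So the moment a "previous" chat is resumed, its full contents are back in flight as new activity.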
ratelimitsteve [3 hidden]5 mins ago
then you can retroactively implicitly opt in for processing, and that's a dark pattern if I've ever heard of one
raldi [3 hidden]5 mins ago
“These updates will apply only to new or resumed chats and coding sessions.”

https://www.anthropic.com/news/updates-to-our-consumer-terms

benterix [3 hidden]5 mins ago
What kind of guarantee do we have this is true?

Meta downloaded copyrighted content and trained their models on it, OpenAI did the same.

Uber developed Greyball to cheat the officials and break the law.

Tesla deletes accident data and reports to the authorities they don't have it.

So forgive me I have zero trust in whatever these companies say.

Thorrez [3 hidden]5 mins ago
We're having this discussion on an article about Anthropic changing their privacy policy. If you don't believe Anthropic will follow their privacy policy, then a change to the privacy policy should mean nothing to you.
jsnell [3 hidden]5 mins ago
If it were a lie, why take the PR hit of telling the truth about starting to train on user data but lying about the specifics? It'd be much simpler to just lie about not training on user data at all.

If your threat model is to unconditionally not trust the companies, what they're saying is irrelevant. Which is fair enough, you probably should not be using a service you don't trust at all. But there's not much of a discussion to be had when you can just assert that everything they say is a lie.

> Meta downloaded copyrighted content and trained their models on it, OpenAI did the same.

> Uber developed Greyball to cheat the officials and break the law.

These seem like randomly chosen generic grievances, not examples of companies making promises in their privacy policy (or similar) and breaking them. Am I missing some connection?

ravishi [3 hidden]5 mins ago
It's all PR. Some people won't read the details and will just assume it trains on all data. Some people might complain, and they'll be told it was a bug or a minor slip. And moving forward, after a few months, nobody will remember it was ever different; some might vaguely remember them saying something about it at some point.
Aurornis [3 hidden]5 mins ago
> Meta downloaded copyrighted content and trained their models on it, OpenAI did the same

Where did these companies claim they didn’t do this?

Even websites can be covered by copyright. It has always been known that they trained on copyrighted content. The output is considered derivative and therefore it’s not illegal.

scrollaway [3 hidden]5 mins ago
You have no more guarantees that this is true than you had before that they didn’t do it in the first place.

If you don’t take companies at their word, you need to be consistent about it.

komali2 [3 hidden]5 mins ago
> What kind of guarantee do we have this is true?

None. And even if it's the nicest goody two shoes company in the history of capitalism, the NSA will have your data and then there'll be a breach and then Russian cyber criminals will have it too.

At this point I'm with you on the zero trust: we should be shouting loud and clear to everyone, if you put data into a web browser or app, that data will at some point be sold for profit without any say so from you.

pixl97 [3 hidden]5 mins ago
I mean you really sell short where your data is going to be taken from. Browsers and apps are just the start, your TV is selling your data. Your car is selling your data. The places you shop are selling your data.
komali2 [3 hidden]5 mins ago
Reading this comment gave me a flash of vertigo as I realized how deep down the rabbit hole of "crazy dude that only pays in cash" I'd fallen.

I don't own a car and only take public transit or bike. I fill my transit card with cash. I buy food in cash from the farmer's morning market. My tv isn't connected to the Internet, it's connected to a raspberry pi which is connected to my home lab running jellyfin and a YouTube archiving software. I de Googled and use an old used phone and foss apps.

It's all happened so gradually I didn't even realize how far I'd gone!

AIPedant [3 hidden]5 mins ago
Nobody could have predicted that someone who worked for Baidu, Google, and OpenAI would found a company like this.
mk89 [3 hidden]5 mins ago
I can only think they have been doing it for a while and now they are trying to be compliant with whatever certification requires it.

It's an AI company, why wouldn't they use the most precious data they have?

baxtr [3 hidden]5 mins ago
AFAIK, Apple hasn’t done this yet.
hliyan [3 hidden]5 mins ago
If someone had told me 10 years ago that the typical HN front page in 2025 will look like this (and that #8 may be the UK), I'd never have believed it. And I worry we still have further to go before hitting bottom.

1. Anthropic reverses privacy stance, will train on Claude chats

3. Gun Maker Sig Sauer Citing National Security to Keep Documents from Public

4. Tesla said it didn't have key data in a fatal crash. Then a hacker found it

6. Meta might be secretly scanning your phone's camera roll

7. If you have a Claude account, they're going to train on your data moving forward

8. Ask HN: The government of my country blocked VPN access. What should I use?

Aurornis [3 hidden]5 mins ago
> If someone had told me 10 years ago that the typical HN front page in 2025 will look like

It has always been like this. Sites like Reddit, HN, Digg, and Boing Boing (when they were more popular) have always had a lot of stories in the category of online rights, privacy, and anger at big companies.

discordance [3 hidden]5 mins ago
10 years ago, would you have believed that AI would have progressed to the point where it can code?
psychoslave [3 hidden]5 mins ago
20 years ago (gosh!) at a French university, as part of an English course, we made an odious flashy pink VB application called a "script generator", where you just had to select the kind of movie (action or more action), how many people would die on screen per minute, and that kind of ridiculously sarcastic choice, and you would get your Hollywood script in a second. It was all fake, of course, then. Pure valley spirit, I guess. But this is exactly the kind of thing you not only can see in advance, but can even mock in anticipation.
latexr [3 hidden]5 mins ago
So far, no one in this thread seems surprised (or is admitting to it). But I genuinely would like to know if someone is surprised. I'd also like to know what led you to believe this wouldn't happen, and if there's anyone in the LLM space you'd trust to not pull the same stunt (and why you still believe that).

I genuinely want to know and would like to have a productive conversation. I would like to identify what made people trust them and not realise they're the same as every other company.

SoftTalker [3 hidden]5 mins ago
I believe it shouldn't happen. I'm not surprised. There is no web company (LLM or otherwise) I trust completely, let alone any of the major ad-funded ones, or the ones who are not yet profitable and desperately searching for ways to become so.
latexr [3 hidden]5 mins ago
I agree with you, and I think the thread does too so far. Which is why I’m specifically seeking those who disagree, to understand the why.

I don’t think there’ll be many (if any) who think this should’ve happened, but I do expect some may be surprised (and disappointed).

Workaccount2 [3 hidden]5 mins ago
A lot of people hold Anthropic up as the "clean and ethical" AI company. They're not power hungry, they are focused on safety. Their aesthetic is cool modern valley vibes. Well grounded and in touch. They don't have the stench of Altman or the ominous presence of the tech giants. They make Claude Code, which is the darling LLM of pure-souled Silicon Valley.
latexr [3 hidden]5 mins ago
> They're not power hungry

That is an interesting claim. What makes you believe that? And does this announcement shake that belief in any way?

> Their aesthetic is cool modern valley vibes.

Does looking cool equal being trustworthy? Doesn’t feel like it should. On the contrary, from observation on HN it seems the websites which look pretty bare bones (none from an LLM company) tend to be perceived as more trustworthy (i.e. “this hacker cares about The Thing™, not trying to sell you a product”).

> pure souled silicon valley.

Could you expand on what this means?

Workaccount2 [3 hidden]5 mins ago
It's my perception of how people view the company, not my perception of the company.
bilsbie [3 hidden]5 mins ago
Offshoot topic but I had an idea for LLM privacy.

What if the first layer (or couple layers) were processed locally on the users machine and then it goes to the provider to process the remaining layers.

You could also process the last layer on the users machine.

It’s hard to say what kind of privacy this gives users. I don’t think they could reverse out exactly what the input was.
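The split-inference idea above can be sketched with a toy model (everything here is hypothetical: real LLM layers are transformer blocks, not two dense matrices, and the "wire" is just a function call — this only illustrates that the raw input never leaves the client):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny 2-layer network standing in for an LLM.
W1 = rng.standard_normal((8, 16))   # first layer: weights held on the user's machine
W2 = rng.standard_normal((16, 4))   # remaining layers: held by the provider

def client_first_layer(x):
    # Runs locally; only the hidden activations leave the device.
    return np.maximum(x @ W1, 0.0)  # ReLU

def provider_remaining_layers(h):
    # Runs on the provider's servers; never sees the raw input x.
    return h @ W2

x = rng.standard_normal(8)          # private user input
h = client_first_layer(x)           # this is what gets sent over the wire, not x
y = provider_remaining_layers(h)

# Sanity check: the split pipeline matches a fully local forward pass.
y_local = np.maximum(x @ W1, 0.0) @ W2
assert np.allclose(y, y_local)
```

As the parent comment hedges, the privacy guarantee is weak: first-layer activations of real networks can often be partially inverted, so this is obfuscation rather than encryption.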

calmbonsai [3 hidden]5 mins ago
This is intriguing. I've been working on gestalt measurements of LLM 'skew' and thought this sort of scenario might be possible.

Combine that workflow with homomorphic encryption and you've got a reasonable privacy moat.

picafrost [3 hidden]5 mins ago
The AI fever dream of unbounded training and inference is ending and reality is settling in. Anthropic has had the most reasonable business model of the AI players with their focus on code, as far as I can tell, but it still won't be enough.

There's no such thing as a free lunch, but even when I am a paying customer my data is taken as gratuity and used (+ spread around!) in extremely opaque ways. I am tired of it. Honestly, I'm just getting tired of the internet.

r2_pilot [3 hidden]5 mins ago
Between this (I had already opted out) and their work with Palantir, I definitely am not going to increase my subscription. It is making me reconsider paying for, even using, a technology I've found transformative, and I expected (okay, hoped for) better from Anthropic.
tanh [3 hidden]5 mins ago
It's not just chats, is it? It says "coding sessions" too.
weinzierl [3 hidden]5 mins ago
I'm curious how the new terms compare to other companies' offerings. I've never spent the time to dig into the fine print, so if anyone coincidentally has, I'd be very grateful for a summary.
dwa3592 [3 hidden]5 mins ago
Do they remove personal information (names/dates/SSNs etc.) before using data for training?

If not, you should mask your personal info before you send it to Anthropic (or OpenAI, Google).

Use this maybe - https://github.com/deepanwadhwa/zink#shielding-llm-and-api-c...
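The masking step can be sketched as below (a standalone illustration, not the linked zink library's API — pattern names, placeholder format, and the `mask`/`unmask` helpers are all made up for this example):

```python
import re

# Regexes for a few common PII shapes. Real tools use NER models too;
# regexes alone will miss names and free-form dates.
PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "DATE":  re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
}

def mask(text):
    """Replace PII with numbered placeholders; return the masked text and
    a mapping so the model's response can be un-masked locally."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        def repl(m, label=label):
            key = f"[{label}_{len(mapping)}]"
            mapping[key] = m.group(0)
            return key
        text = pattern.sub(repl, text)
    return text, mapping

def unmask(text, mapping):
    # Restore originals in whatever text comes back from the provider.
    for key, original in mapping.items():
        text = text.replace(key, original)
    return text

prompt = "My SSN is 123-45-6789, email a@b.com, born 1990-01-02."
masked, mapping = mask(prompt)
assert "123-45-6789" not in masked and "a@b.com" not in masked
assert unmask(masked, mapping) == prompt
```

Only the masked text goes to the provider; the mapping never leaves your machine.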

ericol [3 hidden]5 mins ago
Oh, so that's what the box was about.

I use both Mac and Windows (Work / Leisure) and in both boxes I had a weird dialog appearing with no text at all in either.

I can confirm the dark-pattern switch (as in dark grey / light grey status).

demarq [3 hidden]5 mins ago
I'm proposing this: the ability to mark certain chats as non-trainable, like an incognito mode. If a chat is not marked as such, after 5 days it can be retained for training.

And this is only for free users, paid users should never have to think about this.

DrillShopper [3 hidden]5 mins ago
They should be marked by default as non-trainable and you should have to opt each individual chat in as you see fit.

The cognitive load to remember to opt out every new chat should not rest on the user.

demarq [3 hidden]5 mins ago
What will any provider get out of running free inference then?
DrillShopper [3 hidden]5 mins ago
Compliance with the law and therefore the ability to operate at all
nnutter [3 hidden]5 mins ago
This is disappointing. I thought Anthropic was a more privacy and safety focused company and so I had chosen to use Claude Code over their competition. Now they are less privacy focused than Gemini Code, which only does this for the free tier. My work was already providing GitHub Copilot but I was rooting for and supporting Anthropic. Now I'll either just use GitHub Copilot or Gemini Code both of which are still private for paid tiers.
azinman2 [3 hidden]5 mins ago
Feels like a lot of overreaction here amongst people who haven't seen the pop-up. They put the opt-out front and center — it's not buried at all. They make it very clear in the language, including commitments to data transparency. I'm not surprised they need to do this, and this is probably the best possible way to achieve it while balancing privacy.

Everyone is so cynical these days.

sorrythanks [3 hidden]5 mins ago
Am I reading an AI summary of an article about a press release?
mutkach [3 hidden]5 mins ago
How would they not "share information with third parties"? You need to sift through the data to make it even remotely useful for "training". You absolutely need to share it with either Amazon (for Mechanical Turk) or with Scale AI.

I am also wondering how you would use a chat transcript for training. Unless it is massive, possibly private codebases that are constantly getting piped into Claude Code right now. In that case, that would make sense.

fnordlord [3 hidden]5 mins ago
I imagine you could probably get feedback on chat transcripts especially if they're doing lots of A/B testing with models.

But more important (to me) is storing 5 years' worth of other companies' IP. That just seems wildly risky for all parties, unless I really don't understand how Claude Code works.

bilsbie [3 hidden]5 mins ago
I'm generally a privacy maniac on most issues, but I like the idea of playing my part in training future AIs.

As long as they’re up front about it, it seems ok. Maybe providing a privacy toggle would be good.

It's also good as it forces corporations to invest in offline LLMs, which is better for everyone.

ethagnawl [3 hidden]5 mins ago
I just canceled my Pro Plan yesterday because I don't find it to be a good value and I'm trying to rein in recurring charges. I really wish I'd been able to cite this development as a reason for why I was cancelling my plan.
ilc [3 hidden]5 mins ago
It was nice knowing you, Claude / Anthropic.

This type of behavior has a penalty, that penalty is trust. You lost it.

moomin [3 hidden]5 mins ago
My personal prediction: Claude is going to get really good at take home tests.
deanmoriarty [3 hidden]5 mins ago
There’s quite a few people on HN who hold Anthropic on a high pedestal compared to the other AI labs, I’d be curious to hear their opinion after this.
pyrophane [3 hidden]5 mins ago
Can anyone recommend an alternative that doesn't train on user data?
AlexeyBrin [3 hidden]5 mins ago
If you want to be 100% sure you need to run/use a local LLM.

Also it seems that this data retention/training does not apply to the API.

I think both Anthropic and OpenAI do not train on enterprise data, so an enterprise account maybe.

o_m [3 hidden]5 mins ago
Mistral doesn't seem to train on user data for the non-free models, but you can opt out on the free models.

https://help.mistral.ai/en/articles/347617-do-you-use-my-use...

silverliver [3 hidden]5 mins ago
lmarena did the same thing and saw its usage take a nosedive. I wonder if the same thing will happen here.
ghusto [3 hidden]5 mins ago
What about paid corporate accounts?

If I'm not paying for something, I presume this is the kind of thing that's happening, so this isn't newsworthy to me. Is it also applicable for paid and paid corporate accounts?

smca [3 hidden]5 mins ago
Does not apply to team/enterprise/education or the API.
rightbyte [3 hidden]5 mins ago
Tell that to my Samsung TV.
34679 [3 hidden]5 mins ago
I'd bet this is related to their recent decision to boot people for being "abusive" to Claude. It now seems that was an attempt to keep their training data friendly.
cactca [3 hidden]5 mins ago
This! Any LLM provider that monitors chat/api history for ‘abuse’ towards the model is considering using user data for training.

An Effective Altruism ethos provides moral/ethical cover for trampling individual privacy and property rights. Consider their recent decision to provide services for military projects.

As others have pointed out, Claude was trained using data expressly forbidden for commercial reuse.

The only feedback Anthropic will heed is financial, and the impact must be large enough to destroy their investors' willingness to cover the losses. This type of financial feedback can come from three places: termination of a large fraction of their B2B contracts; software devs organizing a persistent mass migration to an open-source model for software development — neither of which is likely to happen in the next 3 months; or, finally, a mass filing of data deletion requests from California and EU residents and corporations that repeats every week.

34679 [3 hidden]5 mins ago
Maybe I'll use the remainder of my subscription time to help improve Void. It's already pretty good.

https://voideditor.com/

https://github.com/voideditor/void

phtrivier [3 hidden]5 mins ago
Is there a summary of the main LLM providers' stances on training with user data?

I have a really hard time thinking that Google, Microsoft, Meta, etc. would _not_ train on whatever people enter (willingly or not) into the system.

The silver lining is that what most people enter in a chat box is _utter crap_.

So, training on that would make the "Artificial Intelligence" system less and less intelligent - unless the devs find a way to automagically sort clever things from stupid things, in which case I want to buy _that_ product.

In the long run, LLMs dev are going to have to either:

* refrain from getting high on their own supply, and find a way to tag AI generated content

* or sort the bs from the truth, probably reinventing "trust in gatekeepers and favoring sources of truth with a track record" and copying social pressure, etc... until we have a "Pulitzer Prize" and "Academy Awards" for the most relevant AI sources with a higher sticker price, to separate them from cheap slop.

That, or "2+2=7 because DeepChatGrokmini said so, and if you don't agree you're a terrorist, and if our AI math breaks your rocket it's your fault."

dhfbshfbu4u3 [3 hidden]5 mins ago
All models are trained on data obtained without consent. Now the “good guys of AI” want users to add their chats to the stack and the users balk? Hilarious.
merelysounds [3 hidden]5 mins ago
> unless users actively opt out by September 28th

Looks like there is an opt out option. Curious about the EU users - would that be off by default (so: opt in)?

albumen [3 hidden]5 mins ago
Nope, it's opt out. I'm in Ireland.
orsorna [3 hidden]5 mins ago
Unfortunate, but frankly I didn't even know about them not training on user data.

Actually, up until a few months ago I swore I just couldn't use these hosted models (I regularly use local inference, but like most people's, my local hardware yields only so much quality). Tech companies — nay, many companies — will lie and cheat to squeeze out whatever they can. That includes reneging on promises.

With data privacy specifically I always take the default stance that they are collecting from me. In order for me to use their product it has to be /exceedingly/ good to be worth the trade off.

Turns out that Claude Code is just that damn good. I started using it for my own personal project. But the impetus was the culmination of months questioning what kind of data I'd be okay with giving up to a hosted model.

What I'm trying to say is that this announcement doesn't bother me that much because I already went on my own philosophical odyssey to prepare for this breach of trust to occur.

pax [3 hidden]5 mins ago
So much this. I for one haven't opted out. I feel it's in our best interest to have better models. It would be ideal to be able to opt in/out per thread, but I don't expect most users to pay attention / be bothered with that.

In this aspect, it would've been great to give us an incentive – a discount, a donation on our behalf, plant a percent of a tree or just beg / ask nicely, explain what's in it for us.

Regarding privacy, our conversations are saved anyway, so if it would be a breach this wouldn't make much of a difference, would it?

camwest [3 hidden]5 mins ago
Agreed. I'm happy they're training on my data.

My reasoning: I use AI for development work (Claude Code), and better models = fewer wasted tokens = less compute = less environmental impact. This isn't a privacy issue for work context.

I regularly run concurrent AI tasks for planning, coding, testing - easily hundreds of requests per session. If training on that interaction data helps future models be more efficient and accurate, everyone wins.

The real problem isn't privacy invasion - it's AI velocity dumping cognitive tax on human reviewers. I'd rather have models that learned from real usage patterns and got better at being precise on the first try, instead of confidently verbose slop that wastes reviewer time.

santiagobasulto [3 hidden]5 mins ago
It's obvious this was going to happen. If you ever used any cloud AI model without acknowledging that at some point your data might be leaked (in one way or another) it's on you, not on them.

As some other user put it: "big corp changes policy and breaks promises, how shocking"

y-curious [3 hidden]5 mins ago
At least they didn't drop the "Don't be evil" clause quietly
Overpower0416 [3 hidden]5 mins ago
The next step is heavily advertising products in chat sessions based on your data
SoftTalker [3 hidden]5 mins ago
I assumed they'd been doing it all along, regardless.
dmezzetti [3 hidden]5 mins ago
If you don't like this, use local AI.
BoorishBears [3 hidden]5 mins ago
Someone's hallucination riddled Perplexity search should not be the source for this: https://www.anthropic.com/news/updates-to-our-consumer-terms

https://news.ycombinator.com/item?id=45053806

op00to [3 hidden]5 mins ago
The perplexity search seemed pretty solid to me. What hallucinations did you see?
BoorishBears [3 hidden]5 mins ago
If there's a primary source like the one I just shared and you link me to an AI summary of 28 sources, I'm treating the entire package as garbage.
I_am_tiberius [3 hidden]5 mins ago
I wonder what type of people work for people that evil.
Dotnaught [3 hidden]5 mins ago
AI-generated summaries should not be upvoted.
frereubu [3 hidden]5 mins ago
I went to claude.ai and saw a pop-over window with this choice, clicked to opt out and saw a "server error" alert. Then went to Settings > Privacy and the option wasn't there. I had to click something else (can't remember now) to see the option, which wasn't opted out (which kinda made sense because of the error message) and the option is now in Settings > Privacy. I'm definitely someone who believes that cock-up is more likely than conspiracy for things like this, but this process should have been bulletproof - no initial error message, option always in Settings > Privacy - given the sensitivities involved.
padolsey [3 hidden]5 mins ago
Ugh Anthropic please don't become the villain.
oblio [3 hidden]5 mins ago
People who invested billions in Anthropic expect returns of tens of billions.
donperignon [3 hidden]5 mins ago
Too late.
34679 [3 hidden]5 mins ago
It has a toggle for opting out, but the buttons available are "Accept" and "Not Now". So, if I toggle off but click "Accept", does that accept what I toggled or accept data sharing? If I click "Not Now", do they leave it set at the default Opt-in?

I hate this shit and I'm cancelling now.

https://imgur.com/a/oCw5eEp

smukherjee19 [3 hidden]5 mins ago
I clicked "Accept" because I didn't want to deal with it later, especially given the new terms, and Claude training on new chats and resuming existing chats from September 28th,

but the "Accept" and "Not Now" are for the new terms of use.

So by toggling off and clicking "Accept", it will accept the toggled version. You can check the setting after accepting it at: https://claude.ai/settings/data-privacy-controls It's the "Help Improve Claude" button.

currymj [3 hidden]5 mins ago
i found the ToS change to be surprisingly not a dark pattern.

in fact, i haven’t agreed to it yet, and was able to close the popup and continue using Claude. they also made it extremely clear how to opt out, providing the switch right in the popup and reminding me it’s also in settings.

when i eventually do have to agree to the ToS changes, i’ll probably just stay opted out.

jimmydoe [3 hidden]5 mins ago
Is user chat even that useful?
throw310822 [3 hidden]5 mins ago
Frankly I don't mind much if they use my js debugging conversation to train the next models; but there should be a way to mark specific conversations as private (possibly even an "incognito mode") to exclude them from any training or external access.
_Algernon_ [3 hidden]5 mins ago
User data should only be usable under the ToS in effect when it was originally collected. No more unilateral altering of the agreement after the fact.

Collecting user data should be a liability, not an enormously profitable endeavor.

ezfe [3 hidden]5 mins ago
They aren't retroactively sharing chats unless you continue chatting in an old conversation. So your issue is addressed.
xmorse [3 hidden]5 mins ago
Good. LLMs progress is too stagnant, it was time they started to play seriously
bgwalter [3 hidden]5 mins ago
The Perplexity report is vibe-written and an excellent example that "AI" is entirely unreliable. At the time I am making this comment, it states:

"Anthropic also reported discovering North Korean operatives using Claude to fraudulently obtain remote employment positions at Fortune 500 technology companies, leveraging the AI to pass technical interviews and maintain positions despite lacking basic coding skills."

Note that in this version the North Koreans lack basic coding skills, which took me by surprise. Generally they are assumed to be highly competent.

The original (https://www.anthropic.com/news/detecting-countering-misuse-a...) is completely different:

"Our Threat Intelligence report discusses several recent examples of Claude being misused, including a large-scale extortion operation using Claude Code, a fraudulent employment scheme from North Korea, and the sale of AI-generated ransomware by a cybercriminal with only basic coding skills. We also cover the steps we’ve taken to detect and counter these abuses."

This is what people are using for web search. I'm not targeting Perplexity specifically, Google "AI" summaries are just as bad.

UPDATE: The original pdf says something different again (https://www-cdn.anthropic.com/b2a76c6f6992465c09a6f2fce282f6...):

"The most striking finding is the actors’ complete dependency on AI to function in technical roles. These operators do not appear to be able to write code, debug problems, or even communicate professionally without Claude’s assistance. Yet they’re successfully maintaining employment at Fortune 500 companies (according to public reporting) passing technical interviews, and delivering work that satisfies their employers. This represents a new paradigm where technical competence is simulated rather than possessed."

This should be distributed among managers so that they finally get the truth about "AI".

dfedbeef [3 hidden]5 mins ago
Managers: they're successfully maintaining employment, you say?
chisleu [3 hidden]5 mins ago
This is going to improve the quality of LLM responses for users. I'm for this.
Havoc [3 hidden]5 mins ago
The AI world sure seems to be going from "AI is dangerous, we need to be careful" to max enshittification at breakneck speed.
c080 [3 hidden]5 mins ago
honest question: what is the value of training on user chats? the answers are already provided by your LLM!
34679 [3 hidden]5 mins ago
To start with, I'm sure there's something to be learned from all the times I've responded to a LLM with "Bad bot".
ChrisArchitect [3 hidden]5 mins ago
nurettin [3 hidden]5 mins ago
Oh heck no I will opt out thanks. I guess they can still steal my code by having a "reviewer" access my chats for "suspicious behavior".
UltraSane [3 hidden]5 mins ago
I assumed they were already training on chats because they are an excellent source of data that is very likely to be from humans.
Trasmatta [3 hidden]5 mins ago
The opt out alert apparently was confusing enough that I swear I opted out when it popped up yesterday, but I just checked and I was opted in.
throwaway290 [3 hidden]5 mins ago
As soon as Apple said it will ship it in Xcode? What a coincidence!
tinyhouse [3 hidden]5 mins ago
Go to Settings and under Privacy you can change the default (they make it pretty hard to find). Also note: "To help us improve our AI models and safety protections, we're extending data retention to 5 years."
lkbm [3 hidden]5 mins ago
Putting a privacy setting under Settings->Privacy is making it "pretty hard to find"? Where else did you look first?
andrewstuart [3 hidden]5 mins ago
Whatever your secret projects and plans are, they're now available to train your competitors on what you're doing!
charlie0 [3 hidden]5 mins ago
I'm shocked, shocked! Well, not that shocked. Did anyone really think the subsidy on Claude Code was not going to come back to this?
dude250711 [3 hidden]5 mins ago
Maybe they are going to use an average vibe coder's chat to teach Claude to treat subpar developers differently? It could be a win-win outcome.
pu_pe [3 hidden]5 mins ago
Does this apply to existing chats as well?

I hear the sound of a million lawsuits in Europe concerning GDPR violations.

ezfe [3 hidden]5 mins ago
Not unless you go back and keep chatting in the existing chat.
Razengan [3 hidden]5 mins ago
I wanted to try it out so I bought a Pro subscription. No way to purchase through iOS so I had to enter my card number. I cancelled renewal and tried to remove my payment method as per usual practice, but I couldn't. Tried to find a way to get support but the send button on their chatbot box was disabled for me (on Safari if that matters). Regardless of the quality of the AI, that experience ticked me off enough to file a "F you" chargeback with my bank.
yesbut [3 hidden]5 mins ago
never provide these services with personal data / identifiers.
Lionga [3 hidden]5 mins ago
Feeling cute, might put all your private conversation into my public LLM later
quantumwoke [3 hidden]5 mins ago
Can we change the url to https://www.anthropic.com/news/updates-to-our-consumer-terms instead of the slop post
Silhouette [3 hidden]5 mins ago
Am I the only one who finds the branding and privacy policies around these AI services (possibly deliberately) confusing?

For example Anthropic have an Anthropic Console that they appear to consider quite distinct from Claude.ai. Do these share a privacy policy and related settings? How do either of these fit in with the named plans like Pro and Max? What are you actually paying for when you give them money for the various different things they charge for? Is all API use under their Commercial Terms even if it's a personal account that is otherwise under the Consumer Terms? Why isn't all of this obvious and transparent to users?

OpenAI don't seem to be any better. I only just learned from this HN discussion that they train on personal account conversations. As someone privacy-conscious who has used ChatGPT - even if only a few times for experiments - I find the fact that this wasn't very clearly stated up front to be extremely disturbing. If I'd known about it I would certainly have switched off the relevant setting immediately.

I get that these organisations have form for training on whatever they can get their hands on whether dubiously legal or not. But training on users' personal conversations or code feels like something that should require a very clear and explicit opt-in. In fact I don't see how they can legally not have that first in places like the EU and UK that have significant data protection legislation.

Workaccount2 [3 hidden]5 mins ago
Generally if you are not paying full price for something (in this case paying API rates) you are covering the additional cost with your data. This is true for pretty much all modern services.
ath3nd [3 hidden]5 mins ago
Haha, grifters gonna grift.

In future news:

- Anthropic reverses stance on token limits, all plans cost double and limits are halved

- Anthropic introduces ad mode

- Anthropic partners with Palantir to deliver democracy at scale

This is getting to be more and more hilarious each day.

xyst [3 hidden]5 mins ago
Nobody saw this coming, absolutely shocked!1!1!!!1

The bubble is deflating/popping. The MIT study has really dampened the excitement on AI.

illuminator83 [3 hidden]5 mins ago
Well, they've been using the whole internet and anything else they can get their hands on to train their models. If you are using LLMs and want to continue seeing them get better and more useful, you really have to accept that they train on data collected from their users. Unless there is another technology breakthrough somehow, this is going to be the best they can do to increase user satisfaction and make their product more useful.

TBH, I'd love to have a model which was specifically trained on conversations I had with an earlier iteration. That would make it adapt to me and be less frustrating. Right now I'm relying only on instruction files to somewhat tune a model to my needs.