For Linux kernel vulnerabilities, there is no heads-up to distributions
Recent: Copy Fail - https://news.ycombinator.com/item?id=47952181 - April 2026 (466 comments)
515 points by ori_b - 399 comments
Anyway, this is a disaster. It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix. Who knows how many shared hosting providers were hacked with this.
It's also worrying that it seems there's no communication between the kernel security team and distribution maintainers. One would hope that the former would notify the latter, but apparently it's the responsibility of whoever finds the vulnerability.
the real problem is:
>It's also worrying that it seems there's no communication between the kernel security team and distribution maintainers.
the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.
what should be happening, as you allude to, is a communication channel between the kernel security team and distribution maintainers. they are in a much better position to coordinate and communicate with the maintainers than random reporters are.
the minute the patch landed in the kernel, a notification should have gone out from the kernel team to a curated list of distro security folk that communicated the importance of the patch, and that the public disclosure would be in 30 days.
Not "separately to every single downstream", there is the "linux-distros" mailing list for disclosures: https://oss-security.openwall.org/wiki/mailing-lists/distros
This random blogpost from 2022 serves as a proof that disclosing kernel vulnerabilities to the distros list is a well-known practice: https://sam4k.com/a-dummys-guide-to-disclosing-linux-kernel-...
I agree it's a shame that the process isn't more streamlined and the kernel developers aren't forwarding the reports to the distros list.
If researchers want to showcase their ability (either individually or as an organization) to identify and address security vulnerabilities in complex multi-stakeholder environments, I very much expect them to figure this out. After all, it doesn't make much sense if a company, after commissioning a security review, needs to hire a different firm to handle the vendor interactions, so that identified issues are resolved with minimal impact to the business.
The kernel devs patched the kernel. The kernel devs have a pretty known, straightforward stance in how they ship fixes for anything, because anything in the kernel can be a security problem.
Distro maintainers can see kernel changes. Some distros aggressively track new changes. Others backport what they feel are relevant. Others don’t do either.
Users pick what distro they use, and how they set up their infra.
Maybe if I were paying for RHEL licenses I’d be eyeballing the money I pay and RHEL’s response time.
But the ownership here lies with system operators, who pick their infrastructure, who design their security model, and who build their operational workflows. This vuln is a great example: people who looked at shared untrusted workloads on a single kernel and said “Hell no” had a much calmer day than teams who thought that was a good idea.
In terms of something actionable (and maybe someone better versed in how the distros work can tell me why this is a bad idea): shouldn't there be a documented process and channel for critical CVEs to be bubbled out to distro maintainers, who then have some sort of SLA for patching them and sending them downstream to end users? Perhaps incentives are not aligned to produce this outcome.
Otherwise, it’s on the end user. Distro volunteers don’t owe you anything. Kernel devs don’t owe you anything.
I don’t care about what would be the most effective way of doing things. I care about what folks involved actually owe to each other, and distro volunteers don’t owe users any kind of active chasing of remediation due to the user’s threat model.
The problem with making some kind of streamlined process that solves what you didn't like about this vulnerability's remediation is that it ignores basically all the complexity. Like "what about distros that don't abide by embargoes" or "which distros count as ones that matter" or "what about all the vulns that aren't in Linux, they're in software that's packaged across many operating systems".
This vulnerability is, for some threat models, a really big deal. A security group found the vulnerability. They disclosed it. It was patched.
Folks here have gotten all kinds of bent out of shape that the groups involved didn't do things in the way each internet commenter would have liked. But this is the system working.
This vulnerability is, for other threat models, a death sentence.
> A security group found the vulnerability. They disclosed it. It was patched.
It was patched only after some people who should have been notified well in advance happened to notice something was up. That is NOT HOW IT'S SUPPOSED TO WORK.
For as long as the unpatched window remains open, skids will mess around and break things. Organized crime teams will use it for some really nasty hacking/ransomware/exfil/extortion/whatever. I guarantee you, this vuln is powerful and widespread enough that intel orgs will use it to kill targets, if they haven't already been using it for years. And if they have, we can just bank on them pulling out all the stops to take advantage of the remaining time for wreaking havoc. Make a project out of it and see if you can guess some of the future headlines.
Certain folks might not care much because they are citizens of one or more of those orgs' nations, so those targets are welcome to die in their opinion. That's fine. You do you, I'll do me, we'll all just go on doing our thing. But it's all fun and games until the wrong target gets hit and now there's a pact between the Germans and the Austrians being invoked and a few dozen million Europeans die. Or a geopolitical hotspot flares up and overnight 20% of the global petroleum supply chain grinds to a halt. Use your imagination. This vuln is a digital magic wand that is trivially usable to cast Avada Kedavra and somebody neglected to tell 99.99% of the Good Guys about it.
How is this different from any other day? Because now we've got a world-changing vuln out in the wild with no distro mitigation on day 1, and who the hell knows how many unscrupulous actors poised to take advantage of it before the fun and games stops. There will be no adults in the room when the miscreants decide to deploy while they still can.
Is this vuln going to start the next world war? Probably not. I don't expect it to and I hope and pray it doesn't. But leaving a vuln like this undisclosed to the very people whose job it is to protect us all is playing with fire. Not matches; more like a 10-grams-less-than-critical mass of plutonium.
sam is right to be pissed and he's doing a very good job of hiding it, because he knows that his users are at the mercy of TPTB in the Linux kernel world. Somebody's head needs to roll for this, and I don't mean some dude the CIA wants to hax0r because he's next on the list.
Not sure what the solution could/should be, but surely there could be a better, easier mechanism for kernel to advise all distro maintainers who care, and for those distro maintainers to subscribe in some way. Whether any distro maintainers do so (let alone do something about the vuln notifications) would be entirely up to them. There could also be some easier way for end users to see what the distros' policies on this are, such that they can take that into account when selecting a distro.
We don’t have to agree, but the site rules are pretty clear that swipes like that aren’t ok.
That kind of communication path between distro maintainers and kernel devs already exists: the linux-distros@ mailing list. But since anybody can read it, posting “hey everybody, this is a security patch” has basically the same effect as the security researcher posting, in terms of disclosing the vuln to bad actors.
Given that anybody can make a Linux distro, and Linux distros aren’t generally either capable of or interested in background checking their teams or policing their individual security practices, it doesn’t seem possible to have a communication channel that distros can sign up for that lacks this problem.
There's not really an enforcement mechanism in FOSS like there is in capitalism world, it just comes down to what we want our part of the world to look like. So I think we'd think more clearly if we leave aside the ideas like "who owes who what." I think it's fun to imagine what sort of motivations and incentives there are if we put away the money ones.
Linux, like every open source project, is just a bunch of people who are YOLOing it. Not something you use for your Fortune 500 mission-critical infrastructure.
But from what I understand they were not given enough information to know if it was relevant or not. The commit message just said it reverted a change from another commit because there was "no benefit". From the patch itself, it is not at all evident that this is a fix for a critical security bug.
If the commit message says it fixes a security bug, then bad actors immediately know there's a possible exploit there. So maybe it's intentional? (not familiar with the policy for this)
They dropped the ball when they shipped supposedly secure systems whose method for getting alerted to security updates was "hope people reporting to upstream will also notice a mailing list that will alert them".
(Caveat: Distros like Ubuntu advertise security updates, so this is on them. I'm not sure Gentoo does that; if they don't, well, then no one dropped the ball because no one represented that Gentoo got prompt security updates.)
Expecting a FOSS project to go track down all of its (millions of?) users seems like a very unreasonable expectation, and is well outside of their scope of responsibility.
People have gotten so used to the Github flavour of free-labour, social-network-style FOSS that they've forgotten what all those LICENSE files actually say, which is to make it explicitly clear that the devs are not responsible to you for your issues, up to and including the software setting your house on fire. If you don't like it, you don't have to use it.
They can't, because (responsible) security disclosures are private, _not public_. That's the whole point of the system: notify the developers in private ahead of time (usually 30, 60 or 90 days) so they can write, test and roll-out the fixes before you release the info to the whole world. This is to minimize the time between when bad actors gain access to the exploits vs. when users install the patch. So "keeping up on security disclosures" cannot ever be a 'pull' process.
Usually the maintainers of the big distros are part of (private) security mailing lists and receive such info. Just not in this case, it seems.
Sending emails to some big distros would still result in e.g. Gentoo not getting that info, because they are not a big distro.
Not ideal, but also: shit happens? It's always a balancing act choosing the lesser of multiple evils and most of the time it seems to work ok-ish, which is probably the best we can hope for ;-P
For this specific "bug" they took care to not mention any security angle in the commit message, making it extremely hard for an outsider to even realize this was a critical patch. I assume this was because they wanted to push the fix without breaking embargo.
The post you are responding to says that it would be nice if they copied literally one mailing list.
Who would curate that list though? You don't need permission from the kernel team to spin up a new distro. I can go and create a fork of Debian or Arch or whatever today and the kernel team would never know (and neither should they).
This is entirely the responsibility of the distros. If you don't like this model, use something like FreeBSD.
You don't need anyone's permission to make a distro, that's true, but if you notify Debian, Canonical, Fedora, Red Hat and Arch you're covering a very large fraction of users; way more than today's 0%. In cases like this, perfect is the enemy of the good.
Given this was announced when backports weren't ready (and given the POC was at least opaque if not obfuscated), I'm getting the vibe that fixing the vuln wasn't as high a priority as making a media splash.
They openly refuse to do this and have been given authority by MITRE to work against any such process.
There would be a lot of people gloating if this happened to MS.
Linux is a free kernel that literally revolutionized the computing landscape.
It is not the responsibility of the initial reporter to communicate with distributions, but the fact that those responsible failed to do that doesn't give everybody else a free pass.
The thing is, malicious actors are already monitoring most major projects and doing either source analysis or binary analysis to figure out if changes were made to patch a vulnerability. So, as soon as you actually patch, you really need to disclose, because all you're doing by not disclosing the vulnerability is handing the bad actors a free go. The black hats already know. You need to tell the white hats, too, so they can patch.
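For a sense of how cheap that monitoring is, here's a rough sketch of "patch gapping" a stable branch (the version tags are placeholders for illustration, not a claim about where this particular fix landed):

```
# enumerate everything that changed in a stable point release, then eyeball
# each diff for anything that smells like an unannounced security fix
git log --oneline v6.18.21..v6.18.22
git show <commit-id>   # repeat for each suspicious commit
```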
Requiring the security researcher to do it is insane. Should a security researcher that identifies a vulnerability in electron.js need to identify every possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.
FTFA:
> I see that on the 11th of April 6.19.12 & 6.18.22 were released with the fix backported.
> Longterm 6.12, 6.6, 6.1, 5.15, 5.10 have not received the fix and I don't see anything in the upstream stable queues yet as I write.
I wouldn't go so far as to call this "the kernel devs patched it". Virtually none of the kernels that distros are actually using today have received a fix. This looks like an extremely lackluster response from the kernel security team.
Pretty much the only non-rolling distros that are shipping a fixed kernel are Fedora 44 and Ubuntu 26.04, both released in the last few weeks. Their previous releases both shipped with Linux 6.17, which is still vulnerable today!
But it's been at least 15 years since "reversing means patches are effectively disclosures legible mostly to attackers" became a norm in software security. And that was for closed-source software (most notably Windows). The norms are even laxer for open source.
But this is a false comparison, right? The scope of "Linux distributions" and "electron apps" are orders of magnitude different. If the reporter spot checked one or two of the most popular distributions to see if fixes had been adopted, that seems like an extra level of nice diligence before publicizing the details.
It doesn't seem "insane" as much as "not the most efficient path" as has already been well argued. But it also doesn't seem unreasonable to think in a project of the scope of the Linux kernel, with the potential impact of fairly effective(?) privilege escalation, some extra consideration is reasonable--certainly not "insane" at the very least?
About half the thread we're on reads as if the commenters believe Xint made this vulnerability. They did not: they alerted you to it. It was already there.
> Their job is to bring information to light, not to manage downstreams.
The researchers are also members of a community in which more harm than is necessary may be dealt by their actions. Nuance must exist in evaluating "reasonable" and "responsible" in the context of actions.
If it helps you out any, even though my logic was absolutely the same and just as categorical in 2012 as it is today: there are now multiple automated projects that run every merged Linux commit through frontier models to scope them (the status quo ante of the patch) out for exploitability, and then add them to libraries of automatically-exploitable bugs.
People here are just mad that they heard about the bug. Serious attackers had this the moment it hit the kernel. This whole debate is kind of farcical. It's about a "real time" response this week to a disaster that struck a month ago.
they did it in the established industry standard way that probably every single security researcher you can think of follows (for good reason, i would add).
whoever did the marketing on "responsible disclosure" was a genius.
tptacek says it much better than me: ""Responsible disclosure" is an Orwellian term cooked up between @Stake and Microsoft and other large vendors to coerce researchers into synchronizing with vendor release schedules."
And it's not as if I'm asking for a lot of effort. One mail to the security team of a popular distro "hey, we have found this LPE that we'll release with exploit next week, it's patched upstream already in this commit, but you don't seem to have picked it up" would likely have been enough.
The problem is that vendors and developers have repeatedly shown that if you give them an inch, they take a mile. Look at exactly what happened with BlueHammer this month. The security researcher went full disclosure because Microsoft didn't listen to their reports.
Disclosure is vital. It's essential. Because the truth is, if a security researcher has found it, it's extremely likely that it's already been found by either black hats or by state actors. Ignorance is not actually protection from exploitation.
The security researcher also has a responsibility to the general public that is still actively using vulnerable software in ignorance. They need to be protected from vendor and developer negligence as well as from exploits. And the only way to protect yourself from an exploit that hasn't yet been patched is to know that it is there.
I'm also not proposing delaying the disclosure to the general public at all. They already waited 30 days with that, that's fine. Just look a bit further than your checklist of only contacting upstream, and send a mail to the distributions if they haven't picked it up a week or two before.
[citation needed]
Is there any evidence that Linux distros (specifically) act in this way? Or a particular distro?
there is ~3 decades of citations you can look at, spread out over every security mailing list, security conference, etc. that you can think of.
one decent start is https://projectzero.google/vulnerability-disclosure-faq.html...
"Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. [...] "We used this model of disclosure for over a decade, and the results weren’t particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely! We were optimistic that vendors could do better, but we weren’t seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users.
[...]
While every vulnerability disclosure policy has certain pros and cons, Project Zero has concluded that a 90-day disclosure deadline policy is currently the best option available for user security. Based on our experiences with using this policy for multiple years across thousands of vulnerability reports, we can say that we’re very satisfied with the results.
[...]
For example, we observed a 40% faster response time from one software vendor when comparing bugs reported against the same target over a 7-year period, while another software vendor doubled the regularity of their security updates in response to our policy."
>Linux distros (specifically) act in this way
carving out special exceptions based on nebulous criteria is a bad idea. 90+30 is what has been settled on, and mostly works.
Because a situation where the development team fails to appreciate the severity of a security vulnerability, and has an established procedure that requires the researcher rather than the kernel team to communicate with downstream users, is already a major failure of process. Security is not just patching the vulnerability, and it seems that the Linux kernel developers or the Linux kernel security team do not understand that.
This is the result of that failure.
If this were any other software, we'd be here with pitchforks and torches. The researcher gave the developers timed disclosure, and even waited until after the developers had patched the issue. And... it's still a problem.
the linux kernel team is in a 10000% better position to communicate to and coordinate their downstreams. it seems completely backwards to me to suggest that the reporter should be responsible for figuring out every possible downstream and opening up separate reports to each of them.
the kernel team should have a process/channel to say "this is important! disclosure is in 30 days" that is received by distro security teams. because this is not the first or last time the kernel will have a local privilege escalation. hoping that every reporter, forever in the future, will take the onus on themselves is a recipe for disappointment.
Gentoo has to take some blame too for not keeping all the kernels they maintain patched in a timely way.
How do you figure that? From what I could tell from the earlier post, the fix has only been backported to 6.18 and later, and as TFA indicates the distros were not informed of the security implications of this fix. All distros shipping a major kernel version from more than a year ago -- and that includes all LTS kernels -- are vulnerable, regardless of how "timely" their patch schedules follow upstream.
the baddies are looking at every patch anyways.
Even if the only purpose of looking at the status is to make yourself look good in marketing materials, it's surprising that it didn't happen.
the vulnerability report was submitted to the kernel security team and appropriate kernel maintainers. those are the people responsible for patching the kernel, which they did 30 days ago.
They patched 2 of 7 supported kernels.
is the reporter of that vulnerability responsible for finding and submitting a vulnerability report to every single piece of software that uses left-pad? all ~millions of them?
or do they submit the report to left-pad, get them to fix it at the source, and trust that the people relying on left-pad will update their software like they should when they see a security-relevant update is available?
Those groups don't exist, to my knowledge. And probably can't, realistically speaking.
> Is your software AI-era safe?
> Copy Fail was surfaced by Xint Code in about an hour of scan time against the Linux crypto/ subsystem. [...]
> [Try Xint Code]
More chaos makes their product seem even more attractive.
this was a failure of the kernel security team, and their stance on communicating security issues with their downstreams.
Hell, Crowdstrike is still purchased.
Sure, they have no legal obligation to disclose, but we all also have no legal obligation to buy their services. Blacklisting bad actors like this is the right move to discourage this kind of behavior.
they did a proper coordinated disclosure, following the industry standard 90+30 process. that is why the exploit dropped 30 days after the patch landed.
the kernel team should have communicated with their downstream about the importance of the patch. that is the kernel security team's responsibility -- and they are much better positioned to do that than crossing your fingers and hoping every reporter will contact every distro every single time there is a vulnerability.
there are very good reasons disclosure works this way, backed by a couple of decades of debate about it.
I just don't see the point in complaining about how shirking the norms of your industry will make you look irresponsible. I don't really care that they could have decided to sell the vulnerability instead. It isn't material.
Tavis Ormandy dropped Zenbleed right onto Twitter. He's doing fine. You can blacklist him if you want; I imagine he's not going to notice.
There is actually no way to give them a friendly heads up, and then do your own thing. The only way not to be bound is by not sending them any notification at all...
I couldn't find a public copy of that.
The best starting point I found for reporting vulnerabilities was: https://github.com/microsoft/MSRC-Security-Research/security...
You can email without agreeing to anything. But for a serious issue Microsoft would obviously try and track down who you are and what jurisdiction you are in.
> MICROSOFT BOUNTY TERMS & CONDITIONS
> Last updated: July 23, 2025
> The Microsoft Bug Bounty Programs Terms and Conditions ("Terms") cover your participation in the Microsoft Bug Bounty Program (the "Program"). These Terms are between you and Microsoft Corporation ("Microsoft," "us" or "we"). By submitting any vulnerabilities to Microsoft or otherwise participating in the Program in any manner, you accept these Terms.
Who knows if its enforceable.
Those private actors aren't planning to sit around and hold onto these exploits they've hoarded forevermore; they're obviously paying for them so they can one day use them.
We must get public funds to reward ethical disclosure of big impact vulns like this.
Mostly cover citizens within a very limited set of jurisdictions.
Otherwise there's a chance at extradition.
This is not true in many jurisdictions.
We need an anonymous bounty system.
If so, that's a bit naive. In the actual world, that buyer wants to buy more stuff from me, not penalize me.
And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems. (I'm not saying "responsible disclosure" is the correct way to do that, but hoarding vulnerabilities and exploits and selling them to the highest bidder certainly isn't.)
This is how society needs to work.
Or at least went dark ..
You'll learn who the buyers are if you routinely have the really good stuff to sell! If you are offering iOS zero click on a semi-regular basis, the buyer is going to want to try to deal with you directly and preferably offer you a more regular form of employment, if you are interested. Some national governments may offer certain benefits to you, depending on your situation.
All depends on what you have to offer. If you were able to offer this https://arstechnica.com/security/2025/09/microsofts-entra-id... or something of that magnitude, a lot of problems in your life would just go away. The buyers would all be Five Eyes and the intelligence gain of having that kind of access even briefly is priceless.
In a more Western-centric context, imagine if you had a flaw like that, same 'no logs are generated' and 'every single customer account is accessible' but the impacted vendor was Alibaba Cloud. The researcher would get to name their price. That's the real world, that's the world we share. We shouldn't be blind to that.
it wasn't sold for profit, it was openly disclosed.
> And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems.
All that "responsible disclosure" does is keep people from demanding better.
Uh... no? If you mean legally, some people might, depending on jurisdiction. But also, ethically? yes, researchers are ethically obligated to disclose responsibly.
> Just fyi.
...
> Be glad it was disclosed at all. Be glad a patch was available prior to release.
I am glad that a patch was available. Equally I can be glad that the linux community is strong enough to respond quickly, while also being angry that this person behaves unethically.
Likewise, when people in my industry behave poorly, or unethically; I'm now the person ethically obligated to both point it out, and condemn it. Not to become an apologist demanding I should be happy watching bad things happen, when much of the fallout could have been prevented with a bit less incompetence and ignorance.
I'm so glad these so called "researchers" aren't totally evil, I'm so grateful they're only half evil, give them a lollipop.
Whatever, the way they disclosed it isn't much different from no disclosure at all - the exploit would have been identified in the wild and fixed soon thereafter.
"Researchers"...
non-security people always seem to get up in arms about it, but there are very good reasons why the industry has landed on the process it has, which has been hashed out over a few decades.
1. Status quo. Researchers are free to disclose to a vendor, free to sell vulns to legitimate companies, free to do full disclosure if they want. This situation benefits security. Researchers are able to pay their bills while also doing meaningful research into OSS projects that are unable to fund the kind of security audit they need. Harm reduction, of sorts.
2. Everyone is a bad actor. No one is going to do this work for free/for a bounty. Horrible flaws will be found and shared with ransomware gangs and the like. 0day will sell for a percentage of the ransom winnings. Researchers will live like kings, everyone else will suffer.
Which do you prefer?
If it won’t be handled through criminal law then it’ll be handled through civil litigation: Anyone who was exploited as a result of this disclosure should sue the discloser for contributing to the damage they’ve suffered.
In that world, the vulnerability has more value to those who seek to exploit it for their own motives, regardless of the consequences. They hope that no one else stumbles on it and fixes it, preventing them from continuing to use it to do bad things.
In the world where it is disclosed, there is more value in fixing the vulnerability as the maintainer’s reputation is at risk (and potentially monetary loss or legal liability if they are shown to be negligent).
> In computer security, coordinated vulnerability disclosure (CVD, sometimes known as responsible disclosure)
I guess you can learn something new after 36 years.
If you are referring to what you quoted, your pedantry and sharpshooting would result in an incomplete English sentence: "that's why we have the responsible disclosure" is missing a noun. Now that we are firmly in worthless pedantry:
Protocol (n):
1.a. a system of rules that explain the correct conduct and procedures to be followed in formal situations
1.b. a set of conventions governing the treatment and especially the formatting of data in an electronic communications system
If you don't like what I said or disagree, poke holes in factual inaccuracies. However, in the reality that I am pretty sure we all share, responsible disclosure is a well established protocol that is followed by many security researchers, and was imperfectly followed here.
As a user and admin I disagree. Makes one appreciate what a masterful bit of lexical-engineering “Responsible” Disclosure is, kinda like “Secure” (from me, not forme) Boot — “Responsible” Disclosure is 100% about reputation-management for the various corporation/foundation middleman entities sitting between me and my computer.
Those groups don't care that my individual computer is vulnerable but about nobody being able to say “RHEL is vulnerable” or “Ubuntu is vulnerable”. The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk than to be surprised by the fix and hope nothing bad happened in that meantime.
Immediate public disclosure is the only choice that isn't irresponsible as far as I'm concerned.
Even when there is no known use case of the attack (other than the security researcher's)?
> The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk
By the time you hear about it, the money could be gone because 1000 hackers heard about it from the researcher before you did.
> than to be surprised by the fix and hope nothing bad happened in that meantime.
Hope is not a good strategy here.
What could possibly go wrong?
And do you agree with that behaviour?
The real debate here is what went wrong with getting that info downstream, and whose responsibility was that?
No, it's really not.
High severity vulnerabilities are responsibly handled by quietly neutralising them with subtle patches that do not reveal the vulnerability, waiting for those patches to distribute. Then patching or removing the root cause of the vulnerability (at which point opportunists will start to notice), and finally publicly disclosing it when there are already good mitigations in place.
Example: spectre/meltdowm mitigations.
I've been asked to use this approach myself when reaching out to maintainers. Sometimes it's possible to directly fix the vulnerability as a "side effect" by making a legitimate adjacent change.
That’s what you’re saying here.
https://x.com/spendergrsec/status/2049566830771970483
https://lore.kernel.org/linux-cve-announce/2026042214-CVE-20...
Or is everyone expected to upgrade and reboot every 48 hours for all eternity and just deal with potential regressions all the time?
I think this reflects poorly on the original reporters. If you have a weaponized 700-byte universal local root exploit script ready to go, perhaps you should coordinate with major distros for patches to be available before unleashing it on the world. No matter how "veteran" you are.
(This bug does not technically require a reboot to mitigate).
It's a category error to talk about a disclosure event like this as something that would destabilize someone's fleet operations. The Linux kernel is fallible. So is the x64 architecture. You already have to be ready to lock things down and reboot (or mitigate) at a moment's notice.
Remember: whatever else grumpy sysadmins have to say about this, Xint are the good guys. Contrast them with the bad guys, who have vulnerabilities just as bad as CopyFail, but aren't disclosing them at all --- you only find out about them when it's discovered they're actively being exploited. There's no patch at all. There isn't even a characterization of how they work, so that you could quickly see what to seccomp. That's the actual threat environment serious Linux shops operate in.
LPEs are not rare.
As soon as a patch is committed, the clock starts ticking, the exploit will be discovered by reverse engineering recent commits. The commit was made on April 1st, Xint disclosed it on the 29th. If the Kernel Security team had wanted to, they had 28 days to backport patches in the LTS branches...
So, I wouldn't put any blame on Xint there.
Opportunists are the ones who will sell a 0day to bad guys. Or who will drop a 0day publicly to promote their services. And they’ll fight tooth and nail against any actual legal obligation to engage in responsible and coordinated disclosure, because they make more money without that.
Using quotes around something where you’re actually doing a strawman paraphrase of another commenter you disagree with is bad form.
Ubuntu/RHEL is vulnerable and so are most Linux users by extension.
I absolutely 100% agree with this and I'm glad to see somebody saying it. Any system that is one LPE away from being compromised is already insecure.
The only important system that uses it as a security boundary is Android, and there it is mitigated by the fact that APKs need user approval, plus strict SELinux and seccomp policy, plus the GrapheneOS hardening; in this case the mitigations succeeded (https://discuss.grapheneos.org/d/35110-grapheneos-is-protect...)
LPEs on Linux are obscenely commonplace.
I'm honestly unaware of what systems could be put in place to prevent this but expecting people to always do the right thing is fantasy level thinking. I mean I bet the disclosers thought they were doing the right thing, hence why it's a bad thing to rely on.
edit: spelling/grammar.
The kernel security team was given the heads up a month ago. At that point it is their decision.
It's fundamentally their position to not work the way that you describe.
Partly they have a strong belief that all kernel bugs are vulnerabilities and all vulnerabilities are just bugs; sometimes taken to the extreme in both ways (on one hand this case where the vulnerability is almost ignored; on the other hand, I saw cases where a VM panic that could be triggered only by a misbehaving host—which could just choose to stop executing the VM—was given a CVE).
The reason they don't is because Linus and Greg have repeatedly, publicly stated that they don't want to because they don't believe that vulnerabilities conceptually make sense for the linux kernel and they refuse to engage in the process.
That's exactly what I wrote: "they have a strong belief that all kernel bugs are vulnerabilities and all vulnerabilities are just bugs; sometimes taken to the extreme in both ways".
But there is also a question of bandwidth. If a maintainer asks to bring a specific vulnerability to distros-list, the kernel security people will be reasonable. I did it last March.
For a first approximation: Ubuntu, Debian, RHEL(-derived) to begin with, and SuSE which is in EU/server space (AIUI):
* https://commandlinux.com/statistics/most-popular-linux-distr...
* https://commandlinux.com/statistics/linux-server-market-shar...
Seems like Gentoo, Arch, Mint, and Slackware could also be as well:
* https://distrowatch.com/dwres.php?resource=major
U/Deb/RHEL are 'upstream' of a lot of other projects, and fixes would trickle down to Rocky, Alma, etc. Perhaps VM OS in cloud (AWS, Azure) could be a usage gauge as well.
But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal.
And no, the proposed mitigations don't help with half of the distributions out there...
What’s your theory here? What crime?
Also, all kinds of aiding and abetting.
Copying from the comment I was replying to:
> But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal
But it’s not the law anywhere I’m aware of today, and I’d not support it becoming a law.
Instead of that, you’d rather make the law compel free individuals to limit their speech, or to hand over their work to big companies privately, so big companies can save money?
That doesn’t sound like a nice future, if it’s even enforceable at all.
That's beside the point. If people use the official mitigation on https://copy.fail/#mitigation they will not sufficiently protect themselves on mainstream distros like Ubuntu and Debian.
The page also states
> Most major distributions are shipping the fix now.
This text was probably prepared in advance, but this was simply not true at the time of publication.
Edit: As of this writing, most distros including Redhat, Fedora, Debian Stable, do not have patches available in the package repos, though they're being actively worked on.
Considering that the patches have been available for a while, someone surely reversed what they were for and was actually exploiting this in the wild.
In the age of AI, I’d argue that “responsible disclosure” is dead. Arguably even in closed source projects. Just ask Claude to do a diff between the previous version and the current one, and see whether anything fixed in there could have had security implications.
We’re not there yet, but very soon the only way to responsibly disclose a vulnerability will be immediately.
Linux kernel is one of the most audited open-source projects ever. I guarantee you that someone did reverse the patch.
> but forgot to tell the distros
Probably an oversight, but irrelevant. The bug was in the linux kernel. It's insane to suggest that they should have notified everyone shipping the linux kernel.
With the way linux is used these days, I'd guess the number of systems with untrusted local users is pretty limited. Even with shared hosting, you generally have root in your VM or container anyway. Unless this enables an escape from that?
Still the risk that people who run "curl | bash" without care could get bitten, but usually its "curl | sudo bash" anyway...
Lots of shared hosters don't use VMs or containers. It's some arbitrary number of people logging in to a shared system, each one with a home directory under /home/THE_USER_NAME. i've had several such hosters over the years (thankfully not right now, though).
Things like HPC clusters are multiuser & don't entirely trust their users. If they did we wouldn't need users/groups/permissions etc in the first place.
And then there are users running claude-cli and friends who may just find it convenient to use a local root exploit to remove obstacles.
So containers don't protect you, only a VM.
How so?
But even if you think making unethical decisions in personal self interest is something no one should be criticized for, surely the Linux kernel team ought to have some process for notifying the top distributions of an upcoming LPE, just out of practicality.
Distros are downstream of kernel, that doesn’t entitle them to expect to be contacted directly by every security reporter. That’s not on them. Distros that are big enough should be plugged into the linux security team for notifications.
Security researchers cannot be held responsible for broken lines of communication within the org charts of projects that they study. They’re providing a valuable public service already, how much more do you want?
Yes it does. That's how it's always been done and distros can ship a fix well before it ends up in a kernel release.
Any strategy that assumes the rest of the world is functional, or that makes you personally responsible for fixing all of it, is equally broken. But there is a reasonable middle ground, and sending a few more emails lies within it.
AWS and GCP are downstream another level. Should the reporter also have worked with them? And their customers? And the customers of their customers?
IMO this whole discussion seems like people are annoyed by the security researchers doing god’s work and wish they didn’t exist or think that they should be fully subservient to the projects and companies they are helping for free. The bugs were there before the researchers revealed them!!
Most people in tech think like the techie in this comic strip.
https://xkcd.com/538/
A large percentage of kernel fixes have the potential to be similarly bad. For some the potential isn't even realized until after the fix has shipped.
Every stable release, GregKH says you must upgrade now, because there is something security relevant in there. This happens at least once a week.
As for shared hosting providers it is my sense that there is always at least one local privilege escalation available to miscreants. Making shared hosting only safe if there is a certain amount of trust.
I remember bugs that were similarly bad from my university days 30+ years ago. Has anything substantially changed?
I'd consider a shared hoster which allows users to run their own (native) code and doesn't use VMs for tenant isolation extremely irresponsible in 2026.
And if you disclose to just a handful, why ignore the rest?
I could equally ask: "Who knows how many attackers learned about this vulnerability from this disclosure, and used it before the distributions fixed it?"
So maybe folks should take a break from the kind of armchair quarterbacking that this was “incredibly irresponsible”, as was done upthread, or that the researchers should be blacklisted for life, as a parallel commenter stated.
The hilarious bit is that the idea that they needed to coordinate is clearly broken even in just this example. They did give prior notice to the Linux developers, who issued a patch. And they’re still getting raked over the coals in this comment page by armchair quarterbacks who have decided they needed to coordinate with specific distros. If they’d coordinated with those distros, somebody would have a pet distro that didn’t make the cut and they’d be pissed about that.
There are risks no matter how they do it, and there will be people who are pissed no matter how they do it. Security researchers don’t owe anybody a specific methodology.
So I feel like the argument reduces into "why is it a problem that now anyone could exploit it, if some people were exploiting it already". Which imho isn't a sensible argument because the issue is clearly the amount of people capable of using the exploit for nefarious purposes, which has increased.
“Because we can’t know if there was exploitation by existing parties who had discovered the vulnerability on their own, there are upsides to disclosing earlier so that affected users can take mitigating steps and review their systems for indicators of compromise. Additionally, the more projects the researchers pull into the loop for coordinated disclosure, the higher the likelihood that they further leak the vulnerability to more attackers.”
However the issue is that we cannot know if the attack space has been broadened or lessened as a consequence of this disclosure, because of how eager it was. If it wasn't eager, then we could be much more comfortable in suggesting that the attack space has probably been reduced.
Given the exploit had been living in the Linux code base undetected for so long in the first place, and given the distributions are the principal attack vector of the exploit, I think it's fair to state that by disclosing the exploit before the distributions were ready, the researcher made the situation worse and should reflect on their actions.
The idea about the available exploit space and how the actors within it might, or might not move is a much more interesting avenue of conversation and I thank you for elaborating on your initial comment. <3
I do however feel that it's hard to be confident about whether the attack space has been increased or reduced as a consequence of the eager disclosure. I feel we could make the case either way.
Anything else is inevitably worse for the public good.
Having spent that entire time and then some on both offensive and defensive teams, I assure you longer delays after notification do NOT decrease the overall risk to the public.
There's a reason we've landed where we have as a security community.
It's an advertisement for an unpatched critical exploit and apparently some kind of infosec company.
The disclosure is private. Meaning neither the commit messages nor any public info can leak too much information about the bug. It's usually kept rather discreet.
It is impractical for the kernel to broadcast to all its users privately.
Meaning that either a) distro maintainers should be privy to it, but where does this end?[1] or b) we have the current situation
[1] probably the top 5 distros security teams can just be copied into the private mail. Maybe the kernel security private list can forward the emails to them as well.
Problem is, every other type of communication between distros and kernel is implicit. In commit messages, patches and release notes. So it's an exceptional case.
BTW, with LLMs there's a new issue. It is now cheap to scan the kernel commit log (maybe in -next) and ask a model to identify what could be a patch for a private disclosure, and then immediately RE the patch and exploit it on deployed kernels.
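To make that concrete, a minimal sketch (the `llm` CLI wrapper and the version tags here are assumptions for illustration, not a specific product or the actual fix window):

```
# pipe a release's diffs into a model and ask it to triage them for
# quiet security fixes; any hits get reverse-engineered by hand
git -C linux log -p v6.18.21..v6.18.22 -- crypto/ \
  | llm "Flag any commit that looks like a silent fix for a memory-safety or privilege-escalation bug, and say why."
```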
None? Because nobody* does hosting using Linux users as a security boundary. It's not the 90s.
* Standard HN disclaimer for people that think that some retro shell box with 10 users disproves "nobody": nobody does not literally mean exactly 0 people in this context.
That is unfortunately not true. i left my last one only a few years ago and they're still going strong without me.
Maybe it is irresponsible how little attention we pay to software security. Maybe, software developers of all kind should spend an entire year not developing any features at all, but fix all the tech debt of 30 years instead.
Yes, that sounds revolutionary, but I do not see an alternative in an age where AI agents are all you need to find kernel bugs of this scale.
It's a total arsehole'y move to not share with open-source projects (like Debian) but for commercial vendors like Microsoft I don't give a crap.
Now let's not get carried away either: that's a privilege escalation, so it already requires access to a local account. We're not exactly in Jia Tan "I backdoor every SSH out there if your Linux distro is using systemd" territory either.
Maybe a decade of corporations with revenue in the billions, paying peanuts and coffee money, for critical vulnerability disclosures made it....
Yes, this was clearly a marketing stunt to promote Xint code.
I, for one, will never use Xint code and will advise everyone to never use it. To anyone working there: enjoy your 15 minutes, I hope this backfires right in your face.
External security research happens for one of only a few reasons typically:
1) hobbyists who are learning or just like to do it for fun
2) bug bounties (good luck with those in most open source)
3) marketing for security companies
4) non-public research going to CNO/CNE
If you want to kill 3, the output of 1 will not come close to 4 and the public is NOT better off with fewer public bugs.
It is a really, really bad look for Linux, and pours a bit of cold water on all the hype around switching from Windows.
For single user systems (not rigorously defined, I presume it's the intersection of our two definitions which we might be talking about) the nature of the exploit is local privilege escalation, of which there could be many possible, and many mitigations / countermeasures against. This could have suddenly appeared from the ether of "unknown unknowns" for some people.
Those people farther up the food chain still potentially have service accounts, maybe even user accounts for some purposes, perhaps "trusted" services which deliver them code which they deserialize and run once. (Have a pickle.)
severity * impact * likelihood
Not everyone looking to migrate from Windows 95 plans to run everything as root afterward.
On the copy.fail site:
Not everybody needs or wants to wait for their distro, or plans to patch their IC firmware when a config change will do. No OS is perfect. The awkward rollout for this bug fix is proof of that.
Said no one ever...present post excluded :-))
Why would they imply it is incumbent on the reporter to liaise with distributions? That seems to assume a high level of familiarity with the Linux project. Vulnerability reporters shouldn’t be responsible for directly working with every downstream consumer of the Linux kernel; what’s the limiting principle there? Should the reporter also be directly talking to all device manufacturers that use Linux on their machines?
IMO reporter did more than enough by responsibly disclosing it to linux and waiting for a patch to land.
Aren’t there people in the linux project itself with authority over and responsibility for security vulnerabilities? One would think they would be the ones notifying downstream distros…
https://docs.kernel.org/process/security-bugs.html
> As such, the kernel security team strongly recommends that as a reporter of a potential security issue you DO NOT contact the “linux-distros” mailing list UNTIL a fix is accepted by the affected code’s maintainers and you have read the distros wiki page above and you fully understand the requirements that contacting “linux-distros” will impose on you and the kernel community.
The bug is in the kernel, so it's OK to notify only the kernel team. Then they should notify the distributions they are in contact with.
The first message about Copy Fail that I see in the archive https://www.openwall.com/lists/oss-security/2026/04/ is from April 29. I ran apt on my Debian 13 yesterday and got the fixed kernel.
Do I expect that every distribution is already patched? I don't. However, each of us chooses the distribution to run. Security can be one of the criteria for the choice. I played it safe and I'm using Debian. Other people can make a different tradeoff, maybe based on their personal threat analysis.
There are people running end of life kernels and distributions in production, or with pinned old kernels especially on ARM SBCs. I know both. Those are other choices made at the user end of the process.
IMHO the disclosure and fix process was run in the proper way from the researcher to the end user.
Make them private? Now you have a nice stream of zero days, long before fixes are available, making bad actors who made it in filthy rich.
They believe there is no difference being able to get root and not being able to get root? It seems to me that to-be(-root) and not-to-be(-root) are quite different.
IMO it's pretty obviously not a view that they seriously hold, it's just one of those technical justifications people come up with to avoid admitting something they don't want to admit - in this case that Linux has a poor security track record.
Linus is the reason why kernel team doesn't talk to distros. For them bugs are bugs, security related or not.
https://lkml.iu.edu/hypermail/linux/kernel/1711.2/01701.html...
I don't agree with the premise, but I do think it's a sincerely held one.
These are smart people. If it wasn't about their own project I really think they'd have a different point of view. I wonder what they say about Microsoft's security bugs for example!
Literally never. Why would he? He's surrounded by sycophants. And we have Greg for whenever Linus isn't involved anymore, and Greg is just as boneheaded.
Everyone involved here failed to do the right thing, and hiding behind the lack of written words is weak sauce.
A security researcher's ethical obligations are to protect users over vendors (barring any contractual agreement in place). From what has been discussed in this thread, they meet that bar.
Sure, they could have gone the extra mile to ensure the distros were in a good place to patch before they published the exploit. That's a kindness you can wish for, but don't disparage them for not going that extra mile. It's a bonus.
It's also possible that it simply didn't occur to them to do so this time. There's certainly lessons to be learned either way. I don't know that the right lessons will emerge from hostility.
and this is the problem. It used to be the case that if you were smart enough to find an exploit you were also smart enough to realise what would happen if you irresponsibly disclosed it. I guess these tools have made that pattern no longer apply.
The skills needed to detect code exploits are not the same as the skills needed to navigate an informal org chart to the satisfaction of an amorphous audience of end users (i.e. us on HN).
That said… as they are a company that supposedly specializes in this field, and is trying to sell a product, I do believe they should do better. Right now, I don’t have much confidence in their product.
I see this as an organizational failure of the Linux ecosystem. There should be better communication between distro and kernel development.
yes, because 30 days had passed from the time the patch landed in the kernel, as per industry standard.
approximately every security researcher, including the likes of google and other big names you may know, does a 90+30 disclosure, which is what happened here. they do this for good reason, which has been figured out over decades of experience in reporting thousands and thousands of vulnerabilities.
the only security researchers i know of that dont like 90+30 actually argue for shorter timelines (or immediate disclosures).
wait, what?
you are in another comment thread, of this very post, calling these reporters bumbling and incompetent for their disclosure. "merely bumblingly incompetent and overly eager to get their marketing pitch out the door" - that is your quote.
you also said "Basic care would involve making sure the patches had made it into the wild before ending the embargo", which is the literal opposite of immediate disclosure.
but now you are saying they should have just dropped it with no reporting at all? because that is what "immediate disclosure" means. pop up the exploit script on twitter and call it done.
If you're going wait a month between landing the patch (possibly notifying attackers), but not notify the people who may get the patch to users, it seems like something was mishandled.
Good for them. But just because some folks cannot afford 24/7 response teams and on-call personnel that doesn't make them or their systems any less important.
Lots of non-profits and academic institutions had to scramble because of the Linux kernel team's position of non-communication to distros.
Google search: https://share.google/aimode/eihDKXZJy94Z5lC1p
and it's beyond me how you could not think about doing this, and instead expose everyone and their neighbor to this exploit up front.
I'm certain this is even a felony in some jurisdictions, rightfully so.
https://www.openwall.com/lists/oss-security/2026/05/01/3
> Nope, sorry, we are NOT allowed to notify anyone about anything "ahead of time" otherwise we will have to tell everyone about everything. That's the only policy by which all the legal/governmental agencies have agreed to allow us to operate in, so we are stuck with it.
greg k-h
I don't think they would have gotten as much flame if it weren't for how the RHEL 14 mention and such were put.
This is a security company with a professional(?) communications department banking on pointing fingers at distro maintainers. We are not talking about solo security researchers or academics here.
At this point this is not really white-hat/ethical hacking anymore.
Ofc the kernel-distro security loophole is stupid and should be patched ASAP, but that doesn't absolve this company of wrongdoing.
here is a good start: https://projectzero.google/vulnerability-disclosure-faq.html...
there is ~3 decades of more context if you search for it.
If I call 911 to report a fire at an oil storage facility - and they ask me to alert the hospital, then phone the neighboring county's Sheriff Dept., and then...yeah. Either I'm way out in the sticks (and known to/trusted by the 911 operator), or else the 911 service is run by children.
I'd hate to be involved in any emergency services. Too many people have opinions on how things should have been done.
I am running this in production right now and it mitigates the attack, with no unexpected side-effects as far as I can see.
> Nope, sorry, we are NOT allowed to notify anyone about anything "ahead of time" otherwise we will have to tell everyone about everything. That's the only policy by which all the legal/governmental agencies have agreed to allow us to operate in, so we are stuck with it.
I'd be interested in knowing more about that policy... Seems that there should be exceptions for the major distros.
Of course, major distros that have contracts with SLAs could also pay for someone to be on the kernel security team and get a heads up like that.
And try to define "major distros" in a way that actually means anything viable.
If you just want to count users, then that would only be Android (everything else is a rounding error.) After Android, that would be Yocto, and then Debian. All distros after that are mere fractions of overall users compared to those 3 by number of running systems alone.
If you want to count it as "$ spent on Linux" then that cuts out Android and Yocto and Debian as those distros are free, and would focus purely on the tiny installed base of paid Linux systems, and cut everyone else out.
So what is a fair way to do this other than "we notify no one, and tell everyone to always update their systems to the latest stable releases that we support."
Especially as there is no way for us to determine your use case (i.e. if a specific bug is a vulnerability for you or not.)
Letting SUID binaries just "exist" anywhere is a stupendous security issue. What if you mount some external storage medium? How are you to verify that none of the SUID binaries on that block device are malicious?
Additionally, this exploit appears to only work if the user executing the SUID binary can also read the SUID binary. There's no reason for non-root users to have read on a SUID binary.
NixOS does this correctly. There is no SUID in the normal package installation directory `/nix/store`, and with no package leakage outside of that, `nosuid` can safely be used on all other mountpoints. The exception is just a single-purpose `/run/wrappers.$hash` directory that safely contains executable-only SUID wrappers.
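For non-NixOS systems, roughly the same idea can be approximated in fstab. A sketch, with illustrative devices and paths (adjust to your own layout):

```
# ignore setuid bits anywhere users can write or plug in media
/dev/sda3   /home      ext4  defaults,nosuid,nodev         0 2
/dev/sdb1   /mnt/usb   ext4  defaults,nosuid,nodev,noexec  0 0
```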
The bug that is being exploited gives you basically arbitrary page cache poisoning. At that point it's already game over. Patching a suid program is maybe the easiest way to get a root shell from that, but far from the only one.
With this vulnerability you can manipulate the page cache. You could also manipulate ld.so to hook into arbitrary system calls, or set your uid to 0, or use any of another dozen or so ways to elevate your privileges.
Mount points have nothing to do with this, even if it is always a good idea to disallow suid in user-writable areas and prevent reading suid files, but that's for other reasons. NixOS does nothing to fix this and is just as vulnerable as everyone else.
To execute the binary it needs to be read from disk and loaded into memory.
In fact, if you have read permission but not execute permission on a specific binary, then you can still execute it by calling the dynamic loader directly (e.g. /lib64/ld-linux-x86-64.so.2 /path/to/binary): the loader will read and load the binary and then jump to the entry point without an exec() call.
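A quick way to see this for yourself, assuming a glibc x86-64 box (the loader path differs on other architectures and libcs):

```
cp /bin/ls /tmp/noexec-ls
chmod a-x /tmp/noexec-ls                    # readable, but not executable
/lib64/ld-linux-x86-64.so.2 /tmp/noexec-ls  # runs anyway, via the loader
```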
This is not correct, as when the binary is setuid-someone-else, you are not the one executing it; they are.
Removing world-readability from all setuid-root binaries on the system would be sufficient to kill the PoC script provided for this vulnerability. It would not be sufficient to prevent exploitation though; there are many ways to abuse the ability to write to files you have read access to in order to gain root, for example by using the vulnerability to alter the cached copy of a file in /etc/sudoers.d/, or overwrite /etc/passwd, or /etc/crontab, ... the list goes on.
https://www.bleepingcomputer.com/news/security/new-linux-cop...
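If it helps, here's a sketch for enumerating candidates (audit before changing anything, since some tooling expects group/world read on these binaries):

```
# list world-readable setuid-root binaries on the root filesystem
sudo find / -xdev -type f -user root -perm -4004 2>/dev/null
# then, per binary and with care: sudo chmod o-r /path/to/binary
```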
https://github.com/theori-io/copy-fail-CVE-2026-31431/issues...
https://github.com/theori-io/copy-fail-CVE-2026-31431/issues...
Basically:

```
sudo grubby --update-kernel=ALL --args=initcall_blacklist=algif_aead_init
sudo reboot
```
https://discourse.nixos.org/t/is-nixos-affected-by-copy-fail...
Copy Fail
https://news.ycombinator.com/item?id=47952181
The distros don't get any involvement until release, welcome to the suck.
- Debian
- Ubuntu
- Arch
- Amazon/Azure
- Fedora/RHEL
it's the same disclosure policy as google's project zero, and several other major players, so you should probably be trying to ping a lot more people
reporters should not be responsible for finding out and individually reporting to every downstream consumer. blame the kernel security team, who is in a much better position to coordinate notifications to individual distro security teams.
the disclosure itself followed a normal timeline, which you can view at the bottom of their blog post.
Seems not fatal to all non-patched systems.
However, not having the module loaded does mean that in normal operation you don't need the module, so the proposed mitigation of disabling the module is safe in the sense that it won't disrupt anything.
I tried rmmod on all servers and rmmod always returns `ERROR: Module algif_aead is not currently loaded`; that's why I think it's fine. Of course I keep an eye on https://security-tracker.debian.org/tracker/CVE-2026-31431 for the updates.
Well, for one thing, opening an AF_ALG socket, as the exploit does.
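Which is why "not currently loaded" isn't the whole story. A quick sketch of the distinction:

```
# the module may be absent right now yet still available for on-demand load
lsmod | grep algif_aead || echo "not loaded right now"
modprobe --dry-run --verbose algif_aead  # still resolvable, so an AF_ALG
                                         # socket request can pull it in later
```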
But only Trixie (and testing/Sid) are patched (as I type this).
On Bookworm (and Bullseye), you want to add the module to the list of blocked modules. It's a one-line change.
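For reference, a sketch of that one-liner (the filename is arbitrary; an `install ... /bin/false` line also blocks explicit loads, where a plain `blacklist` line only stops alias-based autoloading):

```
echo 'install algif_aead /bin/false' | sudo tee /etc/modprobe.d/blacklist-algif-aead.conf
```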