
FlashAttention-T: Towards Tensorized Attention

59 points by matt_d - 24 comments
jmward01 [3 hidden]5 mins ago
I built guided window attn (literally predict the position of the window) a while ago and that works great. Why are we still stuck on any form of attn that looks at the entire context in any meaningful way? Do humans work this way? Do I need a whole book to predict the next word? Who out there is working on really new unique ways to deal with infinite history, other than me of course :)
simianwords [3 hidden]5 mins ago
OT, but instead of quadratic attention can we not have n^10 or something crazier? I feel like we are limiting the intelligence just to save cost. But I can imagine that there might be some questions that are worth paying a higher cost for.

I feel like n^10 attention can capture patterns that lower complexity attention may not. So it seems arbitrary that we have n^2 attention.

jsenn [3 hidden]5 mins ago
You can find papers discussing "cubic" attention, i.e. each token gets to interact with each pair of other tokens, but always in very theoretical settings with single-layer transformers on contrived synthetic tasks.

Keep in mind that LLMs have many many layers, so they have plenty of opportunity to model higher-order interactions without needing to brute force every possible combination of 10 previous tokens, of which the vast majority will be useless. Empirically, even full "quadratic" attention is not always necessary, as evidenced by the existence of linear/sparse attention variants that perform almost as well.
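
For a concrete sense of the blow-up, here is a minimal sketch of one natural way to define the "cubic" attention jsenn describes, where each query is scored against every pair of keys. The einsum formulation and the way pair values are combined are illustrative assumptions, not taken from any particular paper.

    # Rough sketch: standard vs. "cubic" attention scores. The cubic form here,
    # where each query is scored against every *pair* of keys, is one natural
    # generalization assumed purely for illustration.
    import numpy as np

    n, d = 64, 32                       # sequence length, head dimension
    rng = np.random.default_rng(0)
    q = rng.standard_normal((n, d))
    k = rng.standard_normal((n, d))
    v = rng.standard_normal((n, d))

    # Standard attention: one score per (query, key) pair -> an n x n matrix.
    scores2 = q @ k.T / np.sqrt(d)                     # shape (n, n)
    w2 = np.exp(scores2 - scores2.max(-1, keepdims=True))
    w2 /= w2.sum(-1, keepdims=True)
    out2 = w2 @ v                                      # shape (n, d)

    # "Cubic" attention: one score per (query, key_i, key_j) triple -> n x n x n.
    scores3 = np.einsum('qd,id,jd->qij', q, k, k) / d  # shape (n, n, n)
    flat = scores3.reshape(n, -1)
    w3 = np.exp(flat - flat.max(-1, keepdims=True))
    w3 /= w3.sum(-1, keepdims=True)
    v_pairs = (v[:, None, :] + v[None, :, :]).reshape(-1, d)  # toy pair values
    out3 = w3 @ v_pairs                                # shape (n, d)

    print(scores2.size, scores3.size)                  # 4096 vs 262144 scores

Even at 64 tokens the third-order score tensor is 64x the size of the standard one, and every additional order multiplies it by n again.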

noosphr [3 hidden]5 mins ago
Yes, and it works in theory.

Less so in practice. You saturate the memory of a B200 with a few dozen tokens on attentions higher than order 4. Training is even worse.

To paraphrase Knuth: high-order polynomials are unimaginably larger than mere infinity.
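
A rough back-of-the-envelope check on that, assuming the order-k score tensor is materialized in fp16 and the B200 has 192 GB of HBM (weights, activations, and KV cache ignored; these numbers are assumptions for illustration):

    # Longest sequence whose order-k score tensor (n^k entries, 2 bytes each
    # in fp16) still fits in an assumed 192 GB of HBM; everything else ignored.
    HBM_BYTES = 192e9
    BYTES_PER_SCORE = 2

    for k in (2, 3, 4, 5, 6):
        n_max = int((HBM_BYTES / BYTES_PER_SCORE) ** (1 / k))
        print(f"order {k}: ~{n_max:,} tokens")

    # order 2: ~309,838 tokens
    # order 3: ~4,578 tokens
    # order 4: ~556 tokens
    # order 5: ~157 tokens
    # order 6: ~67 tokens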

storus [3 hidden]5 mins ago
Aren't layers basically doing n^k attention? The attention block is n^2 because it allows one number per input/output pair. But nothing prevents you from stacking these on top of each other and getting k-th-order "attentioness", with each layer encoding a different order.
eldenring [3 hidden]5 mins ago
This is a common way of thinking. In practice this type of thing is more like optimizing flop allocation. Surely with an infinite compute and parameter budget you could have a better model with more intensive operations.

Another thing to consider is that transformers are very general computers. You can encode many, many more complex architectures in simpler, multi-layer transformers.

refulgentis [3 hidden]5 mins ago
n^2 isn't a setting someone chose, it's a mathematical consequence of what attention is.

Here's what attention does: every token looks at every other token to decide what's relevant. If you have n tokens, and each one looks at n others, you get n * n = n^2 operations.

Put another way: n^2 is when every token gets to look at every other token. What would n^3 be? n^10?
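
Spelled out as the literal double loop that description implies (a toy sketch; real kernels batch it as a single matrix multiply):

    # Toy illustration of the n*n structure: every token's query is scored
    # against every token's key, one score per ordered pair.
    import numpy as np

    n, d = 8, 4
    rng = np.random.default_rng(1)
    q = rng.standard_normal((n, d))
    k = rng.standard_normal((n, d))

    scores = np.empty((n, n))
    for i in range(n):          # each token...
        for j in range(n):      # ...looks at every other token
            scores[i, j] = q[i] @ k[j] / np.sqrt(d)

    assert scores.size == n * n  # n^2 scores, before the softmax and value mixing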

(sibling comment has the same interpretation as you, then handwaves that transformers can emulate more complex systems)

measurablefunc [3 hidden]5 mins ago
There are lots more complicated operations than comparing every token to every other token & the complexity increases when you start comparing not just token pairs but token bigrams, trigrams, & so on. There is no obvious proof that all those comparisons would be equivalent to the standard attention mechanism of comparing every token to every other one.
vlovich123 [3 hidden]5 mins ago
While you are correct at a higher level, comparing bigrams/trigrams would be less compute, not more, because there are fewer of them in a given text.
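
A toy count, assuming "bigrams/trigrams" means contiguous n-grams as in the comment above: a length-n sequence has n-k+1 of them, so all-pairs comparisons over bigrams or trigrams come out slightly below the token-level n^2.

    # All-pairs comparison counts over tokens vs. contiguous bigrams/trigrams
    # (assumes "bigram/trigram" means contiguous n-grams of the input).
    n = 1024                                  # sequence length in tokens

    for k, name in [(1, "tokens"), (2, "bigrams"), (3, "trigrams")]:
        units = n - k + 1                     # number of contiguous k-grams
        print(f"{name}: {units} units, {units ** 2:,} pairwise comparisons")

    # tokens: 1024 units, 1,048,576 pairwise comparisons
    # bigrams: 1023 units, 1,046,529 pairwise comparisons
    # trigrams: 1022 units, 1,044,484 pairwise comparisons
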
sigbottle [3 hidden]5 mins ago
Oh wow, there's still work being done on Ampere?

I was wondering - I've been thinking about switching to AI systems programming (I know, easy task), but from what I understand, industry cloud GPUs are the main winners, right? Nobody's going to pay me (assuming I even had the skills) to optimize for consumer GPUs?

From what I understand, it's not just number + capacity + performance, it's the literal core primitives. I don't think any of the "Blackwell" chips like the Grace one or the RTX 5090 have, for example, SM pairs in their ISA? And there are likewise fundamental differences between consumer and cloud Hopper (where the majority of the perf comes from the cloud one's ISA?)

So I guess I'm wondering if I should buy a GPU myself or should I just rent on the cloud if I wanted to start getting some experience in this field. How do you even get experience in this normally anyways, do you get into really good schools and into their AI labs which have a lot of funding?

vlovich123 [3 hidden]5 mins ago
Look at the email addresses. If you’ll recall, there’s an embargo on China.
storus [3 hidden]5 mins ago
I still have 2x NVLinked A6000 and they aren't that bad compared to a single RTX 6000 Pro.
Maxious [3 hidden]5 mins ago
yep, https://github.com/poad42/cuda-fp8-ampere is another recent attempt at squeezing whatever's left out of Ampere
semiinfinitely [3 hidden]5 mins ago
Tri Dao isn't on the paper, is it even allowed to call it "FlashAttention"???
saagarjha [3 hidden]5 mins ago
Less annoying link directly to the paper: https://dl.acm.org/doi/pdf/10.1145/3774934.3786425?download=...
SpaceManNabs [3 hidden]5 mins ago
link if you don't want to automatically download files

https://dl.acm.org/doi/pdf/10.1145/3774934.3786425

verytrivial [3 hidden]5 mins ago
TL;DR: 5-17% speedup from removing a bottleneck by juggling where on a GPU/compute core a computation is done during FlashAttention.
measurablefunc [3 hidden]5 mins ago
[flagged]
dheera [3 hidden]5 mins ago
"Most people" didn't figure this out either, the top 0.01% did.
E-Reverance [3 hidden]5 mins ago
I also wouldn't be surprised if they used AI to assist themselves in small ways
measurablefunc [3 hidden]5 mins ago
You're just moving the goalposts & not addressing the question I asked. Why isn't AI optimizing the kernels in its own code the way people have been optimizing them, as in the posted paper?
phkahler [3 hidden]5 mins ago
It will, right after it reads the paper.
measurablefunc [3 hidden]5 mins ago
I read the paper. All the prerequisites are already available in existing literature & they basically profiled & optimized around the bottlenecks to avoid pipeline stalls w/ instructions that utilize the available tensor & CUDA cores. Seems like something these super duper AIs that don't get tired should be able to do pretty easily.