(Q|O)SFP are basically just raw high speed serial interfaces to whatever - you see this a lot in FPGAs, you can use the QSFP interfaces for anything high speed - PCIe, SATA, HDMI…
dcrazy [3 hidden]5 mins ago
> Although we can already buy commercial transceiver solutions that allow us to use PCIe devices like GPUs outside of a PC, these use an encapsulating protocol like Thunderbolt rather than straight PCIe.
> [snip]
> As explained in the intro, this doesn’t come without a host of compatibility issues, least of all PCIe device detection, side-channel clocking and for PCIe Gen 3 its equalization training feature that falls flat if you try to send it over an SFP link.
So, uh… what’s the benefit? How much overhead does Thunderbolt really introduce, given it solves these other issues?
jmyeet [3 hidden]5 mins ago
The benefits are twofold: flexibility in physical placement and bandwidth.
Thunderbolt 5 offers 80Gbps of bidirectional bandwidth. PCIe 5.0 x16 offers 1024Gbps of bidirectional bandwidth. This matters.
TB5 cables can only get so long whereas fiber can go much farther more easily. This means that in a data center type environment, you could virtualize your GPUs and attach them as necessary, putting them in a separate bank (probably on the same rack).
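To put those numbers in perspective, here is a quick back-of-envelope comparison (figures taken from the comment above plus the PCIe 5.0 line rate and its 128b/130b encoding; per-direction throughput, ignoring protocol overhead):

```python
# Rough per-direction bandwidth comparison: PCIe 5.0 x16 vs Thunderbolt 5.
# PCIe 5.0: 32 GT/s raw line rate per lane, 128b/130b encoding, 16 lanes.
# Thunderbolt 5: 80 Gbps per direction in its symmetric mode.

PCIE5_GT_PER_LANE = 32        # GT/s raw line rate per lane
ENCODING = 128 / 130          # 128b/130b encoding efficiency
LANES = 16

pcie5_x16_gbps = PCIE5_GT_PER_LANE * ENCODING * LANES  # ~504 Gbps usable
tb5_gbps = 80

print(f"PCIe 5.0 x16:  ~{pcie5_x16_gbps:.0f} Gbps per direction")
print(f"Thunderbolt 5:  {tb5_gbps} Gbps per direction")
print(f"Ratio:         ~{pcie5_x16_gbps / tb5_gbps:.1f}x")
```

So even after encoding overhead, a full x16 Gen 5 link carries roughly six times what TB5 does in each direction, which is why tunneling PCIe through Thunderbolt costs you so much for bandwidth-hungry devices like GPUs.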
ahepp [3 hidden]5 mins ago
How does this compare to something like RDMA over Converged Ethernet (RoCE)?
mmastrac [3 hidden]5 mins ago
This was a super interesting video to watch. I honestly thought SFP required more setup, but this explains why AliExpress is so rife with USB3 and HDMI over SFP converters that are dirt cheap.
fl4regun [3 hidden]5 mins ago
Cool project! I think PCIe itself is likely to end up doing something similar soon; there are provisions in the spec now for optical retimers.
russdill [3 hidden]5 mins ago
There are a number of optical modules for TB3 and TB4; they might be an easier (but less fun) route, as TB3 and TB4 can carry PCIe.
whalesalad [3 hidden]5 mins ago
So you're saying I can put a handful of 4090's out in the middle of snowy Michigan with a handful of OM4 cables snaking into my basement to run legit arctic cooling with no noise?
myself248 [3 hidden]5 mins ago
No part of Michigan is in the arctic, but sure, outside of mosquito season, that would work.
preisschild [3 hidden]5 mins ago
Might as well put your entire computer outside and use thunderbolt/usb-4 over fiber docks
phendrenad2 [3 hidden]5 mins ago
A watercooling loop might be better, though the radiator fins will still rust from condensation.
benjojo12 [3 hidden]5 mins ago
I mean yes, but you could also just place the entire computer out there as well
While at a higher level, thunderbolt and https://en.wikipedia.org/wiki/ExpEther can both of course work over fiber too!