HN.zip

A programmer-friendly I/O abstraction over io_uring and kqueue (2022)

61 points by enz - 19 comments
willtemperley
Perhaps someone who knows what they're talking about should update the Wikipedia page on io_uring [1]. Someone with a casual interest in Linux internals will probably come away with a poor impression of io_uring's security, which appears to be largely due to Google using an old kernel in Android [2].

[1] https://en.wikipedia.org/wiki/Io_uring [2] https://github.com/axboe/liburing/discussions/1047

ayuhito
Also worth checking out libxev[1] by Mitchell Hashimoto. It's a Zig-based event loop (similar to libuv) inspired by TigerBeetle's implementation.

[1] https://github.com/mitchellh/libxev

Zambyte
Also, the Zig 0.16.0 preview nightly builds include a new Io library[0]. I have not used libxev or TigerBeetle's event loop, but I believe the standard Zig implementation is largely influenced by those two.

[0] https://ziglang.org/documentation/master/std/#std.Io, or https://ziglang.org/documentation/0.16.0/std/#std.Io after the release

adzm
I love NT's I/O completion ports. I think kqueue is very similar, right? Honestly, I've been able to get by using Boost.Asio for cross-platform needs, but I've always wanted to see if there are better solutions. I think libuv is similar, since it's what Node is based on, but I'm not entirely sure what the underlying tech is on non-Windows platforms.
Cloudef
kqueue is similar to epoll: it's readiness-based, not completion-based like IOCP and io_uring. IOCP is nice in theory, but the API and the different ways everything has to be fed into it leave a lot to be desired... Windows also has its own version of io_uring (I/O Rings), but it's a bit abandoned and only works for disk I/O, which is a shame, because it could have been a nice, clean new I/O API for Windows.
shakna
> Hey, maybe we’ll split this out so you can use it too. It’s written in Zig so we can easily expose a C API.

This never happened, did it?

Suppose libxev is the alternative.

RantyDave
There was a brief fascination with user mode TCP over DPDK (or similar). What happened with that? Can you get similar performance with QUIC? Does io_uring make it all a moot point?
toast0
I've only done a little prototyping with it, but io_uring addresses the same issue as DPDK in a totally different way. If you want high performance, you want to avoid context switches between userland and the kernel. DPDK brings the NIC buffers into userland and bypasses the kernel; things like sendfile and kTLS let the kernel do most of the work and bypass userland; and io_uring lets you do the same syscalls you're doing now, but (a) in a batch format and (b) in a continuous form via a submission queue. I think it's easier to reach for io_uring than DPDK, but it might not get you as far as DPDK: you're still communicating between kernel and userland, just more cheaply than with normal syscalls.

> Can you get similar performance with QUIC?

I don't know that I've seen benchmarks, but I'd be surprised if you can get similar performance with QUIC. TCP has decades of optimization that you can lean on, UDP for bulk transfer really doesn't. For a lot of applications, server performance from QUIC vs TCP+TLS isn't a big deal, because you'll spend much more server performance on computing what to send than on sending it... For static file serving, I'd be surprised if QUIC is actually competitive, but it still might not be a big deal if your server is overpowered and can hit the NIC limits with either.

saghm
At least to me, one of the most compelling parts of QUIC is that you establish a connection with TLS without the extra round trips you get with TCP, where there are separate handshakes for the connection and then the TLS initialization. Even if it were no faster than TCP from that point forward, that seems like enough to make the protocol worthwhile in today's world, where TLS is basically the rule, with relatively few exceptions, rather than an occasional use case.

It's also something I find fascinating because it's one of the few practical cases where the compositional approach seems to have an insurmountable disadvantage compared to making a single thing more complex. Maybe there are a lot more such cases that just aren't obvious to me, because the "larger" thing is already so well-established that I wouldn't consider breaking it into smaller pieces, given the inherent advantage of having them combined. Even so, it's surprising that something that was the gold standard for so long, arguably because of how well it worked with the things that came after it, eventually ran into a change in expectations that it couldn't adapt to as well as something intentionally scoped to include one of those compositional layers.

Veserv
That is because providing a reliable stream over a stateful connection actually involves about a half-dozen layers of abstraction.

TCP couples them all into a large, monolithic, tangled mess. QUIC, despite being a little more complex, keeps the layers much less coupled, even though it is still a monolithic blob.

A better network protocol design would fully decouple the layers, then build something like QUIC as a composition of those layers. That is high-performance and lets you flexibly handle basically the entire gamut of network protocols currently in use.

formerly_proven
> You can switch a file descriptor into non-blocking mode so the call won’t block while data you requested is not available. But system calls are still expensive, incurring context switches and cache misses. In fact, networks and disks have become so fast that these costs can start to approach the cost of doing the I/O itself. For the duration of time a file descriptor is unable to read or write, you don’t want to waste time continuously retrying read or write system calls.

O_NONBLOCK basically doesn't do anything for file-backed file descriptors - a regular file is always considered "ready" for I/O.

catlifeonmars
Is that true for all file abstractions? What happens with NFS?
gpderetta
Think about it: what does it mean for a file to be ready? Sockets and pipes are stream abstractions: being ready means there is data to read or space to write.

But for files, data is always available to read (unless the file is empty) or write (unless the disk is full). Even if you somehow interpret readiness as the backing pages being loaded into the page cache, files are random access, so which pages you are interested in (i.e. which specific offset and length) can't be expressed via a simple fd-based poll-like API. (Linux tried to make splice work for this use case, but it didn't work out.)

stingraycharles
Don’t block devices have a scheduler with a queue under the hood? Couldn’t that queue become full when writing?

(This is a genuine question)

yarosv
from open(2):

    Note that this flag has no effect for regular files and block devices; that is, I/O operations will (briefly) block when device activity is required, regardless  of  whether O_NONBLOCK  is  set.  Since O_NONBLOCK semantics might eventually be implemented, applications should not depend upon blocking behavior when specifying this flag for regular files and block devices.
WD-42
I'm pretty sure spinning HDDs can have rather complex controllers that try to optimize access at the block level by minimizing how far the read head needs to travel. So yeah, there are some buffers in there.
RantyDave
This is one of the things that changed when we moved from SATA to NVMe. SATA (with NCQ) allows only 32 in-flight commands, while NVMe allows up to 64K queues with 64K commands each.

Raises a question, though: are there any NVMe "spinny rust" disks?

up2isomorphism
When I'm already using something like io_uring, I don't need an extra I/O abstraction.

BTW, most applications are totally fine with the plain UNIX file APIs.

toast0
Some people would rather have an abstraction over io_uring and kqueue than choose a single API that works everywhere they want to run, restrict themselves to the OS that provides the API they prefer, or write their event loop (and everything else) separately for every API they want to support.

But I agree with you; I'd rather use the thing without excess abstraction, and the standard APIs work well enough for most applications. For some things, though, it does make sense to do the extra work to increase performance.