HN.zip

io_uring, libaio performance across Linux kernels and an unexpected IOMMU trap

52 points by tanelpoder - 16 comments
tanelpoder [3 hidden]5 mins ago
Do I understand correctly that it was the interrupt-based I/O completion workloads that suffered from the IOMMU overhead in your tests?

The IOMMU may induce some interrupt remapping latency; I'd be interested in seeing:

1) interrupt counts (normalized to IOPS) from /proc/interrupts

2) "hardirqs -d" (bcc-tools) output for IRQ handling latency histograms

3) perf record -g output to see if something inside the interrupt handling codepath takes longer (on bare metal you can see inside the hardirq handler code too)

It would be interesting to see whether, with the IOMMU, each interrupt takes longer to handle on the CPU (or whether the handling time is roughly the same, but interrupt delivery takes longer). There may be some interrupt coalescing going on as well (I don't know exactly what else gets enabled with the IOMMU).
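
For (1), something like this rough sketch would give an interrupts/sec number to divide by the fio-reported IOPS (it just sums the nvme-named lines in /proc/interrupts; an approximation for illustration, not part of the original tests):

  /* rough sketch: sum per-CPU counts of nvme IRQ lines in /proc/interrupts
   * twice, one second apart, to get interrupts/sec (divide by IOPS yourself) */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  static unsigned long long nvme_irq_total(void)
  {
      FILE *f = fopen("/proc/interrupts", "r");
      char line[4096];
      unsigned long long total = 0;

      if (!f) { perror("/proc/interrupts"); exit(1); }
      while (fgets(line, sizeof(line), f)) {
          if (!strstr(line, "nvme"))
              continue;
          char *p = strchr(line, ':');   /* skip the "IRQ:" label */
          if (!p)
              continue;
          p++;
          while (*p) {                   /* add up the numeric per-CPU columns */
              char *end;
              unsigned long long v = strtoull(p, &end, 10);
              if (end == p)
                  break;                 /* reached the chip/handler name */
              total += v;
              p = end;
          }
      }
      fclose(f);
      return total;
  }

  int main(void)
  {
      unsigned long long before = nvme_irq_total();
      sleep(1);
      printf("nvme interrupts/sec: %llu\n", nvme_irq_total() - before);
      return 0;
  }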

Since interrupts are raised "randomly", independently of whatever app/kernel code is running on the CPUs, it's a bit harder to visualize total interrupt overhead in something like flamegraphs, as the interrupt activity is spread all over the chart. I used the flamegraph search/highlight feature to visually identify how much time the interrupt detours took during stress test execution.

Example here (scroll down a little):

https://tanelpoder.com/posts/linux-hiding-interrupt-cpu-usag...

eivanov89 [3 hidden]5 mins ago
BTW, the whole situation with IRQ accounting disabled reminds me of the -fomit-frame-pointer case. For a long time there was no practical performance reason, but the option was used anyway, making stacks slower and harder to build, both for perf analysis and for stack unwinding in languages like C++.

After careful reading I'm surprised how those small IRQ squares add up to 30%. I should search for interrupts when I inspect our flamegraphs next time.

tanelpoder [3 hidden]5 mins ago
I was doing over 11M IOPS during that test ;-)

Edit: I wrote about that setup and other Linux/PCIe root complex topology issues I hit back in 2021:

https://news.ycombinator.com/item?id=25956670

singron [3 hidden]5 mins ago
FYI 11M IOPS in terms of AWS EBS is 138 gp3 volumes (80K IOPS each), which costs about $56K/month or about $1.3M over 2 years. If anyone was considering using EBS for high-IOPS workloads, don't.

I think your test had ten 980 Pros, which were probably around $120 each at the time (~$1200 total). SSDs are wildly more expensive now, but even if you spend $500 each, it's nowhere close to EBS.

It's apples vs oranges, but sometimes you just want fruit.

eivanov89 [3 hidden]5 mins ago
That's super hot. Especially the update with the 37M IOPS reference. It might be very useful for my next tasks related to a setup with 6 NVMe disks:

1. Get all disks saturated through the network (including RDMA usage).

2. Play with io_uring to share a polling thread. Currently, no luck: if I share a kernel poller between two devices, the improvement is just +30% (at the cost of 1 core). Considering alternative schemes now.
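
For reference, one way to share a kernel SQ poll thread between two rings with liburing (a minimal sketch, assuming IORING_SETUP_ATTACH_WQ + SQPOLL on a recent kernel; not necessarily the exact scheme used here):

  /* minimal sketch: ring_b attaches to ring_a's backend so both are served
   * by a single kernel SQ poll thread (assumes liburing; link with -luring) */
  #include <liburing.h>
  #include <stdio.h>
  #include <string.h>

  #define QD 256   /* arbitrary queue depth for the example */

  int main(void)
  {
      struct io_uring ring_a, ring_b;
      struct io_uring_params pa, pb;
      int ret;

      memset(&pa, 0, sizeof(pa));
      pa.flags = IORING_SETUP_SQPOLL;   /* kernel-side submission polling */
      pa.sq_thread_idle = 2000;         /* ms of idle before the poller sleeps */
      ret = io_uring_queue_init_params(QD, &ring_a, &pa);
      if (ret < 0) { fprintf(stderr, "ring_a: %s\n", strerror(-ret)); return 1; }

      memset(&pb, 0, sizeof(pb));
      pb.flags = IORING_SETUP_SQPOLL | IORING_SETUP_ATTACH_WQ;
      pb.wq_fd = ring_a.ring_fd;        /* share ring_a's poller/backend */
      ret = io_uring_queue_init_params(QD, &ring_b, &pb);
      if (ret < 0) { fprintf(stderr, "ring_b: %s\n", strerror(-ret)); return 1; }

      /* ...submit per-device SQEs on ring_a and ring_b as usual... */

      io_uring_queue_exit(&ring_b);
      io_uring_queue_exit(&ring_a);
      return 0;
  }
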
eivanov89 [3 hidden]5 mins ago
Unfortunately, we don't have proper measurements for IOPOLL mode with and without the IOMMU, because initially we didn't configure IOPOLL properly. However, I bet that this mode will be affected as well, because the disk still has to write through the IOMMU.

You suggest some very interesting measurements. I will keep them in mind and try them during the next experiments. Wish I had read this before, so I could have applied it during the past runs :)
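
For anyone curious, a correctly configured IOPOLL ring looks roughly like this (a minimal liburing sketch with a hypothetical device path, not our benchmark harness; it needs a device/filesystem that supports polled I/O): completions are reaped by polling the device instead of waiting for an interrupt.

  /* minimal sketch: IORING_SETUP_IOPOLL ring doing one 4K O_DIRECT read;
   * the /dev/nvme0n1 path is just an example (assumes liburing) */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <liburing.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      struct io_uring ring;
      struct io_uring_sqe *sqe;
      struct io_uring_cqe *cqe;
      void *buf;
      int fd, ret;

      fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
      if (fd < 0) { perror("open"); return 1; }

      ret = io_uring_queue_init(32, &ring, IORING_SETUP_IOPOLL);
      if (ret < 0) { fprintf(stderr, "init: %s\n", strerror(-ret)); return 1; }

      if (posix_memalign(&buf, 4096, 4096)) return 1;

      sqe = io_uring_get_sqe(&ring);
      io_uring_prep_read(sqe, fd, buf, 4096, 0);   /* one 4K read at offset 0 */
      io_uring_submit(&ring);

      /* with IOPOLL this spins on the NVMe completion queue, no IRQ needed */
      ret = io_uring_wait_cqe(&ring, &cqe);
      if (ret == 0) {
          printf("read returned %d\n", cqe->res);
          io_uring_cqe_seen(&ring, cqe);
      }

      io_uring_queue_exit(&ring);
      close(fd);
      free(buf);
      return 0;
  }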

tanelpoder [3 hidden]5 mins ago
Yeah you'd still have the IOMMU DMA translation, but would avoid the interrupt overhead...
eivanov89 [3 hidden]5 mins ago
Dear folks, I'm the author of that post.

A short summary below.

We ran fio benchmarks comparing libaio and io_uring across kernels (5.4 -> 7.0-rc3). The most surprising part wasn't the io_uring gains (~2x), but a ~30% regression caused by the IOMMU being enabled by default between releases.

Happy to share more details about the setup or to reproduce the results.
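
To make the comparison concrete, the libaio side boils down to the io_submit/io_getevents pattern below (a rough sketch with an illustrative "testfile" target, not our actual fio configuration; link with -laio):

  /* rough sketch: one 4K O_DIRECT write via libaio (io_submit/io_getevents);
   * the "testfile" target is illustrative, error handling is minimal */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <libaio.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      io_context_t ctx = 0;
      struct iocb cb, *cbs[1] = { &cb };
      struct io_event ev;
      void *buf;
      int fd, ret;

      fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
      if (fd < 0) { perror("open"); return 1; }

      ret = io_setup(32, &ctx);
      if (ret < 0) { fprintf(stderr, "io_setup: %s\n", strerror(-ret)); return 1; }

      if (posix_memalign(&buf, 4096, 4096)) return 1;
      memset(buf, 'x', 4096);

      io_prep_pwrite(&cb, fd, buf, 4096, 0);       /* 4K write at offset 0 */
      ret = io_submit(ctx, 1, cbs);
      if (ret != 1) { fprintf(stderr, "io_submit: %s\n", strerror(-ret)); return 1; }

      /* block until the (interrupt-driven) completion arrives */
      if (io_getevents(ctx, 1, 1, &ev, NULL) == 1)
          printf("write returned %ld\n", (long)ev.res);

      io_destroy(ctx);
      close(fd);
      free(buf);
      return 0;
  }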

jcalvinowens [3 hidden]5 mins ago
Thanks for sharing this.

Was the iommu using strict or lazy invalidation? I think lazy is the default but I'm not sure how long that's been true.

eivanov89 [3 hidden]5 mins ago
We compared IOMMU fully disabled vs. enabled. When it is enabled, I expect it to be lazy (that should be the IOMMU default). Note that we recommend using passthrough mode to completely bypass translation for most devices, independent of the strict/lazy setting.
skavi [3 hidden]5 mins ago
what was the security situation of whatever is now being protected by the IOMMU before it was enabled by default?
eivanov89 [3 hidden]5 mins ago
When the IOMMU is not enabled, any PCIe device capable of DMA can access arbitrary physical memory. That allows reading sensitive data, modifying memory, and fully compromising the system without CPU involvement.

There are many DMA-based attacks described in the literature. Even with IOMMU, some attacks are still possible due to misconfiguration or incomplete isolation. For example: https://www.repository.cam.ac.uk/items/13dcaac4-5a3d-4f67-82...

In our case, we didn’t dive deeply into the security aspects. Our typical deployment assumes a trusted environment where YDB runs on dedicated hardware, so performance considerations tend to dominate.

hcpp [3 hidden]5 mins ago
Why was 4K random write chosen as the main workload, and would the conclusion change with sequential I/O?
eivanov89 [3 hidden]5 mins ago
That's a popular DBMS pattern. We chose writes over reads because on many NVMe devices writes are faster, so it is easier to measure software latency.

I guess that in the case of sequential I/O the result would be similar. However, with larger blocks and fewer IOPS the difference might be smaller.

menaerus [3 hidden]5 mins ago
So perhaps a mixed read+write workload would be more interesting, no? Write-only is characteristic of ingestion workloads. That said, the libaio vs io_uring difference is interesting. Did you perhaps run a perf profile to understand where the differences are coming from? My gut feeling is that it is not necessarily an artifact of less context switching with io_uring but something else.
eivanov89 [3 hidden]5 mins ago
There are a couple of challenges with mixed read+write workloads on NVMe.

In practice, read latency tends to degrade over time under mixed load. We observe this even across relatively short consecutive runs. To get meaningful results, you need to first drive the device into a steady state. In our case, however, we were primarily interested in software overhead rather than device behavior.

For a cleaner comparison, it would probably make sense to use something like an in-memory block device (e.g., ublk), but we didn’t dig into it.

As for profiling: we didn’t run perf, so the following is my educated guess:

1. With libaio, control structures are copied as part of submission/completion. io_uring avoids some of this overhead via shared rings and pre-registered resources (rough sketch at the end of this comment).

2. In our experience (in YDB), AIO syscall latency tends to be less predictable, even when well-tuned.

3. Although we report throughput, the setup is effectively latency-bound (single fio job). With more concurrency, libaio might catch up.

We intentionally used a single job because we typically aim for one thread per disk (two at most if polling is enabled). In our setup (usually 6 disks), increasing concurrency per device is not desirable.
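
Regarding point 1 above, a rough sketch (assuming liburing; the "testfile" target is illustrative, this is not our benchmark code) of what pre-registered resources look like: the buffer pages are pinned and the file reference is taken once at setup, instead of per request.

  /* rough sketch: register a buffer and a file once, then submit a fixed
   * write that references them by index (assumes liburing; minimal checks) */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <liburing.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/uio.h>
  #include <unistd.h>

  int main(void)
  {
      struct io_uring ring;
      struct io_uring_sqe *sqe;
      struct io_uring_cqe *cqe;
      struct iovec iov;
      void *buf;
      int fd;

      io_uring_queue_init(32, &ring, 0);

      fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
      posix_memalign(&buf, 4096, 4096);
      memset(buf, 'x', 4096);

      iov.iov_base = buf;
      iov.iov_len  = 4096;
      io_uring_register_buffers(&ring, &iov, 1);   /* pin pages once */
      io_uring_register_files(&ring, &fd, 1);      /* take the file ref once */

      sqe = io_uring_get_sqe(&ring);
      /* fd argument 0 = index into the registered file table,
       * last argument 0 = index into the registered buffer table */
      io_uring_prep_write_fixed(sqe, 0, buf, 4096, 0, 0);
      sqe->flags |= IOSQE_FIXED_FILE;

      io_uring_submit(&ring);
      io_uring_wait_cqe(&ring, &cqe);
      io_uring_cqe_seen(&ring, cqe);

      io_uring_queue_exit(&ring);
      close(fd);
      free(buf);
      return 0;
  }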