Show HN: A physically-based GPU ray tracer written in Julia
We ported pbrt-v4 to Julia and built it into a Makie backend. Any Makie plot can now be rendered with physically-based path tracing.
Julia compiles user-defined physics directly into GPU kernels, so anyone can extend the ray tracer with new materials and media - a black hole with gravitational lensing is ~200 lines of Julia.
Runs on AMD, NVIDIA, and CPU via KernelAbstractions.jl, with Metal coming soon.
Demo scenes: github.com/SimonDanisch/RayDemo
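For a rough sense of what "user-defined physics compiled into GPU kernels" means, here is a minimal KernelAbstractions.jl sketch. It is not the project's actual extension API, just the portable-kernel mechanism it builds on; the kernel and function names are made up:

```julia
using KernelAbstractions

# Any plain Julia function can be called from inside a kernel and gets
# compiled for whichever backend launches it (CPU, CUDA, AMDGPU, ...).
absorb(radiance, σ, ds) = radiance * exp(-σ * ds)   # toy "medium" physics

@kernel function attenuate!(radiance, σ, ds)
    i = @index(Global)
    @inbounds radiance[i] = absorb(radiance[i], σ, ds)
end

# Launch on the CPU backend; swap in CUDABackend()/ROCBackend() and the
# matching array type to run the identical code on a GPU.
backend = CPU()
radiance = ones(Float32, 1024)
attenuate!(backend, 256)(radiance, 0.1f0, 0.5f0; ndrange = length(radiance))
KernelAbstractions.synchronize(backend)
```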
151 points by simondanisch - 50 comments
This is why I wish Julia were the language for ML and sci comp in general, but Python is sucking all of the air out of the room.
I'm asking because I had a lot of trouble trying to describe interfaces between materials, only to find out that what I wanted to do was not possible in PBRT without modifying the code. Apparently, in PBRT a material can only have one other material touching it. So, for example rendering a glass filled with water and ice is not possible without hacks. From a user's point of view this is a bit of a let-down, of course.
Context: https://news.ycombinator.com/item?id=45668543
https://blog.yiningkarlli.com/2019/05/nested-dielectrics.htm...
And I'm curious how you solve it.
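For readers who don't want to click through: the linked post handles this with priority-based nested dielectrics. A rough Julia sketch of that bookkeeping (names are illustrative; this is not pbrt's or the OP's implementation):

```julia
struct Medium
    ior::Float64
    priority::Int      # higher priority wins when media overlap (ice > water > glass)
end

# `stack` holds all media the ray is currently inside.
current_medium(stack) = isempty(stack) ? nothing : argmax(m -> m.priority, stack)

# On crossing a surface, push/pop its interior medium (assumes matched
# enter/exit events). The boundary only "counts", i.e. causes refraction,
# if it changes the winning medium.
function cross!(stack, medium, entering::Bool)
    before = current_medium(stack)
    entering ? push!(stack, medium) : deleteat!(stack, findlast(==(medium), stack))
    after = current_medium(stack)
    return before !== after   # true boundary between `before` and `after`
end
```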
Anyway, I'm looking at this from the user's perspective. I wanted to do some physics-based ray-tracing with lenses and pbrt is what I ended up trying. As such, I really needed the multi-material aspect to work correctly. Also, it would be nice to be able to describe surfaces using a z=f(x,y) kind of formulation, or a way to place a hook in the renderer.
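Since it came up, here is what a z = f(x, y) surface could look like as plain Julia - a naive ray march against the implicit function. Everything here is illustrative, not an existing pbrt or RayDemo API:

```julia
# Intersect a ray with the implicit surface z = f(x, y) by stepping along the
# ray and looking for a sign change of g(p) = p_z - f(p_x, p_y).
function intersect_heightfield(f, origin, dir; tmax = 100.0, dt = 1e-2)
    g(t) = (p = origin .+ t .* dir; p[3] - f(p[1], p[2]))
    t, gprev = 0.0, g(0.0)
    while t < tmax
        tnext = t + dt
        gnext = g(tnext)
        if sign(gnext) != sign(gprev)            # sign change: surface crossed
            for _ in 1:32                        # refine the hit by bisection
                tmid = (t + tnext) / 2
                sign(g(tmid)) == sign(gprev) ? (t = tmid) : (tnext = tmid)
            end
            return (t + tnext) / 2
        end
        t, gprev = tnext, gnext
    end
    return nothing                               # no hit within tmax
end

# Example: a paraboloid "lens" surface, ray shooting straight down.
hit = intersect_heightfield((x, y) -> 0.1 * (x^2 + y^2),
                            [0.0, 0.0, 5.0], [0.0, 0.0, -1.0])  # ≈ 5.0
```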
It took me days to get that build to work; doing this compilation once in CI so you don't have to do it on every machine is trickier than it sounds in Julia. The "obvious" way (install packages in Docker, run container on target machine) does not work because Julia wants to see exactly the same machine that it was precompiled on. It ends up precompiling again every time you run the container on other machines. I nearly shed a tear the first time I got Julia not to precompile everything again on a new machine.
R and Python are done in five minutes on the standard worker and it's easy; it's just the amount of time it takes to download and extract the prebuilt binaries. Do that inside a Docker container and it's portable as expected. I maintain Linux and Windows environments for the three languages and Julia causes me the most headaches, by far. I absolutely do not care about the tiny improvement in performance from compiling for my particular microarch; I would opt into prebuilt x86_64 generic binaries if Julia had them. I'm very happy to take R's and Python's prebuilt binaries.
> I would opt into prebuilt x86_64 generic binaries if Julia had them
The environment variable JULIA_CPU_TARGET [1] is what you are looking for; it controls which microarchitectures Julia emits code for and supports multi-versioning.
As an example Julia is built with [2]: generic;sandybridge,-xsaveopt,clone_all;haswell,-rdrnd,base(1)
[1] https://docs.julialang.org/en/v1/manual/environment-variable...
[2] https://github.com/JuliaCI/julia-buildkite/blob/9c9f7d324c94...
I am intimately familiar with JULIA_CPU_TARGET; it's part of configuring PackageCompiler and I had to spend a fair amount of time figuring it out. Mine is [0]. It's not related to what I was discussing there. I am looking for Julia to operate a package manager service like R's CRAN/Posit PPM or Python's PyPI/Conda that distributes compiled binaries for supported platforms. JuliaHub only distributes source code.
[0] generic;skylake-avx512,clone_all;cascadelake,clone_all;icelake-server,clone_all;sapphirerapids,clone_all;znver4,clone_all;znver2,clone_all
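For context, that cpu_target string is what gets passed through to PackageCompiler when building the sysimage. Roughly like this (package names are placeholders, and I've shortened the target string):

```julia
using PackageCompiler

# Multi-versioned sysimage: one image with cloned code paths per
# microarchitecture, selected at load time on the target machine.
create_sysimage(["Makie", "GLMakie"];           # placeholder package list
    sysimage_path = "sys_plots.so",
    cpu_target = "generic;skylake-avx512,clone_all;znver2,clone_all")
```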
You may be interested in looking into AppBundler. Apart from full application packaging, it also offers the ability to make Julia image bundles. Besides a sysimage compilation option, it can bundle an application via compiled pkgimages, which requires less RAM and is much faster to compile.
Elsewhere someone used the term "janky" and perhaps it's the fact that there are so many incredibly smart people around it that makes it so janky. By way of example, somebody needed to check disk space and the architect told him to shell out to Python.
Remember when LLVM first came out and it got kudos for the quality of its error messages? Well, if you miss the old-school 1980s GCC experience, the nonsense that eventually comes out of the Julia compiler after an hour will relight that flame.
Want to use Greek letters and other symbols that don't appear on your keyboard as variable names? You've found your people.
I think what happened is this: Julia got advertised as "Python syntax, C speed", but in practice it turns out to be "Python syntax, 50% of C speed if you avoid some semi-well-documented gotchas", and avoiding said gotchas takes non-trivial effort. Again, great if you are willing to work with it.
I am not saying the Julia people are responsible for the "Python syntax, C speed" perception so much as that this is what the prevalent perception became. And I have talked to people in computational biology who tried Julia, and they said something similar to "It just wasn't performant enough for me to give up Python." If you really dig in, what actually happened is that new people tried Julia with old mental models and walked away thinking, "Heh, more MIT hypeware."
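The canonical example of such a gotcha, for anyone who hasn't hit it: a non-const global defeats type inference, and the fix is trivial once you know to look for it.

```julia
scale = 2.0                      # non-const global: its type can change at any time

function sum_slow(xs)
    s = 0.0
    for x in xs
        s += scale * x           # every access is dynamically typed -> slow, allocates
    end
    return s
end

const SCALE = 2.0                # or pass `scale` in as an argument

function sum_fast(xs)
    s = 0.0
    for x in xs
        s += SCALE * x           # concrete type known -> compiles to a tight loop
    end
    return s
end
```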
https://arxiv.org/abs/2309.17309
This paper in experimental high-energy physics is a good example of why Julia is popular for scientific calculations.
It shows that #julialang is over 100 times faster than Python and even faster than C++.
To reiterate, citing studies that show that smoking causes cancer in chain smokers does ... nothing. You are citing studies, but I am not the chain smoker; I am just the guy talking about chain smokers.
One more time, I wish we lived in a world where public perception was swayed by objective studies, but we don't.
Julia is fast, yes, but when a university sys-admin rolls their eyes at hearing its name, you have lost the battle for good.
"Data science" is an extremely broad term, so YMMV. That said, since you asked, Julia has absolutely replaced Python for me. I don't have anything new to add on the benefits of Julia; it's all been said before elsewhere. It's just a question of exactly what kind of stuff you want to do. Most of my recent work is math/algorithms flavored, and Python would be annoyingly verbose/inexpressive while also being substantially slower. Julia also tends to have many more high-quality packages of this kind that I can quickly use / build on.
Yes, 1-based indexing is a mistake. It leads to significantly less elegant code - especially for generic code - and 0-based indexing is no harder to understand for people capable of programming. Fight me.
Some would argue that 0-based indexing is significantly less elegant for numerical/scientific code, but that depends on whether they come from a MATLAB/Fortran or Python/C(++) background.
A decision was made to target the MATLAB/Fortran (and unhappy Python/C++?) crowd first, thus the choice of 1-based indexing and column-major order, but at the end of the day it's a matter of personal preference.
0-based indexing would have made it easier to reach a larger audience, however.
> and is no harder to understand than 1-based indexing for people capable of programming.
The same could be said the other way around ;-)
It really isn't. "Scientific code" isn't some separate thing.
The only way it can help is if you're trying to write code that matches equations in a paper that uses 1-based indexing. But that very minor advantage is outweighed by the disadvantages by a wide margin. Lean doesn't make this silly mistake.
> If you really need the first or last element
What if you need the Nth block of M elements? The number of times I've written arr[(n-1)*m+1 : n*m] in MATLAB... I do not know how anyone can prefer that nonsense to e.g. arr[n*m ..< (n+1)*m]
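To make the comparison concrete, the two formulas side by side (1-based inclusive ranges as Julia/MATLAB use them, versus the half-open 0-based form; `block` is just an illustrative helper):

```julia
# 1-based, inclusive ranges: the n-th block of m elements (n = 1, 2, ...)
block(arr, n, m) = arr[(n - 1) * m + 1 : n * m]

# 0-based, half-open ranges (Python/Rust style) would instead read:
#   arr[n*m : (n+1)*m]    with n = 0, 1, ...
```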
arr[n..=m]
> arr[1:m] and arr[(m+1):end]
arr[0..m], arr[m..]
Much nicer.
> Arrays are (conceptually) not pointer arithmetic.
Look at a ruler. Does it start at 1?
Recently we got the public keyword, but even the PR there says:
"NOTE: This PR is not a complete solution to the "public interfaces are typically not well specified in Julia" problem. We would need to implement much than this to get to that point. Work on that problem is ongoing in Base and packages and contributions are welcome."
>the reference implementation from Physically Based Rendering (Pharr, Jakob, Humphreys)
I'd like to know a little about the process you went through for the port. That book * sounds like an excellent resource to start from but what was it like using it and the code?
* https://pbrt.org/
Then I found that pbrt moved away from the initial design, and I used Claude Code to port large parts of the new C++ code to Julia. This led to a pretty bad port, and I had lots of back and forth to fix bugs, improve the GPU acceleration, make the code more concise and "Julian", and correct the AI's mistakes and bogus design decisions ;) The polish isn't really over yet, but it works well enough and is fast enough for a beta release!
Curious about BVH traversal specifically. Dynamic dispatch patterns across GPU backends can get weird fast. Did KernelAbstractions hold up there, or were vendor-specific fallbacks needed for the heavier acceleration-structure work?