At the end of the video there is a non-pinhole camera demo; could someone explain what exactly is different about this camera?
I.e., what exactly is the video showing? And what would the video look like if it were a pinhole camera?
kvark [3 hidden]5 mins ago
In pinhole cameras, straight lines look straight. That's what a regular projection matrix gives you with rasterization.
With non-pinhole cameras, straight lines look curved. You can't rasterize this directly. 3D Gaussian splats have an issue with this, addressed by methods like ray tracing.
It's very useful to train on non-pinhole cameras, because in the real world they can capture a wider field of view.
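Rough sketch of what that means in practice (my own illustration, not from the paper; the equidistant fisheye here is just one arbitrary non-pinhole model): sample points along a straight 3D line and project them with both models. The pinhole keeps them collinear in the image; the fisheye does not.

    import numpy as np

    def project_pinhole(p, f=500.0):
        # Perspective projection: u = f*X/Z, v = f*Y/Z.
        # Straight 3D lines project to straight image lines.
        return f * p[:2] / p[2]

    def project_fisheye_equidistant(p, f=500.0):
        # Equidistant fisheye: image radius grows linearly with the
        # angle from the optical axis, so straight 3D lines curve.
        theta = np.arccos(p[2] / np.linalg.norm(p))  # angle from optical axis
        phi = np.arctan2(p[1], p[0])                 # azimuth around the axis
        return f * theta * np.array([np.cos(phi), np.sin(phi)])

    # Points along a straight 3D line in front of the camera.
    t = np.linspace(-1.0, 1.0, 3)
    line = np.stack([t, 0.3 * np.ones_like(t), 2.0 + 0.5 * t], axis=1)

    def collinearity_residual(pts):
        # Twice the area of the triangle formed by the three projections;
        # zero means the projected points still lie on a straight line.
        a, b, c = pts
        return abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))

    pin  = [project_pinhole(p) for p in line]
    fish = [project_fisheye_equidistant(p) for p in line]
    print("pinhole:", collinearity_residual(pin))   # ~0 (still straight)
    print("fisheye:", collinearity_residual(fish))  # clearly nonzero (curved)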
markisus [3 hidden]5 mins ago
In Gaussian Splatting, a first-order approximation takes ellipsoids in camera space to ellipses in image space. This works OK for pinhole cameras, where straight lines remain straight and, more generally, conic sections are taken to other conic sections. For high-distortion models like fisheye, this approximation probably breaks down. However, this method presumably does not rely on that approximation, since it is ray traced.
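For concreteness, the linearization being described looks roughly like this (a generic EWA-splatting-style sketch, not this paper's code): the 3D covariance is pushed through the Jacobian of the pinhole projection, evaluated at the splat's mean, to get the 2D ellipse.

    import numpy as np

    def pinhole_jacobian(mean_cam, f=500.0):
        # Jacobian of (X, Y, Z) -> (f*X/Z, f*Y/Z), evaluated at the Gaussian's
        # mean in camera space; this is the first-order approximation.
        x, y, z = mean_cam
        return np.array([[f / z, 0.0,   -f * x / z**2],
                         [0.0,   f / z, -f * y / z**2]])

    def splat_covariance(mean_cam, cov_cam, f=500.0):
        # Push the 3D covariance (ellipsoid) through the Jacobian to get the
        # 2D image-space covariance (ellipse). Only exact to first order.
        J = pinhole_jacobian(mean_cam, f)
        return J @ cov_cam @ J.T

    mean = np.array([0.5, -0.2, 3.0])
    cov3d = np.diag([0.04, 0.01, 0.09])       # an ellipsoid in camera space
    print(splat_covariance(mean, cov3d))      # 2x2 covariance of the projected ellipse

For a high-distortion fisheye there is no single linear map that does this well across the whole image, which is presumably where ray tracing the Gaussians sidesteps the problem.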
jedbrooke [3 hidden]5 mins ago
Looks like it's some sort of fisheye camera with a super wide FOV. It might be simulating rays bending due to lens effects. A pinhole camera would just look "normal", i.e. straight lines stay straight (except for perspective effects like horizon convergence).
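If it helps, this is the kind of per-pixel ray generation a ray tracer could use for such a camera (my own sketch with an equidistant fisheye model, not necessarily what the paper uses): each pixel maps to a ray whose angle from the optical axis grows with distance from the image center, so the FOV can reach 180 degrees or more, which a pinhole projection can't represent.

    import numpy as np

    def fisheye_pixel_ray(u, v, width, height, fov_deg=180.0):
        # Equidistant fisheye: the angle from the optical axis grows linearly
        # with the pixel's distance from the image center.
        cx, cy = width / 2.0, height / 2.0
        r_max = min(cx, cy)                       # radius of the image circle
        dx, dy = u - cx, v - cy
        theta = (np.hypot(dx, dy) / r_max) * np.radians(fov_deg) / 2.0
        phi = np.arctan2(dy, dx)
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    # Center pixel looks straight down the optical axis ...
    print(fisheye_pixel_ray(360, 360, 720, 720))  # ~(0, 0, 1)
    # ... and the edge of the image circle is a full 90 degrees off-axis,
    # something a pinhole projection matrix cannot express.
    print(fisheye_pixel_ray(0, 360, 720, 720))    # ~(-1, 0, 0)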
xnx [3 hidden]5 mins ago
How is Google using all these amazing radiance field techniques they're developing?
lairv [3 hidden]5 mins ago
TBH only one author out of four has a Google affiliation, and their personal webpage [1] says "part-time (20%) staff research scientist at Google DeepMind", so it's a stretch to call this a "Google technique". I notice this is a common thing when discussing research papers: people associate a paper with the first company name they can find in the affiliations.

[1] https://theialab.ca/
For one, I’ve seen interactive Gaussian Splatting interior flythroughs in the Google Maps app.
mlsu [3 hidden]5 mins ago
Pure conjecture: relighting in Pixel phones. I don't think they have too many AR-like products. I'm surprised so much of this research is coming out of Google and not Meta.
xnx [3 hidden]5 mins ago
I'm a little surprised Google hasn't included lidar in their Pixel phones (even after including and then dropping some oddball stuff like Soli) to support some of these radiance field / photogrammetry techniques. I guess the <2.5% market share of Pixel phones wouldn't encourage many third parties to bother developing for lidar on Android.
catapart [3 hidden]5 mins ago
I have no idea, but given their stock of pictures of the entire earth (via google maps), I have some ideas about what I HOPE they would use this tech for.
wongarsu [3 hidden]5 mins ago
And Google Maps/Google Earth have a long history of trying to create 3D views using all kinds of techniques, from manual modeling to radar satellite data.
CyberDildonics [3 hidden]5 mins ago
How do you know they're amazing until you've used them yourself?
TuringTourist [3 hidden]5 mins ago
By being amazed when observing it, one can conclude that a thing is amazing.
macawfish [3 hidden]5 mins ago
They do look amazing
CyberDildonics [3 hidden]5 mins ago
You must have more faith in research papers than I do. Every single one I've actually used has had significant flaws that are glossed over by what isn't being shown or said.
soulofmischief [3 hidden]5 mins ago
Maybe you're misunderstanding the point of research. Research groups set constraints not only for practical reasons, but also so that the novelty in their papers isn't bogged down by edge cases. It seems absurd to wholesale reject the usefulness of all research papers on account of your own failure to properly make use of them.
absolutelastone [3 hidden]5 mins ago
The problem is when the constraints exclude methods with comparable performance that are otherwise superior options for the problem being solved. I've found this to be extremely common.
CyberDildonics [3 hidden]5 mins ago
Maybe you're misunderstanding just how different most research papers actually are when you implement them and see all the limitations they have, especially when they compare themselves to general techniques that they claim to surpass but that actually work better.
It's naive to accept what a paper does as fact from a video, you have to get it to work and try it out to really know. Anyone who has worked with research papers knows this from experience.
Feel free to try this out and let me know.
itronitron [3 hidden]5 mins ago
I was a bit underwhelmed by the video as it just looks like hyper-real computer graphics.