vsg::CoordinateFrame node for universal scale scene graphs #1364
Replies: 2 comments
-
The vsgcoordinateframe example was the test bed I used to experiment with different approaches to solving the numerical precision issues associated with universe scale scene graphs. Originally the two solar systems were placed with standard vsg::MatrixTransform, which exhibits acceptable stability until you push the two solar systems far apart. You can use the --mt command line option to re-enable this old non-vsg::CoordinateFrame code path; I've also added --ambient to make sure the side away from the sun is lit enough to see:

vsgcoordinateframe --mt --ambient 0.5

Pressing 2, 3, 4, etc. to move to different viewpoints works smoothly. Now, to test the worst case, add the --wc command line option:

vsgcoordinateframe --mt --wc --ambient 0.5

Then follow the same key presses to see the same viewpoints - what you'll see is jitter so bad that the camera and the earth are so affected by precision issues that you can't even see the earth in front of the camera!

If you are curious about when numerical precision becomes an issue in the test scene, you can use --distance to set a specific value and then play the animations that rotate the earth about its axis and move the earth around the sun - the following command line plays the animation at 100x normal speed:

vsgcoordinateframe --ambient 0.4 --mt --distance 1e17 --play --speed 100

Distances below 1e17 work fine for me; at 1e18 and above things break down. The --wc worst case sets the distance to 8.514e25, so the jitter is 8 orders of magnitude worse!

Remove the --mt command line option to return to using vsg::CoordinateFrame instead of vsg::MatrixTransform and all the jitter is gone. If you feel curious you can even keep pushing --distance beyond the worst case we used for testing; once you get to 1e36 and above, the float precision used for the star field starts causing problems so the stars start disappearing, but the sun and earth views still remain rock solid.
vsgcoordinateframe --ambient 0.4 --distance 1e36 --play --speed 1e2

You can even keep pushing to the limits of double, where the star-field floats have long since given up, and STILL the sun and earth for both solar systems are rock solid. This is Doctor Who territory, way out beyond the universe:

vsgcoordinateframe --ambient 0.4 --distance 1e304 --play --speed 1e2

Above a distance of 1e304 things finally stop working, but as doubles only go to 1.7e308 I think this is OK :-)

The trick to getting this to work at such large scales is that the Camera's viewMatrix::origin and CoordinateFrame::origin need to be managed so that when the camera is local to the CoordinateFrame of interest the origins match, so that when the delta is computed it comes to zero. The vsgcoordinateframe example illustrates how to manage this by interpolating the origin between the solar system viewpoints; here the new support for long doubles comes in handy, so that the maths for the interpolation is done at as high a precision as possible.

The new long double support comes in the form of vsg::ldvec3, vsg::ldmat4 etc. On Linux the compiler will map long double to a 128 bit datatype, with 80 bits mapped to the extended precision support found in modern Intel class CPUs. Unfortunately Microsoft decided that no one ever needs long double support and simply maps long double to double, so it's the same 64 bit format. Beware that if you are doing computation on very large numbers and want to leverage long doubles, you may need to look beyond the Microsoft compilers under Windows.
-
Another part of rendering very large coordinate scenes is how to handle the depth buffer, and here the VSG's default reverse depth projection matrices, when combined with a float depth buffer, work to scale up the depth range whilst minimizing the impact on near depth precision. NVidia's Visualizing Depth Precision article goes into the topic of reverse depth and is worth a read. The VSG has used reverse depth projection matrices by default for 3 years, so there isn't any need for changes there; just make sure you select a float depth buffer when you set up your window's traits, i.e.:

windowTraits->depthFormat = VK_FORMAT_D32_SFLOAT;
-
One of the main new features in the VulkanSceneGraph-1.1.9 developer release was the addition of the vsg::CoordinateFrame node and associated changes in supporting classes. This new node enables applications to easily handle very large coordinates in their scene graphs - enough to encompass the whole universe and beyond.
This work on universe scale scene graphs was funded by Digitalis who are planning to port their planetarium software, Nightshade, from OpenSceneGraph to VulkanSceneGraph.
The vsg::CoordinateFrame node is a vsg::Transform subclass with origin and rotation members that place and rotate a subgraph in space - such as positioning a solar system in the universe. Additionally, the vsg::ViewMatrix class now also has a dvec3 origin member, which is used in combination with the vsg::CoordinateFrame node by the vsg::RecordTraversal to localize the camera position into the nearest coordinate frame.
The RecordTraversal computes the required modelview matrix using the offset between the viewMatrix->origin and the coordinateFrame->origin. If the two match then the offset is {0,0,0}, maximizing the numerical accuracy when the camera is local to a particular coordinate frame. This ensures that when the eye point is near an astral body you don't get the visual jitter associated with handling universe scale coordinates.
Distant coordinate frames, like other galaxies, will have large offsets between the viewMatrix->origin and the coordinateFrame->origin above the subgraphs that represent them, but as objects are typically rendered with a perspective camera, after the divide by depth is done on eye space vertices before rasterization the numerical errors remain sub-pixel, so you can't see any jitter due to numerical precision issues.
This scheme builds upon the double modelview matrix support built into the RecordTraversal, not just by using doubles to minimize numerical errors, but by using two sets of double vec3s to position subgraphs: one set being the origins that place subgraphs at the universal scale, and then the local MatrixTransform etc. that place the objects to be rendered within their local CoordinateFrame. Separating out the large scale positioning from the local positioning doubles the effective mantissa available, improving precision where it's needed most.
One neat benefit of the CoordinateFrame approach is that it doesn't require any special maths or shaders in end user applications; you simply need to decorate your subgraphs with CoordinateFrame and manage the viewMatrix->origin so that when you move to a local area the origin is consistent with that local area. The subgraphs below the CoordinateFrame and your shaders needn't be modified or be aware of this new scheme, so in principle it should be possible to use it with 3rd party terrain rendering libraries like Rocky and vsgCs, though I haven't tested these yet.
vsgcoordinateframe example
To test the functionality and illustrate how to set up the scene graph and camera's viewMatrix I wrote the vsgcoordinateframe example. This example creates a scene graph with a cartoon version of our universe containing two "solar systems" spread apart by a user definable distance - one with a yellow sun and one with an orange sun so you can tell them apart.
There is a vsg::CoordinateFrame node above each solar system subgraph that places it into the universe, and a point cloud with randomly placed points to represent stars. There are a series of viewpoints in the scene graph that utilize vsg::Object user values to mark them, and the camera control is able to animate between these viewpoints in response to the user pressing '1' through to '7'.
This is the default home view, which you can return to by pressing the '1' key; it can see both solar systems, as long as you aren't too far away :-)
Pressing '2' takes you to the yellow sun system looking at the sun with the earth nice and close simply so we have a chance of seeing the two bodies in the same view:
Pressing '3' takes you to the earth orbit view looking slightly back to the sun, synced with the orbit of the earth around the sun; note the side facing away from the sun is pretty dark, as it would be:
Pressing '4' changes to an earth geosynchronous orbit with a very small field of view, just 0.005 degrees, that zooms in from orbit to focus on Buckingham Palace in London, a popular tourist spot even for visiting aliens from the other side of the universe! In this screenshot I've changed the ambient light using the --ambient 0.8 command line option to give us a nice view of what would otherwise be dark in this default position of the sun/earth:
Pressing '5' through '8' moves the viewpoint to the same views but for the second orange solar system.
The screenshots are for the default distance between solar systems of 1.0e9 (one billion metres); it's only this small so we can see the two solar systems in a single screenshot, which also helped with the initial development work.
As a worst case stress test you can use the --worst-case or --wc command line options, which set the distance between solar systems to 8.514e25m. The observable universe is thought to be around 4.40e26m across, so this puts our two cartoon solar systems almost on opposite sides of the universe. I would put screenshots in but they'd all be remarkably similar to the ones above, because CoordinateFrame is doing its job and there's no observable jitter even at opposite sides of the observable universe.
There are more features accessed via command line options that I'll walk through in follow up posts to this thread.