Runtime generated SDF
#1
What would be the best approach for achieving an HD SDF collision map close to the player? I would love to reach a resolution of 1-5 centimeters (in a VR setting).

I'm working on a smooth voxel system with detailing through tessellation and displacement. Next up for me is generating this HD collision map, but I'm stumped on finding a solution here.
So far the implemented process is:
  1. Have a low resolution SDF field
  2. Generate mesh from SDF
  3. Unwrap the mesh UVs and apply virtual texturing
  4. Use tessellation + displacement when rendering. It now looks HD and photorealistic!
I will have to figure out a way to make a custom SDF collider that Obi can use.
It needs to be generated at runtime and on demand as chunks are loaded and LODs change. I know of the method from the Unity demo team. I could likely do this or something similar, but with some virtual volume method so that I only resolve high definition close to geometry.

At some point I need to stop the flood-fill propagation of distances, if nothing else at the seams between high and low resolution. How mathematically accurate does the SDF need to be for Obi to work properly? Is accuracy most important close to zero, or does it have to be seamless?

#2
Hi!

(Yesterday, 09:56 PM)goosejordan Wrote: What would be the best approach for achieving an HD SDF collision map close to the player? I would love to reach a resolution of 1-5 centimeters (in a VR setting).

I will have to figure out a way to make a custom SDF collider that Obi can use.
It needs to be generated at runtime and on demand as chunks are loaded and LODs change. I know of the method from the Unity demo team. I could likely do this or something similar,

Obi already has adaptive SDF support. The ObiDistanceField scriptable object class stores SDF data as a list of DFNode structs. You can create a new distance field and populate its data as you see fit.

Obi's SDFs are dual and octree-based. This means each node in the tree stores 8 distance values (one at each of its corners), instead of the usual approach of a regular grid with one distance value stored at the center of each cell. This allows them to have adaptive LOD while retaining fast trilinear interpolation (as the distance data needed for interpolation is stored contiguously).
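
For illustration only, here is a minimal sketch of trilinear interpolation over a node's 8 corner distances (plain C#, not Obi's actual code; the corner ordering is an assumption):

Code:
using UnityEngine;

// Sketch: trilinearly interpolate a node's 8 corner distances.
// t is the sample position in the node's local [0,1]^3 space; d is assumed
// to be ordered (x,y,z) = (0,0,0),(1,0,0),(0,1,0),(1,1,0),(0,0,1),(1,0,1),(0,1,1),(1,1,1).
static float TrilinearSample(float[] d, Vector3 t)
{
    float d00 = Mathf.Lerp(d[0], d[1], t.x);
    float d10 = Mathf.Lerp(d[2], d[3], t.x);
    float d01 = Mathf.Lerp(d[4], d[5], t.x);
    float d11 = Mathf.Lerp(d[6], d[7], t.x);
    float d0  = Mathf.Lerp(d00, d10, t.y);
    float d1  = Mathf.Lerp(d01, d11, t.y);
    return Mathf.Lerp(d0, d1, t.z);
}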

Each DFNode has these fields:

distancesA = 4 distance values at the negative face in the XY plane.
distancesB = 4 distance values at the positive face in the XY plane.
center = spatial coordinates of the center of the node.
firstChild = index of this node's first child in the list. All 8 children are assumed to be stored consecutively, so the 2nd child will be at firstChild+1, etc.
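
A plain C# sketch of that layout (field names follow the description above; the exact types and packing of Obi's DFNode may differ):

Code:
using UnityEngine;

// Sketch of a dual octree node as described above; not the actual Obi struct.
public struct DFNodeSketch
{
    public Vector4 distancesA; // 4 corner distances at the negative face in the XY plane
    public Vector4 distancesB; // 4 corner distances at the positive face in the XY plane
    public Vector3 center;     // spatial coordinates of the node's center
    public int firstChild;     // index of the first of 8 consecutive children (a leaf could use -1, for instance)
}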

The ASDF class contains two static methods: one to build an adaptive SDF from a bounding interval hierarchy, and another one to sample it. You can use these as reference implementations.

Note that SDFs are designed to be created on the CPU and stored in CPU memory. When using the Compute backend, this data is sent to the GPU. Since your distance data likely comes from the GPU, and you probably want to update the SDF often, the main hurdle will be copying that data efficiently and avoiding the cost of readbacks.
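
For the readback part, one possibility (just a sketch, not something Obi provides) is Unity's AsyncGPUReadback, so the CPU never stalls waiting for the GPU; the buffer name and callback here are hypothetical:

Code:
using Unity.Collections;
using UnityEngine;
using UnityEngine.Rendering;

public class SDFReadbackExample : MonoBehaviour
{
    public ComputeBuffer distanceBuffer; // hypothetical buffer filled by your compute shader

    void RequestReadback()
    {
        // Asynchronous readback avoids stalling the pipeline; the callback fires a few frames later.
        AsyncGPUReadback.Request(distanceBuffer, OnDistancesReady);
    }

    void OnDistancesReady(AsyncGPUReadbackRequest request)
    {
        if (request.hasError) return;
        NativeArray<float> distances = request.GetData<float>();
        // Rebuild or patch the ObiDistanceField node list from 'distances' here,
        // ideally only for chunks whose contents or LOD actually changed.
    }
}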

(Yesterday, 09:56 PM)goosejordan Wrote: but with some virtual volume method so that i only resolve high definition close to geometry.

Obi's SDFs allow you to have varying resolution, since they're octree-based (you can have as many resolution levels as you'd like).
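
As an illustration of one possible refinement rule (not part of Obi), you could keep subdividing a node only while it overlaps the surface, is close to the player, and is still above your target cell size:

Code:
using UnityEngine;

// Hypothetical refinement criterion for building the adaptive SDF.
static bool ShouldSubdivide(Vector3 nodeCenter, float nodeSize, float distanceAtCenter,
                            Vector3 playerPosition, float hdRadius, float minNodeSize)
{
    bool nearSurface = Mathf.Abs(distanceAtCenter) < nodeSize;          // node may contain the isosurface
    bool nearPlayer  = Vector3.Distance(nodeCenter, playerPosition) < hdRadius;
    return nearSurface && nearPlayer && nodeSize > minNodeSize;         // e.g. stop at 1-5 cm near the player
}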

(Yesterday, 09:56 PM)goosejordan Wrote: At some point I need to stop the flood-fill propagation of distances, if nothing else at the seams between high and low resolution. How mathematically accurate does the SDF need to be for Obi to work properly? Is accuracy most important close to zero, or does it have to be seamless?

Ideally, close to the zero isosurface you want to be as accurate as possible, so that collisions are smooth. There's no need for it to be seamless, especially further away from the isosurface.
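
So a common trick (sketched here as an assumption, not something Obi requires) is to only flood-fill exact distances inside a narrow band around the surface and clamp everything outside it:

Code:
using UnityEngine;

// Sketch: keep exact distances inside a narrow band around the isosurface
// and clamp the rest, so effort is spent where collisions actually happen.
static float BandLimit(float exactDistance, float bandWidth)
{
    // Inside the band: keep the accurate value (smooth collisions).
    // Outside: a clamped value is enough; seams between LODs matter much less there.
    return Mathf.Clamp(exactDistance, -bandWidth, bandWidth);
}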

kind regards
#3
Wow, thanks a lot! That was very in-depth; I have a lot to work with now. :)