Using SDFs for collision
#1
Hi,
now that Obi runs on the GPU I am coming back to this asset I bought a long time ago. I also saw that Obi supports SDFs; however, it seems to support only SDFs generated by Obi components, is this correct? I am already using SDFs generated by Unity's Mesh to SDF package (which basically stores the SDF in a 3D texture). It would be great if these could be used as a collision provider by Obi, as this also allows having SDFs of deforming meshes (e.g. a SkinnedMeshRenderer).
#2
(07-08-2024, 05:05 PM)qlee01 Wrote: Hi,
now that Obi runs on the GPU I am coming back to this asset I bought a long time ago. I also saw that Obi supports SDFs; however, it seems to support only SDFs generated by Obi components, is this correct? I am already using SDFs generated by Unity's Mesh to SDF package (which basically stores the SDF in a 3D texture).

Hi,

SDFs in Obi are adaptive: instead of all voxels having the same size, as in a regular 3D texture, their size changes depending on how much resolution is needed to represent a specific region of the SDF. As a result, their storage requirements are much lower than those of regular SDFs. The downside is that Obi only works with its own SDFs, since adaptive representations are not standard.
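
To illustrate the difference (a conceptual Python sketch only, not Obi's actual data structure): a dense 3D-texture SDF stores one distance per voxel everywhere, while an adaptive one only subdivides where more detail is needed, e.g. near the surface:

```python
import numpy as np

def sample_dense_sdf(texture3d, p, grid_min, voxel_size):
    """Dense SDF: one distance per voxel, everywhere (nearest-neighbor lookup)."""
    i, j, k = ((np.asarray(p) - grid_min) // voxel_size).astype(int)
    return texture3d[i, j, k]

class OctreeNode:
    """Adaptive SDF: nodes are only subdivided where more resolution is needed,
    so large empty or distant regions cost a single node instead of many voxels."""
    def __init__(self, center, distance, children=None):
        self.center = np.asarray(center, dtype=float)
        self.distance = distance      # distance stored at this node
        self.children = children      # list of 8 child nodes, or None for a leaf

def sample_adaptive_sdf(node, p):
    """Descend to the leaf containing p and return its stored distance."""
    while node.children is not None:
        octant = sum((1 << axis) for axis in range(3) if p[axis] > node.center[axis])
        node = node.children[octant]
    return node.distance

# Tiny demo: a single-leaf "tree" reporting a constant distance everywhere.
root = OctreeNode(center=(0.0, 0.0, 0.0), distance=0.25)
print(sample_adaptive_sdf(root, (0.3, -0.1, 0.7)))   # -> 0.25
```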

(07-08-2024, 05:05 PM)qlee01 Wrote: It would be great if these could be used as a collision provider by Obi, as this also allows having SDFs of deforming meshes (e.g. a SkinnedMeshRenderer).

This wouldn't allow for robust collisions against deforming meshes. The problem is that SDFs lack information about surface velocity, which is necessary for accurate collision detection. When your SDF represents a rigid object and doesn't deform, you can just use the velocity of the object. However, if your SDF changes every frame in a flipbook-like fashion, it essentially becomes a "teleporting" surface as far as any physics engine is concerned.

If your SDF is thick enough and deforms slowly enough that other objects (in Obi's case, particles) don't suddenly find themselves on the other side of the volume, this might work.
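
To show why the surface velocity matters, here is a minimal Python sketch with made-up names (not Obi's solver): a typical particle-vs-SDF response pushes the particle out along the field gradient and removes the approaching part of its velocity relative to the surface. For a rigid SDF that relative velocity comes from the object's transform; for an SDF re-baked from a deforming mesh every frame, there is simply no surface velocity to plug in:

```python
import numpy as np

def sdf_gradient(sdf, p, eps=1e-4):
    """Central-difference estimate of the field's normal at p."""
    g = np.zeros(3)
    for axis in range(3):
        d = np.zeros(3); d[axis] = eps
        g[axis] = (sdf(p + d) - sdf(p - d)) / (2 * eps)
    return g / np.linalg.norm(g)

def resolve_contact(p, v, particle_radius, sdf, surface_velocity):
    """Project a penetrating particle out of the SDF and remove the normal
    component of its velocity *relative to the surface*."""
    d = sdf(p)
    if d < particle_radius:                       # penetrating
        n = sdf_gradient(sdf, p)
        p = p + (particle_radius - d) * n         # push out along the normal
        v_rel = v - surface_velocity              # this is the missing quantity for
        vn = np.dot(v_rel, n)                     # a re-baked, "flipbook" SDF
        if vn < 0.0:
            v = v - vn * n
    return p, v

# Rigid sphere SDF standing in for any distance field; its surface velocity is
# well defined because the whole object moves as one rigid body.
sphere = lambda p: np.linalg.norm(p) - 0.5
p, v = resolve_contact(np.array([0.0, 0.45, 0.0]),
                       np.array([0.0, -1.0, 0.0]),
                       particle_radius=0.1, sdf=sphere,
                       surface_velocity=np.zeros(3))
print(p, v)   # particle sits on the surface, approaching velocity removed
```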

The best approach for collisions against some deformable meshes (e.g. a human body) is to segment the mesh into convex regions surrounding each bone, generate an SDF for each one, and then parent each SDF collider to its corresponding bone. This allows velocities to be calculated accurately for each collider, uses far less memory than a single SDF for the entire body, and is a lot more performant, as it doesn't require recalculating the SDF from scratch every frame.
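
A rough Python sketch of that setup (hypothetical names, not Obi's API): each bone gets a small SDF baked once in its local space, world-space queries just transform the point into bone space, and the collider's surface velocity falls out of the difference between the bone's current and previous transforms:

```python
import numpy as np

class BoneSDFCollider:
    """One small, static SDF baked in a bone's local space and parented to it."""
    def __init__(self, local_sdf, bone_matrix):
        self.local_sdf = local_sdf              # callable: local point -> distance
        self.bone_matrix = bone_matrix.copy()   # 4x4 bone-to-world transform
        self.prev_matrix = bone_matrix.copy()

    def distance(self, p_world):
        # Transform the query point into bone space; the SDF itself never changes.
        p_local = np.linalg.inv(self.bone_matrix) @ np.append(p_world, 1.0)
        return self.local_sdf(p_local[:3])

    def point_velocity(self, p_world, dt):
        # Where was the material point currently at p_world one frame ago,
        # according to the bone's previous transform?
        p_local = np.linalg.inv(self.bone_matrix) @ np.append(p_world, 1.0)
        p_prev = (self.prev_matrix @ p_local)[:3]
        return (p_world - p_prev) / dt

    def update(self, new_bone_matrix):
        self.prev_matrix = self.bone_matrix
        self.bone_matrix = new_bone_matrix.copy()

# Capsule-like SDF around the bone's local y axis (half-length 0.2, radius 0.05).
capsule = lambda p: np.linalg.norm([p[0], max(abs(p[1]) - 0.2, 0.0), p[2]]) - 0.05
collider = BoneSDFCollider(capsule, np.eye(4))
print(collider.distance(np.array([0.1, 0.0, 0.0])))   # -> 0.05
```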

kind regards
#3
(12-08-2024, 07:52 AM)josemendez Wrote: This wouldn't allow for robust collisions against deforming meshes. The problem is that SDFs lack information about surface velocity, which is necessary for accurate collision detection. [...] The best approach for collisions against some deformable meshes (e.g. a human body) is to segment the mesh into convex regions surrounding each bone, generate an SDF for each one, and then parent each SDF collider to its corresponding bone.

Oh, the SDFs from Mesh to SDF work fine for collisions. They're used by the Unity hair system, and I also implemented a PBD cloth simulation that uses the SDF. It's not perfect, but good enough for most cases. Also, an SDF per bone for dozens of bones that are used to move a dynamic mesh partly animated by blendshapes would not work. Mesh to SDF works fine with all of this.
#4
(15-08-2024, 01:40 PM)qlee01 Wrote: Oh, the SDFs from Mesh to SDF work fine for collisions. They're used by the Unity hair system, and I also implemented a PBD cloth simulation that uses the SDF. It's not perfect, but good enough for most cases. Mesh to SDF works fine with all of this.

Hi!

I got the impression you were looking to use this to make game characters wear cloth, relying purely on collision detection. In a nutshell: as long as the character movement is slow enough and the object is thick enough, it should work reasonably well. However, no CCD can be performed, so there's no guarantee that the cloth won't penetrate the SDF, and once it has fallen off or clipped through, it cannot recover. There's also the overhead of generating the SDF every frame, which may or may not be significant for your use case.
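
A tiny Python illustration of the tunneling problem (nothing Obi-specific): with a plain end-of-step SDF query there's no swept test, so a fast particle can cross a thin SDF volume within a single step, end up with a positive distance on the far side, and no contact is ever reported; and because the gradient on that side points away from where it came from, a later depenetration step can't push it back either:

```python
import numpy as np

def slab_sdf(p, half_thickness=0.05):
    """SDF of a thin infinite slab centered on the plane y = 0."""
    return abs(p[1]) - half_thickness

def discrete_step(p, v, dt, sdf):
    """Explicit step followed by a point query: no CCD, no swept test."""
    p_new = p + v * dt
    return p_new, sdf(p_new) < 0.0     # collision only if we *end up* inside

p = np.array([0.0, 0.2, 0.0])          # start above the slab
v = np.array([0.0, -20.0, 0.0])        # fast enough to cross it in one step
p_new, hit = discrete_step(p, v, dt=1.0 / 60.0, sdf=slab_sdf)
print(p_new[1], hit)   # ends at y ~ -0.13 with hit == False: it tunneled through
```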

(15-08-2024, 01:40 PM)qlee01 Wrote: Also, an SDF per bone for dozens of bones that are used to move a dynamic mesh partly animated by blendshapes would not work.

Yes, blend shapes wouldn't work with the approach I suggested above. We will consider adding support for GPU 3D textures containing distance fields in the future. :)

kind regards,
#5
(15-08-2024, 01:56 PM)josemendez Wrote: [...] However, no CCD can be performed, so there's no guarantee that the cloth won't penetrate the SDF, and once it has fallen off or clipped through, it cannot recover. There's also the overhead of generating the SDF every frame, which may or may not be significant for your use case. [...] We will consider adding support for GPU 3D textures containing distance fields in the future. :)

Sounds good! The overhead of creating the SDF is not big, around 0.1 ms on a 3080 with around 10K vertices; it's usually done with a proxy mesh (if the character mesh is high poly). And the collision check is very fast. So overall this kind of collision checking would probably be much faster than anything else, while still being very accurate, of course with some downsides as you mentioned (e.g. velocity, but this could come from skinning as an approximation). You can check the hair system for a good implementation; it also calculates friction based on SDF collisions.
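
For the skinning-based velocity approximation, here's a rough Python sketch of the idea (my own illustration, not the hair system's actual code): skin the proxy vertices with this frame's and last frame's bone matrices, finite-difference the two results, and use the velocity of the nearest proxy vertex at each contact:

```python
import numpy as np

def skin_vertices(rest_positions, bone_matrices, weights, indices):
    """Linear blend skinning: each vertex is a weighted sum of bone transforms
    applied to its rest position."""
    out = np.zeros_like(rest_positions)
    for v, rest in enumerate(rest_positions):
        rest_h = np.append(rest, 1.0)
        for w, b in zip(weights[v], indices[v]):
            out[v] += w * (bone_matrices[b] @ rest_h)[:3]
    return out

def surface_velocities(rest_positions, bones_now, bones_prev, weights, indices, dt):
    """Approximate per-vertex surface velocity by finite-differencing the
    skinned positions of two consecutive frames."""
    now = skin_vertices(rest_positions, bones_now, weights, indices)
    prev = skin_vertices(rest_positions, bones_prev, weights, indices)
    return (now - prev) / dt, now

def contact_velocity(p, skinned_positions, velocities):
    """Velocity to use at a contact point: that of the closest skinned vertex."""
    i = np.argmin(np.linalg.norm(skinned_positions - p, axis=1))
    return velocities[i]

# Tiny demo: one bone translating 0.1 units along x over a 1/60 s frame.
rest = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
weights, indices = [[1.0], [1.0]], [[0], [0]]
bones_prev = [np.eye(4)]
bones_now = [np.eye(4)]
bones_now[0][0, 3] = 0.1
vel, skinned = surface_velocities(rest, bones_now, bones_prev, weights, indices, dt=1.0 / 60.0)
print(contact_velocity(np.array([0.05, 0.5, 0.0]), skinned, vel))   # ~ [6, 0, 0]
```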