Help: Cloth does not follow animation
#1
I am testing various setups for clothing simulation, and using skin weights does not seem appropriate for realistic results.
I am now experimenting with a mesh collider, but the cloth does not follow the animation. It works fine if the mesh is static. Can you please help?
#2
(01-08-2018, 04:29 PM)adev eloper Wrote: I am testing various setups for clothing simulation, and using skin weights does not seem appropriate for realistic results.
I am now experimenting with a mesh collider, but the cloth does not follow the animation. It works fine if the mesh is static. Can you please help?

Animated (deformable) mesh colliders are not supported by Unity. In fact, they're not supported in any engine I've heard of, mainly because they don't allow any kind of preprocessing, which is crucial for good performance when colliding against many individual triangles.

The only viable approaches to character clothing are skin constraints (you'll find equivalents in most cloth simulation packages; they're not exclusive to Obi nor invented by us) and primitive colliders (mostly capsules attached to character bones). Both are routinely used in the industry, and usually a combination of the two is preferred.

For skin constraints, try using a zero backstop and a large backstop radius, then control the blend between animation and simulation with the skin radius (a larger radius results in more simulation thrown into the mix).
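To picture what those parameters do, here is a rough sketch in plain Python of how skin constraints with a backstop are commonly formulated in cloth simulators. The function name and exact semantics are illustrative, not Obi's actual implementation:

```python
import math

def project_skin_constraint(sim_pos, anim_pos, normal,
                            skin_radius, backstop, backstop_radius):
    """Illustrative skin constraint: keep a simulated vertex within
    `skin_radius` of its animated position (a larger radius allows more
    simulation in the mix), and outside a backstop sphere placed behind
    the surface so cloth can't sink into the body."""
    # 1. Clamp to the skin sphere around the animated position.
    offset = [s - a for s, a in zip(sim_pos, anim_pos)]
    dist = math.sqrt(sum(c * c for c in offset))
    if dist > skin_radius and dist > 0:
        sim_pos = [a + c * skin_radius / dist for a, c in zip(anim_pos, offset)]
    # 2. Push the vertex out of the backstop sphere, centered behind the
    #    surface along the inward (negated) normal direction.
    center = [a - n * (backstop + backstop_radius) for a, n in zip(anim_pos, normal)]
    offset = [s - c for s, c in zip(sim_pos, center)]
    dist = math.sqrt(sum(c * c for c in offset))
    if 0 < dist < backstop_radius:
        sim_pos = [c + o * backstop_radius / dist for c, o in zip(center, offset)]
    return sim_pos
```

With a zero skin radius the vertex is fully pinned to the animation; growing the radius hands more freedom to the simulation, which is the blend described above.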
#3
(02-08-2018, 12:13 PM)josemendez Wrote: Animated (deformable) mesh colliders are not supported by Unity. In fact, they're not supported in any engine I've heard of, mainly because they don't allow any kind of preprocessing, which is crucial for good performance when colliding against many individual triangles.

The only viable approaches to character clothing are skin constraints (you'll find equivalents in most cloth simulation packages; they're not exclusive to Obi nor invented by us) and primitive colliders (mostly capsules attached to character bones). Both are routinely used in the industry, and usually a combination of the two is preferred.

For skin constraints, try using a zero backstop and a large backstop radius, then control the blend between animation and simulation with the skin radius (a larger radius results in more simulation thrown into the mix).

Hi Josemendez,
I should mention that I am working on a realistic simulation, not a game. It is a research project, and I could end up implementing something if there is no suitable tool for this.

Since my post, I have been able to update the mesh collider based on the animation pose (https://answers.unity.com/questions/1197...inned.html), and I am now looking into updating the Obi mesh collider with this updated info. Somehow, the Obi collider stops working when I update it programmatically, e.g.:

Code:
...
// Assign the newly baked pose mesh to the MeshCollider:
myUpdatedCollider.sharedMesh = newMesh;
// Point the ObiCollider at the updated source collider:
obiCollider.SourceCollider = myUpdatedCollider;

How can I force a pose to be used as input for the obi collider?
#4
(02-08-2018, 02:35 PM)adev eloper Wrote: Hi Josemendez,
I should mention that I am working on a realistic simulation, not a game. It is a research project, and I could end up implementing something if there is no suitable tool for this.

Since my post, I have been able to update the mesh collider based on the animation pose (https://answers.unity.com/questions/1197...inned.html), and I am now looking into updating the Obi mesh collider with this updated info. Somehow, the Obi collider stops working when I update it programmatically, e.g.:

Code:
...
myUpdatedCollider.sharedMesh = newMesh;
obiCollider.SourceCollider = myUpdatedCollider;

How can I force a pose to be used as input for the obi collider?

Just for the record, IMHO this all seems like a terrible idea. Modifying the collider mesh will force Unity (and Obi) to re-generate the internal hierarchical representation of the mesh geometry (in Obi's case, a multi-level hash grid), which can become much more expensive than the cloth simulation itself, depending on how often you do it. You will also run into a lot of tunneling issues with collision detection (as MeshColliders aren't convex in the general case, they are treated as paper-thin surfaces instead of solid volumes).
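To see why the rebuild is costly, here is a toy 2D sketch in plain Python of the kind of triangle binning an engine precomputes (a simple uniform hash grid; Obi's actual multi-level grid is more involved, and all names here are made up for illustration). The build must visit every triangle, so a deforming mesh pays the full cost again each frame:

```python
from collections import defaultdict

def build_hash_grid(triangles, cell_size):
    """Bin each triangle's bounding box into a uniform hash grid so that
    collision queries only have to test nearby triangles. Building visits
    every triangle; a deforming mesh invalidates every bin."""
    grid = defaultdict(list)
    for idx, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        x0, x1 = int(min(xs) // cell_size), int(max(xs) // cell_size)
        y0, y1 = int(min(ys) // cell_size), int(max(ys) // cell_size)
        for cx in range(x0, x1 + 1):
            for cy in range(y0, y1 + 1):
                grid[(cx, cy)].append(idx)
    return grid

def query(grid, point, cell_size):
    """Return candidate triangle indices near a query point: O(1) per
    lookup, which is exactly what the precomputation buys you."""
    key = (int(point[0] // cell_size), int(point[1] // cell_size))
    return grid.get(key, [])
```

Queries are cheap precisely because the expensive binning was done up front; that trade is what animated mesh colliders break.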

This being said, you might be able to modify Obi slightly to support modifying the collision mesh at runtime:

- Go to ObiColliderBase.cs and change the "tracker" declaration from protected to public (line 65).
- Get your ObiCollider component, and call this:
Code:
((ObiMeshShapeTracker)collider.tracker).UpdateMeshData();

I cannot guarantee 100% that this will work, as the tracker was designed to update the mesh once at startup. There's a pretty good chance that it will, though.

My advice regarding this would be to use signed distance fields. You can precompute pretty much all of the collision information (distance, gradient), they can be blended together via relatively simple operations (useful in flexible regions such as elbows and knees, in case you need realtime animation), and they can be regarded as solid volumes even for concave shapes, so tunneling becomes less of an issue.
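As a concrete illustration of those "relatively simple operations", here is a minimal Python sketch that joins two distance fields with a polynomial smooth minimum. Spheres stand in for precomputed per-part SDFs, and none of this is Obi API; the names are made up:

```python
import math

def sphere_sdf(point, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return math.dist(point, center) - radius

def smooth_min(a, b, k=0.2):
    """Polynomial smooth minimum: like min(a, b), but rounds off the seam
    where the two fields meet - handy for elbows and knees."""
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

def limb_distance(point):
    upper = sphere_sdf(point, (0.0, 0.0, 0.0), 0.5)  # stand-in for the upper arm
    lower = sphere_sdf(point, (0.8, 0.0, 0.0), 0.5)  # stand-in for the forearm
    return smooth_min(upper, lower)
```

Away from the seam the blend behaves exactly like a hard union; near the midpoint between the two parts it dips slightly below the plain minimum, producing the rounded crease a hard union would lack.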
#5
Quote: As there is a huge gap between computations, the collisions don't run smoothly. This results in the cloth penetrating deeper into the character mesh and eventually dropping to the ground.
This is exactly what tunneling is. It is a fundamental consequence of time being modeled as discrete chunks in computer simulations. Thin objects (such as the surface of a mesh collider) make it worse. Using distance fields should alleviate it. Keep in mind that if your mesh has thin parts (like a character with very thin arms), tunneling will still happen, and then the only solution is to reduce the simulation time step (that is, reducing the gap between computations, as you described it).
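A tiny 1D illustration of the effect, in plain Python with made-up numbers:

```python
def hits_wall(x0, velocity, wall, thickness, dt, steps):
    """Advance a point in discrete time steps and report whether any
    sampled position lands inside a thin wall. With a large dt the point
    can jump clean over the wall: that's tunneling."""
    x = x0
    for _ in range(steps):
        x += velocity * dt
        if wall <= x <= wall + thickness:
            return True  # a sample landed inside the wall: collision seen
    return False

# A vertex moving at 10 m/s towards a 5 cm thick collider 0.5 m away:
# at 50 Hz each step covers 20 cm and jumps straight over the wall,
# while a smaller time step puts samples inside it.
print(hits_wall(0.0, 10.0, 0.5, 0.05, 0.02, 10))
print(hits_wall(0.0, 10.0, 0.5, 0.05, 0.002, 100))
```

The thinner the wall (or the faster the vertex), the smaller the time step needed for a sample to land inside it, which is exactly the trade-off described above.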
#6
(02-08-2018, 04:11 PM)josemendez Wrote: Just for the record, IMHO this all seems like a terrible idea. Modifying the collider mesh will force Unity (and Obi) to re-generate the internal hierarchical representation of the mesh geometry (in Obi's case, a multi-level hash grid), which can become much more expensive than the cloth simulation itself, depending on how often you do it. You will also run into a lot of tunneling issues with collision detection (as MeshColliders aren't convex in the general case, they are treated as paper-thin surfaces instead of solid volumes).

This being said, you might be able to modify Obi slightly to support modifying the collision mesh at runtime:

- Go to ObiColliderBase.cs and change the "tracker" declaration from protected to public (line 65).
- Get your ObiCollider component, and call this:
Code:
((ObiMeshShapeTracker)collider.tracker).UpdateMeshData();

I cannot guarantee 100% that this will work, as it was designed to update the mesh once at startup. There's a pretty good chance that it will, though.

My advice regarding this would be to use signed distance fields. You can precompute pretty much all of the collision information (distance, gradient), they can be blended together via relatively simple operations (useful in flexible regions such as elbows and knees in case you need realtime animation) and can be regarded as solid volumes even for concave shapes, so that tunneling becomes less of an issue.

Hello, sorry for the stupid question, I'm not a programmer, but where should I call this? Do I need to write some code, or is there an option for it somewhere? Thank you.
#7
(02-08-2018, 12:13 PM)josemendez Wrote: Animated (deformable) mesh colliders are not supported by Unity. In fact, they're not supported in any engine I've heard of, mainly because they don't allow any kind of preprocessing, which is crucial for good performance when colliding against many individual triangles.

The only viable approaches to character clothing are skin constraints (you'll find equivalents in most cloth simulation packages; they're not exclusive to Obi nor invented by us) and primitive colliders (mostly capsules attached to character bones). Both are routinely used in the industry, and usually a combination of the two is preferred.

For skin constraints, try using a zero backstop and a large backstop radius, then control the blend between animation and simulation with the skin radius (a larger radius results in more simulation thrown into the mix).


https://gpuopen.com/tressfx/

Animated mesh colliders can be done as in the link above: convert the skinned mesh to a signed distance field (SDF) collider each frame.


You can search for the following text on that page:

"Signed distance field (SDF) collision, including compute shaders to generate the SDF from a dynamic mesh."
#8
(14-10-2022, 07:38 AM)Ocean Wrote: https://gpuopen.com/tressfx/

Animated mesh colliders can be done as in the link above: convert the skinned mesh to a signed distance field (SDF) collider each frame.


You can search for the following text on that page:

"Signed distance field (SDF) collision, including compute shaders to generate the SDF from a dynamic mesh."

Not really, for multiple reasons:

1) Performance-wise: this only works if you can both generate the SDF on the GPU and use it on the GPU. Obi is (at least right now) 100% CPU-based: collision detection happens on the CPU, so the SDF must be used there. Bringing SDF data from the GPU back to the CPU is extremely expensive, which makes using compute shaders for this infeasible.

2) Robustness-wise: an SDF flipbook does not contain any velocity information: you just have static "snapshots" of the mesh at each animation frame. So for fast-moving meshes, this would break the same way a dynamically updated MeshCollider does, because it's not possible to use CCD on it.

3) Detail-wise: the SDF generated by Obi is adaptive, so the amount of detail it can capture is far greater than what a fixed-resolution SDF can offer (which is what you'd usually use on the GPU, since regular grids map very well to graphics hardware).

For skeletally-animated characters, the way I've done this in the past is to decompose the character into convex pieces (one per bone; you can use bone influences to automate this), approximate each one with an SDF, and parent each SDF to the corresponding bone along with a kinematic rigidbody.

This is extremely fast even on the CPU (since the only work done at runtime is updating the bone transforms), robust (since velocity data is automatically derived by Obi for kinematic rigidbodies), and provides a fair amount of detail (except in areas that deform wildly, which is uncommon in humanoid characters).
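A bare-bones sketch of that decomposition idea in plain Python. A sphere stands in for each precomputed per-bone field, and all names are made up for illustration (this is not Obi API):

```python
import math

class BonePiece:
    """One convex piece of the character, rigidly parented to a bone.
    The distance field itself is precomputed once; at runtime only the
    bone transform moves (a sphere stands in for the real SDF here)."""
    def __init__(self, radius):
        self.radius = radius
        self.bone_position = (0.0, 0.0, 0.0)

    def animate(self, bone_position):
        # The only per-frame work: copy the bone transform from the animation.
        self.bone_position = bone_position

    def distance(self, point):
        # Evaluate the precomputed field in bone-local space
        # (translation only here, for brevity; a real version would
        # apply the full bone rotation as well).
        local = tuple(p - b for p, b in zip(point, self.bone_position))
        return math.sqrt(sum(c * c for c in local)) - self.radius

def character_distance(pieces, point):
    """Union of all per-bone pieces: the closest piece wins."""
    return min(piece.distance(point) for piece in pieces)
```

Because each field is rigid relative to its bone, nothing is rebuilt per frame; the expensive SDF generation happens once, offline.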

cheers,
#9
(14-10-2022, 08:20 AM)josemendez Wrote: Not really, for multiple reasons:

1) Performance-wise: this only works if you can use the GPU to generate the SDF, and use the SDF in the GPU as well. Obi is (at least right now) 100% CPU based, collision detection happens on the CPU and so the SDF must be used there. Bringing SDF data from the GPU to the CPU is extremely expensive, which makes using compute shaders for this unfeasible.

2) Robustness-wise: a SDF flip book does not contain any velocity information: you just have static "snapshots" of the mesh at each animation frame. So for fast moving meshes, this would break the same way a dynamically updated MeshCollider does because it's not possible to use CCD on it.

3) Detail-wise: the SDF generated by Obi is adaptive, so the amount of detail it can capture is far greater than you'd be capable of in a fixed-resolution SDF (which is what you'd usually do on the GPU since regular grids map very well to graphics hardware).

For skeletally-animated characters, the way I've done this in the past is to decompose the character into convex pieces (one per bone; you can use bone influences to automate this), approximate each one with an SDF, and parent each SDF to the corresponding bone along with a kinematic rigidbody.

This is extremely fast even in the CPU -since the only work done at runtime is updating the bone transforms-, robust -since velocity data is automatically derived by Obi for kinematic rigidbodies- and provides a fair amount of detail -except on areas that deform wildly, which is uncommon in humanoid characters-

cheers,
Why can't Obi Cloth work in a compute shader (on the GPU)? For example, the collision detection of TressFX runs on the GPU, so I think all of it could run on the GPU (e.g. on an RTX 4090).
By the way, the velocity information of the SDF could be recorded in an RT (render texture).
#10
(17-10-2022, 02:36 AM)Ocean Wrote: Why can't Obi Cloth work in a compute shader (on the GPU)? For example, the collision detection of TressFX runs on the GPU, so I think all of it could run on the GPU (e.g. on an RTX 4090).
Hi!

I never said it couldn't; I said it currently doesn't. I've been working on a GPU physics backend for Obi for more than a year. It will ship as part of Obi 7, by the end of this year.

(17-10-2022, 02:36 AM)Ocean Wrote: By the way, the velocity information of the SDF could be recorded in an RT (render texture).

Storing velocity is trivial; that's not the problem. At least in my experience, the naive approach of calculating per-vertex velocities on the mesh and resampling them on a grid doesn't work well, because resampling + linear interpolation degrades them enough to not be usable for CCD, unless your SDF is very high-resolution.