Accessing SkinnedMeshRenderer data
#1
Hi!

I'm trying to test the Obi Softbody add-on, and I need to retrieve the mesh data from the SkinnedMeshRenderer.

In my current implementation (without the softbody), I get GraphicsBuffers for the position, normal and tangent streams of the SkinnedMeshRenderer, then read the attributes using their strides and offsets.
However, when I skin the SMR with the Softbody Skinner, I only get the initial vertex data, as it appeared in the viewport before the simulation started.
As far as I understand, the Softbody Skinner should update the skinned mesh renderer's buffers after each simulation tick, so this should work, but I'm obviously missing something.

Code:
TryGetComponent<SkinnedMeshRenderer>(out var smr);
Mesh skinMesh = smr.sharedMesh;

// Allow raw access to the skinned mesh's GPU vertex buffer:
smr.vertexBufferTarget |= GraphicsBuffer.Target.Raw;

// Stream index of each attribute (attributes may share a stream):
int positionStream = skinMesh.GetVertexAttributeStream(VertexAttribute.Position);
int normalStream = skinMesh.GetVertexAttributeStream(VertexAttribute.Normal);
int tangentStream = skinMesh.GetVertexAttributeStream(VertexAttribute.Tangent);

// SkinnedMeshRenderer.GetVertexBuffer() takes no stream argument: it
// returns the renderer's post-skinning vertex buffer.
using GraphicsBuffer skinVertexBuffer = smr.GetVertexBuffer();

int[] skinPositionStrideOffset =
{
    skinMesh.GetVertexBufferStride(positionStream),
    skinMesh.GetVertexAttributeOffset(VertexAttribute.Position),
};
int[] skinNormalStrideOffset =
{
    skinMesh.GetVertexBufferStride(normalStream),
    skinMesh.GetVertexAttributeOffset(VertexAttribute.Normal),
};
int[] skinTangentStrideOffset =
{
    skinMesh.GetVertexBufferStride(tangentStream),
    skinMesh.GetVertexAttributeOffset(VertexAttribute.Tangent),
};
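For debugging whether the buffer contents actually change, one option is reading the raw bytes back to the CPU. Below is a minimal sketch, assuming Unity 2021.2+ (where `SkinnedMeshRenderer.GetVertexBuffer` and `vertexBufferTarget` exist); `SkinnedBufferDebug` is a hypothetical component name:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical debug component: reads the skinned vertex buffer back to the
// CPU each frame via an async readback (avoids stalling the GPU).
public class SkinnedBufferDebug : MonoBehaviour
{
    void LateUpdate()
    {
        if (!TryGetComponent<SkinnedMeshRenderer>(out var smr)) return;

        // Raw target access is required before GetVertexBuffer():
        smr.vertexBufferTarget |= GraphicsBuffer.Target.Raw;

        using GraphicsBuffer buffer = smr.GetVertexBuffer();
        if (buffer == null) return;

        AsyncGPUReadback.Request(buffer, request =>
        {
            if (request.hasError) return;
            var bytes = request.GetData<byte>();
            Debug.Log($"Read {bytes.Length} bytes of skinned vertex data.");
        });
    }
}
```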


Thank you very much for your time!
#2
(02-02-2026, 02:37 PM)Eritar Wrote: As far as I understand, Softbody Skinner should update skinned mesh renderer buffers after a simulation tick, and it should work, but I'm obviously missing something.

Hi!

Obi doesn't even use the skinned mesh renderer component. Updating the skinned mesh renderer's GPU buffers would require processing each mesh/object separately (one scheduled job or one compute dispatch per object) which severely limits parallelism and entirely defeats the purpose of a multithreaded simulation.

Instead, for performance reasons it performs mesh batching followed by its own skinning and rendering: we batch together *all softbody meshes* simulated by each solver into a few larger data streams in which all vertex data can be processed in parallel. Batching is performed using material and render parameters (that is, all softbody meshes using the same material/render params are merged into a single batch). We then do our own skinning on the CPU (or GPU), including deformation due to simulation. Finally, we issue a single draw call per batch.
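The grouping step described above can be sketched in plain C#. This is illustrative only — `BatchKey` and `SoftbodyMesh` are hypothetical stand-ins, not Obi's actual types:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical key: meshes sharing material/render params end up together.
record BatchKey(int MaterialID, int Layer);
record SoftbodyMesh(BatchKey Key, int VertexCount);

static class Batcher
{
    // Merge all meshes with the same material/render params into one batch,
    // so their vertex data forms a single contiguous stream.
    public static Dictionary<BatchKey, List<SoftbodyMesh>> Batch(IEnumerable<SoftbodyMesh> meshes)
        => meshes.GroupBy(m => m.Key)
                 .ToDictionary(g => g.Key, g => g.ToList());
}
```

The point of the grouping is that each resulting batch can be skinned by one job/dispatch and drawn with one draw call, regardless of how many softbody objects it contains.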

If you want to do your own modification of mesh vertices after simulation, look for BurstSoftbodyRenderSystem.cs or ComputeSoftbodyRenderSystem.cs, depending on which simulation backend you're using. These use jobs or compute shaders (respectively) to process batch vertex data and render them.

The main takeaway is that there's no "one mesh per skinned mesh renderer" concept here: we deal with batches instead, each of which may comprise many individual softbody objects.

Kind regards,