Help: Need some explanation of 6.3 particle properties
#1
Hi.
I found some properties of particles such as startOrientations, restOrientations, previousOrientations and orientationDeltas.
There is a simple explanation of restOrientations in the manual and API documentation, but I could not find explanations for the others. In fact, I still need a little more explanation of restOrientations as well.

Now I am trying to make transforms mimic the behavior of an array of particles, but I am struggling to mimic the particles' rotations (orientations) when the transforms' and particles' initial rotations are not the same.
I think understanding the above properties is the key.

Could you explain what those properties do, and show me a simple example of how to make a transform's rotation mimic a particle's rotation (orientation), please?
#2
(04-12-2021, 03:53 AM)Snail921 Wrote: Hi.
I found some properties of particles such as startOrientations, restOrientations, previousOrientations and orientationDeltas.
There is a simple explanation of restOrientations in the manual and API documentation, but I could not find explanations for the others. In fact, I still need a little more explanation of restOrientations as well.

Most of these deal with Obi's internal physics engine (which uses extended position-based dynamics) and aren't useful unless you're into writing your own constraints or interpolation scheme, which is very advanced stuff. Certainly not needed to mimic particle behavior.

startOrientations are the orientations of particles at the start of the timestep. They're used when interpolating rotations, if your solver has interpolation enabled. The resulting interpolated orientations are written to the renderableOrientations array, which is what should be used for rendering.

restOrientations are used as the "reference" orientation of particles for shape matching constraints, and also used to determine if particles overlap at rest and disable collision between them (just like restPositions).

previousOrientations are the orientations at the end of the previous timestep. These are used together with the current orientations (just "orientations") to calculate angular velocities.

orientationDeltas are adjustments made by constraints. Any adjustments made to particle orientations are accumulated here during each timestep. Once all constraints have accumulated their corrections these are then applied to the current orientations, and the deltas reset to zero.

Positions also have matching property arrays (startPositions, restPositions, previousPositions, positionDeltas). Take a look at how position-based dynamics works and these will start to make sense: https://matthias-research.github.io/page...sedDyn.pdf
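
If it helps, here's a minimal sketch (my own illustration, not Obi's actual source; dt is assumed to be the simulation timestep) of how the previous and current orientations yield an angular velocity:

Code:
using UnityEngine;

public static class OrientationMath
{
    // For small rotations, qDelta = current * Inverse(previous), and the angular
    // velocity is approximately 2 * (qDelta.x, qDelta.y, qDelta.z) / dt.
    public static Vector3 AngularVelocity(Quaternion previous, Quaternion current, float dt)
    {
        var qDelta = current * Quaternion.Inverse(previous);

        // take the shortest arc between the two orientations:
        if (qDelta.w < 0)
        {
            qDelta.x = -qDelta.x; qDelta.y = -qDelta.y;
            qDelta.z = -qDelta.z; qDelta.w = -qDelta.w;
        }

        return new Vector3(qDelta.x, qDelta.y, qDelta.z) * (2.0f / dt);
    }
}

The solver does essentially this with previousOrientations and the current orientations at the end of each step.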


(04-12-2021, 03:53 AM)Snail921 Wrote: Now I am trying to make transforms mimic the behavior of an array of particles, but I am struggling to mimic the particles' rotations (orientations) when the transforms' and particles' initial rotations are not the same.
I think understanding the above properties is the key.

No need to use any of the above arrays, just some basic matrix math: you need to calculate the transform's orientation/position relative to the particle at "bind" time (can be at the start of the game, when you press a button, or any time you want, really), then every frame take the particle's orientation/position, apply this relative orientation/position back, and write the result to your transform (this is exactly the same process used for skinning a mesh to a skeleton). I'm attaching a sample script that does this for softbodies.

Note this will only work if your actor uses oriented particles. Currently the only ones that do are rods, bones and softbodies. Cloth, fluids and ropes do not use particle orientations. In the case of cloth, orientation is determined using the mesh's normals. In the case of ropes, orientation is determined using parallel transport along the rope's path (you need to use ObiPathSmoother to get an orientation, let me know if you need info on this), and in the case of fluids there's just no orientation.


Attached Files
.cs   AttachToSoftbody.cs (Size: 2.79 KB / Downloads: 5)
#3
(06-12-2021, 10:53 AM)josemendez Wrote: Most of these deal with Obi's internal physics engine (which uses extended position-based dynamics) and aren't useful unless you're into writing your own constraints or interpolation scheme, which is very advanced stuff. Certainly not needed to mimic particle behavior. [...]

Thank you for giving the explanation, the resource and the sample code. Now I understand why they are not explained in the user documentation.
I thought startOrientations or restOrientations might store the initial rotations of the vertices of the very first frame, so I was trying to calculate the rotation difference between the current and initial rotations by comparing renderableOrientations with either of them, but I was wrong.
Thank you very much for supplying AttachToSoftbody.cs. The code is very sophisticated and there are many things to learn and study. With my level of math experience, some parts are very hard for me to fully understand, but I think I can start learning from there. Your code will be my heirloom!
#4
(07-12-2021, 08:01 AM)Snail921 Wrote: Thank you very much for supplying AttachToSoftbody.cs. The code is very sophisticated and there are many things to learn and study. With my level of math experience, some parts are very hard for me to fully understand, but I think I can start learning from there. Your code will be my heirloom!

Hi!

The code may look intimidating at first, but what's happening under the hood is not that complex. I'll try to explain:

Imagine you had two objects, A and B.

A is moving around, and you want to attach B to it. You could simply copy A's position to B and you'd be done; however, this would put B right on top of A.

What if you want to preserve B's initial position relative to A? You can find the offset from A's position at the time of attaching them, and then place B at A's position plus the offset:

upon attaching:
Code:
offset = B.position - A.position;

every frame:
Code:
B.position = A.position + offset;

Once you add rotations and vector spaces, things start to look a bit more complicated, but the core concept is the same.
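
To make that concrete before adding Obi into the mix, here's a minimal self-contained sketch using two plain Transforms (names are just for illustration; the script lives on B and follows A):

Code:
using UnityEngine;

public class FollowWithOffset : MonoBehaviour
{
    public Transform A;    // the object we attach to.

    Vector3 posOffset;     // B's position relative to A, in A's local space.
    Quaternion rotOffset;  // B's rotation relative to A.

    void Start()
    {
        // "bind" time: store B's pose relative to A.
        posOffset = Quaternion.Inverse(A.rotation) * (transform.position - A.position);
        rotOffset = Quaternion.Inverse(A.rotation) * transform.rotation;
    }

    void LateUpdate()
    {
        // every frame: apply the stored offsets back.
        transform.position = A.position + A.rotation * posOffset;
        transform.rotation = A.rotation * rotOffset;
    }
}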

Now A is a particle, and B a gameObject. Let's attach them and calculate the "offset": Obi's particle data is expressed in the solver's local space, and gameObject data is expressed in either world space or its parent's local space (if it has one). In order to work with both, we need to express them in the same space: we can either convert the gameObject's position/rotation to solver space, or convert the particle data to world space. We'll choose the former (move the gameObject data to solver space):

Code:
var objectPosSS = solver.transform.InverseTransformPoint(transform.position);
var objectRotSS = Quaternion.Inverse(solver.transform.rotation) * transform.rotation;

This gets us the gameObject's position and rotation expressed in solver space (that's what the SS at the end of the variable names stands for).

We now iterate through all particles in the softbody and find the one closest to the gameObject. To do that, we get each particle's position (already in solver space) and calculate its distance to objectPosSS:

Code:
// get particle position and orientation:
var particlePosSS = solver.renderablePositions[softbody.solverIndices[i]];
var particleRotSS = solver.renderableOrientations[softbody.solverIndices[i]];

// calculate vector from object to particle:
var particleToObject = objectPosSS - (Vector3)particlePosSS;

Every time we find a particle that's closer than the closest one so far, we update the "offset", both for position and rotation. Rotation complicates things a bit, since rotating A (the particle) will not only rotate B, it will also translate it. The simplest way to deal with this is to express the "offset" in the particle's local space; this way, rotating the particle does not affect the offset value we store:

Code:
// get a matrix that transforms from the solver's local space to the particle's local space:
var solverToParticleMatrix = Matrix4x4.TRS(particlePosSS, particleRotSS, Vector3.one).inverse;

// transform the position offset (particleToObject) from solver space to particle space:
restPos = solverToParticleMatrix.MultiplyVector(particleToObject);

// get the rotation offset: quaternion multiplication works as "addition", and multiplying by
// solverToParticleMatrix.rotation is like subtracting the particle's rotation, so this is the
// rotational equivalent of objectPosSS - particlePosSS:
restRot = solverToParticleMatrix.rotation * objectRotSS;

These three lines are probably the most complicated ones to understand, as they require dealing with both matrices and quaternions.
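
Putting the search together, the whole loop looks roughly like this (a sketch reusing the variable names above, and assuming solverIndices is the actor's array of particle indices):

Code:
float closestDistance = float.MaxValue;
int closestParticleIndex = -1;

for (int i = 0; i < softbody.solverIndices.Length; ++i)
{
    var particlePosSS = (Vector3)solver.renderablePositions[softbody.solverIndices[i]];

    float distance = Vector3.Distance(objectPosSS, particlePosSS);
    if (distance < closestDistance)
    {
        closestDistance = distance;
        closestParticleIndex = i;
        // recompute restPos / restRot for this new closest particle, as above.
    }
}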

At this point, we have B's positional and rotational offset from A (the closest particle). All that's left to do is move B at the end of every frame. To do that, we just need to convert the stored offsets back to world space, so we build the matrix that converts from particle space to world space (particle space to solver space, then solver space to world space):

Code:
var particlePosSS = (Vector3)softbody.solver.renderablePositions[softbody.solverIndices[closestParticleIndex]];
var particleRotSS = softbody.solver.renderableOrientations[softbody.solverIndices[closestParticleIndex]];

// particle space -> solver space -> world space:
var particleToWorldMatrix = softbody.solver.transform.localToWorldMatrix * Matrix4x4.TRS(particlePosSS, particleRotSS, Vector3.one);

And then use it to convert the offsets we stored to world space and apply them to the object:

Code:
transform.position = particleToWorldMatrix.MultiplyPoint3x4(restPos);
transform.rotation = particleToWorldMatrix.rotation * restRot;

Hope it all made some sense! As you get more experienced with vector spaces and matrix math it will become much simpler. One thing that helped me a lot with matrices and quaternions was finding parallels between their math and regular addition/subtraction: quaternion multiplication and matrix multiplication behave like regular addition in a lot of ways, and inverting them is like making them "negative". So for instance, to subtract two rotations in quaternion form you do:

Code:
difference = newRotation * Quaternion.Inverse(oldRotation);

This is like newRotation + (-oldRotation) = newRotation - oldRotation.
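
And "adding" the difference back on top of the old rotation recovers the new one:

Code:
// like oldRotation + (newRotation - oldRotation) = newRotation:
Quaternion recovered = difference * oldRotation;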

Then you can just forget that you're dealing with "weird guys" and think in terms of adding and subtracting numbers.

Let me know if I can be of further help :)
#5
Thank you so much for the detailed explanation.

By reading the explanation repeatedly, I was able to narrow down which parts of my understanding were particularly ambiguous. In my case, it seems I was just barely able to understand quaternions, but I needed to re-study matrix transformations.

Fortunately, I've found some useful teaching material about matrix transformations and the related APIs on the web, so I think I now have a good understanding of how this whole code works. I cannot find enough words of gratitude!