06-12-2021, 10:53 AM
(This post was last modified: 06-12-2021, 11:37 AM by josemendez.)
(04-12-2021, 03:53 AM)Snail921 Wrote: Hi.
I found some properties of particles such as startOrientations, restOrientations, previousOrientations and orientationDeltas.
There is a brief explanation of restOrientations in the manual and API documentation, but I could not find explanations for the others. Indeed, I still need a little more explanation of restOrientations as well.
Most of these deal with Obi's internal physics engine (which uses extended position-based dynamics) and aren't useful unless you're into writing your own constraints or interpolation scheme, which is very advanced stuff. Certainly not needed to mimic particle behavior.
startOrientations are the orientations of particles at the start of the timestep. These are used when interpolating rotations, if your solver has interpolation enabled. The resulting interpolated orientations are written to the renderableOrientations array, which should be used for rendering.
restOrientations are used as the "reference" orientation of particles for shape matching constraints, and also used to determine if particles overlap at rest and disable collision between them (just like restPositions).
previousOrientations are the orientations at the end of the previous timestep. These are used together with the current orientations (just "orientations") to calculate angular velocities.
orientationDeltas are adjustments made by constraints. Any adjustments made to particle orientations are accumulated here during each timestep. Once all constraints have accumulated their corrections, these are applied to the current orientations and the deltas are reset to zero.
Positions also have matching property arrays (startPositions, restPositions, previousPositions, positionDeltas). Take a look at how position-based dynamics works, and these will start to make sense: https://matthias-research.github.io/page...sedDyn.pdf
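To make the relationship between these arrays a bit more concrete, here's a rough C# sketch of one position-based timestep for orientations. This is not Obi's actual source, just the general pattern the array names suggest; the constraint projection itself is omitted, and the class/method names are made up for illustration.

[code]
using UnityEngine;

// Rough sketch of one position-based timestep for orientations. Not Obi's actual code:
// just the general XPBD pattern implied by the array names above. Constraint projection
// (which accumulates the orientationDeltas) is omitted.
public class OrientationStepSketch
{
    public Quaternion[] orientations;          // current orientations
    public Quaternion[] previousOrientations;  // orientations at the end of the previous step
    public Quaternion[] startOrientations;     // orientations at the start of the step (used for render interpolation)
    public Vector3[] angularVelocities;

    public void BeginStep(float dt)
    {
        for (int i = 0; i < orientations.Length; ++i)
        {
            startOrientations[i] = orientations[i];
            previousOrientations[i] = orientations[i];

            // Predict the new orientation by integrating angular velocity: q' = q + 0.5 * dt * (w * q)
            Vector3 w = angularVelocities[i];
            Quaternion q = orientations[i];
            Quaternion wq = new Quaternion(w.x, w.y, w.z, 0) * q;
            orientations[i] = Normalize(new Quaternion(q.x + 0.5f * dt * wq.x,
                                                       q.y + 0.5f * dt * wq.y,
                                                       q.z + 0.5f * dt * wq.z,
                                                       q.w + 0.5f * dt * wq.w));
        }
        // ...constraints then project the predicted orientations, accumulating corrections
        // (the orientationDeltas), which are applied and reset to zero...
    }

    public void EndStep(float dt)
    {
        // Angular velocity is derived from the change in orientation over the step.
        for (int i = 0; i < orientations.Length; ++i)
        {
            Quaternion delta = orientations[i] * Quaternion.Inverse(previousOrientations[i]);
            angularVelocities[i] = new Vector3(delta.x, delta.y, delta.z) * (2.0f / dt);
        }
    }

    static Quaternion Normalize(Quaternion q)
    {
        float m = Mathf.Sqrt(q.x * q.x + q.y * q.y + q.z * q.z + q.w * q.w);
        return new Quaternion(q.x / m, q.y / m, q.z / m, q.w / m);
    }
}
[/code]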
(04-12-2021, 03:53 AM)Snail921 Wrote: Now I am trying to make transforms mimic an array of particles' behavior, but I am struggling to mimic the particles' rotations (orientations) when the transforms' and particles' initial rotations are not the same.
I think understanding the above properties is the key.
No need to use any of the above arrays, just some basic matrix math. Calculate the transform's orientation/position relative to the particle at "bind" time (this can be at the start of the game, when you press a button, or any time you want, really). Then, every frame, take the particle's current orientation/position, apply the stored relative orientation/position on top of it, and write the result to your transform. This is exactly the same process used for skinning a mesh to a skeleton. I'm attaching a sample script that does this for softbodies.
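In case it helps, here's a minimal sketch of that "bind once, reapply every frame" idea. This is not the attached sample script, just the same math in isolation; how you obtain the particle's world-space position/orientation (e.g. from the solver's particle arrays) is up to you and not shown here.

[code]
using UnityEngine;

// Minimal sketch: store this transform's pose relative to a particle at bind time,
// then reapply that relative pose on top of the particle's current pose every frame.
public class FollowParticle : MonoBehaviour
{
    Quaternion bindRotation; // this transform's rotation expressed in the particle's local frame
    Vector3 bindPosition;    // this transform's position expressed in the particle's local frame
    bool bound = false;

    // Call once, at whatever moment you consider the "bind pose".
    public void Bind(Vector3 particlePos, Quaternion particleRot)
    {
        Quaternion inv = Quaternion.Inverse(particleRot);
        bindRotation = inv * transform.rotation;
        bindPosition = inv * (transform.position - particlePos);
        bound = true;
    }

    // Call every frame, passing the particle's current world-space pose.
    public void Follow(Vector3 particlePos, Quaternion particleRot)
    {
        if (!bound) return;
        transform.rotation = particleRot * bindRotation;
        transform.position = particlePos + particleRot * bindPosition;
    }
}
[/code]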
Note this will only work if your actor uses oriented particles. Currently the only ones that do are rods, bones and softbodies. Cloth, fluids and ropes do not use particle orientations: in the case of cloth, orientation is determined using the mesh's normals; in the case of ropes, orientation is determined using parallel transport along the rope's path (you need to use ObiPathSmoother to get an orientation, let me know if you need info on this); and in the case of fluids there's just no orientation.