Help  Normal positions vs Renderable positions
#1
Hello.

We're using Obi Cloth in our VR app. There are two pieces of cloth that the user must grab and drop onto the floor. So we must check whether the controller is within range of a particle (to grab the cloth), and whether the majority of the particles are on the floor.

Previously, we were checking the position of the particles by doing actor.GetParticlePositions().

If I understand correctly, that returns renderable positions, and it only works appropriately if you call RequireRenderablePositions() on the actor's solver? Otherwise the renderable positions don't update when the cloth moves. Then you call RelinquishRenderablePositions() when you're done, the main reason being that renderable positions are somewhat expensive to maintain?
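For context, here's roughly how our current grab check looks using renderable positions. This is a sketch rather than our exact code: the solver property name, the class name, and the grabRadius value are our own, and we may well be misusing the API (which is part of why I'm asking).

```csharp
using UnityEngine;
using Obi;

// Sketch of our renderable-positions grab check (not verbatim from our project).
public class ClothGrabCheck : MonoBehaviour
{
    public ObiActor actor;          // one of the two cloth pieces
    public Transform controller;    // the VR controller's transform
    public float grabRadius = 0.1f; // our own tuning value

    void OnEnable()
    {
        // Without this, GetParticlePositions() seems to return stale data
        // that doesn't follow the cloth as it moves.
        actor.Solver.RequireRenderablePositions();
    }

    void OnDisable()
    {
        // Release them when we no longer need the positions,
        // since keeping them around has a cost.
        actor.Solver.RelinquishRenderablePositions();
    }

    public bool ControllerNearCloth()
    {
        Vector3[] positions = actor.GetParticlePositions();
        for (int i = 0; i < positions.Length; ++i)
            if (Vector3.Distance(positions[i], controller.position) <= grabRadius)
                return true;
        return false;
    }
}
```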

I also discovered an alternative way to do this by calling actor.PullDataFromSolver(ParticleData.POSITIONS); after that, the actor has a Vector3 positions array for all of its particles. The documentation referred to these as "normal positions".
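And this is the alternative we're experimenting with for the floor check. Again a sketch: floorHeight, floorTolerance, and the "more than half" threshold are our own choices, and we're guessing the pull needs to happen on every check, which is exactly what question 2 below is about.

```csharp
using UnityEngine;
using Obi;

// Sketch of our PullDataFromSolver-based floor check (not verbatim).
public class ClothFloorCheck : MonoBehaviour
{
    public ObiActor actor;
    public float floorHeight = 0f;       // our floor's world-space Y
    public float floorTolerance = 0.05f; // how close counts as "on the floor"

    public bool IsMostlyOnFloor()
    {
        // Copy the current particle positions from the solver
        // into the actor's own positions array.
        actor.PullDataFromSolver(ParticleData.POSITIONS);

        int onFloor = 0;
        for (int i = 0; i < actor.positions.Length; ++i)
            if (actor.positions[i].y <= floorHeight + floorTolerance)
                onFloor++;

        // "Majority" = more than half the particles are within
        // tolerance of the floor plane.
        return onFloor > actor.positions.Length / 2;
    }
}
```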

Three questions:
  1. Is that more or less expensive than using renderable positions?
  2. If it's less expensive, how often do you have to call PullDataFromSolver()? Every frame?
  3. And am I on the right track here, or is there a better way we should be checking the position of the particles?

Thank you.


Messages In This Thread
Normal positions vs Renderable positions - by tsantoro - 25-04-2019, 10:49 PM