Yesterday, 02:11 PM
(Yesterday, 12:12 PM)josemendez Wrote: Yes, there's solver and actor constraint batches. These are completely different. When you create an actor, its constraints are grouped into batches. When you add an actor to a solver (so it becomes part of the simulation), a copy of each constraint batch in the actor is created and merged with *all other existing constraints in the solver*, including those belonging to other actors to maximize parallelism.
As a result:
- Actor batches contain constraint data at rest for a specific actor.
- Solver batches contain current constraint data for all constraints in the simulation, at any point during simulation.
- The number of solver batches and the number of constraints in each solver batch are different from their actor counterparts, and they are accessed differently.
Oh ok, I checked the docs before, but I understood it to work the same way as positions, I mean that only the solver holds the data, so I assumed GetConstraint from the actor was just a proxy to the one managed by the solver. So to sum up: the solver has the real data we operate on, while the actor has an "original copy" of the initial rest state.
I assume that in the case of indices (to get a particle position) I could use either of them (batch.particleIndices or solverBatch.particleIndices), but when it comes to lambdas I need the one from the solver, as it changes over time.
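Just to make sure I understood, this is roughly what I had in mind (only a sketch: I'm assuming names like GetConstraintsByType, activeConstraintCount, solver.positions and a lambdas list on the solver batch, based on the docs and this thread, so the exact members may differ between Obi versions):

Code:
using UnityEngine;
using Obi;

public class DistanceConstraintProbe : MonoBehaviour
{
    public ObiSolver solver;

    void LateUpdate()
    {
        // Solver-side constraints: the "live" data the simulation actually operates on.
        var solverConstraints = solver.GetConstraintsByType(Oni.ConstraintType.Distance)
                                as ObiConstraints<ObiDistanceConstraintsBatch>;
        if (solverConstraints == null) return;

        for (int j = 0; j < solverConstraints.batches.Count; ++j)
        {
            // batches[j] and GetBatch(j) should hand back the same batch object.
            var solverBatch = solverConstraints.batches[j];

            for (int i = 0; i < solverBatch.activeConstraintCount; ++i)
            {
                // Indices in a solver batch are solver particle indices, so they can be
                // used directly to look up current positions:
                int p0 = solverBatch.particleIndices[i * 2];
                int p1 = solverBatch.particleIndices[i * 2 + 1];
                Vector3 pos0 = solver.positions[p0];
                Vector3 pos1 = solver.positions[p1];

                // Lambdas accumulate during the current substep, so the up-to-date
                // values live in the solver batch, not in the actor's rest-state batch:
                float lambda = solverBatch.lambdas[i];

                // ...pos0, pos1 and lambda can be used here (e.g. for debugging/gizmos).
            }
        }
    }
}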
Quote:Is there a difference between GetBatch(j) and batches[j]?
I assume both of them are basically the same thing.
(Yesterday, 12:12 PM)josemendez Wrote: Entirely depends on what you're using them for. Just like in Unity you sometimes use deltaTime, but some other times you use fixedDeltaTime.
Hah, I asked a silly question. I came up with this because my logic was that I need to use the simulation delta (simulationTime), but the lambda and that other example confused me, and I assumed I was missing something important. Your answers to all the other bits clarified everything, especially the part about the broken example and the steps.
(Yesterday, 12:12 PM)josemendez Wrote: Lambda values are set to zero at the start of each substep, and accumulated between iterations. However, while time advances from one substep to the next, it does not advance between iterations: iterations simply refine the solution for the current substep.
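That makes it click. For my own notes, this is the mental model I have now (a self-contained toy example of the XPBD loop from the paper, not Obi's actual code): two particles and one distance constraint, with lambda reset once per substep and accumulated across iterations.

Code:
using UnityEngine;

// Toy XPBD demo: one distance constraint between a fixed and a free particle,
// showing where time advances (substeps) and where lambda accumulates (iterations).
public class XpbdLambdaDemo : MonoBehaviour
{
    Vector3 x0 = Vector3.zero, x1 = Vector3.right;   // current positions
    Vector3 v0 = Vector3.zero, v1 = Vector3.zero;     // velocities
    float invMass0 = 0f, invMass1 = 1f;               // particle 0 is pinned
    float restLength = 1f;
    float compliance = 0f;                            // 0 = fully rigid constraint

    public int substeps = 4;
    public int iterations = 8;

    void FixedUpdate()
    {
        float h = Time.fixedDeltaTime / substeps;     // substep duration

        for (int s = 0; s < substeps; ++s)
        {
            // Predict: time advances here, once per substep.
            Vector3 p0 = x0 + v0 * h;
            Vector3 p1 = x1 + (v1 + Physics.gravity * h) * h;

            float lambda = 0f;                        // reset at the start of each substep
            float alphaTilde = compliance / (h * h);

            for (int it = 0; it < iterations; ++it)
            {
                // Iterations only refine the solution for this substep:
                // lambda accumulates, but time does not advance.
                Vector3 d = p1 - p0;
                float C = d.magnitude - restLength;   // constraint value
                Vector3 n = d.normalized;             // constraint gradient direction

                float deltaLambda = (-C - alphaTilde * lambda) /
                                    (invMass0 + invMass1 + alphaTilde);
                lambda += deltaLambda;

                p0 -= n * (deltaLambda * invMass0);
                p1 += n * (deltaLambda * invMass1);
            }

            // Update velocities and advance to the next substep.
            v0 = (p0 - x0) / h; v1 = (p1 - x1) / h;
            x0 = p0; x1 = p1;
        }
    }
}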
What is the better way to get a rigid rod (as close to truly rigid as possible) in this case?
Documentation says:
"The quality improvement you get by reducing the timestep duration is greater than you'd get by using more iterations, and they both have similar performance cost."
I mean, the number of particles affects the end result, so for example if I twist/push one particle, propagating this force to other parts of the rod (or any other actor) requires more time and iterations. What is better: 5 substeps and 10 iterations, or 1 substep and 50 iterations? I guess the first one should be better, but at the same time it increases the cost of all the other aspects of the simulation. However, let's assume there is only one actor and I want to get as good a result as possible.
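For reference, these are the two configurations I'm comparing (a sketch only: I'm assuming the substeps field on ObiFixedUpdater and the per-constraint iteration counts exposed by the solver, so the exact member names may differ in your Obi version; normally I'd just change these in the inspector):

Code:
using UnityEngine;
using Obi;

public class RodSolverSetup : MonoBehaviour
{
    public ObiFixedUpdater updater;
    public ObiSolver solver;

    // Option A: 5 substeps x 10 iterations (smaller effective timestep per substep).
    void ConfigureSubstepHeavy()
    {
        updater.substeps = 5;
        solver.distanceConstraintParameters.iterations = 10;
    }

    // Option B: 1 substep x 50 iterations (same total iteration count, larger timestep).
    void ConfigureIterationHeavy()
    {
        updater.substeps = 1;
        solver.distanceConstraintParameters.iterations = 50;
    }

    // (If these are changed at runtime rather than in the inspector, the solver may
    // need its parameters reapplied before they take effect.)
}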
(Yesterday, 12:12 PM)josemendez Wrote: I'd recommend reading the "how it works" section of the manual, as it goes in-depth about timesteps, substeps and iterations:
https://obi.virtualmethodstudio.com/manu...gence.html
These articles are also useful for reference:
https://matthias-research.github.io/page...s/XPBD.pdf
https://mmacklin.com/smallsteps.pdf
I already checked the whole manual, but those two other links are a nice addition, thanks!