Help: Scripting rod forces
#11
(Yesterday, 12:12 PM)josemendez Wrote: Yes, there are solver constraint batches and actor constraint batches. These are completely different. When you create an actor, its constraints are grouped into batches. When you add an actor to a solver (so it becomes part of the simulation), a copy of each constraint batch in the actor is created and merged with *all other existing constraints in the solver*, including those belonging to other actors, to maximize parallelism.

As a result:
- Actor batches contain constraint data at rest for a specific actor.
- Solver batches contain the current constraint data for all constraints in the simulation, at any point during the simulation.
- The number of solver batches and the number of constraints in each solver batch differ from their actor counterparts, and they are accessed differently.

Oh ok, I checked the docs before, but I understood it to work the same way as positions, i.e. that only the solver holds the data, so I assumed GetConstraint on the actor is just a proxy to the one managed by the solver. So to sum up: the solver has the real data we operate on, while the actor has the "original copy" of the initial rest state.
I assume that for indices (to get particle positions) I could use either of them (batch.particleIndices or solverBatch.particleIndices), but when it comes to lambda I need the one from the solver, as it changes over time - roughly like the sketch below.
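Just to double-check my understanding in code, this is what I had in mind (only a rough sketch; GetConstraintsByType, batches, particleIndices and lambdas are the member names mentioned here and in the docs, but they may differ between Obi versions):

Code:
using UnityEngine;
using Obi;

public class RodConstraintPeek : MonoBehaviour
{
    public ObiRod rod;

    void LateUpdate()
    {
        if (rod == null || rod.solver == null)
            return;

        var solver = rod.solver;

        // Actor side: constraint data at rest, batched for this actor only.
        var actorConstraints = rod.GetConstraintsByType(Oni.ConstraintType.StretchShear)
                               as ObiConstraints<ObiStretchShearConstraintsBatch>;

        // Solver side: current data, merged across all actors in the simulation.
        var solverConstraints = solver.GetConstraintsByType(Oni.ConstraintType.StretchShear)
                                as ObiConstraints<ObiStretchShearConstraintsBatch>;

        if (actorConstraints == null || solverConstraints == null)
            return;

        // Batch counts (and per-batch constraint counts) generally differ between the two.
        Debug.Log("actor batches: " + actorConstraints.batches.Count +
                  ", solver batches: " + solverConstraints.batches.Count);

        // Live data is read from the solver batch: indices are solver-space,
        // and lambdas are the accumulated multipliers that change every substep.
        var solverBatch = solverConstraints.batches[0];      // same as GetBatch(0)
        int particle = solverBatch.particleIndices[0];
        float lambda = solverBatch.lambdas[0];
        Debug.Log("particle " + particle + " at " + solver.positions[particle] + ", lambda " + lambda);
    }
}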

Quote:Is there a difference between GetBatch(j) and batches[j]?

I assume both of them are basically the same thing.

(Yesterday, 12:12 PM)josemendez Wrote: Entirely depends on what you're using them for. Just like in Unity you sometimes use deltaTime, but some other times you use fixedDeltaTime.

Hah, I asked a silly question. I came up with it because my logic was that I needed to use the simulation delta (simulationTime), but lambda and that other example confused me, and I assumed I was missing something important. Your answers to all the other bits clarified everything, especially the part about the broken example and the steps.

(Yesterday, 12:12 PM)josemendez Wrote: Lambda values are set to zero at the start of each substep, and accumulated between iterations. However, while time advances from one substep to the next, it does not advance between iterations: iterations simply refine the solution for the current substep.
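If I got that right, the loop is structured roughly like this (just my own sketch of the idea, not Obi's actual code):

Code:
// Structural sketch only: lambdas are cleared once per substep and then accumulated
// by iterations, while time only advances from one substep to the next.
public class XpbdLoopSketch
{
    float[] lambdas = new float[16];   // one accumulated multiplier per constraint

    public void Step(float deltaTime, int substeps, int iterations)
    {
        float h = deltaTime / substeps;                      // substep length

        for (int s = 0; s < substeps; ++s)
        {
            Integrate(h);                                    // time advances here, once per substep

            System.Array.Clear(lambdas, 0, lambdas.Length);  // lambdas start at zero each substep

            for (int it = 0; it < iterations; ++it)
                SolveConstraints(h);                         // each iteration refines the same substep,
                                                             // adding its delta-lambda to the totals
        }
    }

    void Integrate(float h) { /* predict positions from velocities */ }
    void SolveConstraints(float h) { /* project constraints, accumulate into lambdas */ }
}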

What is the better way to get a rigid rod (as close to physically correct as possible) in this case?
The documentation says:
"The quality improvement you get by reducing the timestep duration is greater than you'd get by using more iterations, and they both have similar performance cost."

I mean, the number of particles affects the end result: for example, if I twist/push one particle, propagating that force to other parts of the rod (or any other actor) requires more time and iterations. What is better: 5 substeps and 10 iterations, or 1 substep and 50 iterations? I guess the first one should be better, but at the same time it increases the cost of all other aspects of the simulation. However, let's assume there is only one actor and I want to get as good a result as possible.

(Yesterday, 12:12 PM)josemendez Wrote: I'd recommend reading the "how it works" section of the manual, as it goes in-depth about timesteps, substeps and iterations:
https://obi.virtualmethodstudio.com/manu...gence.html

These articles are also useful for reference:
https://matthias-research.github.io/page...s/XPBD.pdf
https://mmacklin.com/smallsteps.pdf

I already checked the whole manual, but those two other links are a nice addition, thanks!
#12
(Yesterday, 02:11 PM)Qriva0 Wrote: What is better: 5 substeps and 10 iterations, or 1 substep and 50 iterations? I guess the first one should be better,

The best possible configuration in terms of convergence speed/quality is N substeps, and only 1 iteration. This is what the article I shared demonstrates:

Quote:We make the surprising observation that performing a single large time step with n constraint solver iterations is less effective than computing n smaller time steps, each with a single constraint solver iteration.

In other words: it's costlier to correct the constraint error caused by using large time steps (which is what iterations do) than to keep the error from growing in the first place by using a smaller timestep (more substeps). To correct the error caused by using only 1 substep instead of, say, 4 substeps, you need a lot more than 4 iterations.
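For reference, this is the per-iteration update from the XPBD paper linked earlier in the thread (my notation; h is the substep length and α the physical compliance, so this isn't anything Obi-specific):

Code:
\tilde{\alpha}_j = \frac{\alpha_j}{h^2},
\qquad
\Delta\lambda_j = \frac{-C_j(\mathbf{x}) - \tilde{\alpha}_j \lambda_j}{\nabla C_j \, M^{-1} \nabla C_j^{\mathsf T} + \tilde{\alpha}_j}

Each iteration adds this Δλ to the accumulated λ. More substeps shrink h, so there's less constraint error for each of these updates to correct, while the α/h² scaling keeps the material stiffness itself independent of the timestep.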

(Yesterday, 02:11 PM)Qriva0 Wrote: but at the same time it increases the cost of all other aspects of the simulation. However, let's assume there is only one actor and I want to get as good a result as possible.

"All other aspects" is just integration - moving state forward in time, which is dirt cheap. Because of this the cost of substeps = N, iterations = 1 and substeps = 1, iterations = N is basically the same. Collision detection is only performed once per full step, and the resulting contact constraints reused for all substeps/iterations.

For this reason, the manual recommends starting by setting all iteration counts to 1 and adjusting substeps first, along the lines of the sketch below.
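A minimal sketch of that starting point (assuming the substeps field on ObiFixedUpdater; iteration counts are per constraint type on the ObiSolver and easiest to set in the inspector - double-check the names against your Obi version):

Code:
using UnityEngine;
using Obi;

// Sketch: leave all constraint iteration counts at 1 on the ObiSolver, and tune
// quality by raising the number of substeps on the updater instead.
public class SubstepTuning : MonoBehaviour
{
    public ObiFixedUpdater updater;

    [Range(1, 8)]
    public int substeps = 4;

    void OnValidate()
    {
        if (updater != null)
            updater.substeps = substeps; // each FixedUpdate is split into this many substeps
    }
}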

Also note that using more substeps dissipates a lot less energy than using more iterations and results in livelier simulations, so which combination of substeps/iterations to use can also be a bit of a stylistic choice.

kind regards,
#13
Thank you very much! 
It really helps. I am going to try different configurations to see what fits best.