How to use with OVR camera
#1
Hi there,
I'm just getting started with Fluid and I have a few questions:

1. If there's a special relationship between rendering and the camera in a scene, can you help me understand how it works? I've placed an OVRCameraRig (using Oculus) in the Obi Sample Scene > FluidMill and no particles are rendered. If I disable the OVRCameraRig and use the default camera, I can see the particles.

2. Can someone provide a more detailed explanation of "Simulate in Local Space" and what the alternatives are?

3. If I place my OVRCameraRig in a brand new scene, I can see particle imposters.

Hopefully these are simple noob questions. I'm looking for any tips/tricks that might help me work with Obi Fluid in VR.

-Mike
Reply
#2
I don’t own Obi Fluid yet, but the FAQ says:

Does it support VR (Virtual Reality)?

Cloth and Rope will support VR. For Obi Fluid, you will have to use separate cameras for each eye since the renderer does not support single-pass stereo rendering.

I don’t know how to help beyond that, it's just something I remember reading. Try going to Player Settings and setting Stereo Rendering Method to Multi Pass (the slower option).
Reply
#3
I missed that in the FAQ section. In Player Settings I only see an option called Stereo Rendering Method* with the options being: Multi Pass and Single Pass.

I can see that I might have opened a can of worms with this VR issue when trying to use the OVRCameraRig with Fluid. Your suggested solution helped me refine my Google search, and I now see that most of the discussions about solutions deal with things I'm not skilled enough to understand at this point.

Perhaps someone else can provide a very specific solution or at least tell us if it's not at all possible to use VR with Fluid.

Thanks for the useful info niZmo!
Reply
#4
(09-09-2017, 02:47 PM)DrSoos Wrote: I missed that in the FAQ section. In Player Settings I only see an option called Stereo Rendering Method* with the options being: Multi Pass and Single Pass.

I can see that I might have opened a can of worms with this VR issue when trying to use the OVRCameraRig with Fluid. Your suggested solution helped me refine my Google search, and I now see that most of the discussions about solutions deal with things I'm not skilled enough to understand at this point.

Perhaps someone else can provide a very specific solution or at least tell us if it's not at all possible to use VR with Fluid.

Thanks for the useful info niZmo!

Hi!

Fluid is rendered as a screen-space post process, which is much faster than calculating an actual 3D surface mesh. However, this requires a separate render pass for each eye.
You need to set up two cameras (one for each eye), each one with its own ObiFluidRenderer and a different stereoTargetEye mask (so that one renders the right eye, and another the left eye).
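The two-camera setup described above could be sketched roughly like this. This is only an illustrative snippet, not an official Obi example: the `StereoFluidSetup` class name is made up, the `Obi` namespace is assumed, and `ObiFluidRenderer` is taken on faith from the post; `Camera.stereoTargetEye` and `StereoTargetEyeMask` are standard Unity API.

```csharp
using UnityEngine;
using Obi; // assumed namespace for ObiFluidRenderer

// Hypothetical setup script: assigns each of two existing cameras to a
// single eye, so each can run its own fluid rendering pass.
public class StereoFluidSetup : MonoBehaviour
{
    public Camera leftCamera;
    public Camera rightCamera;

    void Start()
    {
        leftCamera.stereoTargetEye = StereoTargetEyeMask.Left;
        rightCamera.stereoTargetEye = StereoTargetEyeMask.Right;

        // Each camera also needs its own ObiFluidRenderer component,
        // added in the editor or (presumably) at runtime, e.g.:
        // leftCamera.gameObject.AddComponent<ObiFluidRenderer>();
        // rightCamera.gameObject.AddComponent<ObiFluidRenderer>();
    }
}
```

In practice you would probably add the ObiFluidRenderer components in the editor and just verify the eye masks here.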
Reply
#5
(08-09-2017, 06:10 PM)DrSoos Wrote: Hi there,
I'm just getting started with Fluid and I have a few questions:

1. If there's a special relationship between rendering and the camera in a scene, can you help me understand how it works? I've placed an OVRCameraRig (using Oculus) in the Obi Sample Scene > FluidMill and no particles are rendered. If I disable the OVRCameraRig and use the default camera, I can see the particles.

2. Can someone provide a more detailed explanation of "Simulate in Local Space" and what the alternatives are?

3. If I place my OVRCameraRig in a brand new scene, I can see particle imposters.

Hopefully these are simple noob questions. I'm looking for any tips/tricks that might help me work with Obi Fluid in VR.

-Mike

Regarding your question about local-space simulation:

When dealing with graphics and/or physics, there are usually 2 different vector spaces your data can be expressed in: world space and local (aka "model" or "object") space. The difference lies in the reference frame used. In world space, the reference frame is the center of your scene. In local space, the reference frame is the object's transform (in Obi's case, the solver itself). 

So simulating fluid in the solver's local space means that if you scale, translate or rotate the solver around, the fluid will be scaled, translated or rotated along with it because the simulation uses the solver transform as its reference frame. If you don't know what to use this for, chances are you should just stick with world space (the default).
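The world/local distinction above is easy to see with Unity's own Transform API. A minimal sketch (the `SpaceDemo` class name is hypothetical, not from Obi; `TransformPoint` and `InverseTransformPoint` are standard Unity methods):

```csharp
using UnityEngine;

// Minimal illustration of world space vs. local space: the same point,
// expressed relative to the scene origin and relative to this object's
// transform. "Simulate in Local Space" does the analogous thing with
// the solver's transform as the reference frame.
public class SpaceDemo : MonoBehaviour
{
    void Start()
    {
        Vector3 worldPoint = new Vector3(1f, 2f, 3f);

        // Express the world-space point relative to this transform
        // (accounts for its position, rotation and scale):
        Vector3 localPoint = transform.InverseTransformPoint(worldPoint);

        // Converting back recovers the original world-space point:
        Vector3 roundTrip = transform.TransformPoint(localPoint);

        Debug.Log(localPoint);
        Debug.Log(roundTrip); // same as worldPoint
    }
}
```

If you move or rotate the object, `localPoint` changes while `worldPoint` does not, which is exactly why locally-simulated fluid follows the solver around.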

Vector spaces are a pretty critical concept to understand. If you're starting out I can provide you some reading material, such as:
http://www.codinglabs.net/article_world_...atrix.aspx
Reply
#6
(10-09-2017, 07:52 PM)josemendez Wrote: Regarding your question about local-space simulation:

When dealing with graphics and/or physics, there are usually 2 different vector spaces your data can be expressed in: world space and local (aka "model" or "object") space. The difference lies in the reference frame used. In world space, the reference frame is the center of your scene. In local space, the reference frame is the object's transform (in Obi's case, the solver itself).

So simulating fluid in the solver's local space means that if you scale, translate or rotate the solver around, the fluid will be scaled, translated or rotated along with it, because the simulation uses the solver transform as its reference frame. If you don't know what to use this for, chances are you should just stick with world space (the default).

Vector spaces are a pretty critical concept to understand. If you're starting out I can provide you some reading material, such as:
http://www.codinglabs.net/article_world_...atrix.aspx

This is very useful information. Thank you for the detailed explanation and the link, josemendez. I will attempt to digest that article so I can gain a better understanding of vector spaces.
Reply