When developing AR applications for Apple devices, there are two cameras to keep in mind. One is the physical camera on the back of the phone. The other is the virtual camera in your Unity scene, which is made to match the position and orientation of the real-world camera.
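In practice the ARKit plugin drives the virtual camera's pose for you, but the idea can be sketched as copying the tracked device pose onto the camera transform each frame. This is a conceptual sketch only; `GetTrackedPose()` is a hypothetical stand-in for whatever the AR session actually exposes.

```csharp
using UnityEngine;

// Conceptual sketch: attach to the virtual camera GameObject.
public class VirtualCameraPoseSync : MonoBehaviour
{
    // Hypothetical stand-in for the latest ARKit camera transform
    // reported by the AR session.
    Pose GetTrackedPose()
    {
        return new Pose(Vector3.zero, Quaternion.identity);
    }

    void LateUpdate()
    {
        // Copy the real-world camera pose onto the virtual camera so
        // virtual content stays registered with the physical world.
        Pose devicePose = GetTrackedPose();
        transform.SetPositionAndRotation(devicePose.position, devicePose.rotation);
    }
}
```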
A virtual camera in Unity has a setting called Clear Flags, which determines which parts of the screen are cleared each frame. Setting this to “Depth Only” on your main virtual camera instructs the renderer to clear only the depth buffer and skip drawing a background (skybox or solid color), allowing the physical camera feed to show through seamlessly as the backdrop for your virtual objects.
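This can also be set from a script rather than the Inspector. A minimal sketch, assuming the script is attached to the main virtual camera (the class name here is illustrative):

```csharp
using UnityEngine;

// Illustrative setup script: attach to the AR camera GameObject.
public class ARCameraSetup : MonoBehaviour
{
    void Awake()
    {
        Camera cam = GetComponent<Camera>();
        // CameraClearFlags.Depth == "Depth Only" in the Inspector:
        // clear just the depth buffer, so no skybox or solid color is
        // drawn behind the scene and the device camera feed (rendered
        // separately by the AR plugin) remains visible.
        cam.clearFlags = CameraClearFlags.Depth;
    }
}
```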
More to come in the next post on the differences between hit testing and ray casting in the context of ARKit, along with a broader look at intersection-testing approaches.