When developing AR applications for Apple phones, there are two cameras to keep in mind. One is the physical camera on the back of the phone. The other is the virtual camera in your Unity scene, which in turn matches the position and orientation of the real-world camera.
A virtual camera in Unity has a setting called Clear Flags, which determines which parts of the screen buffer are cleared each frame. Setting your main virtual camera to “Depth Only” instructs the renderer to clear only the depth buffer, leaving the color buffer untouched. The (physical) camera feed rendered behind it therefore stays visible, allowing virtual objects to be seamlessly overlaid on the camera feed as a backdrop.
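This setting can also be applied from a script instead of the Inspector. The following is a minimal sketch (component and scene names are illustrative; it assumes the script is attached to the main AR camera in a Unity project):

```csharp
using UnityEngine;

// Minimal sketch: configure the main virtual camera so only the
// depth buffer is cleared, letting the physical camera feed show
// through behind rendered virtual objects.
public class ARCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        // "Depth Only": clear depth each frame, keep the color buffer,
        // so the camera feed remains as the backdrop.
        cam.clearFlags = CameraClearFlags.Depth;
    }
}
```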
More to come in the next post on the differences between hit testing and ray casting in the context of ARKit, along with a broader look at intersection-testing approaches.
Over the last few years, virtual reality has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances in high-resolution micro-displays, low-latency orientation trackers, and modern GPUs facilitate extremely immersive experiences. To enable comfortable long-term experiences and widespread user acceptance, however, the vergence–accommodation conflict inherent to all stereoscopic displays will have to be solved. Fu-Chung Huang, Kevin Chen, and Gordon Wetzstein present the first factored near-eye display technology supporting high image resolution as well as focus cues: accommodation and retinal blur. To this end, the authors build on Wheatstone’s original stereoscope but augment it with modern factored light field synthesis via stacked liquid crystal panels. The proposed light field stereoscope is conceptually closely related to emerging factored light field displays, but it has unique characteristics compared to the television-type displays explored thus far. Foremost, the required field of view is extremely small – just the size of the pupil – which allows rank-1 factorizations to produce correct or nearly correct focus cues. The authors analyze distortions of the lenses in the near-eye 4D light fields and correct them using the high-dimensional image formation afforded by their display. They demonstrate significant improvements in resolution and retinal blur quality over previously proposed near-eye displays. Finally, they analyze the diffraction limits of these types of displays along with fundamental resolution limits.
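The rank-1 factorization mentioned above can be sketched numerically. The following toy NumPy example is illustrative only (the sizes, the flattening of the 4D light field to a matrix, and the multiplicative update rule are assumptions for demonstration, not the paper's exact solver): the target light field is approximated by the outer product of two nonnegative layer images, analogous to the patterns shown on the two stacked LCD panels.

```python
import numpy as np

# Toy setup: flatten a light field to a matrix L, with one row per
# front-panel pixel and one column per rear-panel pixel (an
# illustrative simplification of the 4D case).
rng = np.random.default_rng(0)
m, n = 64, 64
L = rng.random((m, 1)) @ rng.random((1, n))  # an exactly rank-1 target

# Rank-1 nonnegative factorization L ≈ f g^T via multiplicative
# updates, analogous to solving for the two transmissive layers.
f = np.ones((m, 1))
g = np.ones((n, 1))
for _ in range(200):
    f *= (L @ g) / (f @ (g.T @ g) + 1e-12)
    g *= (L.T @ f) / (g @ (f.T @ f) + 1e-12)

err = np.linalg.norm(L - f @ g.T) / np.linalg.norm(L)
print(f"relative error: {err:.2e}")
```

Because a single eye's required field of view is only about the size of the pupil, such a low-rank approximation is sufficient per eye, which is what makes the two-panel design practical.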