This post presents a TEDx talk threading through the connected research topics of games, neuroscience, VR as an input device, and BCI.
Apple analyst Ming-Chi Kuo claims in his latest report that two of the 2020 iPhones will feature a rear time-of-flight (ToF) 3D depth sensor for better augmented reality features and portrait shots, via MacRumors.
“It’s not the first we’ve heard of Apple considering a ToF camera for its 2020 phones, either. Bloomberg reported a similar rumor back in January, and reports of a 3D camera system for the iPhone have existed since 2017. Other companies have beaten Apple to the punch here, with several phones on the market already featuring ToF cameras. But given the prevalence of Apple’s hardware and the impact it tends to have on the industry, it’s worth taking a look at what this camera technology is and how it works.
What is a ToF sensor, and how does it work?
Time-of-flight is a catch-all term for a type of technology that measures the time it takes for something (be it a laser, light, liquid, or gas particle) to travel a certain distance.
In the case of camera sensors, specifically, an infrared laser array is used to send out a laser pulse, which bounces off the objects in front of it and reflects back to the sensor. By calculating how long it takes that laser to travel to the object and back, you can calculate how far it is from the sensor (since the speed of light in a given medium is a constant). And by knowing how far all of the different objects in a room are, you can calculate a detailed 3D map of the room and all of the objects in it.
The technology is typically used in cameras for things like drones and self-driving cars (to prevent them from crashing into stuff), but recently, we’ve started seeing it pop up in phones as well.”
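The distance calculation the quote describes is simple enough to sketch. Below is a minimal illustration, assuming the pulse travels at roughly the speed of light in vacuum; real sensors also correct for the medium and for electronic delays, and the constant and function names here are my own, not any vendor's API.

```python
# Sketch of the time-of-flight distance calculation described above.
# Assumption: the pulse travels at ~c; real sensors calibrate for the
# medium and for timing offsets in the electronics.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second (in vacuum)

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to an object given the laser pulse's round-trip time.

    The pulse travels to the object and back, so the one-way distance
    is half the total path length: d = c * t / 2.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~6.67 nanoseconds corresponds to an object
# roughly one meter away.
print(distance_from_round_trip(6.67e-9))
```

Repeating this per pixel across the infrared laser array is what yields the detailed 3D depth map of the room.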
The current state of ARKit 3 and an observation
Today, ARKit 3 uses raycasting along with ML-based plane detection, run when an app using ARKit 3 first opens, in order to place the floor, for example.
Check the video below. In it, I'm using motion capture via an iPhone XR while standing in front of my phone, which is propped up on a table. Because the phone has determined that the surface it's sitting on (the table) is the floor plane, you'll notice that our avatar, once populated into the scene, has an incorrect notion of where the ground is.
The hope is that the new ToF sensor technology will allow for a robust and complete understanding of the layout of the room, such that in the same situation the device can tell that it is sitting on a table and that the floor is not that plane but a lower one further away in the real-world scene before it.
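To make the observation concrete, here is an illustrative sketch (not ARKit's actual API — the function and values are hypothetical) of how a full depth map could disambiguate the floor: with every horizontal surface in view, the device could pick the lowest plane as the ground rather than assuming the surface supporting it is the floor.

```python
# Illustrative sketch, NOT ARKit's API: given the world-space heights of
# all horizontal planes recovered from a ToF depth map, choose the floor.

def pick_floor_plane(plane_heights_m):
    """Return the height of the plane most likely to be the floor.

    plane_heights_m: heights (meters, relative to an arbitrary origin)
    of detected horizontal planes. With a complete view of the room,
    the lowest plane is the best floor candidate.
    """
    if not plane_heights_m:
        raise ValueError("no horizontal planes detected")
    return min(plane_heights_m)

# Hypothetical scenario from the video: the table the phone sits on is
# at 0.0 m (the device's own level), while the depth map also reveals
# the true floor about 0.75 m below it.
planes = [0.0, -0.75]
print(pick_floor_plane(planes))  # the lower plane, not the table
```

Today's plane detection only "sees" surfaces the camera has observed, which is why the table wins by default; a depth sensor that maps the whole room would hand this heuristic the floor plane it's missing.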
Hey everyone! Today I read that America’s biggest movie theater chain, AMC Entertainment Holdings Inc., is putting $20 million behind a Hollywood virtual reality startup and plans to begin installing its technology at cinemas starting next year. That startup, Dreamscape Immersive, is said to be backed by Steven Spielberg and to offer experiences allowing six people to participate at the same time.
As a Southern California native, I’m excited that… “‘[i]ts first location will be at a Los Angeles mall run by Westfield Corp., [who is] a Series A investor. It is expected to launch there in the winter of 2018,’ said Dreamscape’s CEO, Bruce Vaughn”.
Experiences that build on traditional movie-going won’t be the only offerings. Think, for example, of John Wick Chronicles, an immersive FPS that let people play as John Wick and travel into the world of hired guns in the lead-up to John Wick: Chapter 2 earlier this year. Beyond that, the WSJ article says you can expect to attend, for instance, sporting events virtually with Dreamscape Immersive. That’s an interesting appeal, given that we don’t really associate a trip to the theater with sports fan viewing experiences.
I’m curious to see how these Dreamscape Immersive locations will be outfitted. Some might find a useful comparison in The Void – Ghostbusters Dimensions, which brings the cinematic experience to life at Madame Tussauds in New York for you and three others. That experience highlighted dynamism and complete immersion: you walk around an expansive physical space wearing custom hardware.
The article goes on to say that “the VR locations may be in theater lobbies or auditoriums or locations adjacent to cinemas”. Last September, for example, we saw Emax, a Shenzhen-based startup, execute the adjacent layout. In my humble opinion, the open layout was nice, though there are charms to giving folks privacy à la the VR booths one might find at large conferences. I prefer the open approach perhaps because it shows how much fun people in the virtual experience are having and gives onlooking friends the chance to share their reactions.
On that topic, creative VR applications like Tilt Brush and Mindshow innately yield some kind of shareable content. In the former, when you’re finished with your painting you can export the work of art as a model, a scene, or perhaps just the creation video, and view it later online. In the latter, you are essentially creating a show for others to watch.
But if the experience is a bit more passive, as in watching a sporting event… are there ways to share what you experienced with others? Definitely: via green-screen infrastructure and video content. The LA-based company LIV has been striving toward productizing the infrastructure needed to seamlessly capture guests inside VR. Succinctly put, LIV “think[s] VR is amazing to be inside, but rather underwhelming to spectate….” Perhaps Dreamscape Immersive will leverage similar infrastructure to expand the digital footprint of these location-based experiences.
What do you think are the most salient points about this announcement?