RealityKit Motion Capture and Apple’s future iPhone with a time-of-flight camera

Apple analyst Ming-Chi Kuo claims in his latest report (via MacRumors) that two of the 2020 iPhones will feature a rear time-of-flight (ToF) 3D depth sensor for better augmented reality features and portrait shots.

“It’s not the first time we’ve heard of Apple considering a ToF camera for its 2020 phones, either. Bloomberg reported a similar rumor back in January, and reports of a 3D camera system for the iPhone have existed since 2017. Other companies have beaten Apple to the punch here, with several phones on the market already featuring ToF cameras. But given the prevalence of Apple’s hardware and the impact it tends to have on the industry, it’s worth taking a look at what this camera technology is and how it works.

What is a ToF sensor, and how does it work?

Time-of-flight is a catch-all term for a type of technology that measures the time it takes for something (be it a laser, light, liquid, or gas particle) to travel a certain distance.

In the case of camera sensors, specifically, an infrared laser array is used to send out a laser pulse, which bounces off the objects in front of it and reflects back to the sensor. By calculating how long it takes that laser to travel to the object and back, you can calculate how far it is from the sensor (since the speed of light in a given medium is a constant). And by knowing how far all of the different objects in a room are, you can calculate a detailed 3D map of the room and all of the objects in it.

The technology is typically used in cameras for things like drones and self-driving cars (to prevent them from crashing into stuff), but recently, we’ve started seeing it pop up in phones as well.”
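
To make that distance math concrete, here’s a minimal sketch (plain Swift, not any real sensor API) of the calculation a ToF system performs: halve the measured round-trip time of a light pulse and multiply by the speed of light.

```swift
import Foundation

/// Speed of light in a vacuum, in meters per second.
let speedOfLight = 299_792_458.0

/// Distance to an object, given the measured round-trip time of a light pulse.
/// The pulse travels out to the object and back, so the one-way distance is
/// half of the total path length.
func distance(roundTripTime: TimeInterval) -> Double {
    return speedOfLight * roundTripTime / 2.0
}

// Example: a pulse that returns after ~13.3 nanoseconds corresponds to an
// object roughly 2 meters from the sensor.
let meters = distance(roundTripTime: 13.3e-9)
print(String(format: "%.2f m", meters)) // ≈ 1.99 m
```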

The current state of ARKit 3 and an observation


ARKit 3 has an ever-increasing scope, and of particular interest to me are the AR features that rely on machine learning under the hood, namely Motion Capture.

Today, ARKit 3 uses raycasting as well as ML-based plane detection, run when an app using ARKit 3 is first opened, in order to place the floor, for example.
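
For reference, here’s roughly what that setup looks like in code. It’s a minimal sketch using the standard ARKit/RealityKit calls; the helper names (startPlaneDetection, placementTransform) are placeholders of mine, not anything from Apple’s API.

```swift
import ARKit
import RealityKit

// Enable horizontal plane detection so ARKit starts looking for
// floor- and table-like surfaces as soon as the session runs.
func startPlaneDetection(in arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal]
    arView.session.run(configuration)
}

// Raycast from a screen point against detected/estimated horizontal planes
// and return a world transform suitable for placing content on the "floor."
func placementTransform(in arView: ARView, at screenPoint: CGPoint) -> simd_float4x4? {
    let results = arView.raycast(from: screenPoint,
                                 allowing: .estimatedPlane,
                                 alignment: .horizontal)
    return results.first?.worldTransform
}
```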

Check the video below. In it, I’m standing in front of my phone, which is propped up on a table.

In this video, I’m using motion capture via an iPhone XR. My phone is sitting on a surface (namely the table) that ARKit has determined is the floor plane, and as a result, you’ll notice that the avatar, once placed into the scene, has an incorrect notion of where the ground is.
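
For context, the setup behind the video is essentially the sketch below, modeled on Apple’s body-tracking sample. The MotionCaptureController class and the “robot” asset name are placeholders of mine; the relevant point is that the avatar is positioned at the body anchor’s transform, so it inherits whatever ARKit currently believes about the supporting plane.

```swift
import ARKit
import RealityKit

// Minimal body-tracking (Motion Capture) setup, loosely following Apple's sample.
class MotionCaptureController: NSObject, ARSessionDelegate {
    let arView: ARView
    let characterAnchor = AnchorEntity()

    init(arView: ARView) {
        self.arView = arView
        super.init()

        arView.session.delegate = self
        arView.scene.addAnchor(characterAnchor)
        arView.session.run(ARBodyTrackingConfiguration())

        // Load a rigged character ("robot" is a placeholder asset name)
        // that will mirror the tracked body's motion.
        if let character = try? Entity.loadBodyTracked(named: "robot") {
            characterAnchor.addChild(character)
        }
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // The avatar is placed at the tracked body's root transform, so its
            // feet land wherever ARKit thinks the supporting surface is.
            characterAnchor.position = simd_make_float3(bodyAnchor.transform.columns.3)
            characterAnchor.orientation = Transform(matrix: bodyAnchor.transform).rotation
        }
    }
}
```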


My hope is that the new ToF sensor technology will allow for a robust and complete understanding of the layout of the room, its objects, and its floor, such that, in the same situation, the device can tell that it is sitting on a table and that the floor is not that plane but the one farther away in the real-world scene in front of it.
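
On newer devices, ARKit already attaches a classification to each detected plane, which is exactly the distinction I want here; the hope is that a ToF depth map would make those labels, and the plane extents behind them, far more reliable. A rough sketch of filtering for the actual floor, assuming you have the session’s current anchors in hand:

```swift
import ARKit

// Pick out a plane classified as the floor, ignoring table-like surfaces
// (such as the one the phone happens to be propped up on).
func floorAnchor(in anchors: [ARAnchor]) -> ARPlaneAnchor? {
    guard ARPlaneAnchor.classificationSupported else { return nil }
    return anchors
        .compactMap { $0 as? ARPlaneAnchor }
        .first { plane in
            switch plane.classification {
            case .floor: return true   // the plane the avatar should stand on
            case .table: return false  // e.g. the surface the phone is sitting on
            default:     return false
            }
        }
}
```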

 

Source:
The Verge, “Apple’s future iPhone might add a time-of-flight camera — here’s what it could do”

Update on Oculus Launch Pad 2017: Future of Farming

Everything is hierarchical in the brain. In VR design, my hypothesis is that this can be really helpful for setting context for users. For example, at Virtually Live, the VR content was “Formula E Season 2 Highlights,” meaning the person donning the headset could watch races. I once proposed that we borrow the excellent UX of Realities.io and use an interactive model of Earth as the highest level of abstraction over the races (which occur all over the world). The user could spin the globe and find a location to load into. Written abstractly, the hierarchy in this example is: the Globe is a superset of Countries, Countries of Cities, and Cities of Places. I figured this would be perfect for an electric motorsport championship series that travels to a famous city each month. In the end, we went with a carousel design that was more expeditious than the globe.

The Future of Farming takes place largely in a metropolitan area, namely San Francisco. So I’ve decided that, to begin, I’ll borrow from that hierarchical plan. I want to showcase an orthographic projection of San Francisco to the user, with maybe a handful of locations highlighted as interactable. To do this, I’ve set up WRLD in my project for the city landscape.

Upon selection of one of the highlighted locations with the Gear VR controller, a scene will load with a focal piece of farming equipment that has made its way into that type of place (e.g., warehouse, house, or apartment).

A quick aside: last week I had a tough travel and work schedule in New York. Reading back what I wrote, I found a pretty bare blog post, so I decided it was better not to share it. Another hurdle was the unfortunate loss of the teammate I announced two weeks prior, simply due to his prioritizing projects with budgets more appealing to him. I dwelled on this for a while, as I admired his plant modeling work a lot. With the loss of that collaborator, and weighing a few other factors, I’ve decided to pursue an art style much akin to that of Virtual Virtual Reality or Superhot: less geometry, all created in VR.

I’ll do most of this via Google Blocks and a workflow for pushing created environments to Unity, which is pretty straightforward. After you have created your model in Google Blocks, visit the site in a browser with WebGL-friendly settings and download your model. From there, you can unzip that file and drag it into Unity under Assets > Blocks Import, a folder I recommend you create as a way of staying organized. You’ll note that Blocks imports usually comprise a .mtl file, a materials folder, and a .obj model. In order for your intended Google Blocks model to show through, you need to change one setting, “Material Naming,” after you’ve clicked on your .obj: change it to “By Base Texture Name,” and Material Search can be set to “Recursive Up.”


 

Here’s a look at the artwork for a studio apartment in SF for the app, as viewed from above. It’s a public bedroom model that I’m remixing; you can see I’ve added a triangular floor space for a kitchen, which is likely where the windowsill variety of hydroponic crop equipment will go. Modeling one such piece is going to be really fun.

 

 

View from Above

Angle View

In the past weeks, I’ve dedicated myself to learning about gardening and farming practices via readings, podcasts, and conversations with people in business ecosystems involving food product suppliers. I learned about growing shiitake mushrooms and broccoli sprouts in the home and got hands-on with both. I learned about the technology evolution behind rice cookers and about relevant policy for farmers on the West Coast over the last dozen years. In the industry, there are a number of effective farming methods (indoor hydroponic and aeroponic) that I’m planning to draw on and that I can see working in some capacity in the home, along with milieus I will highlight, such as a legitimate vertical indoor farm facility (https://techcrunch.com/2017/07/19/billionaires-make-it-rain-on-plenty-the-indoor-farming-startup/).

I have asked someone who works at Local Bushel for help from a design-consultant standpoint.

To expound on why Local Bushel is perhaps a helpful reference point: Local Bushel is a community of individuals dedicated to increasing our consumption of responsibly raised food. Their values align well with educating me (the creator) about the marketplace whose future I want to project. Those values are:

  1. Fostering Community
  2. Being Sustainable and Responsible
  3. Providing High Quality, Fresh Ingredients

——
For interactions, I can start simple and use info cards and scene changes driven by the orientation of the user’s head, using raycasts. I’ll work in the Oculus Gear VR controller eventually.