RealityKit Motion Capture and Apple’s future iPhone including a time-of-flight camera

Apple analyst Ming-Chi Kuo claims in his latest report that two of the 2020 iPhones will feature a rear time-of-flight (ToF) 3D depth sensor for better augmented reality features and portrait shots, via MacRumors.

“It’s not the first we’ve heard of Apple considering a ToF camera for its 2020 phones, either. Bloomberg reported a similar rumor back in January, and reports of a 3D camera system for the iPhone have existed since 2017. Other companies have beaten Apple to the punch here, with several phones on the market already featuring ToF cameras. But given the prevalence of Apple’s hardware and the impact it tends to have on the industry, it’s worth taking a look at what this camera technology is and how it works.

What is a ToF sensor, and how does it work?

Time-of-flight is a catch-all term for a type of technology that measures the time it takes for something (be it a laser, light, liquid, or gas particle) to travel a certain distance.

In the case of camera sensors, specifically, an infrared laser array is used to send out a laser pulse, which bounces off the objects in front of it and reflects back to the sensor. By calculating how long it takes that laser to travel to the object and back, you can calculate how far it is from the sensor (since the speed of light in a given medium is a constant). And by knowing how far all of the different objects in a room are, you can calculate a detailed 3D map of the room and all of the objects in it.

The technology is typically used in cameras for things like drones and self-driving cars (to prevent them from crashing into stuff), but recently, we’ve started seeing it pop up in phones as well.”
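To make the distance math above concrete, here is a tiny back-of-the-envelope sketch in Swift. The round-trip time is an invented example value, not a reading from any particular sensor.

```swift
// The sensor measures the round-trip time of an infrared pulse,
// so distance = (speed of light × time) / 2.
let speedOfLight = 299_792_458.0   // meters per second (in a vacuum)
let roundTripTime = 13.3e-9        // ~13.3 nanoseconds, an example reading
let distanceInMeters = speedOfLight * roundTripTime / 2
print(distanceInMeters)            // prints roughly 1.99 (meters)
```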

The current state of ARKit 3 and an observation


ARKit 3 has an ever-increasing scope, and of particular interest to me are those AR features which under the hood rely upon machine learning, namely Motion Capture.

Today, ARKit 3 uses raycasting as well as ML-based plane detection when a session starts (that is, when the app using ARKit 3 is first opened) in order to place the floor, for example.
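A rough sketch of that flow is below; it assumes an existing RealityKit ARView named arView, and the screen-center raycast is illustrative rather than the exact code I used.

```swift
import ARKit
import RealityKit

// Enable horizontal plane detection, then raycast from the screen center
// to anchor content on whatever plane ARKit has decided is the "floor".
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal]
arView.session.run(configuration)

// Once plane estimates are available:
let screenCenter = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
if let result = arView.raycast(from: screenCenter,
                               allowing: .estimatedPlane,
                               alignment: .horizontal).first {
    let floorAnchor = AnchorEntity(world: result.worldTransform)
    arView.scene.addAnchor(floorAnchor)
}
```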

Check the video below. In it, I’m standing in front of my phone, which is propped up on a table.

In this video, I’m using Motion Capture via an iPhone XR. ARKit has decided that the surface the phone is sitting on (the table) is the floor plane, and as a result you’ll notice that the avatar, once placed into the scene, has an incorrect notion of where the ground is.
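For reference, here is a minimal sketch of that kind of setup, assuming an existing ARView named arView; the type and property names are mine, not the exact code behind the video.

```swift
import ARKit
import RealityKit

final class BodyTrackingCoordinator: NSObject, ARSessionDelegate {
    let arView: ARView

    init(arView: ARView) {
        self.arView = arView
        super.init()
        arView.session.delegate = self

        // Motion Capture runs alongside horizontal plane detection.
        let configuration = ARBodyTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        arView.session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // The body anchor's transform is the skeleton's hip joint in
            // world space. If the detected "floor" is actually the tabletop,
            // an avatar driven by this skeleton stands on the wrong plane.
            let hipPosition = simd_make_float3(bodyAnchor.transform.columns.3)
            print("Hip joint world position:", hipPosition)
        }
    }
}
```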


The hope is that new ToF sensor technology will allow for a robust and complete understanding of the layout of the room, its objects, and the floor, such that, in the same scenario, the device can tell that it is sitting on a table and that the floor is not that plane but the one farther away in the real-world scene before it.

 

Source:
The Verge, “Apple’s future iPhone might add a time-of-flight camera — here’s what it could do”

Reblog: Thoughts on SwiftUI from WWDC 19

SwiftUI

So what’s the big deal with SwiftUI? Well, here’s why I think it’s great.

  1. One UI framework for all platforms. It has always baffled me why Apple never made UIKit work on the Mac. If it worked for iOS and tvOS, it could certainly also work on the Mac (which it does now thanks to Project Catalyst). For me, this means having double the work on many parts of the UI on Secrets for Mac and iOS. Now and then, you would see rumors that would give you hope. “Maybe next year,” you’d think… but the years passed and nothing. Looking back, I can’t help but wonder if this was Apple’s plan all along. SwiftUI is certainly a multi-year effort. The underpinnings of the combined framework are at least 5 years old:

    Joe Groff@jckarter

    Combine goes back before even Swift existed. I’ve been helping the SwiftUI folks for at least three years, and they were probably working on stuff before I knew about it

    David Smith@Catfish_Man

    I was curious what the earliest Combine-related file I have on my computer is, and it turns out it’s August 14th 2013. I filed the radar it references on 10/23/2012.

    Also apparently yet another short-lived project name I forgot about?? pic.twitter.com/FR6NADWrs5


    And although I haven’t played much with it yet, there will certainly be a lot of bugs and shortcomings to iron out over the next few years.

  2. Declarative. To put it succinctly, this means that instead of telling the framework what to do, you tell it what you want; the framework then figures out how to achieve it (a minimal sketch of what this looks like follows after this list). And you’ve seen this style of coding already with Auto Layout. It offloads much of the complexity to the framework. By introducing this abstraction and letting the framework do the job of composing the UI for you, we get:
    • Automatic support for many of the system features: dynamic type, accessibility, dark mode, etc;
    • Adaptive layouts on different platforms (a switch on the iPhone becomes a checkbox on the Mac);
    • Freedom from having to adapt our UI whenever Apple needs to evolve it (what SwiftUI uses to satisfy a Text element may change on the next release).

    I had a professor that used to say:

    All problems in CS can be solved with one more level of indirection.

    It still holds.

  3. Reactive. I’ve never invested much time in any of the reactive frameworks out there. I definitely appreciated the principles behind them, but I’ve always been very critical of frameworks1 or technologies2 that are too invasive. With what I’ve already seen in the sessions and demos, I’m just about ready to forgive Apple for abandoning development of the controversial Cocoa Bindings3.

    We write so much glue code that I’ve got no problems accepting the learning curve of all the new stuff that is driving this, both in the Swift language and in the new Combine framework.
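As a concrete illustration of points 2 and 3, here is a minimal sketch; the names (SettingsModel, SettingsView) are made up and are not taken from Secrets or from Apple’s sample code. The view declares what the UI is, and the Combine-backed observable model keeps it up to date whenever a value changes.

```swift
import SwiftUI
import Combine

// A Combine-backed model: the view below re-renders whenever a
// @Published property changes.
final class SettingsModel: ObservableObject {
    @Published var name = ""
    @Published var notificationsOn = true
}

struct SettingsView: View {
    @ObservedObject var model: SettingsModel

    var body: some View {
        // Declarative: we describe *what* the UI is, not how to build
        // or update it; SwiftUI adapts the controls per platform.
        Form {
            TextField("Name", text: $model.name)
            Toggle("Notifications", isOn: $model.notificationsOn)
            Text("Hello, \(model.name)")
        }
    }
}
```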

I’m cautiously excited about SwiftUI and sincerely hope it will live up to expectations.
Did you enjoy this article? Then read the full version from the author’s website.