Ahead of Oculus Connect 6 (OC6), I attended the Oculus Launchpad and Start dinner tonight. I saw a ton of vibrant conversation and high hopes for the next few days. In no small number, developers had come from abroad, from places such as Canada and New Zealand. I also noticed a pattern of developers holding down full-time jobs while pursuing publication of an app on the Oculus Store.
Apple analyst Ming-Chi Kuo claims in his latest report that two of the 2020 iPhones will feature a rear time-of-flight (ToF) 3D depth sensor for better augmented reality features and portrait shots, via MacRumors.
“It’s not the first we’ve heard of Apple considering a ToF camera for its 2020 phones, either. Bloomberg reported a similar rumor back in January, and reports of a 3D camera system for the iPhone have existed since 2017. Other companies have beaten Apple to the punch here, with several phones on the market already featuring ToF cameras. But given the prevalence of Apple’s hardware and the impact it tends to have on the industry, it’s worth taking a look at what this camera technology is and how it works.
What is a ToF sensor, and how does it work?
Time-of-flight is a catch-all term for a type of technology that measures the time it takes for something (be it a laser, light, liquid, or gas particle) to travel a certain distance.
In the case of camera sensors, specifically, an infrared laser array is used to send out a laser pulse, which bounces off the objects in front of it and reflects back to the sensor. By calculating how long it takes that laser to travel to the object and back, you can calculate how far it is from the sensor (since the speed of light in a given medium is a constant). And by knowing how far all of the different objects in a room are, you can calculate a detailed 3D map of the room and all of the objects in it.
The technology is typically used in cameras for things like drones and self-driving cars (to prevent them from crashing into stuff), but recently, we’ve started seeing it pop up in phones as well.”
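As a rough illustration of the relationship described in the quote (a minimal sketch of the math, not any vendor's implementation), the round-trip time of a light pulse converts to distance like this:

```csharp
using System;

// Time-of-flight basics: distance = (speed of light * round-trip time) / 2.
class TofExample
{
    const double SpeedOfLightMetersPerSecond = 299_792_458.0;

    static double DistanceFromRoundTrip(double roundTripSeconds)
    {
        // Divide by two because the pulse travels out to the object and back.
        return SpeedOfLightMetersPerSecond * roundTripSeconds / 2.0;
    }

    static void Main()
    {
        // A round trip of roughly 6.67 nanoseconds corresponds to about 1 meter.
        Console.WriteLine(DistanceFromRoundTrip(6.67e-9)); // ~1.0
    }
}
```

Repeating that measurement across the whole sensor array is what yields the per-pixel depth map of the scene.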
The current state of ARKit 3 and an observation
Today, ARKit 3 uses raycasting as well as ML-based plane detection, run when the app using ARKit 3 first opens (on awake), to place the floor, for example.
Check the video below. In it, I’m standing in front of my phone, which is propped up on a table.
In this video, I’m using motion capture via an iPhone XR. My phone is sitting on a surface (namely the table) that it has determined is the floor plane, and as a result, you’ll notice that our avatar, once populated into the scene, has an incorrect notion of where the ground is.
The hope is that new ToF sensor technology will allow for a robust and complete understanding of the layout of the room’s objects and floor, such that in the same scenario the device can tell that it is sitting on a table and that the floor is not that plane but the one farther away in the real-world scene in front of it.
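For context on what that placement code typically looks like in Unity today, below is a minimal sketch using ARFoundation's raycast API. It is not the code from the video, and the component and prefab names are placeholders of mine:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: raycast from a screen touch against the planes that ARKit's ML-based
// detection has found, then spawn an avatar at the hit pose.
public class FloorPlacer : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager; // on the AR Session Origin
    [SerializeField] GameObject avatarPrefab;         // hypothetical avatar prefab

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0) return;

        Vector2 touchPosition = Input.GetTouch(0).position;

        // Nothing here distinguishes a table plane from the actual floor unless the
        // plane's classification is checked explicitly, which is exactly the failure
        // mode described above.
        if (raycastManager.Raycast(touchPosition, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose hitPose = hits[0].pose;
            Instantiate(avatarPrefab, hitPose.position, hitPose.rotation);
        }
    }
}
```

On devices that support it, recent ARFoundation versions also expose a plane classification (e.g. floor vs. table), and that is the kind of distinction a ToF sensor should make far more reliable.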
What is this? I chatted with Evan, an Operations Modeling and Simulation Engineer at Northrop Grumman, about engineering use cases for the HoloLens.
His opening remarks: It’s often a struggle integrating new technology into large-scale manufacturers due to adherence to strict methods and processes. Finding/molding problems into good use cases for a given new technology can be challenging. It’s much easier to start with the problem and find/mold a good solution than the other way around. The challenge is helping engineers and operations leadership understand what modern solutions exist.
Evan’s take: In an engineering context, demonstrating the HoloLens’s capabilities in relation to the lifecycle stages of a product (the DoD acquisition lifecycle) might be a high-value strategy.
Technology Maturation and Risk Reduction (TMRR): How do we design a product that fulfills mission requirements? This can take the form of:
- visualizing the designs and making sure they’re feasible (e.g. are wires getting pinched?), uncovering design flaws that would otherwise surface later as defects during manufacturing, and making sure the design is producible (DFM, Design for Manufacturability)
- communicating to the customer: In that stage of the lifecycle it’s important to be able to communicate your designs to the customer to demonstrate technical maturity.
- inspecting the product: a given part of the product (“this part is called XYZ”) can then be exploded in the view
Engineering and Manufacturing Development (EMD): At this stage the customer (NG) cares about “how are we going to build it”
- tooling design: visualizing the product sitting in the tools or workstands that will be used in production
- visualizing the ergonomics people are going to have to deal with, for example whether clearances are sufficient to screw in a screw
- visualizing the factory flow: the customer (NG’s customer) would also be interested in seeing the proposed factory flow to build confidence. It’s becoming more common to see this as a line item in contracts (on the Contract Data Requirements List, or CDRL)
Subsequent steps in Production & Deployment are:
- Low rate initial production (LRIP)
- Full rate production (FRP)
Who the customer is: Mechanics on the factory floor using the HoloLens for work instructions. There was a lot of interest at Raytheon and NG in virtual work instructions overlaid onto the hardware (Google Glass, Light Guide Systems, etc.). In a more mature program that’s in production, the mechanic or the electrician on the factory floor would be the end user.

Today, they look away from the product to the computer where the work instructions are pulled up. Their instructions might be several feet away from the work, and hopefully they’ve interpreted the instructions well enough not to cause a defect. Operators work from memory, or don’t follow the work instructions at all if it’s too cumbersome to do so. DCMA (the customer’s oversight body) issues corrective action requests (CARs) to the contractor when operators don’t appear to be following work instructions (i.e. the page they’re on doesn’t match the step they’re currently working on, or worse, they don’t have the instructions pulled up at all). Getting too many of these is really bad.

So where AR is really useful is in overlaying instructions on the product as it’s built. Care should be given to the manufacturing engineer’s workflow for creating and approving work instructions, work instruction revisions, etc. Long-term, consideration probably needs to be given to integration with the manufacturing execution system (MES) and possibly many other systems (ERP, PLM, etc.).
The HoloLens seems to be a ways away from that: seamlessly identifying the hardware regardless of its physical position and orientation, and making it easy for manufacturing engineers to author compliant work instructions.
Another consideration, for any of the above use cases in the defense industry, is wireless. Most facilities will not accommodate devices that transmit or receive signals over any form of wireless. For the last use case, tethering a mechanic to a wired AR device is inhibiting.
Games as Medicine | FDA Clearance Methods
Noah Falstein, @nfalstein
President, The Inspiracy
Technically, software and games are cleared, not approved, by the FDA.
By background, Noah:
- Has attended 31 GDCs
- Been working in games since 1980 (started in entertainment and arcade games, then at Lucasfilm Games)
- Gradually shifted over and consulted for 17 years on a wide variety of games
- Started getting interested in medical games in 1991 (i.e. East3)
- Went to Google, and left due to the platform-scale perspective one had to take at Google
- A game designer, not a doctor, but he voraciously learns about science and medical topics
Table of Contents:
- Context of games for health
- New factor of FDA clearance
- Deeper dive
- Advantages and disadvantages of clearance
Why are games and health an interesting thing?
Three reasons why games for health are growing quickly and are poised to be a very important thing
- It’s about helping people (e.g. Dr. Sam Rodriguez’s work; Google “Rodriguez pain VR”)
- It’s challenging, exciting, and more diverse than standard games (i.e. games need to be fun, but if they’re not having the desired effect, for example restoring motion after a stroke, then you encounter an interesting challenge). The people in the medical field tend to be more diverse than those in the gaming space.
- It’s a huge market (FDA clearance = big market)
So what’s the catch?
Missteps along the way
- Brain Training (e.g. the Nintendo Game Boy had popular Japanese games claiming brain training)
- Wii Fit (+U) (i.e. the balance board)
- Lumosity fine (i.e. claims made that were unsubstantiated by research)
Upshot: a lack of research and good studies underpinning the claims
Some bright spots
- Re-Mission from HopeLab (they targeted adherence by showing the consequences of not having enough chemotherapy in the body)
FDA clearance is a gold standard
- Because it provides a stamp of good, trustable, etc.
- The burden is on the people who make products to go through a regimen of tests that are science-driven
- Noah strongly recommends Game Devs to link up with a university
- Working on SaMD (Software as a Medical Device)
- Biggest single world market drives others
- Necessary for a prescription and helps with insurance reimbursement
- but it’s very expensive and time-consuming
FDA definition of a serious disease
- FDA clearance May 2017
- Stroke Rehabilitation
- Early in-hospital acute care while plasticity high
- Positions its product as a “prescription digital therapeutic”
Akili Interactive Labs
- Treats pediatric ADHD
- Late-stage trial results (Dec. 2017) were very positive, with side effects of headache and frustration, which is much better than alternatives like Ritalin
- Seeking De Novo clearance
- Adam Gazzaley: this began as aging-adult research with NeuroRacer, a multi-year study published in Nature
The Future – Good, Bad, Ugly, Sublime
- Each successful FDA clearance helps
- But they still will require big $, years to dev
- you have to create a company, rigorously study the game, stall production (because changing your game would invalidate the study results), and then release it
- Pharma is a powerful but daunting partner
- Can FDA certification for games then reveal that some games are essentially street drugs?
Hello World: Building Augmented Reality for Snapchat
Fun fact: 30,000+ lenses have been created by Snapchatters, leading to over a billion views of lens content
Table of Contents
Hello World! Lens Studio Live Demo
High-Quality Rendering with Allegorithmic
Chester Fetch with Klei Entertainment
Cuphead and Mugman with Studio MDHR
- Worked at Bad Robot, Neversoft, and Blizzard
- Snapchat has always opened up to the camera, which has positively affected its engagement
- Pair your phone with Lens Studio
- A community forum on the site exists where devs q & a
- Lens Studio has been out for less than 4 months, and the lenses built with it have resulted in over a billion experiences
- The tool has been used for a variety of things: hamburger photogrammetry, full-screen 2D experiences, and more
Distributing your lenses is really easy
Within Snap, you can discover a lens and see more lenses by the same creator, or you can pull up on the base of a story to find out which lens was used.
Lens Boost: all users see the Snapchat carousel, and a Lens Boost gets your lens into this carousel.
Find which template best fits your creative intent
- Static object
- Animated object
- Interactive templates (tap, approach, look at)
- Immersive (look around, window)
- For 2D creators (cutout, picture frame, fullscreen, soundboard)
- Interactive path (idle, walk, and arrival states necessary) coming soon
- Brian Garcia, Neon Book
- Pinot, 2D textures, cutout template, then character animator to animate
- DFace, DDog, imported into Lens Studio (from the camera reflections feature)
- Jordan & Snapchat, ‘88 static Jordan 3D model
- Netflix & Snapchat, Stranger Things: turning on the TV, spelling your name out, awakening the Demogorgon
Lens Studio is made up of panels:
- Live Preview, to see what it will be like, it includes tracked content and interaction support
- Objects panel, like the Unity scene view, it shows you what is in the preview
- Resources panel, all your resources and where you’d import stuff
Start with the animated object template
Select an object in the resources panel and move it to the objects panel
Google Blocks + Mixamo: export free animations from Adobe’s Mixamo and import your animated character via File > Import (the monitor and astronaut in this demo)
Parent the imported 3D model under the fox, then delete the fox
Add a shadow
Substance Painter is an app to apply materials or paint textures for 2D or 3D.
Any material you bring in can be applied to an object. Materials apply uniformly, but there are also smart materials, which apply intelligently to the geometry (the rust example).
The Layers tab is, like the scene view, the place to drag and drop.
Alphas provide a cutout, and you can apply materials to the cutout.
Upon clicking export
Challenge: Rubber Ducky
Chester Fetch with Klei Entertainment
Games studio since 2005
Why is AR interesting for Klei?
- AR is about bringing the virtual world out to the player.
- Limited bandwidth
- Seems hard
- Would require too much time from others at the Studio
Cuphead and Mugman with Studio MDHR
- For Cuphead and Mugman, they wanted to build a boss battle in Snap
- All of the lenses used for Cuphead were built from assets taken directly from the game
- Chains together 5 2D animations
- Within the Snap app, I noticed you can rent/create a lens “as a service”; how does this pertain to Lens Studio?
- A question I had: looking forward to a day when you can use targets like people for further interactable and shareable content (like the Mugman examples shown), when will person/object recognition be available to developers and users of Snap?
- What is the GitHub account for Snap?
When developing AR applications for Apple phones, there are two cameras we speak about. One is the physical camera on the back of the phone. The other is the virtual camera in your Unity scene, which in turn matches the position and orientation of the real-world camera.
A virtual camera in Unity has a setting called Clear Flags, which determines which parts of the screen will be cleared each frame. Setting this to “Depth Only” on your main virtual camera tells the renderer to clear only the depth buffer rather than drawing a skybox or solid background color, allowing the (physical) camera feed to show through as the backdrop for your virtual objects.
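Here’s a small sketch of that Clear Flags setup done in code; the same setting can be made in the Inspector, and the component name is just illustrative:

```csharp
using UnityEngine;

// Attach to the AR camera in the scene.
public class ARCameraSetup : MonoBehaviour
{
    void Awake()
    {
        Camera cam = GetComponent<Camera>();

        // "Depth Only": clear just the depth buffer each frame instead of drawing a
        // skybox or a solid color, so the physical camera feed shows through behind
        // whatever this virtual camera renders.
        cam.clearFlags = CameraClearFlags.Depth;
    }
}
```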
More to come on differences between hit testing and ray casting in the context of ARKit and a broader look at intersection testing approaches in the next post.