AR Industrial Applications: Defense Engineering

What is this? I chatted with Evan, an Operations Modeling and Simulation Engineer at Northrop Grumman, about engineering use cases for the HoloLens.

His opening remarks: it's often a struggle to integrate new technology into large-scale manufacturers because of their adherence to strict methods and processes. Finding or molding problems into good use cases for a given new technology can be challenging; it's much easier to start with the problem and find or mold a good solution than the other way around. The challenge is helping engineers and operations leadership understand what modern solutions exist.

 

---

Evan's take: in an engineering context, a high-value strategy might be to show the HoloLens's capabilities in relation to the stages of the DoD acquisition lifecycle for a product.

Technology Maturation and Risk Reduction (TMRR): How do we design a product that fulfills mission requirements? This can take the form of:

  • Visualizing the designs and making sure they're feasible (e.g., are wires getting pinched?), uncovering design flaws early that would otherwise surface as defects during manufacturing, and making sure the design is producible (DFM, Design for Manufacturability).
  • Communicating with the customer: at this stage of the lifecycle it's important to be able to communicate your designs to the customer to demonstrate technical maturity.
    • Inspecting the product: identifying a part by name ("this part of the product is called XYZ"), which can then be shown in an exploded view.

 

Engineering and Manufacturing Development (EMD): at this stage the concern for NG is "how are we going to build it?"

  • Tooling design: visualizing the product sitting in the tools or workstands that will be used in production.
  • Visualizing the ergonomics people are going to have to deal with: for example, are the clearances sufficient to actually drive a screw?
  • Visualizing the factory flow: the customer (NG's customer) would also be interested in seeing the proposed factory flow to build confidence. It's becoming more common to see this as a line item in contracts (Contract Data Requirements List, or CDRL).

Subsequent steps in Production & Deployment are:

  • Low rate initial production (LRIP)
  • Full rate production (FRP)

 

Who the customer is: mechanics on the factory floor using the HoloLens for work instructions. Evan saw a lot of interest at Raytheon and NG in virtual work instructions overlaid onto the hardware (Google Glass, Light Guide Systems, etc.). In a more mature program that's in production, the mechanic or electrician on the factory floor would be the end user. Today they look away from the product to a computer where the work instructions are pulled up. Their instructions might be several feet away from the work, and hopefully they've interpreted the instructions well enough not to cause a defect. Operators work from memory, or don't follow work instructions at all if it's too cumbersome to do so. DCMA (the customer's oversight agency) issues corrective action requests (CARs) to the contractor when operators don't appear to be following work instructions (i.e., the page they're on doesn't match the step in the process they're currently working on, or worse, they don't have the instructions pulled up). Getting too many of these is really bad.

So AR is really useful when it overlays instructions on the product as it's built. Care should be given to the manufacturing engineer's workflow for creating and approving work instructions, work instruction revisions, etc. Long term, consideration probably needs to be given to integration with the manufacturing execution system (MES) and possibly many other systems (ERP, PLM, etc.).

The HoloLens tech is seemingly a ways away from that: seamlessly identifying the hardware regardless of its physical position/orientation, and making it easy for manufacturing engineers to author compliant work instructions.

Another consideration for any of the above use cases in the defense industry is wireless: most facilities will not accommodate devices that transmit or receive signals over any form of wireless. For the last use case, tethering a mechanic to a wired AR device is inhibiting.

 

Games as Medicine | FDA Clearance Methods

Noah Falstein, @nfalstein
President, The Inspiracy
Neurogaming Consultant

Technically, software and games are cleared, not approved, by the FDA.

Noah's background:

  • Has attended 31 GDCs
  • Has been working in games since 1980 (started in entertainment and arcade games, including Lucasfilm Games)
  • Gradually shifted over and has consulted for 17 years on a wide variety of games
  • Started getting interested in medical games in 1991 (e.g., East3)
  • Went to Google and left due to the platform perspective one had to have at Google
  • Is a game designer, not a doctor, but voraciously learns about science and medical topics

Table of Contents:

  • Context of games for health
  • New factor of FDA clearance
  • Deeper dive
  • Advantages and disadvantages of clearance

Why are games and health an interesting thing?

Three reasons why games for health are growing quickly and are poised to become very important:

  • It's about helping people (e.g., Dr. Sam Rodriguez's work; Google "Rodriguez pain VR")
  • It's challenging, exciting, and more diverse than standard games (games need to be fun, but if they're not having the desired effect, for example restoring motion after a stroke, you encounter an interesting challenge). The people in the medical field also tend to be more diverse than those in the gaming space.
  • It's a huge market: FDA clearance opens up a big market

So what’s the catch?

Missteps along the way

  • Brain training (e.g., Nintendo handhelds had popular Japanese games claiming brain training)
  • Wii Fit (and Wii Fit U) (e.g., the balance board)
  • The Lumosity fine (claims were made that were unsubstantiated by research)

Upshot: a lack of research and good studies underpinning the claims.

Some bright spots

  • Re-Mission from HopeLab (they targeted adherence, using the consequences of not having enough chemotherapy in the body)

FDA clearance is a gold standard

  • It provides a stamp of quality and trustworthiness
  • The burden is on the people who make products to go through a regimen of tests that are science-driven
  • Noah strongly recommends that game devs link up with a university
  • The FDA is working on SaMD (Software as a Medical Device) guidance
  • The US is the biggest single world market and drives others
  • Clearance is necessary for a prescription and helps with insurance reimbursement
  • But it's very expensive and time-consuming


FDA definition of a serious disease
[missing]

MindMaze Pro

  • FDA clearance May 2017
  • Stroke Rehabilitation
  • Used early, during in-hospital acute care, while neural plasticity is high

Pear Therapeutics

  • Positions its product as a “prescription digital therapeutic”


Akili Interactive Labs

  • Treats pediatric ADHD
  • Late-stage trial results (Dec. 2017) were very positive, with side effects limited to headache and frustration, which is much better than alternatives like Ritalin
  • Seeking De Novo clearance
  • Adam Gazzaley: the work began as aging-adult research with NeuroRacer, a multi-year study published in Nature

The Future – Good, Bad, Ugly, Sublime

  • Each successful FDA clearance helps
  • But they will still require big money and years to develop
  • You have to create a company, rigorously study the game (stalling production, because changing your game would invalidate the study results), and then release it
  • Pharma is a powerful but daunting partner

Questions

  • Can FDA certification for games then reveal that some games are essentially street drugs?

 

Snap Lens Studio

Hello World: Building Augmented Reality for Snapchat

Fun fact: 30,000+ Lenses have been created by Snapchatters, leading to over a billion views of Lens content.
Table of Contents:

  • Lens Studio
  • Hello World! Lens Studio Live Demo
  • High-Quality Rendering with Allegorithmic
  • Chester Fetch with Klei Entertainment
  • Cuphead and Mugman with Studio MDHR

Travis Chen

  • Worked at Bad Robot, Neversoft, and Blizzard

Lens Studio

  • Snapchat has always opened to the camera, which has positively affected their engagement
  • Pair your phone with Lens Studio
  • A community forum exists on the site where devs ask and answer questions
  • The tool has been out for less than 4 months as of today, and its Lenses have resulted in over a billion experiences
  • The tool has been used for a variety of things: hamburger photogrammetry, full-screen 2D experiences, r/snaplenses
  • Distributing your Lenses is really easy
  • Within Snapchat you can discover a Lens and see more Lenses by the same creator, or you can pull up on the base of a story to find out which Lens was used

Lens Boost: all users see the Snapchat carousel; a Lens Boost gets your Lens into this carousel.

Find which template best fits your creative intent

Templates

  • Static object
  • Animated object
  • Interactive templates (tap, approach, look at)
  • Immersive (look around, window)
  • For 2D creators (cutout, picture frame, fullscreen, soundboard)
  • Interactive path (idle, walk, and arrival states necessary) coming soon

Examples

  • Brian Garcia, Neon Book
  • Pinot: 2D textures, the cutout template, then Character Animator to animate
  • DFace: DDog, imported into Lens Studio (from the camera reflections feature)
  • Jordan & Snapchat: the '88 static Jordan 3D model
  • Netflix & Snapchat, Stranger Things: turning on the TV, spelling your name out, awakening the Demogorgon

Hello World

Lens Studio is made up of panels:

  • Live Preview: shows what the Lens will look like; it includes tracked content and interaction support
  • Objects panel: like the Unity scene view, it shows you what is in the preview
  • Resources panel: holds all your resources and is where you import assets

Workflow

  • Start with the animated object template
  • Select an object in the Resources panel and move it to the Objects panel
  • Google Blocks + Mixamo: export free animations from Adobe's Mixamo and import your animated character
  • Import the monitor and astronaut files
  • Parent the imported 3D model to the fox as a child, then delete the fox
  • Add a shadow (a sprite)

High-Quality Rendering

Substance Painter is an app for applying materials and painting textures, in 2D or 3D. Any material you bring in can be applied to an object; materials apply uniformly, but there are also smart materials, which apply intelligently based on the geometry (e.g., the rust example).

The Layers tab is like the scene view: the place to drag and drop.

Alphas provide a cutout, and you can apply materials to the cutout.

Upon clicking export, choose the "lensstudio" export preset.
Challenge: Rubber Ducky

Chester Fetch with Klei Entertainment

Games studio since 2005

Why is AR interesting for Klei (and what were the concerns)?

  • AR is about bringing the virtual world out to the player.
  • Shareable
  • Limited bandwidth
  • Seems hard
  • Would require too much time from others at the Studio

Cuphead and Mugman with Studio MDHR

  • Studio MDHR wanted to build a snappable boss battle with Cuphead and Mugman
  • All of the Lens content was built from assets taken directly from the game
  • The Lens chains together five 2D animations

 

Questions


  • Within the Snap app, I noticed you can rent/create a Lens "as a service"; how does this pertain to Lens Studio?
  • Looking forward to a day when targets like people can be used for more interactable and shareable content (like the Mugman examples shown), when will person/object recognition be available to developers and users of Snap?
  • What is the GitHub account for Snap?

Developer Blog Post: ARKit #1

When developing AR applications for Apple phones, there are two cameras we speak about. One is the physical camera on the back of the phone. The other is the virtual camera in your Unity scene, which in turn matches the position and orientation of the real-world camera.

A virtual camera in Unity has a setting called Clear Flags, which determines which parts of the screen are cleared each frame. On your main virtual camera, setting this to "Depth Only" instructs the renderer to clear only the depth buffer rather than painting a skybox or solid color, so the (physical) camera feed can show through as the backdrop behind your virtual objects.
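As a minimal sketch, the same setting can also be applied from a script attached to the main camera (the component name below is illustrative, not part of the ARKit plugin):

```csharp
using UnityEngine;

// Illustrative sketch: configure the main virtual camera so the device camera
// feed (rendered separately as the background) shows through behind virtual
// content. "ARCameraSetup" is a made-up name for this example.
[RequireComponent(typeof(Camera))]
public class ARCameraSetup : MonoBehaviour
{
    void Awake()
    {
        Camera cam = GetComponent<Camera>();

        // Equivalent to choosing "Depth Only" in the Inspector: clear only the
        // depth buffer each frame, leaving the color buffer (the camera feed
        // drawn behind) untouched.
        cam.clearFlags = CameraClearFlags.Depth;
    }
}
```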

More to come in the next post on the differences between hit testing and ray casting in the context of ARKit, plus a broader look at intersection-testing approaches.

Reblog: The Light Field Stereoscope | SIGGRAPH 2015

Inspired by Wheatstone's original stereoscope and augmenting it with modern factored light field synthesis, [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] present a new near-eye display technology that supports focus cues. These cues are critical for mitigating the visual discomfort experienced in commercially available head-mounted displays and for providing comfortable, long-term immersive experiences.

 

ABSTRACT

Over the last few years, virtual reality has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances of high-resolution micro displays, low-latency orientation trackers, and modern GPUs facilitate extremely immersive experiences. To facilitate comfortable long-term experiences and wide-spread user acceptance, however, the vergence-accommodation conflict inherent to all stereoscopic displays will have to be solved. [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] present the first factored near-eye display technology supporting high image resolution as well as focus cues: accommodation and retinal blur. To this end, [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] build on Wheatstone’s original stereoscope but augment it with modern factored light field synthesis via stacked liquid crystal panels. The proposed light field stereoscope is conceptually closely related to emerging factored light field displays, but it has very unique characteristics compared to the television-type displays explored thus far. Foremost, the required field of view is extremely small – just the size of the pupil – which allows for rank-1 factorizations to produce correct or nearly-correct focus cues. [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] analyze distortions of the lenses in the near-eye 4D light fields and correct them using the high-dimensional image formation afforded by our display. [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] demonstrate significant improvements in resolution and retinal blur quality over previously-proposed near-eye displays. Finally, [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] analyze diffraction limits of these types of displays along with fundamental resolution limits.
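For intuition, here is a rough sketch of the rank-1 factorization idea behind such dual-layer displays; the notation is illustrative and not taken from the paper itself:

```latex
% Two stacked LC panels with transmittance patterns f (one panel) and g (the
% other) modulate a backlight multiplicatively, so each ray of the emitted 4D
% light field is approximately the product of the two pixel values it crosses:
\[
  \tilde{L}(x, y, u, v) \;\approx\; f(x, y)\, g(u, v)
\]
% where (x, y) and (u, v) are the ray's intersection points with the two panels.
% Flattening each panel into a vector, the target light field L is approximated
% by a nonnegative rank-1 outer product chosen to minimize reconstruction error:
\[
  \min_{f \ge 0,\; g \ge 0} \; \bigl\lVert L - f\, g^{\mathsf{T}} \bigr\rVert_2^2
\]
% Because the relevant field of view is only about the size of the eye's pupil,
% this rank-1 approximation can already yield correct or nearly correct focus cues.
```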

FILES

  • technical paper (pdf)
  • technical paper supplement (zip)
  • presentation slides (slideshare)