Design Iteration for Oculus Go

 


With 6-degree-of-freedom headsets like the Oculus Rift and HTC Vive, testing your application in the headset takes only a push of the Play button when working in Unreal or Unity3D.

There are advantages to seeing your scene from within the headset: watching how your first-person perspective is developing, checking performance metrics in the HUD, catching rendering oddities, or correcting relative spacing. However, having to build and deploy to the Oculus Go every time you need to check something can lessen your appetite for quick checks like these. Besides, sometimes it isn't even necessary.

That's why a quick way of iterating on your scene using traditional desktop inputs is nice. Typically this means duplicating a scene under construction into two versions: one called, say, "site tour" and another called "site tour desktop". The naming convention splits up functionality so that when you need to test something with mouse and keyboard you can quickly hop into the "site tour desktop" scene. Example input mappings include UI navigation with a pointer and locomotion. UI navigation can be done with the left mouse button and cursor instead of deploying to the Go and using the hand controller. Locomotion can be done with the 'w', 'a', 's', and 'd' keys, as in most FPS games, to move around the space, plus click-and-drag with the mouse to move your head instead of teleporting.

Diving deeper into the locomotion example

By throwing on headphones and using a fly script applied to the Main Camera, you can test quickly with WASD inside the Unity editor, checking relevant aspects of your lighting, audio, animations, etc. without needing to wear the Go. A sample:

using UnityEngine;

// Simple fly-camera: hold the left mouse button to look around,
// WASD to move, scroll wheel to change altitude.
// (The class wrapper and field defaults were added here for completeness.)
public class FlyCamera : MonoBehaviour
{
    public float lookSpeed = 3f;   // mouse-look sensitivity
    public float moveSpeed = 0.1f; // WASD movement speed

    private float yaw;
    private float pitch;

    void Update()
    {
        if (Input.GetMouseButton(0))
        {
            yaw += Input.GetAxis("Mouse X") * lookSpeed;
            pitch += Input.GetAxis("Mouse Y") * lookSpeed;
            pitch = Mathf.Clamp(pitch, -90.0f, 90.0f);
        }

        transform.localRotation = Quaternion.AngleAxis(yaw, Vector3.up);
        transform.localRotation *= Quaternion.AngleAxis(pitch, Vector3.left);

        transform.position += transform.forward * moveSpeed * Input.GetAxis("Vertical");
        transform.position += transform.right * moveSpeed * Input.GetAxis("Horizontal");
        transform.position += transform.up * 3 * moveSpeed * Input.GetAxis("Mouse ScrollWheel");
    }
}

For testing spatial audio, I've found this great: you can mimic head movement by panning with the mouse's x-axis.

 

Turning to the Oculus Rift

For what it's worth in a post that's supposed to be about the Oculus Go design iteration loop: while working on an Oculus Go app, a friend and I have found swapping the project over to the Oculus Rift really helpful.

This lets you take advantage of the Oculus Rift during Play Mode in Unity, which gives much faster iteration times. It's perfect for quick fixes to code and for checking the cohesion of various parts (for example, teleportation and UI).

AR Industrial Applications: Defense Engineering

What is this? I chatted with Evan, Operations Modeling and Simulation Engineer at Northrop Grumman, about engineering use cases for the HoloLens.

His opening remarks: It’s often a struggle integrating new technology into large-scale manufacturers due to adherence to strict methods and processes. Finding/molding problems into good use cases for a given new technology can be challenging. It’s much easier to start with the problem and find/mold a good solution than the other way around. The challenge is helping engineers and operations leadership understand what modern solutions exist.

 

——-

Evan's take: in an engineering context, showing the HoloLens's capabilities in relation to the stages of a product's DoD acquisition lifecycle might be a high-value strategy.

Technology Maturation and Risk Reduction (TMRR): How do we design a product that fulfills mission requirements? This can take the form of:

  • visualizing the designs and making sure they're feasible (e.g. are wires getting pinched?), uncovering design flaws that would otherwise surface later as manufacturing defects, and making sure the design is producible (DFM: Design for Manufacturability)
  • communicating with the customer: at this stage of the lifecycle it's important to be able to communicate your designs to the customer to demonstrate technical maturity
    • inspecting the product: a part can be identified ("this part of the product is called XYZ") and then exploded into a detail view

 

Engineering and Manufacturing Development (EMD): At this stage the customer (NG) cares about "how are we going to build it?"

  • tooling design: visualizing the product sitting in the tools or workstands that will be used in production
  • ergonomics: visualizing what people will have to deal with, for example whether clearances are sufficient to actually screw in a screw
  • factory flow: the customer (NG's customer) would also be interested in seeing the proposed factory flow to build confidence. It's becoming more common to see this as a line item in contracts (on the Contract Data Requirements List, or CDRL)

Subsequent steps in Production & Deployment are:

  • Low rate initial production (LRIP)
  • Full rate production (FRP)

 

Who the customer is: Mechanics on the factory floor using the HoloLens for work instructions. There was a lot of interest at Raytheon and NG in virtual work instructions overlaid onto the hardware (Google Glass, Light Guide Systems, etc.). In a mature program that's in production, the mechanic or electrician on the factory floor would be the end user.

Today, they look away from the product to a computer where the work instructions are pulled up. Their instructions might be several feet away from the work; hopefully they've interpreted the instructions well so they don't cause a defect. Operators work from memory, or don't follow the work instructions at all if doing so is too cumbersome. DCMA (the customer's oversight) issues corrective action requests (CARs) to the contractor when operators don't appear to be following work instructions (i.e. the page they're on doesn't match the step they're currently working on, or worse, they don't have the instructions pulled up at all). Getting too many of these is really bad.

So where AR is really useful is in overlaying instructions on the product as it's built. Care should be given to the manufacturing engineer's workflow for creating and approving work instructions, work instruction revisions, etc. Long-term, consideration probably needs to be given to integration with the manufacturing execution system (MES) and possibly many other systems (ERP, PLM, etc.).

The HoloLens tech is seemingly a ways away from that: seamlessly identifying the hardware regardless of physical position/orientation, as well as making it easy for manufacturing engineers to author compliant work instructions.

Another consideration, for any of the above use cases in the defense industry, is wireless connectivity. Most facilities will not accommodate devices that transmit or receive any kind of wireless signal. And for the last use case, tethering a mechanic to a wired AR device is inhibiting.

 

Acceleration and Motion Sickness in the Context of Virtual Reality (VR)

As I traveled around the world with the HTC Vive and Oculus Rift, first-timers would universally be fascinated, but a bit woozy, after trying VR. What contributes to this? One possibility is the vergence-accommodation conflict with current displays. However, the subject of this post is locomotion and the anatomical reasons behind the discomfort arising from poorly designed VR.

With VR you typically occupy a larger virtual space than that of your immediate physical surroundings.

So, to help you traverse it, locomotion was designed: a way of sending you from point A to point B in the virtual space. Here's what this looks like:

[GIF: teleportation locomotion in VR]

Caption: The player switches virtual location by pointing a laser from the tip of the controller at a destination.

Movement with changing velocity through a virtual environment can contribute to this overall feeling of being in a daze.

That's why most creators smooth transitions or avoid this kind of motion altogether (e.g. blink teleport, or the constant-velocity movement in Land's End). Notice how the movement seems steady and controlled below?

[GIF: steady, constant-velocity locomotion in Land's End]

Acceleration and Velocity

'Acceleration' is, put simply, any kind of change of velocity measured over time, generally written as m·s^-2 (meters per second, per second) if it's linear, or rad·s^-2 (the same but with an angle) if it's around an axis. Any continuous change in the speed of an object induces a non-zero acceleration.

The Human Vestibular System

When you change speed, your vestibular system should register an acceleration. The vestibular system is part of your inner ear. It’s basically the thing that tells your brain if your head is up or down, and permit[s] you to [stand] and walk without falling all the time!

[Diagram: the inner ear, showing the semicircular canals where acceleration forces are sensed]

Fluid moving in your semicircular canals is measured, and the information is communicated to your brain by the cranial nerves. You can think of this as [similar to how] an accelerometer and a gyroscope work.

[This] acceleration not only includes linear acceleration (from translation in 3D space) but also rotational acceleration, which induces angular acceleration; empirically, that seems to be the worst kind when it comes to VR sickness.

Now that you have this grounding in our anatomical system for perceiving acceleration, the upshot is that viewers in VR often experience movement visually but not via the semicircular canals. It's this incongruence that drives VR sickness with current systems.

Some keywords to explore more if you’re interested in the papers available are: Vection, Galvanic Vestibular Stimulation (GVS), and Self-motion.

via Read more on the ways developers reduce discomfort from the author’s website.

Reblog: The Mind-Expanding Ideas of Andy Clark

The idea of the extended mind, or extended cognition, is not part of common parlance; however, many of us have espoused it naturally since our youth. It's the concept that we use external (physical or digital) information to extend our knowledge and thinking processes.

Today’s “born-digital” kids––the first generation to grow up with the Internet, born 1990 and later––store their thoughts, education, and self-dialogue in external notes saved to the cloud. [1]

“… [Andy Clark describes us as] cyborgs, in the most natural way. Without the stimulus of the world, an infant could not learn to hear or see, and a brain develops and rewires itself in response to its environment throughout its life.”

via Read the full version from the author’s website.

[1] McGonigal; “Reality is Broken” pg. 127

Rapid Worldbuilding with ProBuilder

ProBuilder is totally free, available through the Package Manager.
If you're using Unity 2017 or 5.6 you'll get critical bug fixes; from 2018 onwards there's more support. Using 2018.1.0b3 I dealt with a fairly severe crash bug, so update to b12 IMO.
Polybrush and ProGrids have to be downloaded individually from Unity.
To replace ProBuilder objects with polished geo, you can play around with the Unity FBX exporter.

Real Quick Prototyping Demo

  • The main ProBuilder window (Tools > ProBuilder > ProBuilder Window) holds the tools; ProBuilder is designed so that you can ignore most of it when you're new to the tool
  • Face, Object, etc. modes let you select and edit only that element type
  • A good way to learn the tool is to go through the main probuilder window and check the shortcuts
  • It really helps to keep things as simple as possible from the get-go; don't add tons of polys
  • Shape selector will help you quickly make stuff
  • Connect edges and insert edge loop
  • Holding shift to grab multiple
  • ‘R’ will give you scale mode
  • Extrude is a fantastic way to add geometry
  • Grid Size: keep it at 1 for 1 meter; this is important for avoiding mistakes when creating geo and for knowing your angles
  • Use the ortho top down view to see if your geo fits your grid
  • Detach Face with the default setting splits the selected geo while keeping it part of your object
  • Detach Face with a different setting creates a new GameObject
  • The pivot point often needs to be rejigged (solutions: an object action, or set it to a specific element using "Center Pivot")
  • Center the pivot and put it on the grid using ProGrids
  • Settings changes become the default
  • Use the Poly Shape tool to spin up a room + extrude quickly
  • Merge objects
  • think in terms of Quads
  • Try selecting two vertices and connecting them (Alt + E)
  • Select Hidden as a toggle is a great option: a click lands on whatever is drawn closest to the camera, so hidden geometry can otherwise be missed or mis-selected
  • Crafting a doorway can be done with extrude and grid-size changes; toggle the views (X, Y, and Z) to help with that
  • Hold V to snap geo to vertices; this will save you time later on
  • Alt+C will collapse verts (as in the ramp example where the speaker started with a cube)
  • Weld vs. Collapse: Weld merges verts that fall within a specified distance (great for joining two hallways), while Collapse pushes all selected verts together into one
  • Grow Selection and Smoothing Groups

Polybrush Stuff

  • Add more detail with loops or subdivide (smart connect)
  • Polybrush will let you sculpt, smooth, texture blend, scatter objects, etc.
  • Modes like smoothing
  • TODO: explore scattering prefabs
  • N-gons are bad because everything is made up of tris

Texturing stuff

Open up the UV Editor
  • By default, everything is set to Auto, which works with the in-scene handle toggles
  • When you're prototyping, this lets you skip the fancier toolbar tools

Questions

  • Why is ProGrids helpful? Short answer: if you're not super familiar with 3D modeling and creation software (e.g. Maya), you can create simple geo without leaving the Unity editor.
  • Why would you be obsessive about making sure your geo fits your 1 meter grid size? Short answer: This helps you avoid errors with geo creation such as horrid angles and hidden faces.
  • Can you talk a little bit about automation with Probuilder?

Reblog: 3 ways to draw 3D lines in Unity3D

Just as I was thinking about an interesting demo to play with drawing functions in Unity3D, Mrdoob published his Harmony drawing tool made with HTML5/Canvas. It looks really cool, so I thought: how about doing this in 3D? I only had to figure out how to draw lines.

I did some research and below I present 3 different solutions. You can grab the source of the examples discussed below here.

Drawing lines with Line Renderer [demo]

When it comes to lines, the first thing you'll bump into in the Unity3D API is the Line Renderer component. As the name suggests, it is used to draw lines, so it seems the right tool for the job. Lines in this case are defined by 2 or more points (segments), a material, and a width.

It has an important limitation: the line must be continuous. So if you need two lines, you need two renderers. The other problem is that the Line Renderer acts very strangely when new points are added dynamically. The width of the line does not seem to render correctly. It’s either buggy or just wasn’t designed for such use. Because of these limitations I had to create a separate Line Renderer for each tiny bit of line I’m drawing.

It was easy to implement, but not very fast, since I end up spawning lots of GameObjects, each with a LineRenderer attached. It seems to be the only option if you don't have Unity3D Pro, though.
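A minimal sketch of that per-segment approach might look like this. The AddSegment helper and the serialized material/width fields are my own naming, and the calls shown use the current LineRenderer API (startWidth/positionCount) rather than the older SetWidth/SetVertexCount from when the article was written:

```csharp
using UnityEngine;

// Sketch: one GameObject with its own LineRenderer per segment, as
// described above. Names here are illustrative, not the article's code.
public class SegmentDrawer : MonoBehaviour
{
    public Material lineMaterial;
    public float lineWidth = 0.05f;

    // Spawn a dedicated LineRenderer for a single segment from a to b.
    public LineRenderer AddSegment(Vector3 a, Vector3 b)
    {
        var go = new GameObject("segment");
        var lr = go.AddComponent<LineRenderer>();
        lr.material = lineMaterial;
        lr.startWidth = lineWidth;
        lr.endWidth = lineWidth;
        lr.positionCount = 2;
        lr.SetPosition(0, a);
        lr.SetPosition(1, b);
        return lr;
    }
}
```

Every mouse move during a stroke would call AddSegment with the previous and current points, which is exactly why this approach ends up spawning so many objects.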

Drawing lines as a mesh using Graphics [demo]

The Graphics class allows you to draw a mesh directly, without the overhead of creating game objects and components to hold it. It runs much faster than Line Renderer, but you need to create the lines yourself. This is a bit more difficult, but it also gives you total control of the lines: their color, material, width, and orientation.

Since meshes are composed of surfaces rather than lines or points, in 3D space a line is best rendered as a very thin quad. A quad is described with 4 vertices, and usually you’ll only have the start and end points and a width. Based on this data you can compute a line like this:

Vector3 normal = Vector3.Cross(start, end);
Vector3 side = Vector3.Cross(normal, end-start);
side.Normalize();
Vector3 a = start + side * (lineWidth / 2);
Vector3 b = start + side * (lineWidth / -2);
Vector3 c = end + side * (lineWidth / 2);
Vector3 d = end + side * (lineWidth / -2);

First, you get the normal of the plane on which both the start and end vectors lie. This is the plane on which the line-quad will be located. The cross product of that normal and the difference between the end and start vectors gives you the side vector (the "thin" side of the quad). You normalize it to make it a unit vector. Finally, calculate all 4 corners of the rectangle by adding the side vector, multiplied by half the width, to both the start and end points in both directions. In the source code all this happens in the MakeQuad and AddLine methods, so take a look in there.
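Wrapped into a mesh, the computation above might look like the following sketch. The article's actual MakeQuad/AddLine differ in their details, and the triangle winding here is an assumption (flip the indices if the quad comes out invisible from your viewing side):

```csharp
using UnityEngine;

// Sketch: build a one-quad Mesh for a line from start to end and draw it
// via Graphics.DrawMesh, avoiding per-line GameObjects.
public static class LineMesh
{
    public static Mesh MakeQuad(Vector3 start, Vector3 end, float lineWidth)
    {
        Vector3 normal = Vector3.Cross(start, end);
        Vector3 side = Vector3.Cross(normal, end - start);
        side.Normalize();

        var mesh = new Mesh();
        mesh.vertices = new[]
        {
            start + side * (lineWidth / 2),   // a
            start + side * (lineWidth / -2),  // b
            end + side * (lineWidth / 2),     // c
            end + side * (lineWidth / -2),    // d
        };
        mesh.triangles = new[] { 0, 1, 2, 2, 1, 3 };
        mesh.RecalculateNormals();
        return mesh;
    }
}

// Drawing it each frame, e.g. from Update():
//   Graphics.DrawMesh(quad, Matrix4x4.identity, lineMaterial, 0);
```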

It wasn't easy to implement, but once in place it runs pretty fast.

Direct drawing with GL [demo]

No fast is fast enough! Instead of leaving the topic and living happily with the Graphics solution, I kept searching for something even better. And I found the GL class. GL is used to "issue rendering commands similar to OpenGL's immediate mode". That sounds fast, doesn't it? It is!

Being much easier to implement than the Graphics solution, it is a clear winner for me. The only drawback is that you don't have much control over the appearance of the lines: you can't set a width, and perspective does not apply (i.e. lines far from the camera look exactly the same as those close to it).
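For illustration, a minimal GL sketch along these lines. This is my own sketch, not the article's code; it assumes the script sits on a camera in the built-in render pipeline (so OnPostRender fires) and that the material uses a shader suitable for GL drawing:

```csharp
using UnityEngine;

// Sketch: immediate-mode line drawing with the GL class.
// Attach to the camera; OnPostRender runs after the camera renders.
public class GLLines : MonoBehaviour
{
    public Material lineMaterial; // must use a GL-friendly shader

    void OnPostRender()
    {
        lineMaterial.SetPass(0);
        GL.Begin(GL.LINES);
        GL.Color(Color.white);
        // Every pair of GL.Vertex calls is one segment; note there is
        // no width control here.
        GL.Vertex(new Vector3(0f, 0f, 0f));
        GL.Vertex(new Vector3(1f, 1f, 0f));
        GL.End();
    }
}
```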

Conclusion

For massive & dynamic line drawing, LineRenderer is not the best solution, but it is the only one available in the Unity free version. It can surely be useful for drawing limited amounts of static lines, and that is probably what it was made for. If you do have Unity3D Pro, the Graphics solution is reasonable and very flexible, but if it's performance you're after, choose GL.

via Did you enjoy this article? Then read the full version from the author’s website.

Games as Medicine | FDA Clearance Methods

 


Noah Falstein, @nfalstein
President, The Inspiracy
Neurogaming Consultant

Technically, software and games are cleared, not approved, by the FDA.

By background, Noah:

  • Has attended 31 GDCs
  • Been working in games since 1980 (started in entertainment and arcade games, including at LucasArts Entertainment)
  • Gradually shifted over and consulted for 17 years on a wide variety of games
  • Started getting interested in medical games in 1991 (e.g. East3)
  • Went to Google, and left due to the platform-level perspective one had to have there
  • He's a game designer, not a doctor, but voraciously learns about science and medical topics

Table of Contents:

  • Context of games for health
  • New factor of FDA clearance
  • Deeper dive
  • Advantages and disadvantages of clearance

Why are games and health an interesting thing?

Three reasons why games for health are growing quickly and are poised to be very important:

  • It's about helping people (e.g. Dr. Sam Rodriguez's work; Google "Rodriguez pain VR")
  • It's challenging, exciting, and more diverse than standard games (games need to be fun, but if they're not having the desired effect, for example restoring motion after a stroke, you encounter an interesting challenge). The people in the medical field also tend to be more diverse than those in the gaming space.
  • It's a huge market: FDA clearance = a big market

So what’s the catch?

Mis-steps along the way

  • Brain Training (e.g. the Nintendo Game Boy had popular Japanese games claiming to train the brain)
  • Wii Fit (+U) (i.e. the balance board)
  • The Lumosity fine (claims made that were unsubstantiated by research)

Upshot: a lack of research and good studies underpinning the claims

Some bright spots

  • Re-Mission from HopeLab (they targeted adherence, using the consequences of not having enough chemotherapy in the body)

FDA clearance is a gold standard

  • Because it provides a stamp of quality and trustworthiness
  • The burden is on the people who make products to go through a regimen of science-driven tests
  • Noah strongly recommends game devs link up with a university
  • The FDA is working on SaMD (Software as a Medical Device)
  • Biggest single world market drives others
  • Necessary for a prescription and helps with insurance reimbursement
  • but it’s very expensive and time-consuming


FDA definition of a serious disease
[missing]

MindMaze Pro

  • FDA clearance May 2017
  • Stroke Rehabilitation
  • Early in-hospital acute care while plasticity is high

Pear Therapeutics

  • Positions its product as a “prescription digital therapeutic”


Akili Interactive Labs

  • Treats pediatric ADHD
  • Late-stage trial results (Dec. 2017) were very positive, with side effects of headache and frustration, which is much better than alternatives like Ritalin
  • Seeking De Novo clearance
  • Adam Gazzaley: the work began as aging-adult research with NeuroRacer, a multi-year study published in Nature

The Future – Good, Bad, Ugly, Sublime

  • Each successful FDA clearance helps
  • But these games still require big $ and years of development
  • You have to create a company, rigorously study the game, stall production (because changing the game would invalidate study results), then release it
  • Pharma is a powerful but daunting partner

Questions

  • Can FDA certification for games then reveal that some games are essentially street drugs?