Reblog: Thoughts on SwiftUI from WWDC 19

SwiftUI

So what’s the big deal with SwiftUI? Well, here’s why I think it’s great.

  1. One UI framework for all platforms. It has always baffled me why Apple never made UIKit work on the Mac. If it worked for iOS and tvOS, it could certainly also work on the Mac (which it does now, thanks to Project Catalyst). For me, this means doing double the work on many parts of the UI in Secrets for Mac and iOS. Now and then you would see rumors that would give you hope. “Maybe next year” you’d think… but the years passed and nothing. Looking back, I can’t help but wonder if this was Apple’s plan all along. SwiftUI is certainly a multi-year effort. The underpinnings of the Combine framework are at least 5 years old:

    Joe Groff@jckarter

    Combine goes back before even Swift existed. I’ve been helping the SwiftUI folks for at least three years, and they were probably working on stuff before I knew about it

    David Smith@Catfish_Man

    I was curious what the earliest Combine-related file I have on my computer is, and it turns out it’s August 14th 2013. I filed the radar it references on 10/23/2012.

    Also apparently yet another short-lived project name I forgot about?? pic.twitter.com/FR6NADWrs5


    And although I haven’t played much with it yet, there will certainly be a lot of bugs and shortcomings to iron out over the next few years.

  2. Declarative. To put it succinctly, this means that instead of telling the framework what to do, you tell it what you want. The framework then figures out how to achieve it. You’ve seen this style of coding already with Auto Layout. It offloads much of the complexity to the framework. By introducing this abstraction and letting the framework do the job of composing the UI for us, we get (a short sketch follows at the end of this item):
    • Automatic support for many of the system features: dynamic type, accessibility, dark mode, etc;
    • Adaptive layouts on different platforms (a switch on the iPhone becomes a checkbox on the Mac);
    • Freedom from having to adapt our UI whenever Apple needs to evolve it (what SwiftUI uses to satisfy a Text element may change in the next release).

    I had a professor that used to say:

    All problems in CS can be solved with one more level of indirection.

    It still holds.
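    To make the declarative point concrete, here is a minimal sketch of my own (not from the original post; the view, its property, and the labels are made up for illustration):

        import SwiftUI

        // Describe what the UI is; SwiftUI decides how to build and update it
        // on each platform (a Toggle becomes a switch on iOS, a checkbox on the Mac).
        struct SettingsView: View {
            @State private var isLocked = true   // this view's source of truth

            var body: some View {
                Form {
                    Toggle("Require unlock", isOn: $isLocked)
                    Text(isLocked ? "Locked" : "Unlocked")   // re-rendered whenever isLocked changes
                }
            }
        }

    A description like this also picks up dynamic type, dark mode, and accessibility support without extra code, which is exactly what the list above promises.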

  3. Reactive. I’ve never invested much time in any of the reactive frameworks out there. I definitely appreciated the principles behind them, but I’ve always been very critical of frameworks [1] or technologies [2] that are too invasive. With what I’ve already seen in the sessions and demos, I’m just about ready to forgive Apple for abandoning development of the controversial Cocoa Bindings [3].

    We write so much glue code that I have no problem accepting the learning curve of all the new stuff that is driving this, both in the Swift language and in the new Combine framework.
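    As a rough illustration of the style Combine enables (a minimal sketch of my own, not from the post; the subject name, debounce interval, and sample values are made up):

        import Combine
        import Foundation

        // A stream of search text, debounced and de-duplicated declaratively,
        // replacing hand-written delegate callbacks and timers (the usual glue code).
        let searchText = PassthroughSubject<String, Never>()

        let subscription = searchText
            .debounce(for: .milliseconds(300), scheduler: RunLoop.main)
            .removeDuplicates()
            .sink { query in
                print("Searching for \(query)")   // fires ~300 ms after values stop arriving
            }

        searchText.send("secrets")
        searchText.send("secrets app")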

I’m cautiously excited about SwiftUI and sincerely hope it will live up to expectations.
Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Suffering-oriented programming

Suffering-oriented programming can be summarized like so: don’t build technology unless you feel the pain of not having it. It applies to the big, architectural decisions as well as the smaller everyday programming decisions. Suffering-oriented programming greatly reduces risk by ensuring that you’re always working on something important, and it ensures that you are well-versed in a problem space before attempting a large investment.

[Nathan Marz has] a mantra for suffering-oriented programming: “First make it possible. Then make it beautiful. Then make it fast.”

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Adventure at the 5th Oculus Connect Conference


The following is a write-up from a friend, Kathryn Hicks, on the Danse blog. The link to the original is at the bottom.

Last week I attended the 5th Oculus Connect Conference, held at the San Jose McEnery Convention Center. This two-day conference, held annually in the fall, showcases the new virtual reality technology from Oculus. It was my second time attending, and it felt even better than the last one.

During the keynote address, Zuckerberg announced a wireless headset that doesn’t need a cell phone or an external computer: the Quest, a standalone headset with 6 degrees of freedom and Touch controllers, and a potential game-changer for the VR industry. If you are familiar with the Rift and the Oculus Go, the Quest would be a marriage of the two. The Quest is scheduled to come out this spring at $399, and a lot of the Rift titles will be available on it. While I unfortunately was not able to try it, the feedback I heard from others was positive. The tetherless aspect of the headset creates a more immersive experience and doesn’t feel confining. While the graphics capabilities of the headset are not as high as the Rift’s, they are good enough and don’t hinder the experience. Plus, the optics as well as the sound have improved over the Oculus Go. On the downside, the Quest is reportedly top-heavy and denser than the Go, which I already find more substantial than the lightweight Rift. Since the Quest’s four inside-out cameras face forward, you could potentially lose tracking if you move the controllers behind you. Hopefully, they will make these adjustments before it launches in the spring and add tracking on the strap. I can see much potential for the Quest in eSports, education, business, medicine, engineering, set design; the list goes on. The possibilities are endless, and at that price point it could substantially increase the number of VR users, considering the Quest will cost about the same as most gaming consoles without needing a television or a home setup.

Walking around the conference was lovely; I felt like a kid in a candy store seeing people putting their full bodies into the Quest. The well-orchestrated design layouts and themes of the different experiences were terrific. It was a pleasure hearing eSports commentary and cheers as competitors went head to head playing Echo Arena and Onward. Seeing the VR community connect, share laughs, smile, and have a good time warmed my heart. I enjoyed watching people play the Dead & Buried Quest experience in a large arena and seeing their digital avatars battle each other on screen. I can see more VR arenas being built specifically for the Quest, kind of like skate parks or soccer parks, but with a sports-stadium vibe.

While I was at the conference, I tried a few experiences, like The Void – Star Wars: Secrets of the Empire, a full-sensory VR experience. You are an undercover Rebel fighter disguised as a Stormtrooper, and you get to fully interact with your teammates and feel and smell the environment around you. It was a fantastic experience, and I would encourage others to try it at one of the nine locations.

Another experience I tried was Wolves in the Walls, a VR adaptation of Neil Gaiman’s book created by the company Fable. The audience explores parts of Lucy’s house to try and find the hidden wolves in the walls. It was a more intimate experience, and Lucy’s performance felt pretty lifelike. The environments and character designs were beautifully portrayed. Overall, it was an enjoyable VR experience.

I also played a multiplayer combat experience called Conjure Strike by The Strike Team. It’s an engaging multiplayer experience in which you can play as different rock-like characters with different classes, like an Elementalist, Mage Hunter, Earth Warden, and more. The multiplayer session I played was similar to a capture-the-flag game: one player has to push a box toward the other side while the opposing player tries to stop them. It was a fun experience, similar to Overwatch but in VR. The multiplayer mechanics were excellent, but some of the controls felt foreign to me. Overall, it’s an engaging game that seems like it would be popular among most VR users.

While I didn’t get to play as many demos as I would have liked, I enjoyed the ones I experienced, especially The Void. It was the most immersive experience I tried; the few things I would change are updating the headset and enhancing the outside temperature and wind strength.

I’m looking forward to more development put toward the Quest, and I’m optimistic about the future of VR. As a team member at The Danse, I am excited to work on projects utilizing immersive technology such as virtual and augmented reality, and to work in an industry that is ever changing and improving. It’s nice coming back to the Oculus Connect Conference and seeing the community excited about the future of VR.

AR Industrial Applications: Defense Engineering

What is this? I chatted with Evan, an Operations Modeling and Simulation Engineer at Northrop Grumman, about engineering use cases for the HoloLens.

His opening remarks: It’s often a struggle integrating new technology into large-scale manufacturers due to adherence to strict methods and processes. Finding/molding problems into good use cases for a given new technology can be challenging. It’s much easier to start with the problem and find/mold a good solution than the other way around. The challenge is helping engineers and operations leadership understand what modern solutions exist.

 

——-

Evan’s Take: In the context of engineering, showing the HoloLens’s capabilities in relation to the lifecycle stages of a product (the DOD acquisition lifecycle) might be a high-value strategy.

Technology Maturation and Risk Reduction (TMRR): How do we design a product that fulfills mission requirements? This can take the form of:

  • visualizing the designs and making sure they’re feasible (e.g. are wires getting pinched?), uncovering design flaws that would otherwise show up later as defects during manufacturing, and making sure the design is producible (DFM – Design for Manufacturability).
  • communicating to the customer: at that stage of the lifecycle it’s important to be able to communicate your designs to the customer to demonstrate technical maturity.
    • inspecting the product: a given part (“this part of the product is called XYZ”) can then be exploded.

 

Engineering and Manufacturing Development (EMD): At this stage the customer (NG) cares about “how are we going to build it?”

  • tooling design: visualizing the product sitting in the tools or workstands that will be used in production
  • visualizing the ergonomics people are going to have to deal with, for example whether the clearances are sufficient to actually screw in a screw
  • visualizing the factory flow: the customer (NG’s customer) would also be interested in seeing the proposed factory flow to build confidence. It’s becoming more common to see this as a line item in contracts (a Contract Data Requirements List, or CDRL)

Subsequent steps in Production & Deployment are:

  • Low rate initial production (LRIP)
  • Full rate production (FRP)

 

Who the customer is: mechanics on the factory floor using the HoloLens for work instructions. There has been a lot of interest at Raytheon and NG in using virtual work instructions overlaid onto the hardware (Google Glass, Light Guide Systems, etc.). In a more mature program that’s in production, the mechanic or the electrician on the factory floor would be the end user. Today, they look away from the product to a computer where the work instructions are pulled up. Their instructions might be several feet away from the work, and hopefully they’ve interpreted the instructions well so they don’t cause a defect. Operators work from memory or don’t follow work instructions if it’s too cumbersome to do so. DCMA (the customer’s oversight) issues corrective action requests (CARs) to the contractor when operators don’t appear to be following work instructions (i.e. the page they’re on doesn’t match the step in the process they’re currently working on, or worse, they don’t have the instructions pulled up). Getting too many of these is really bad. So where AR is really useful is in overlaying instructions on the product as it’s built. Care should be given to the manufacturing engineer’s workflow for creating and approving work instructions, work instruction revisions, etc. Long-term, consideration probably needs to be given to integration with the manufacturing execution system (MES) and possibly many other systems (ERP, PLM, etc.).

The HoloLens tech is seemingly a ways away from that: seamlessly identifying the hardware regardless of physical position/orientation, as well as making it easy for manufacturing engineers to author compliant work instructions.

Another consideration, for any of the above use cases in the defense industry, is wireless. Most facilities will not accommodate devices that transmit or receive signals over any form of wireless. For the last use case, tethering a mechanic to a wired AR device is inhibiting.

 

Acceleration and Motion Sickness in the Context of Virtual Reality (VR)

As I traveled around the world with the HTC Vive and Oculus Rift, first-timers would universally be fascinated, but a bit woozy, after trying VR. What contributes to this? One possibility is the vergence-accommodation issue with current displays. However, the subject of this post is locomotion and the anatomical reasoning behind the discomfort arising from poorly designed VR.

With VR you typically occupy a larger virtual space than that of your immediate physical surroundings.

So, to help you traverse it, locomotion was designed: in other words, a way of sending you from point A to point B in the virtual space. Here’s what this looks like:

[GIF: teleportation locomotion in VR]

Caption: The user switches his virtual location by pointing a laser from the tip of his controller to move around.

Movement with changing velocity through a virtual environment can contribute to this overall feeling of being in a daze.

That’s why most creators smooth transitions and avoid this kind of motion (e.g. using blink teleports, or the constant-velocity movement from Land’s End). Notice how the movement seems steady and controlled below?

[GIF: constant-velocity movement in Land’s End]

Acceleration and Velocity

“‘Acceleration’ is, put simply, any kind of change of speed measured over time, generally [written] as m·s^-2 (meters per second, per second) if it’s linear, or as rad·s^-2 (the same but with an angle) if it’s around an axis. Any type of continuous change in the speed of an object will induce a non-zero acceleration.”
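Writing that definition as formulas (my own restatement, not the quoted author’s), with v the linear velocity, ω the angular velocity, and t time:

    a = \frac{\Delta v}{\Delta t}\ \ [\mathrm{m\,s^{-2}}], \qquad \alpha = \frac{\Delta \omega}{\Delta t}\ \ [\mathrm{rad\,s^{-2}}]

Any non-zero Δv or Δω over an interval Δt is an acceleration your inner ear expects to feel.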

The Human Vestibular System

When you change speed, your vestibular system should register an acceleration. The vestibular system is part of your inner ear. It’s basically the thing that tells your brain if your head is up or down, and permit[s] you to [stand] and walk without falling all the time!

Inner ear diagram showing the semicircular canals, where the acceleration forces are sensed.

Fluid moving in your semicircular canals is measured, and the information is communicated to your brain by the cranial nerves. You can think of this as [similar to how] an accelerometer and a gyroscope work.

“[This] acceleration not only includes linear acceleration (from translation in 3D space), but also rotational acceleration, which induces angular acceleration, and empirically, it seems to be the worse kind in the matter of VR sickness…”

Now that you have this grounding in our anatomical system for perceiving acceleration, the upshot is that viewers in VR will often experience movement visually but not via these semicircular canals. It’s this incongruence that drives VR sickness with current systems.

Some keywords to explore more if you’re interested in the papers available are: Vection, Galvanic Vestibular Stimulation (GVS), and Self-motion.

Read more on the ways developers reduce discomfort from the author’s website.

Design Iteration for Oculus Go

 


With 6-degree-of-freedom headsets like the Oculus Rift and HTC Vive, when working in Unreal or Unity3D, it takes only a push of the Play button to test your application in the headset.

There are advantages to seeing your scene from within your headset, such as checking how your first-person perspective is developing, checking performance metrics in the HUD, checking in on rendering weirdness, or correcting relative spacing. However, the constraint of having to deploy by building and running to the Oculus Go each time you need to check something can lessen your appetite for quick checks like this. Besides, sometimes it’s not even necessary.

That’s why a quick way of iterating on your scene using traditional desktop inputs is nice. Typically this means duplicating a scene that’s under construction into two versions: one called “site tour”, for example, and another called “site tour desktop”. The naming convention splits up the functionality so that when you need to test something using mouse and keyboard, you can quickly hop into the “site tour desktop” scene. Some example mappings include UI navigation with a pointer and locomotion. UI navigation can be done using the left mouse button and cursor instead of shipping to the Go and using the hand controller. Locomotion can be done using the ‘w’, ‘a’, ‘s’, and ‘d’ keys, as is common in most FPS games, to move around the space, and clicking and dragging the mouse to move your head instead of having to teleport.

Diving deeper into the locomotion example

By throwing on headphones and using a Fly script applied to the Main Camera, you can test quickly using WASD within the Unity editor and check the relevant aspects of your lighting, audio, animations, etc. without needing to wear the Go. A sample:

using UnityEngine;

// A simple fly-camera for testing scenes with mouse and keyboard in the
// Unity editor instead of deploying to the Go. Attach it to the Main Camera.
// The default speeds below are illustrative; tune them to your scene.
public class FlyCamera : MonoBehaviour
{
    public float lookSpeed = 3f;    // mouse-look sensitivity
    public float moveSpeed = 0.1f;  // WASD movement speed

    private float yaw;
    private float pitch;

    void Update()
    {
        // Click and drag to look around (stands in for head movement).
        if (Input.GetMouseButton(0))
        {
            yaw += Input.GetAxis("Mouse X") * lookSpeed;
            pitch += Input.GetAxis("Mouse Y") * lookSpeed;
            pitch = Mathf.Clamp(pitch, -90.0f, 90.0f);
        }

        transform.localRotation = Quaternion.AngleAxis(yaw, Vector3.up);
        transform.localRotation *= Quaternion.AngleAxis(pitch, Vector3.left);

        // WASD / arrow keys to move, scroll wheel to adjust height.
        transform.position += transform.forward * moveSpeed * Input.GetAxis("Vertical");
        transform.position += transform.right * moveSpeed * Input.GetAxis("Horizontal");
        transform.position += transform.up * 3 * moveSpeed * Input.GetAxis("Mouse ScrollWheel");
    }
}

For the purposes of testing out spatial audio, I’ve noticed it’s great to mimic head movement by panning with the mouse’s x-axis.

 

Turning to the Oculus Rift

For what it’s worth, in a post that’s supposed to be about the Oculus Go design iteration loop: while currently working on an Oculus Go app, a friend and I find swapping the project over to the Oculus Rift to be really helpful.

What this does is let you take advantage of the Oculus Rift during Play Mode (in Unity), which gives you a much faster iteration time. It’s perfect for quick fixes to code and for checking the cohesion of various parts (for example, teleportation and UI).

Reblog: The Mind-Expanding Ideas of Andy Clark

The idea of the extended mind, or extended cognition, is not part of common parlance; however, many of us have espoused this idea naturally since our youth. It’s the concept that we use external information, physical or digital, to extend our knowledge and thinking processes.

Today’s “born-digital” kids––the first generation to grow up with the Internet, born 1990 and later––store their thoughts, education, and self-dialogue in external notes saved to the cloud. [1]

“… [Andy Clark describes us as] cyborgs, in the most natural way. Without the stimulus of the world, an infant could not learn to hear or see, and a brain develops and rewires itself in response to its environment throughout its life.”

Read the full version from the author’s website.

[1] McGonigal, Reality Is Broken, p. 127.