Reblog: Thoughts on SwiftUI from WWDC 19

SwiftUI

So what’s the big deal with SwiftUI? Well here’s why I think it’s great.

  1. One UI framework for all platforms. It has always baffled me why Apple never made UIKit work on the Mac. If it worked for iOS and tvOS, it could certainly also work on the Mac (which it does now, thanks to Project Catalyst). For me, this has meant double the work on many parts of the UI for Secrets on Mac and iOS. Now and then, you would see rumors that would give you hope. “Maybe next year,” you’d think… but the years passed and nothing. Looking back, I can’t help but wonder if this was Apple’s plan all along. SwiftUI is certainly a multi-year effort. The underpinnings of the combined framework are at least 5 years old:

    Joe Groff@jckarter

    Combine goes back before even Swift existed. I’ve been helping the SwiftUI folks for at least three years, and they were probably working on stuff before I knew about it

    David Smith@Catfish_Man

    I was curious what the earliest Combine-related file I have on my computer is, and it turns out it’s August 14th 2013. I filed the radar it references on 10/23/2012.

    Also apparently yet another short-lived project name I forgot about?? pic.twitter.com/FR6NADWrs5


    And although I haven’t played much with it yet, there will certainly be a lot of bugs and shortcomings to iron out over the next few years.

  2. Declarative. To put it succinctly, this means that instead of telling the framework what to do, you tell it what you want, and the framework then figures out how to achieve it. You’ve seen this style of coding already with Auto Layout. It offloads much of the complexity to the framework. By introducing this abstraction and letting the framework do the job of composing the UI for you, we get:
    • Automatic support for many of the system features: dynamic type, accessibility, dark mode, etc;
    • Adaptive layouts on different platforms (a switch on the iPhone becomes a checkbox on the Mac);
    • Freedom from having to adapt our UI whenever Apple needs to evolve it (what SwiftUI uses to satisfy a Text element may change in the next release).

    I had a professor who used to say:

    All problems in CS can be solved with one more level of indirection.

    It still holds.

  3. Reactive. I’ve never invested much time in any of the reactive frameworks out there. I definitely appreciated the principles behind them, but I’ve always been very critical of frameworks1 or technologies2 that are too invasive. With what I’ve already seen in the sessions and demos, I’m just about ready to forgive Apple for abandoning development of the controversial Cocoa Bindings3.

    We write so much glue code that I’ve got no problems accepting the learning curve of all the new stuff that is driving this both in the Swift language and the new Combine framework.

I’m cautiously excited about SwiftUI and sincerely hope it will live up to expectations.
Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Suffering-oriented programming

Suffering-oriented programming can be summarized like so: don’t build technology unless you feel the pain of not having it. It applies to the big, architectural decisions as well as the smaller everyday programming decisions. Suffering-oriented programming greatly reduces risk by ensuring that you’re always working on something important, and it ensures that you are well-versed in a problem space before attempting a large investment.

[Nathan Marz has] a mantra for suffering-oriented programming: “First make it possible. Then make it beautiful. Then make it fast.”

Did you enjoy this article? Then read the full version from the author’s website.

Acceleration and Motion Sickness in the Context of Virtual Reality (VR)

As I traveled around the world with the HTC Vive and Oculus Rift, first-timers would universally be fascinated, but a bit woozy, after trying VR. What contributes to this? One possibility is the vergence-accommodation issue with current displays. However, the subject of this post is locomotion and the anatomical reasoning behind the discomfort arising from poorly designed VR.

With VR you typically occupy a larger virtual space than that of your immediate physical surroundings.

So, to help you traverse it, locomotion was designed: in other words, a way of sending you from point A to point B in the virtual space. Here’s what this looks like:


Caption: This guy is switching his virtual location by pointing a laser from the tip of his controller to move around.

Movement with changing velocity through a virtual environment can contribute to this overall feeling of being in a daze.

That’s why most creators smooth transitions and avoid this kind of motion, using techniques like the blink teleport or the constant-velocity movement in Land’s End. Notice how the movement seems steady and controlled below?

[Animation: smooth, constant-velocity movement in Land’s End]

Acceleration and Velocity

‘Acceleration’ is, put simply, any kind of change of speed measured over time, generally [written] as m/s² (meters per second, per second) if it’s linear, or rad/s² (the same, but with an angle) if it’s around an axis. Any type of continuous change in the speed of an object will induce a non-zero acceleration.
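To make those units concrete, here is a minimal Unity-style sketch (my own component, not something from the quoted article) that estimates the linear acceleration of a tracked object, such as the VR camera, by finite differences: velocity is the change in position per second, and acceleration is the change in velocity per second.

using UnityEngine;

// Hypothetical helper: estimate the linear acceleration of whatever this is
// attached to (e.g. the VR camera) using finite differences between frames.
public class AccelerationEstimator : MonoBehaviour
{
    Vector3 previousPosition;
    Vector3 previousVelocity;

    void Start()
    {
        previousPosition = transform.position;
    }

    void Update()
    {
        float dt = Time.deltaTime;
        if (dt <= 0f) return;

        // Velocity: change in position over time, in m/s.
        Vector3 velocity = (transform.position - previousPosition) / dt;
        // Acceleration: change in velocity over time, in m/s^2.
        Vector3 acceleration = (velocity - previousVelocity) / dt;

        Debug.Log("Linear acceleration: " + acceleration.magnitude.ToString("F2") + " m/s^2");

        previousPosition = transform.position;
        previousVelocity = velocity;
    }
}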

The Human Vestibular System

When you change speed, your vestibular system should register an acceleration. The vestibular system is part of your inner ear. It’s basically the thing that tells your brain if your head is up or down, and permit[s] you to [stand] and walk without falling all the time!

[Diagram: the inner ear, showing the semicircular canals where acceleration forces are sensed.]

Fluid moving in your semicircular canals is measured and the information is communicated to your brain by the cranial nerves. You can think of this as [similar to how] an accelerometer and a gyroscope work.

[This] acceleration not only includes linear acceleration (from translation in 3D space) but also rotation, which induces angular acceleration, and empirically that seems to be the worse kind when it comes to VR sickness…

Now that you have this grounding in our anatomical system for perceiving acceleration, the upshot is that viewers in VR will often experience movement visually but not via these semicircular canals. It’s this incongruence that drives VR sickness with current systems.

Some keywords to explore more if you’re interested in the papers available are: Vection, Galvanic Vestibular Stimulation (GVS), and Self-motion.

Read more on the ways developers reduce discomfort from the author’s website.

Reblog: 3 ways to draw 3D lines in Unity3D

Just as I was thinking about an interesting demo to play with drawing functions in Unity3D, Mrdoob published his Harmony drawing tool made with HTML5/Canvas. It looks really cool, so I thought: how about doing this in 3D? I only had to figure out how to draw lines.

I did some research and below I present 3 different solutions. You can grab the source of the examples discussed below here.

Drawing lines with Line Renderer [demo]

When it comes to lines, the first thing you’ll bump into in the Unity3D API is the Line Renderer component. As the name suggests, it is used to draw lines, so it seems the right tool for the job. Lines in this case are defined by 2 or more points (segments), a material and a width.

It has an important limitation: the line must be continuous. So if you need two lines, you need two renderers. The other problem is that the Line Renderer acts very strangely when new points are added dynamically. The width of the line does not seem to render correctly. It’s either buggy or just wasn’t designed for such use. Because of these limitations I had to create a separate Line Renderer for each tiny bit of line I’m drawing.

It was easy to implement, but not very fast since I end up spawning lots of GameObjects, each with a LineRenderer attached. It seems to be the only option if you don’t have Unity3D Pro, though.
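For illustration, here is a minimal sketch of that per-segment approach, assuming a recent LineRenderer API (older Unity versions used SetVertexCount and SetWidth instead); the component name, width and material field are mine, not from the original demo:

using UnityEngine;

// Every little bit of line gets its own GameObject with a LineRenderer attached.
public class SegmentDrawer : MonoBehaviour
{
    public Material lineMaterial;   // any line material assigned in the Inspector
    public float lineWidth = 0.02f;

    public void AddSegment(Vector3 from, Vector3 to)
    {
        var go = new GameObject("Segment");
        var lr = go.AddComponent<LineRenderer>();
        lr.material = lineMaterial;
        lr.startWidth = lineWidth;
        lr.endWidth = lineWidth;
        lr.positionCount = 2;        // one segment = two points
        lr.SetPosition(0, from);
        lr.SetPosition(1, to);
    }
}

Spawning one object per segment is exactly why this approach gets slow as the number of segments grows.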

Drawing lines as a mesh using Graphics [demo]

The Graphics class allows you to draw a mesh directly without the overhead of creating game objects and components to hold it. It runs much faster than the Line Renderer, but you need to create the lines yourself. This is a bit more difficult, but it also gives you total control over the lines – their color, material, width and orientation.

Since meshes are composed of surfaces rather than lines or points, in 3D space a line is best rendered as a very thin quad. A quad is described with 4 vertices, and usually you’ll only have the start and end points and a width. Based on this data you can compute a line like this:

// Normal of the plane that contains the start and end vectors (relative to the origin).
Vector3 normal = Vector3.Cross(start, end);
// Perpendicular to the line direction within that plane: the "thin" side of the quad.
Vector3 side = Vector3.Cross(normal, end - start);
side.Normalize();
// The four corners of the quad, offset half the line width to each side.
Vector3 a = start + side * (lineWidth / 2);
Vector3 b = start + side * (lineWidth / -2);
Vector3 c = end + side * (lineWidth / 2);
Vector3 d = end + side * (lineWidth / -2);

First, you get the normal of the plane on which both the start and end vectors lie. This will be the plane on which the line-quad will be located. The cross product of that normal and the difference between the end and start vectors gives you the side vector (the “thin” side of the quad). You need to normalize it to make it a unit vector. Finally, calculate all 4 points of the rectangle by adding the side vector, multiplied by half the width, to both the start and end points in both directions. In the source code all this happens in the MakeQuad and AddLine methods, so take a look in there.
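The article’s actual MakeQuad and AddLine code isn’t reproduced here, but as a rough sketch (with my own naming and simplifications), the four corner points can be turned into a two-triangle mesh and submitted with Graphics.DrawMesh like this:

using UnityEngine;

public static class QuadLine
{
    // Build a quad mesh from the four corners computed above (a and b at the start
    // of the line, c and d at the end).
    public static Mesh MakeQuad(Vector3 a, Vector3 b, Vector3 c, Vector3 d)
    {
        var mesh = new Mesh();
        mesh.vertices = new[] { a, b, c, d };
        // Two triangles covering the quad; flip the winding if the material culls it away.
        mesh.triangles = new[] { 0, 1, 2, 1, 3, 2 };
        mesh.RecalculateNormals();
        return mesh;
    }

    // Graphics.DrawMesh only queues the mesh for the current frame,
    // so call this every frame (e.g. from Update).
    public static void Draw(Mesh quad, Material material)
    {
        Graphics.DrawMesh(quad, Matrix4x4.identity, material, 0);
    }
}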

It wasn’t easy to implement, but once I got there, it runs pretty fast.

Direct drawing with GL [demo]

No fast is fast enough! Instead of leaving this topic and living happily with the Graphics solution, I kept searching for something even better. And I found the GL class. GL is used to “issue rendering commands similar to OpenGL’s immediate mode”. This sounds fast, doesn’t it? It is!

Being much easier to implement than the Graphics solution, it is a clear winner for me; the only drawback is that you don’t have much control over the appearance of the lines. You can’t set a width, and perspective does not apply (i.e. lines that are far from the camera look exactly the same as those that are close to it).
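As a rough sketch of what this looks like in practice (my own component name and hard-coded vertices; the demo source will differ), a script attached to a camera can issue the GL calls from OnPostRender:

using UnityEngine;

// Attach to a Camera; OnPostRender runs after that camera finishes rendering.
public class GLLineDrawer : MonoBehaviour
{
    public Material lineMaterial;   // a simple unlit material for the lines

    void OnPostRender()
    {
        lineMaterial.SetPass(0);          // activate the material for GL drawing
        GL.Begin(GL.LINES);               // every pair of vertices becomes one line
        GL.Color(Color.white);
        GL.Vertex(new Vector3(0f, 0f, 0f));   // line start
        GL.Vertex(new Vector3(1f, 1f, 0f));   // line end
        GL.End();
    }
}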

Conclusion

For massive & dynamic line drawing, LineRenderer is not the best solution, but it is the only one available in the Unity free version. It can surely be useful for drawing limited amounts of static lines, and this is probably what it was made for. If you do have Unity3D Pro, the Graphics solution is reasonable and very flexible, but if it is performance you’re after, choose GL.

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Player – Game – Designer

The above work comes from Thomas Bedenk, whom I met at VRX London in 2016. See the end of his page for sources (link found at the bottom).

This model provides a substrate for thinking about an interactive application, namely a game, along with its production and consumption, and highlights how the components Player, Game, and Designer fit into the full picture.

Read the full version from the author’s website.

Reblog: Can gaming & VR help you with combatting traumatic experiences?


Trauma affects a great many people in a variety of ways. Some suffer from deep-seated trauma such as Post Traumatic Stress Disorder caused by war or abuse, while others suffer from anxiety and phobias caused by traumatic experiences such as an accident, a loss, or an attack.

Each needs its own unique and tailored regimen to lessen the effects and to aid individuals in regaining some normalcy in their lives. Often these customized treatments are very expensive and difficult to obtain.

In a world of ubiquitous technology and ever-faster progress in visually based treatments, these personalized therapies are becoming more accessible to the average sufferer.

What I would like to do is take you through some of the beneficial effects that gaming and VR can have on those suffering from trauma, what these treatments sometimes look like, and what the pitfalls can be when using them.

I am not a specialist in psychology or trauma treatment, but I feel that increasing awareness of what is out there is beneficial to everyone, and perhaps it can help those suffering from trauma take the first step in seeking help.

Games & VR as a positive mental activity

To date, a few studies have been done on the effectiveness of gaming and virtual reality gaming in therapeutic treatments, but due to the brief history of both, a long-term study has yet to be completed. The one thing we can be sure of is the first-hand accounts of those who have experienced the benefits of these experiences.

A very basic exercise for those suffering from trauma is to engage in mindfulness or meditation exercises. Meditation guided through a VR system can have very positive effects on an individual’s disposition. Due to the immersive nature of VR, you can let yourself fall away into another world and detach yourself from the real world. It is as though you are “experiencing a virtual Zen garden” dedicated entirely to you.

This effect of letting go and identifying with an external locus is probably one of the most effective attributes of gaming and VR. It is the act of not focusing on yourself, on the memories and cues that cause the underlying trauma, but focusing on and engaging with another character, an avatar, on-screen who for all intents and purposes has led and now leads (through you) another life. This character has its own sense of agency to complete a quest or goal, totally independent from you.

The most effective way that games allow you to let go is to offer you a challenge that requires your entire focus. And to enhance this, most games offer group challenges. These are two core drivers in improving positive emotions, personal empowerment, and social relatedness. For individuals who suffer from either PTSD or other deep traumas, being given a vehicle that allows easier connections with others helps them cope with their own trauma much better. It takes their mind off what is troubling them and, through repetition, can even lead to a lessening of symptoms.
Did you enjoy this article? Then read the full version from the author’s website.

For a more behind-the-scenes look at how this manifests in practice, check out this PBS Frontline documentary. Master Sgt. Robert Butler, a Marine combat cameraman, recounts his struggle with PTSD and how Virtual Iraq helped.

Reblog: Virtual Reality Installations to Start Arriving at AMC Movie Theaters Next Year

 

Hey everyone! Today I read that America’s biggest movie theater chain, AMC Entertainment Holdings Inc., is putting $20 million behind a Hollywood virtual reality startup and plans to begin installing its technology at cinemas starting next year. That startup, Dreamscape Immersive, is said to be backed by Steven Spielberg and to offer experiences that allow six people to participate at the same time.

As a Southern California native, I’m excited that… “‘[i]ts first location will be at a Los Angeles mall run by Westfield Corp., [who is] a series A investor. It is expected to launch there in the winter of 2018’ said Dreamscape’s CEO, Bruce Vaughn”.

Not only will experiences that build on traditional movie-going be available; think, for example, of John Wick Chronicles, an immersive FPS that let people play as John Wick and travel into the world of hired guns in the lead-up to John Wick 2 earlier this year. The WSJ article also says that you can expect to be able to attend, for instance, sporting events virtually with Dreamscape Immersive. That’s an interesting appeal, given that we don’t really associate a trip to the theater with a sports fan’s viewing experience.

I’m curious to see how these Dreamscape Immersive locations will be outfitted. Some might find a useful comparison in The Void – Ghostbusters Dimensions, which brings the cinematic experience to life at Madame Tussauds in New York for you and three others. Their experience highlighted dynamism and complete immersion: you walk around an expansive physical space, aided by custom hardware.

 

Here’s a glimpse of how their setup looked in July 2016 when I went.

 

The article goes on to say that “the VR locations may be in theater lobbies or auditoriums or locations adjacent to cinemas”. Last September, for example, we saw Emax, a Shenzhen-based startup, execute the adjacent layout. The open layout was nice, in my humble opinion, although there are charms to giving folks privacy, a la the VR booths one might find at large conferences. Perhaps that’s because an open layout shows how much fun the people inside the virtual experience are having and gives onlooking friends a chance to share their reactions.

 

Kiosk situated across from a cinema inside a mall in Shenzhen.

 

On that topic, creative VR applications like Tiltbrush and Mindshow yield some kind of shareable content innately. In the former, when you’re finished with your painting you can export the work of art as a model, scene, or perhaps just the creation video and view it later online. In the latter, you are essentially creating a show for others to watch.

But if the experience is a bit more passive, as in watching a sporting event… are there ways to share what you experienced with others? Definitely: via green-screen infrastructure and video content. The LA-based company LIV has been striving toward productizing the infrastructure needed to seamlessly capture guests in a better way. Succinctly put, LIV “think[s] VR is amazing to be inside, but rather underwhelming to spectate….” Perhaps Dreamscape Immersive will leverage similar infrastructure to expand the digital footprint of these location-based experiences.

What do you think are the most salient points about this announcement?

Read the original WSJ article by clicking here

 

Reblog: Google creates coffee making sim to test VR learning

Most VR experiences so far have been games and 360-degree videos, but Google is exploring the idea that VR can be a way to learn real life skills. The skill it chose to use as a test of this hypothesis is making coffee. So of course, it created a coffee making simulator in VR.

As explained by the author, Ryan Whitwam, the VR simulation proved more effective than what the other group in the study received: just a video primer on the coffee-making technique.

Participants were allowed to watch the video or do the VR simulation as many times as they wanted, and then the test—they had to make real espresso. According to Google, the people who used the VR simulator learned faster and better, needing less time to get confident enough to do the real thing and making fewer mistakes when they did.

As you all know, I have the Future of Farming project going right now with Oculus Launch Pad. It is my ambition to impart some knowledge about farming/gardening to users of that experience, so I found this article quite intriguing. How fast can we all learn to tend crops using novel equipment if we are primed first by an interactive experience or tutorial? This is what I’d call ‘environment transferable learning’, or ETL: the idea that in one environment you can learn core concepts or skills that transcend the tactical elements of that environment. A skill learned in VR that translates into a real-world environment might, for example, be an ‘Environment Transferable Skill’, or ETS.

A fantastic alternate example also comes from Google: Google Blocks. This application allows Oculus Rift or HTC Vive users to craft 3D models with controllers, and the tutorial walks users through how to use their virtual apparatuses. This example doesn’t use ETL, but we can learn from the design of the tutorial nonetheless for ETL applications. For instance, when Blocks teaches how to use the 3D shape tool, it focuses on teaching the user by showing outlines of the 3D models that it wants the user to place. The correct button is colored differently relative to the other touch controller buttons, which signals a constraint to the user: this is the button to use. With the sensors found in the Oculus Touch controllers, one could force the constraint of pointing with the index finger or grasping. In the example of farming, if there is a button interface in both the real and virtual worlds (the latter modeled closely to mimic the real world), I can then show a user how to push the correct buttons on the equipment to get started.

What I want to highlight is that it’s kind of a re-engineering of having someone walk you through your first time exercising a skill (i.e. espresso-making). It’s cool that the tutorial can animate a sign pointing your hands to the correct locations, and so on. It may not be super useful for complicated tasks, but for instructing anything that requires basic motor skills, VR ETL can be very interesting.

Did you enjoy this article? Then read the full version from the author’s website.