Reblog: Entropy: Why Life Always Seems to Get More Complicated

Murphy's Law: "Anything that can go wrong, will go wrong."

This pithy statement references the annoying tendency of life to cause trouble and make things difficult. Problems seem to arise naturally on their own, while solutions always require our attention, energy, and effort. Life never seems to just work itself out for us. If anything, our lives become more complicated and gradually decline into disorder rather than remaining simple and structured.

Why is that?

Murphy’s Law is just a common adage that people toss around in conversation, but it is related to one of the great forces of our universe. This force is so fundamental to the way our world works that it permeates nearly every endeavor we pursue. It drives many of the problems we face and leads to disarray. It is the one force that governs everybody’s life: Entropy.

Unpacking: Compositing 360˚ Video and Real-time CG Elements in Unity

At Unity Vision Summit a couple of days ago, Unity announced that 360˚ video compositing will be available in Unity 2017, the next stable release of the engine, which is said to focus on artists and designers.

With this new feature in the Unity engine, anyone can add real-time graphics effects such as lens flares and digital animations, as well as interactivity, to a video. The presenter, Natalie Grant, Senior Product Marketing Manager for VR/AR/Film at Unity, said one of the most important aspects of VR is that it achieves the "feeling like you are actually there." She continued, "These are a few small ways to make a regular 360˚ video interactive and immersive."

In this post, I posit that consumers and creators alike will learn more about computer graphics topics such as general-purpose GPU usage and virtualization of the real world. WebVR and 360˚ content on laptops and phones will "bring people up the immersion curve," as Mike Schroepfer says, and this approach of composing 360˚ video with real-time 3D model content will contribute to that.

 

What is compositing?

Compositing is the combining of visual elements from separate sources into single images.
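As a minimal illustration of that idea (my own sketch, not anything from the talk), here is the classic Porter-Duff "over" operator that underlies most 2D compositing:

```python
# Minimal sketch of the classic "over" compositing operator (Porter-Duff).
# Not from the talk -- just an illustration of combining two sources.

def over(fg_rgb, fg_alpha, bg_rgb):
    """Blend a foreground pixel over a background pixel.

    fg_rgb, bg_rgb: (r, g, b) tuples with components in [0, 1]
    fg_alpha: foreground opacity in [0, 1]
    """
    return tuple(fg * fg_alpha + bg * (1.0 - fg_alpha)
                 for fg, bg in zip(fg_rgb, bg_rgb))

# A half-transparent red lens flare over a blue video pixel:
print(over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0)))  # (0.5, 0.0, 0.5)
```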

How it’s achieved in Unity with 360˚ videos

As described in the talk*, compositing 360˚ video with real-time CG elements essentially means placing two spheres in a scene, playing the 360˚ video on the interior surface of each sphere, and putting a camera at their shared center point. To explain, imagine the layers of the Earth as an analogy.

[Image: Earth-layers analogy]

The inner core is essentially the user's head, or main camera, looking around the environment. The outer core is the first 360˚ video player, with a shader** applied to it that masks some of the video but not all of it. Skipping the lower mantle temporarily, the upper mantle is where the second 360˚ video player sits; it shows the same 360˚ video as the inner player, but normally, without a masking shader. In between, the lower mantle is where users can place digital animations, 3D objects, and interactable UI elements. This is where all the magic happens, specifically because the space between the two concentric 360˚ video spheres allows CG content to really seem to be in the scene. Both copies of the composited 360˚ video are exactly aligned, meaning that as long as the user's view position is confined to 3DoF, the user can't tell there are two copies of the video file playing.
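To make the layering concrete, here is a toy model of the analogy (my own sketch, not Unity code; the radii are arbitrary assumptions) that classifies where content sits relative to the two video spheres:

```python
# Toy model of the concentric "Earth layers" setup described above.
# My own sketch, not Unity code; the radii are arbitrary assumptions.

INNER_VIDEO_RADIUS = 5.0   # "outer core": masked 360 video sphere
OUTER_VIDEO_RADIUS = 20.0  # "upper mantle": unmasked 360 video sphere

def layer_for(distance_from_camera):
    """Classify where an object sits relative to the two video spheres."""
    if distance_from_camera < INNER_VIDEO_RADIUS:
        return "inner core (user's head / main camera)"
    if distance_from_camera < OUTER_VIDEO_RADIUS:
        return "lower mantle (CG objects between the two video spheres)"
    return "crust / space (CG objects outside both video spheres)"

for d in (0.0, 10.0, 30.0):
    print(d, "->", layer_for(d))
```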

Finally, for more immersion, the crust and the space beyond form another layer where any Unity object can be positioned.

Natalie showed why this is important with the use case of matching a Unity directional light source to the position, direction, and intensity of the sun as captured in a 360˚ video. Because of Unity's physically based rendering, the CG elements (in the lower mantle or outer crust, with Standard shaders) get shadows, color, reflections, and more, produced in a realistic way because they are affected by the light source. This increases the effectiveness of the illusion that footage and real-time elements are composited.
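As a sketch of one way that matching could work (my own math, not code from the talk; the frame size and pixel position are assumptions), you can convert where the sun appears in an equirectangular frame into a world-space light direction:

```python
import math

# Sketch: convert the sun's pixel position in an equirectangular 360 frame
# into a world-space direction for a directional light. My own math, not
# code from the talk; frame size and pixel position are assumptions.

def equirect_to_direction(u, v):
    """Map normalized equirect coords (u, v) in [0, 1] to a unit vector.

    u wraps longitude (0..2*pi); v spans latitude (top of frame = +pi/2).
    """
    lon = (u - 0.5) * 2.0 * math.pi
    lat = (0.5 - v) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# Sun spotted at pixel (3200, 500) in a hypothetical 4096x2048 frame:
sun_dir = equirect_to_direction(3200 / 4096, 500 / 2048)
# A directional light should shine *from* the sun, i.e. the opposite way:
light_dir = tuple(-c for c in sun_dir)
print(light_dir)
```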

Another way to think about this approach is as the Russian nesting dolls of spheres (credit: Ann Greenberg). In this comparison, each doll corresponds to a 360˚ video sphere, and just like the dolls, the spheres are concentrically nested and aligned with the same rotation.

[Image: Russian nesting dolls]

As demonstrated by Natalie on stage, when done deliberately enough, the 3D content will actually look like it's in the camera-captured shot, creating the illusion that 3D objects are occluded, or hidden, behind the inner sphere playing video (see below: a 3D dinosaur moving behind the first 360˚ video).

[GIF: a 3D dinosaur moving behind the inner 360˚ video sphere]

In the short term, I think this will help people engage with more 360˚ video content and potentially excite them about mixing camera captures and virtual content.

 

When demonstrating "locomotion in 360˚ video," Natalie Grant of Unity showed that one can click to move to another 360˚ video. For starters, this isn't exactly like teleportation in a completely digital environment, where one can teleport anywhere a pointer collides with a plane. Remember that the creator behind the project must capture each 360˚ video with a camera and tripod, which is still a constraint on the freedom of location. However, with potentially far less work, the creator can begin making compelling 360˚ video experiences with an interactive component (i.e., switching between 360˚ videos) and layers of spatially accurate CG objects.
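A minimal sketch of that locomotion scheme (the hotspot names and video files are hypothetical, not from the demo): each tripod position becomes a node, and "moving" just swaps which 360˚ video plays.

```python
# Sketch of "locomotion" by swapping 360 videos, as described above.
# Hypothetical scene graph; the hotspot names and files are made up.

HOTSPOTS = {
    "lobby":   {"video": "lobby_360.mp4",   "exits": ["hallway"]},
    "hallway": {"video": "hallway_360.mp4", "exits": ["lobby", "roof"]},
    "roof":    {"video": "roof_360.mp4",    "exits": ["hallway"]},
}

current = "lobby"

def teleport(target):
    """'Move' by loading the 360 video captured at the target tripod spot."""
    global current
    if target not in HOTSPOTS[current]["exits"]:
        raise ValueError(f"No hotspot from {current} to {target}")
    current = target
    print("Now playing:", HOTSPOTS[current]["video"])

teleport("hallway")
teleport("roof")
```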

Also, at this year's F8 developer conference, Facebook announced a new Surround 360 video camera that will let users move around inside live-action scenes. The product can infer 3D point and vector data of its surroundings using overlapping video image data from adjacent cameras. A reasonable implication is that we may eventually have 6DoF live-action scenes with CG elements composited.***
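The basic principle behind inferring 3D data from overlapping views is stereo triangulation; a minimal sketch (the focal length, baseline, and disparity numbers are illustrative assumptions, not Facebook's):

```python
# Sketch of depth from overlapping views (stereo triangulation), the basic
# principle behind inferring 3D data from adjacent cameras. The baseline,
# focal length, and disparity values are illustrative assumptions.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """depth = f * B / d for two parallel cameras."""
    return focal_px * baseline_m / disparity_px

# Two lenses 6 cm apart, 1000 px focal length, a feature shifted 20 px:
print(depth_from_disparity(1000.0, 0.06, 20.0), "meters")  # 3.0 meters
```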

However, I'd imagine that blind spots would appear once a user has moved significantly from the original center of the two spheres, which would also compromise the illusion that the CG and video are composited.

 

I look forward to seeing some creative applications of this method.

*Found at the 35-minute mark: https://www.youtube.com/watch?v=ODXMhaNIF5E

**A Unity Standard Shader attempts to light objects in a "physically accurate" way, a technique called Physically Based Rendering (PBR). Instead of defining how an object looks in one lighting environment, one needs only to specify the properties of the object (e.g., how metallic or smooth it is); the shader then computes its shading based on those factors.
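To illustrate that idea, here is a drastically simplified, diffuse-only toy (my own sketch, not Unity's Standard Shader): you describe the material's properties, and the shading falls out of the model.

```python
# Toy illustration of the PBR idea: describe the material's properties and
# let the shading model compute the result. Drastically simplified,
# Lambert-style sketch, not Unity's Standard Shader.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(albedo, metallic, normal, light_dir, light_color):
    """Diffuse-only sketch: metals reflect little diffuse light."""
    n_dot_l = max(0.0, dot(normal, light_dir))
    diffuse_strength = (1.0 - metallic) * n_dot_l
    return tuple(a * c * diffuse_strength for a, c in zip(albedo, light_color))

# A reddish, plastic-like surface lit head-on by a white light:
print(shade((0.8, 0.2, 0.2), metallic=0.0,
            normal=(0, 0, 1), light_dir=(0, 0, 1), light_color=(1, 1, 1)))
```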

***The original Surround 360 was estimated to cost about $30,000 when built using the company's exact schematics.

Three Trending VR Topics from GDC and Unity Updates

Three Trending VR Topics from GDC

The following is a synthesis of a talk given by Greenlight Insights' Alexis Macklin and Unity's Tony Parisi, along with my own experiences at Oculus Developer Day 2017. Notes on VR experiences are important as the industry grows because we can reflect on which strategies are yielding better experiences for our users.

1) Complex Storytelling

Dear Angelica and Why It Was Groundbreaking

  • Creators at Oculus Story Studio used a combination of Houdini (cinema), UE4 (games), and Quill (art)
  • Use of Houdini in the flow of VR development is also illustrated really well by Mike Murdoc, Creative Director at Trihelix VR
  • Mike uses Houdini to permute and design VR interfaces; check out what I mean.

2) Locomotion

  • Design standards don't exist yet: The styles of teleportation and varieties of movement that are considered correct are still just beginning to be explored and developed. Don't expect to see a consensus on a design strategy yet.
  • An indicator of motion sickness: Many VR developers noted that movement in the user's peripheral vision is a good predictor of motion sickness. If a user teleporting from point A to point B sees something moving in the corner of their eye, you can expect that scenario to cause some sickness. Unlike individual tolerances, this is controllable.
  • Accessibility: How to bring VR to those who can't use it right now was a big theme. Expect different control schemes to be developed throughout the year.

3) Social VR Experiences

  • Sony VR is bringing games and multiplayer activities to users. Eye and mouth movements are said to evoke the most emotional response from users. Do not approach the uncanny valley; in short, stick to surreal, cartoony avatars. The best way I can explain this is with Bitmoji: if you have created one, you probably understand that they are not designed to look exactly like you. They are an abstraction that might share your skin tone or clothing style; it's intentionally just enough to create resemblance, and it purposefully doesn't come too close to your real image.
  • Simply put, it can become a little messy as your brain builds a picture around all of the minute details that might be wrong about a near-realistic representation, as opposed to a clearly abstract depiction of a person.

Examples

Robo Recall by Epic Games

  • Subject: Similar to Destiny or Call of Duty––stop bad guys
  • Lacked an overarching plot intentionally
  • Users are forced to explore new interactions and environments
  • Hopes to achieve longer play times: Stevi Rex and Alexis both note that it's the first VR title that left them wanting to play more

Sprint Vector by Survios

  • Subject: Run as fast as you can at top speeds to reach the finish line
  • Previously released Raw Data (made $1 million in a month)
  • Throws locomotion standards out of the window and asks you to swing your arms with tracked controllers as if you were on an elliptical machine
  • A lot of people were expecting motion sickness, but so far there have been positive reviews
  • Many, many videos of people racing can be found online

Bebylon by Kite and Lightning

  • Subject: Futuristic baby battles
  • Currently, this is in a closed beta
  • Two different levels of social interaction: one amongst other players and another amongst spectators. It is cross-platform:
    • PC VR
    • Mobile VR
    • Console VR
    • YouTube (includes spectator mode)
    • Twitch (includes spectator mode)

From Other Suns by Gunfire Games

  • Subject: A four-player RTS in which you fight and try to save humanity
  • A unique movement mechanic combines stepped teleportation with a switch from first person (you are your character) to third person (you are controlling your character) to guide your character to his or her next location
  • They have also handled accessibility well: you can opt out of the aforementioned mode of locomotion if you're more comfortable in VR
  • Finally, comfort turning is demonstrated well in the app: the player rotates in increments of no less than 20 degrees (see the sketch after this list).
  • Oculus is the publisher for this title
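Here is a minimal sketch of that comfort-turning idea (my own illustration of the technique, not Gunfire Games' code): rotation is applied in discrete snaps rather than smoothly, which tends to reduce motion sickness.

```python
# Sketch of "comfort turning": rotation happens in discrete snaps of at
# least 20 degrees rather than smoothly. My own sketch of the idea, not
# Gunfire Games' code.

SNAP_DEGREES = 20.0

def comfort_turn(current_yaw, stick_input):
    """Turn in fixed increments; stick_input in [-1, 1] picks the direction."""
    if abs(stick_input) < 0.5:        # dead zone: ignore small inputs
        return current_yaw
    direction = 1.0 if stick_input > 0 else -1.0
    return (current_yaw + direction * SNAP_DEGREES) % 360.0

yaw = 0.0
for flick in (1.0, 1.0, -1.0, 0.1):
    yaw = comfort_turn(yaw, flick)
    print(yaw)   # 20.0, 40.0, 20.0, 20.0
```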

Brass Tactics by Hidden Path Entertainment

  • Subject: A real-time strategy game that asks you to engage your enemy and conquer territories
  • Perhaps most notable is the paintbrush-stroke-like style with which you are encouraged to corral your troops and send them to other territories using the Oculus Touch controllers
  • This is a social app as well, and you can see your opponent across the expanse of the game board in front of you
  • I felt an awesome thrill and a palpable sense of pressure to guard my already-captured territories before the game clock ran out, undoubtedly because you have another player in view

Unity Updates – featuring Tony Parisi, Global Head of VR/AR @ Unity

Unity 5.6 will be released at the end of the month:

  • Physically based rendering and lighting, yielding much more realism
  • Significant optimization and latency reduction with single-pass rendering for mobile
  • Vulkan support with Unity 5.6

It's clear that Unity brings a lot of different toolsets together; namely, Unity seems to bridge the gaming and cinematic arts sectors.

How can individuals working on VR, coming from animator and photo-capture backgrounds, use Unity to optimize their VR experiences, and would you please share a few successful examples?

  • Unity is super excited about the cinematic use of the engine for storytelling that isn't about leveling up or gaining points. Rather, you have environments you can explore, and video game technology is used to tell stories.
  • This also ties into Unity's big focus in 2017 on further enabling designers and artists, as Unity has historically been more of a programmer's tool. Coming in the beta of Unity 2017 (arriving later this year) is a feature called Timeline, a keyframe- and timeline-based animation system in which you can bring in keyframed 3D graphics, skinned characters, audio, and video, and synchronize them all on a linear timeline.

There's now a video player that supports 360-degree and 4K playback: you can ingest a 360-degree video and then augment that video for enhanced viewing.

Non-Gaming Examples #MadeWithUnity

Asteroids by Baobab

  • Asteroids is a follow-up to their popular "Invasion" (you can find Invasion on Daydream and GearVR). It's Pixar quality; it looks like a feature-film-quality piece. There are some novel locomotion mechanics, and you play a supporting character who has to participate to move the story along. A great, full, end-to-end narrative where there's only one way the story ends.

A Price of Freedom by Construct Studios

  • You play a secret agent; the experience is inspired by the CIA's MKUltra experiments, which aimed to use psychedelics to create operatives.

The Life of Us by Within

  • Chris Milk's shop down in LA, known for amazing, breakthrough VR storytelling
  • They created a social VR experience where you start out as a single-celled organism and move up to larger organisms (fish, primate, human, and finally a futuristic robot), collaborating with one other person
  • Use your Vive controllers to swim and fly
  • Tony says he’s never felt so embodied in a VR experience and attributes it largely to the nature of social interaction within the experience

Zero Days VR by Scatter

  • A Brooklyn-based shop focused on Cinematic VR
  • They combine video, audio, CG, and data visualization on a completely linear timeline, all synchronized using the Timeline product
  • It takes you through a VR version of a film documentary on the U.S. and Israeli intelligence agencies trying to sabotage an Iranian nuclear facility
  • It integrates interviews and voice-over; it's super compelling human interest and high drama, and you can move around with the Oculus Rift

 

In Conclusion

Unity Focus Remains on Core Graphics and Physics

  • The upshot is that the core functionality of Unity will continue to serve clients producing all categories of content
  • There is a long, growing list of customers that are using the engine for non-gaming needs and that informs where the product will go in the future
  • PSVR is pushing a million units shipped, while Vive and Rift each have active installed bases in the low hundreds of thousands
  • Mobile: Cardboard is in the tens of millions, and the more deluxe drop-in headsets, Daydream and GearVR, are over 5 million units. Development here is catalyzing the growth of the overall VR industry
  • However, you cannot replace what the higher-end systems are doing in terms of interaction and room scale
  • The hope is that the two trends–Mobile and High-End–converge
  • Unity supports 30+ platforms, which is one of its biggest draws as a development platform

Snapchat’s Latent AR Strategy

I recently shared that Sony will be debuting a 360˚ ad with Snapchat. The following is a research report put together in June by Matt Terndrup; it covers the potential AR strategy at Snapchat.

 

[Image: My Snapcode]

 

 

Snapchat's approach, which continuously educates the market with a novel app structure and an interface that has few instructions, is appealing. This report does not cover Instagram's recent copy of Snapchat's Story functionality (I have written up some thoughts on how that has changed the way I personally use Instagram [to come separately]), nor Snapchat's most recent acquisition of Vurb, whose mission is to create a smarter, more connected mobile world that empowers people to do more of what they want all in one app.
The highlights include:
  • The features of Snapchat that use AR already
  • Ahead of the curve: future success driven by talent from Oculus, Microsoft HoloLens, Qualcomm engineering on the Vuforia team, and Emblematic Group
  • How small hints of Snapchat building stylish AR glasses have surfaced
Here is the table of contents; to read more, check the Scribd link at the end of the post.
Table of Contents:
  • Userbase
  • Snapchat’s AR History
    • Overlays
    • Lenses
    • 3D Stickers
  • Acquisitions
    • Vergence Labs
    • Looksery
    • Seene
  • Public Spotting
  • Engineering and Research Talent
  • Patent
  • Forecasts
  • Conclusion

Bare Hand Input (BHI) with Virtual Reality

User experience (UX) designers commonly make products that rely on common knowledge. When this works out, the result is often an intuitive product. For example, consider a straw for drinking. It relies on the common knowledge of sucking and requires little explanation.¹

On one hand, this dependence on knowledge increases comfort, familiarity, and usability. On the other hand, get it wrong and confusion can ensue.

For example, how would you pick up a pamphlet from a table? You’re probably thinking, uh, I reach out and pick it up. That’s an action that is so automatic to us we don’t think about how complex it is to someone who hasn’t learned that behavior. Now how would you pick up a pamphlet from a table in a virtual environment with a hand controller? Do you try to mimic the way that the hand reaches out and touches the pamphlet? Do you have the user touch a hand controller to the pamphlet and pull a trigger to grab it? Do you point a laser at the pamphlet and have it levitate?

You can see that mimicking hand movements with a controller quickly devolves into something a first-time user needs explained. With multiple VR device designs on the market (e.g., HTC Vive controllers, Oculus Touch controllers, PlayStation Move), there's also no established standard for interactions yet.
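To make one of those candidate mappings concrete, here is a sketch of the "touch the object and pull the trigger" grab from the pamphlet example. The function names and distance threshold are hypothetical, not any specific SDK's API:

```python
# Sketch of one common controller mapping for the pamphlet example:
# proximity plus a trigger pull grabs the object. The names and threshold
# are hypothetical, not any specific SDK's API.

GRAB_DISTANCE = 0.1  # meters

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def try_grab(controller_pos, trigger_pulled, object_pos):
    """Grab succeeds only if the controller touches the object while the
    trigger is held -- one of several plausible mappings discussed above."""
    return trigger_pulled and dist(controller_pos, object_pos) < GRAB_DISTANCE

print(try_grab((0.0, 1.0, 0.5), True, (0.05, 1.0, 0.5)))   # True
print(try_grab((0.0, 1.0, 0.5), False, (0.05, 1.0, 0.5)))  # False
```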

Evolutionary Design

In the past, we've interacted with virtual content in unnatural ways. Previous innovations taught us to map our thoughts to typing on a QWERTY keyboard, clicking a mouse, using button mental models with a gamepad, or gesturing on a trackpad with swipes. While innovation has progressed, we still haven't fully solved natural human interaction.

Wouldn't it be amazing to interact with virtual environments with nothing but your hands? Using the pamphlet example, instead of pointing at it with a controller, you could grab it as you normally would, or pinch and pull a virtual window to resize it without a controller. That's what researchers, designers, and users really want: direct manual input (or the sensation of it), and there's no shortage of people working to figure it out. Going back to 1985, there were the NASA Ames Research Center's DataGloves, Mark Bolas' Fakespace Labs Pinch Gloves, Manus VR, Leap Motion, Gloveone, Samsung's Rink, and others.

Revolutionary Design

A full survey of this space is beyond the scope of this article, but I believe the most exciting option is something called bare hand input (BHI).

Project Soli is a miniature Doppler radar paired with very sophisticated hand-gesture signal recognition. Ivan Poupyrev and his team at Google's Advanced Technology and Projects division worked on Soli.

Google's Soli technology will be integrated into wearables, phones, computers, cars, and other IoT devices. As you think about the common gestures you use in everyday activities, you will begin to notice patterns: turning a key, turning a shower faucet, turning a door handle, turning a page of a book.

The Project Soli team is focused on codifying those patterns into meaningful standards for BHI. The most exciting innovation, to me, is that repository of gestures. One example is shown below, in which the model uses his thumb and index finger to create a slider-adjuster gesture, perhaps for volume on a speaker. (I actually considered "touch-free input" when deciding on a name for BHI, but since touch between fingers is useful in many applications, that seemed inappropriate.)
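As a sketch of how that slider gesture could map to a control value (my own illustration; the millimeter range is an assumption, and Soli itself reports far richer radar-derived signals):

```python
# Sketch of mapping a thumb-over-index "virtual slider" gesture to a
# volume level. The millimeter range is an assumption for illustration;
# Soli itself reports much richer radar-derived signals.

SLIDER_RANGE_MM = (0.0, 60.0)   # assumed thumb travel along the index finger

def slider_to_volume(thumb_pos_mm, volume_range=(0.0, 1.0)):
    """Linearly map thumb position along the finger to a volume value."""
    lo, hi = SLIDER_RANGE_MM
    t = min(max((thumb_pos_mm - lo) / (hi - lo), 0.0), 1.0)
    v_lo, v_hi = volume_range
    return v_lo + t * (v_hi - v_lo)

print(slider_to_volume(30.0))  # 0.5 -- thumb halfway along the finger
```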

For a product example, take a look at Bixi. Bixi uses 3D, touch-free input to control a GoPro camera in tricky situations. Back to VR: BHI is interesting for Cardboard and mobile VR, which have limited controls to begin with; generally, those controls include some variation of gaze, swipe, and click/tap. The Soli BHI scheme may become ideal for tethered HMDs as well.

However, there are still things that BHI doesn't make more intuitive, such as locomotion. Specifically, VR products with controller input schemes currently use variations of point-and-click to teleport. Such products would need a BHI replacement, like pointing with the index finger, thumb cocked back, in the 3D direction you wish to move.
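A sketch of the geometry behind that pointing-to-teleport idea (my own illustration of a plausible scheme, not a shipped input system): intersect the pointing ray with the ground plane to find the destination.

```python
# Sketch of the BHI teleport idea: intersect a pointing ray with the
# ground plane (y = 0) to find the teleport target. My own geometry
# sketch of a plausible scheme, not a shipped input system.

def teleport_target(hand_pos, point_dir):
    """Return where the pointing ray hits the ground plane, or None."""
    px, py, pz = hand_pos
    dx, dy, dz = point_dir
    if dy >= 0:                  # pointing level or upward: no ground hit
        return None
    t = -py / dy                 # solve py + t*dy == 0
    return (px + t * dx, 0.0, pz + t * dz)

# Hand at shoulder height, pointing forward and down:
print(teleport_target((0.0, 1.5, 0.0), (0.0, -0.5, 1.0)))  # (0.0, 0.0, 3.0)
```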

This technology might fall short, though, for premium VR content that demands realistic emulation of the sense of touch: content where you need to feel rain on your hands, the cold metal of a sword hilt, or the reverberation of a baseball bat after connecting with a ball. Understandably, certain systems aim for that level of immersion, but in my opinion we are today far away from people buying accessories like haptic gloves. I'd contend that for most VR use cases, with good VR UX design, makers can use the natural feeling of our fingers contacting one another to mitigate that loss of the sense of touch.

I'm looking forward to getting hands-on (funny, right?) with BHI for VR and would like to see what others create as well. Please tweet us your thoughts.

¹ Mike Alger on VR UX

Coffee Calm


Growing up, my parents never really drank coffee, and consequently I developed a bias that coffee was something to stay away from. Since college started (along with long, crammed nights of studying), I've opened my eyes to the benefits of coffee. My appreciation of coffee has nothing to do with its actual physical effects and more to do with the culture surrounding it. When I'm around coffee drinkers or a coffee shop, I feel more creative, productive, and, most importantly, calm. That calmness enhances my feeling of being 'present', of being more immersed in whatever it is that I'm doing.