OC4 Talks: Designing for Feeling – Robin Hunicke

Philosophy of Exploration and Design

Robin opened with her concept of triple-E content (a play on AAA, disambiguated below) and extolled the value of figuring out where you want to go first.

  • Elegant, Expressive, and Emotional content (EEE)
  • She presented a 2×2 matrix with high impact, low cost as the quadrant most content aims for… the problem, she explained, is that the matrix leaves out elegance as a focal point
  • Evolve concepts, tools, & solutions to reduce cost & improve impact
  • Evolve UX
  • Expressive – Players Speak

Process & the Broad Applicability of EEE

Axes in her slide graphic included rational, EEE, baroque, and scripted (e.g. The Sims, Black Ops)

  1. Test your concept like it isn’t yours
  2. Throw away ideas
  3. Find the feeling in your idea (lock in on it)
  4. This is your secret sauce
  5. Test the prototype like it isn’t yours
  6. The prototype is different from what is on paper
  7. The process is what helps
  8. Repeat


Uncertainty is surpassed only by the effort that needs to go into it

For Luna, she drew design inspiration from a paper-world feel, influenced by origami, and during the process she packed her mind with fairy tales.

Not everyone needs to get into hands-on design influences, but she thought that making origami herself, and learning how its tactile qualities turned out, was really informative. I’ve definitely found with Project Futures: The Future of Farming that it’s key to gain influence from real-world knowledge and from folks who have built structures that will lend to the look and feeling of the world space in the app.

One important side note Robin dropped was that none of the characters in Luna have genders.

Other Random Notes

  • Mood boards
  • Luna was a PC and VR title from the beginning
  • The demo and vision existed before the actual prototype (i.e. the hands
    controlling the stars)
  • Tested prototype part 2 and threw it away
  • Music is integrated into the testing process with feeling at the center, namely, “what kind of feeling is it communicating?”

Luna was a 4-year process – it started out as a drawing in a book

  • They went through a massive phase where no VR was implemented; then, in November 2016, it came to life in VR with a 7-person team
  • By 2017 the pieces were starting to become cohesive and informed by the feeling

Failing forward was key, and it takes a lot of work.

You have to lean into the idea of interesting, different, challenging titles.

UPSHOT = diverse and inclusive teams, treating failure as OK, and the belief that you’re
going to get there. That combination leads to triple-E content and successful titles.

Developer Blog Post: ARKit #1

When developing AR applications for Apple phones, there are two cameras we speak about. One is the physical camera on the back of the phone. The other is the virtual camera in your Unity scene, which in turn matches the position and orientation of the real-world camera.

A virtual camera in Unity has a setting called Clear Flags, which determines which parts of the screen get cleared. On your main virtual camera, setting this to “Depth Only” instructs the renderer to clear the virtual background environment, allowing the (physical) camera feed to show through as a seamless backdrop for your virtual objects.
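As a minimal sketch, that setting can also be applied from a script attached to the main virtual camera (the component name here is illustrative; in a real ARKit project the plugin renders the device feed behind this camera):

```csharp
using UnityEngine;

// Sketch: configure the virtual camera to clear only depth, so the device
// camera feed (rendered behind it) acts as the backdrop for virtual objects.
[RequireComponent(typeof(Camera))]
public class ARCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        // Equivalent to choosing "Depth Only" in the Inspector:
        // color is not cleared, so the physical camera feed shows through.
        cam.clearFlags = CameraClearFlags.Depth;
    }
}
```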

More to come on differences between hit testing and ray casting in the context of ARKit and a broader look at intersection testing approaches in the next post.

Reblog: Virtual Reality Installations to Start Arriving at AMC Movie Theaters Next Year


Hey everyone! Today I read that America’s biggest movie theater chain, AMC Entertainment Holdings Inc., is putting $20 million behind a Hollywood virtual reality startup and plans to begin installing its technology at cinemas starting next year. That startup, Dreamscape Immersive, is said to be backed by Steven Spielberg and to offer experiences that allow six people to participate at the same time.

As a Southern California native, I’m excited that… “‘[i]ts first location will be at a Los Angeles mall run by Westfield Corp., [who is] a series A investor. It is expected to launch there in the winter of 2018’ said Dreamscape’s CEO, Bruce Vaughn”.

Experiences that build on traditional movie-going won’t be the only kind available. For example, think of John Wick Chronicles, an immersive FPS that let people play as John Wick and travel into the world of hired guns in the lead-up to John Wick: Chapter 2 earlier this year. But the WSJ article says you can also expect to attend, for instance, sporting events virtually with Dreamscape Immersive. That’s an interesting appeal, given that we don’t really associate a trip to the theater with sports-fan viewing experiences.

I’m curious to see how these Dreamscape Immersive locations will be outfitted. Some might find a useful comparison in The Void’s Ghostbusters: Dimensions, which brings the cinematic experience to life at Madame Tussauds in New York for you and three others. Their experience highlights dynamism and complete immersion: you walk around an expansive physical space, aided by custom hardware.



Here’s a glimpse at how their setup looked in July 2016 when I went


The article goes on to say that “the VR locations may be in theater lobbies or auditoriums or locations adjacent to cinemas”. Last September, for example, we saw Emax, a Shenzhen-based startup, execute the adjacent layout. The open layout was nice, in my humble opinion, though there are charms to giving folks privacy à la the VR booths one might find at large conferences. I prefer the open layout because it shows how much fun people in the virtual experience are having and gives onlooking friends the chance to share their reactions.



Kiosk situated across from a cinema inside of a mall in Shenzhen


On that topic, creative VR applications like Tilt Brush and Mindshow innately yield some kind of shareable content. In the former, when you finish a painting you can export the work of art as a model, a scene, or just a video of its creation, and view it later online. In the latter, you are essentially creating a show for others to watch.

But if the experience is a bit more passive, as in watching a sporting event… are there ways to share what you experienced with others? Definitely: via green-screen infrastructure and video content. The LA-based company LIV has been striving to productize the infrastructure needed to seamlessly capture guests in a better way. Succinctly put, LIV “think[s] VR is amazing to be inside, but rather underwhelming to spectate….” Perhaps Dreamscape Immersive will leverage similar infrastructure to expand the digital footprint of these location-based experiences.

What do you think are the most salient points about this announcement?

Read the original WSJ article by clicking here


Reblog: The Light Field Stereoscope | SIGGRAPH 2015

Inspired by Wheatstone’s original stereoscope and augmenting it with modern factored light field synthesis, [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] present a new near-eye display technology that supports focus cues. These cues are critical for mitigating the visual discomfort experienced in commercially available head-mounted displays and for providing comfortable, long-term immersive experiences.



Over the last few years, virtual reality has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances of high-resolution micro displays, low-latency orientation trackers, and modern GPUs facilitate extremely immersive experiences. To facilitate comfortable long-term experiences and widespread user acceptance, however, the vergence-accommodation conflict inherent to all stereoscopic displays will have to be solved. [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] present the first factored near-eye display technology supporting high image resolution as well as focus cues: accommodation and retinal blur. To this end, [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] build on Wheatstone’s original stereoscope but augment it with modern factored light field synthesis via stacked liquid crystal panels. The proposed light field stereoscope is conceptually closely related to emerging factored light field displays, but it has very unique characteristics compared to the television-type displays explored thus far. Foremost, the required field of view is extremely small – just the size of the pupil – which allows for rank-1 factorizations to produce correct or nearly-correct focus cues. [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] analyze distortions of the lenses in the near-eye 4D light fields and correct them using the high-dimensional image formation afforded by [their] display. [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] demonstrate significant improvements in resolution and retinal blur quality over previously-proposed near-eye displays. Finally, [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] analyze diffraction limits of these types of displays along with fundamental resolution limits.


  • technical paper (pdf)
  • technical paper supplement (zip)
  • presentation slides (slideshare)

Reblog: Google creates coffee making sim to test VR learning

Most VR experiences so far have been games and 360-degree videos, but Google is exploring the idea that VR can be a way to learn real life skills. The skill it chose to use as a test of this hypothesis is making coffee. So of course, it created a coffee making simulator in VR.

As author Ryan Whitwam explains, the simulation group proved more effective than the other group in the study, which got only a video primer on the coffee-making technique.

Participants were allowed to watch the video or do the VR simulation as many times as they wanted, and then the test—they had to make real espresso. According to Google, the people who used the VR simulator learned faster and better, needing less time to get confident enough to do the real thing and making fewer mistakes when they did.

As you all know, I have the Future of Farming project going right now with Oculus Launch Pad. It is my ambition to impart some knowledge about farming/gardening to users of that experience, so I found this article quite intriguing. How fast can we all learn to tend crops using novel equipment if we’re primed first by an interactive experience or tutorial? This is what I’d call “environment-transferable learning,” or ETL: the idea that in one environment you can learn core concepts or skills that transcend the tactical elements of that environment. A skill learned in VR that translates into a real-world environment might likewise be an “environment-transferable skill,” or ETS.

A fantastic alternate example also comes from Google, with Google Blocks. This application lets Oculus Rift or HTC Vive users craft 3D models with controllers, and its tutorial walks users through how to use their virtual apparatuses. This example doesn’t use ETL, but we can learn from the tutorial’s design nonetheless for ETL applications. For instance, when Blocks teaches the 3D shape tool, it shows outlines of the 3D models it wants the user to place. The correct button is colored differently relative to the other Touch controller buttons, signaling to the user that this is the button to use. With the sensors found in the Oculus Touch controllers, one could also force the constraint of pointing with the index finger or grasping. In the farming example, if there is a button interface in both the real and virtual worlds (the latter modeled closely to mimic the real one), I can then show a user how to push the correct buttons on the equipment to get started.

What I want to highlight is that this is a kind of re-engineering of having someone walk you through your first time exercising a skill (e.g. espresso-making). It’s cool that the tutorial can animate a sign pointing your hands to the correct locations, etc. It may not be super useful for complicated tasks, but for instructing anything that requires basic motor skills, VR ETL can be very interesting.

Did you enjoy this article? Then read the full version on the author’s website.

Update: Oculus Launch Pad 2017 – Future of Farming

Everything in the brain is hierarchical. My hypothesis is that, in VR design, this can be really helpful for setting context. For example, at Virtually Live, where the VR content was “Formula E Season 2 Highlights” (meaning the one donning the headset could watch races), I once proposed that we borrow from the amazing UX of Realities.io and use an interactive model of Earth as the highest level of abstraction above the races (which occur all over the world). The user can spin the globe around and find a location to load into. Written abstractly, the hierarchy in this example is: the Globe is a superset of Countries, Countries of Cities, and Cities of Places. I figured this would be perfect for an electric motorsport championship series that travels to famous cities each month. In the end, we went with a carousel design that was more expeditious than the globe.

The Future of Farming takes place largely in a metropolitan area, namely San Francisco. So I’ve decided that, to begin, I’ll borrow from the hierarchical plan: I want to showcase an orthographic projection of San Francisco to the user, with a handful of locations highlighted as interactable. To do this I’ve set up WRLD in my project for the city landscape.

Upon selection of one of the highlighted locations with the Gear VR controller, a scene will load with a focal piece of farming equipment that has made its way into that type of place (e.g. warehouse, house, or apartment).

A quick aside: last week I had a tough travel and work schedule in New York. Reading back what I wrote, I found a pretty bare blog post, so I decided it was better not to share. Another hurdle was the unfortunate loss of the teammate I announced two weeks prior, simply because he prioritized projects with budgets more appealing to him. I dwelled on this for a while, as I admired his plant-modeling work a lot.

With the loss of that collaborator, and weighing a few other factors, I’ve decided to pursue an art style much akin to that of Virtual Virtual Reality or Superhot: less geometry, all created in VR. I’ll do most of this via Google Blocks and a workflow that pushes created environments to Unity, which is pretty straightforward. After you have created your model in Google Blocks, visit the Blocks site in a browser with WebGL-friendly settings and download your model. From there, unzip the file and drag it into Unity under Assets > Blocks Import (a folder I recommend creating as a way of staying organized). You’ll note that Blocks imports usually produce a .mtl file, a Materials folder, and a .obj model. For your intended Google Blocks materials to show through, you need to change one setting, “Material Naming,” after clicking on your .obj: set it to “By Base Texture Name,” and set “Material Search” to “Recursive-Up.”
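Since I’ll be importing Blocks models repeatedly, those two settings can also be applied automatically with an editor script. This is just a sketch against the 2017-era ModelImporter API, assuming the Assets > Blocks Import folder convention above; place it in an Editor folder:

```csharp
using UnityEditor;

// Editor-only sketch: automatically apply the material import settings
// described above to any model dropped under Assets/Blocks Import.
public class BlocksImportSettings : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        // Only touch models inside the Blocks Import folder.
        if (!assetPath.Contains("Blocks Import")) return;

        ModelImporter importer = (ModelImporter)assetImporter;

        // Equivalent to Material Naming = "By Base Texture Name".
        importer.materialName = ModelImporterMaterialName.BasedOnTextureName;

        // Equivalent to Material Search = "Recursive-Up".
        importer.materialSearch = ModelImporterMaterialSearch.RecursiveUp;
    }
}
```

With this in place, every .obj dragged into that folder gets the right settings without a manual Inspector pass.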



Here’s a look at the artwork for a studio apartment in SF for the app, as viewed from above. It’s a public bedroom model that I’m remixing; you can see I’ve added a triangular floor space for a kitchen, which is likely where the windowsill variety of hydroponic crop equipment will go. Modeling one such piece is going to be really fun.



View from Above



Angle View




In the past weeks, I’ve dedicated myself to learning about gardening and farming practices via readings, podcasts, and talking to people in business ecosystems involving food-product suppliers. I learned about growing shiitake mushrooms and broccoli sprouts in the home and got hands-on with both. I learned about the technology evolution behind rice cookers and about relevant policy for farmers on the West Coast over the last dozen years. In the industry, there are a number of effective farming methods I’m planning to draw on (indoor hydroponic and aeroponic) that I can see working in some capacity in the home, as well as milieus I will highlight, such as a legitimate vertical indoor farm facility (https://techcrunch.com/2017/07/19/billionaires-make-it-rain-on-plenty-the-indoor-farming-startup/).

I have asked someone who works at Local Bushel for help from a design-consultant standpoint.

To expound on why Local Bushel is a helpful reference point: Local Bushel is a community of individuals dedicated to increasing our consumption of responsibly raised food. Their values align well with educating me (the creator) about the marketplace whose future I want to project. Those values are:

  1. Fostering Community
  2. Being Sustainable and Responsible
  3. Providing High Quality, Fresh Ingredients

For interactions, I can start simple: show info cards and change scenes based on the orientation of the user’s head, using ray casts. Eventually I’ll work in the Oculus Gear VR Controller.
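A minimal sketch of that head-orientation approach in Unity: cast a ray forward from the main camera each frame and react when it hits an interactable location. The LocationMarker component here is hypothetical, standing in for whatever I attach to each highlighted spot:

```csharp
using UnityEngine;

// Sketch of gaze-based selection: ray cast from the head (main camera)
// forward; if it hits an interactable location's collider, show its info card.
public class GazeSelector : MonoBehaviour
{
    public float maxDistance = 50f;  // how far the gaze ray reaches

    void Update()
    {
        Transform head = Camera.main.transform;
        Ray gaze = new Ray(head.position, head.forward);

        RaycastHit hit;
        if (Physics.Raycast(gaze, out hit, maxDistance))
        {
            // Hypothetical component on each interactable location marker.
            LocationMarker location = hit.collider.GetComponent<LocationMarker>();
            if (location != null)
            {
                location.ShowInfoCard();  // or kick off a scene load instead
            }
        }
    }
}

// Hypothetical marker for an interactable city location.
public class LocationMarker : MonoBehaviour
{
    public void ShowInfoCard()
    {
        // Display the info card, or SceneManager.LoadScene(...) for transitions.
    }
}
```

The same ray cast can later originate from the Gear VR Controller’s transform instead of the head, so the interaction code barely changes.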