This post presents a TEDx talk threading through the connected research topics of games, neuroscience, VR as an input device, and BCI.
Tag / VR

RealityKit Motion Capture and Apple’s future iPhone including a time-of-flight camera
Apple analyst Ming-Chi Kuo claims in his latest report that two of the 2020 iPhones will feature a rear time-of-flight (ToF) 3D depth sensor for better augmented reality features and portrait shots, via MacRumors.
“It’s not the first time we’ve heard of Apple considering a ToF camera for its 2020 phones, either. Bloomberg reported a similar rumor back in January, and reports of a 3D camera system for the iPhone have existed since 2017. Other companies have beaten Apple to the punch here, with several phones on the market already featuring ToF cameras. But given the prevalence of Apple’s hardware and the impact it tends to have on the industry, it’s worth taking a look at what this camera technology is and how it works.
What is a ToF sensor, and how does it work?
Time-of-flight is a catch-all term for a type of technology that measures the time it takes for something (be it a laser, light, liquid, or gas particle) to travel a certain distance.
In the case of camera sensors, specifically, an infrared laser array is used to send out a laser pulse, which bounces off the objects in front of it and reflects back to the sensor. By calculating how long it takes that laser to travel to the object and back, you can calculate how far it is from the sensor (since the speed of light in a given medium is a constant). And by knowing how far all of the different objects in a room are, you can calculate a detailed 3D map of the room and all of the objects in it.
The technology is typically used in cameras for things like drones and self-driving cars (to prevent them from crashing into stuff), but recently, we’ve started seeing it pop up in phones as well.”
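To make that round-trip arithmetic concrete, here’s a minimal sketch (mine, not from the article; the ~1 m example is illustrative) of turning a measured flight time into a distance:

```csharp
// Minimal sketch of the time-of-flight round-trip arithmetic (illustrative only).
public static class TofMath
{
    const double SpeedOfLight = 299792458.0; // meters per second (in a vacuum)

    // A ToF sensor measures the round trip, so the one-way distance is half.
    public static double DistanceFromFlightTime(double roundTripSeconds)
    {
        return SpeedOfLight * roundTripSeconds / 2.0;
    }
}
// Example: a pulse returning after ~6.67 nanoseconds implies an object roughly 1 meter away.
```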
The current state of ARKit 3 and an observation

ARKit 3 has an ever-increasing scope, and of particular interest to me are those AR features which under the hood rely upon machine learning, namely Motion Capture.
Today, ARKit 3 uses raycasting as well as ML-based plane detection on awake (i.e. when the app using ARKit 3 is first opened) in order to place the floor, for example.
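For a rough idea of what that raycast-against-planes step looks like from the Unity side, here’s a hedged sketch using AR Foundation (the component setup and the screen-center ray are my assumptions, not something from the post):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Raycast from the screen center against detected planes and treat the hit
// as a floor candidate. Purely illustrative.
public class FloorProbe : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager;   // assumed to live on the AR Session Origin
    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        var screenCenter = new Vector2(Screen.width / 2f, Screen.height / 2f);
        if (raycastManager.Raycast(screenCenter, hits, TrackableType.PlaneWithinPolygon))
        {
            // The first hit is the closest plane along the ray; its pose gives a
            // world-space position we could treat as "the floor" (possibly wrongly,
            // as the table example below shows).
            Pose pose = hits[0].pose;
            Debug.Log($"Plane hit at height {pose.position.y:F2} m");
        }
    }
}
```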
Check the video below. In it, I’m standing in front of my phone, which is propped up on a table, and using Motion Capture via an iPhone XR. The phone has decided that the surface it is sitting on (the table) is the floor plane, and as a result, you’ll notice that the avatar, once populated into the scene, has an incorrect notion of where the ground is.
My hope is that the new ToF sensor technology will give the device a robust and complete understanding of the layout of the room, such that, in the same situation, it can tell that it is sitting on a table and that the floor is not that plane but the one further away in the real-world scene in front of it.
Source:
The Verge, “Apple’s future iPhone might add a time-of-flight camera — here’s what it could do”
Acceleration and Motion Sickness in the Context of Virtual Reality (VR)
As I traveled around the world with the HTC Vive and Oculus Rift, first-timers were universally fascinated, but a bit woozy, after trying VR. What contributes to this? One possibility is the vergence-accommodation conflict with current displays. However, the subject of this post is locomotion and the anatomical reasons for the discomfort arising from poorly designed VR.
With VR you typically occupy a larger virtual space than that of your immediate physical surroundings.
So, to help you traverse it, locomotion was designed: a way of sending you from point A to point B in the virtual space. Here’s what this looks like:
Caption: This guy is switching his virtual location by pointing a laser on the tip of his controller to move around.
Movement with changing velocity through a virtual environment can contribute to this overall feeling of being in a daze.
That’s why most creators smooth transitions and avoid this kind of motion (e.g. blink teleport, or the constant-velocity movement from Land’s End). Notice how the movement seems steady and controlled below?
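As a concrete illustration of that second option, here’s a minimal Unity sketch of constant-velocity movement (my own script, not code from Land’s End): the rig moves at a fixed speed, so there is no sustained visual acceleration while in transit.

```csharp
using UnityEngine;

// Moves the rig toward a target point at a constant speed — no easing, no
// acceleration curves — which keeps the visually implied acceleration minimal.
public class ConstantVelocityMover : MonoBehaviour
{
    public Transform target;                 // where the player asked to go
    public float speedMetersPerSecond = 2f;

    void Update()
    {
        if (target == null) return;
        transform.position = Vector3.MoveTowards(
            transform.position,
            target.position,
            speedMetersPerSecond * Time.deltaTime);
    }
}
```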
Acceleration and Velocity
‘Acceleration’ is, put simply, any kind of change of speed measured over time, generally [written] as m·s^-2 (meters per second, per second) if it’s linear, or rad·s^-2 (the same, but with an angle) if it’s around an axis. Any type of continuous change in the speed of an object will induce a non-zero acceleration.”
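In symbols (my notation; v is linear speed and ω is angular speed):

```latex
a = \frac{\Delta v}{\Delta t}\ \left[\mathrm{m\,s^{-2}}\right],
\qquad
\alpha = \frac{\Delta \omega}{\Delta t}\ \left[\mathrm{rad\,s^{-2}}\right]
```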
The Human Vestibular System
When you change speed, your vestibular system should register an acceleration. The vestibular system is part of your inner ear. It’s basically the thing that tells your brain if your head is up or down, and permit[s] you to [stand] and walk without falling all the time!
Fluid moving in your semicircular canals is measured, and the information is communicated to your brain by the cranial nerves. You can think of this as [similar to how] an accelerometer and a gyroscope work.
[This] acceleration not only includes linear acceleration (from translation in 3D space), but also rotation, which induces angular acceleration, and empirically that seems to be the worst kind when it comes to VR sickness…”
Now that you have this grounding in our anatomical system for perceiving acceleration, the upshot is that viewers in VR will often experience movement visually but not via the semicircular canals. It’s this incongruence that drives VR sickness with current systems.
Some keywords to explore more if you’re interested in the papers available are: Vection, Galvanic Vestibular Stimulation (GVS), and Self-motion.
Read more on the ways developers reduce discomfort on the author’s website.

Rapid Worldbuilding with ProBuilder
Real Quick Prototyping Demo
- Main ProBuilder window (found by going to Tools > ProBuilder > ProBuilder Window) – holds the tools; ProBuilder is designed so that you can mostly ignore it when you’re new to the tool
- Face, Object, etc. selection modes – only let you select that type of element
- A good way to learn the tool is to go through the main ProBuilder window and check the shortcuts
- It really helps to keep things as simple as possible from the get-go. Don’t add tons of polys
- Shape selector will help you quickly make stuff
- Connect edges and insert edge loop
- Holding shift to grab multiple
- ‘R’ will give you scale mode
- Extrude is a fantastic way to add geometry
- Grid Size – keep it at 1 for 1 meter; this is important for avoiding mistakes when creating geo and for knowing your angles (see the snapping sketch after this list)
- Use the ortho top down view to see if your geo fits your grid
- Detach Faces with its default setting splits off the selected geo while keeping it part of the same object
- Detach Faces with the other setting creates a new GameObject from the selection
- The pivot point often needs to be rejigged (solutions: the object actions, or set it to a specific element using “Center Pivot”)
- Center the pivot and put it on the grid by using ProGrids
- Settings changes become the default
- Use the Poly Shape tool to spin up a room + extrude quickly
- Merge objects
- think in terms of Quads
- try selecting two vertices and connecting (Alt + E) them
- Select Hidden as a toggle is a great option, because the 3D view is drawn as a 2D projection, so otherwise you will only click on the thing that is drawn closest to the camera!
- Crafting a doorway can be done using extrude and grid-size changes; toggle the views (X, Y, and Z) to help with that
- hold V to make geo snap to vertices; this will save you time later on
- Alt+C will collapse verts (as in the ramp example where the speaker started with a cube)
- Weld vs. Collapse: Weld is great for merging two hallways (it merges selected verts within a specified distance), whereas Collapse pushes all selected verts together into a single point
- Grow selection and smooth group
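As referenced in the Grid Size note above, keeping geometry on a 1-meter grid is really just rounding positions. A tiny illustrative helper (mine, not part of ProGrids):

```csharp
using UnityEngine;

// Illustrative snap-to-grid helper: rounds a world position to the nearest
// grid point, mirroring what ProGrids does for you in the editor.
public static class GridSnap
{
    public static Vector3 Snap(Vector3 position, float gridSize = 1f)
    {
        return new Vector3(
            Mathf.Round(position.x / gridSize) * gridSize,
            Mathf.Round(position.y / gridSize) * gridSize,
            Mathf.Round(position.z / gridSize) * gridSize);
    }
}
```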
Polybrush Stuff
- Add more detail with loops or subdivide (smart connect)
- Polybrush will let you sculpt, smooth, texture blend, scatter objects, etc.
- Modes like smoothing
- todo explore something prefabs
- N-gons are bad because everything is made up of tris
Texturing stuff
- By default, everything is on Auto UVs, which means you can adjust texturing with the in-scene handles toggle
- When you’re prototyping this lets you avoid the fancier texturing toolbar
Question
- Why is ProGrids helpful? Short answer: if you’re not super familiar with 3D modeling and creation software (e.g. Maya), you can create simple geo without leaving the Unity editor.
- Why would you be obsessive about making sure your geo fits your 1-meter grid size? Short answer: this helps you avoid errors in geo creation such as horrid angles and hidden faces.
- Can you talk a little bit about automation with ProBuilder? (see the scripted-geometry sketch below)
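On that automation question: ProBuilder does ship its own scripting API, which I didn’t capture in these notes. As a hedged stand-in, here’s what scripted geometry creation looks like with Unity’s plain Mesh API, just to show the kind of thing you could automate:

```csharp
using UnityEngine;

// Builds a 1x1 quad from code. ProBuilder's own API offers higher-level shape
// generation; this plain-Mesh version is only meant to illustrate the idea.
public class ScriptedQuad : MonoBehaviour
{
    void Start()
    {
        var mesh = new Mesh
        {
            vertices  = new[] { new Vector3(0, 0, 0), new Vector3(1, 0, 0),
                                new Vector3(0, 1, 0), new Vector3(1, 1, 0) },
            triangles = new[] { 0, 2, 1, 2, 3, 1 },   // two tris, front-facing toward -Z
            uv        = new[] { new Vector2(0, 0), new Vector2(1, 0),
                                new Vector2(0, 1), new Vector2(1, 1) }
        };
        mesh.RecalculateNormals();

        gameObject.AddComponent<MeshFilter>().mesh = mesh;
        gameObject.AddComponent<MeshRenderer>().material =
            new Material(Shader.Find("Standard"));
    }
}
```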

Reblog: Google creates coffee making sim to test VR learning
Most VR experiences so far have been games and 360-degree videos, but Google is exploring the idea that VR can be a way to learn real life skills. The skill it chose to use as a test of this hypothesis is making coffee. So of course, it created a coffee making simulator in VR.
As author Ryan Whitwam explains, this simulation proved more effective than what the other group in the study got: just a video primer on the coffee-making technique.
Participants were allowed to watch the video or do the VR simulation as many times as they wanted, and then the test—they had to make real espresso. According to Google, the people who used the VR simulator learned faster and better, needing less time to get confident enough to do the real thing and making fewer mistakes when they did.
As you all know, I have the Future of Farming project going right now with Oculus Launch Pad. It is my ambition to impart some knowledge about farming/gardening to users of that experience, so I found this article quite intriguing. How fast could we all learn to tend crops with novel equipment if we were primed first by an interactive experience/tutorial? This is what I’d call ‘environment transferable learning’, or ETL: the idea that in one environment you can learn core concepts or skills that transcend the tactical elements of that environment. For example, a skill learned in VR that translates into a real-world environment might be an “Environment Transferable Skill”, or ETS.
A fantastic alternate example, also comes from Google, with Google Blocks. This application allows Oculus Rift or HTC Vive users to craft 3D models with controllers, and the tutorial walks users through how to use their virtual apparatuses. This example doesn’t use ETL, but we can learn from the design of the tutorial nonetheless for ETL applications. For instance, when Blocks teaches how to use the 3D shape tool it focuses on teaching the user by showing outlines of 3D models that it wants the user to place. The correct button is colored differently relative to other touch controller buttons. This signals a constraint to the user that this is the button to use. With sensors found in the Oculus Touch controllers, one could force the constraint of pointing with the index finger or grasping. In the example of farming, if there is a button interface in both the real and virtual world (the latter modeled closely to mimic the real world) I can then show a user how to push the correct buttons on the equipment to get started.
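To make the constraint idea concrete, here’s a hedged sketch (mine, not Blocks’ actual code) that tints the “correct” button’s mesh and uses the Touch controllers’ capacitive sensors to notice when the user points with their index finger:

```csharp
using UnityEngine;

// Illustrative tutorial constraint: highlight the button we want pressed and
// detect an index-finger point via the Touch controller's near-touch sensor.
public class TutorialConstraint : MonoBehaviour
{
    public Renderer correctButtonRenderer;   // assumed: a mesh on the hand model's highlighted button
    public Color highlightColor = Color.cyan;

    void Start()
    {
        // Color the correct button differently from the others.
        correctButtonRenderer.material.color = highlightColor;
    }

    void Update()
    {
        // If the index finger is NOT resting near the trigger, the user is pointing.
        bool pointing = !OVRInput.Get(OVRInput.NearTouch.PrimaryIndexTrigger);
        if (pointing)
        {
            Debug.Log("User is pointing with the index finger");
        }
    }
}
```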
What I want to highlight is that this is essentially a re-engineering of having someone walk you through your first time exercising a skill (e.g. espresso-making). It’s cool that the tutorial can animate a sign pointing your hands to the correct locations, etc. Maybe not super useful for complicated tasks, but for instructing anything that requires basic motor skills, VR ETL can be very interesting.
Read the full version of this article on the author’s website.

Update; Oculus Launch Pad 2017: Future of Farming
Everything is hierarchical in the brain. My hypothesis is that, in VR design, this can be really helpful for setting context for users. For example, at Virtually Live, the VR content was “Formula E Season 2 Highlights”, meaning the person donning the headset could watch races. I once proposed that we borrow the amazing UX of Realities.io and use an interactive model of Earth as the highest level of abstraction above the races (which occur all over the world). The user can spin the globe around and find a location to load in. Written abstractly, the hierarchy in this example is: Globe is a superset of Countries, Countries of Cities, and Cities of Places. I figured this would be perfect for an electric motorsport championship series that travels to famous cities each month. In the end, we went with a carousel design that was more expeditious than the globe.
The Future of Farming takes place largely in a metropolitan area, namely San Francisco. So I’ve decided that, to begin, I’ll borrow from the hierarchical plan. I want to show the user an orthographic projection of San Francisco with maybe a handful of locations highlighted as interactable. To do this I’ve set up WRLD in my project for the city landscape.
Upon selection of one of the highlighted locations with the Gear VR controller, a scene will load with a focal piece of farming equipment that has made its way into that type of place (e.g. warehouse, house, or apartment).
A quick aside: last week I had a tough travel and work schedule in New York. On reading back what I wrote, I found a pretty bare blog post, so I decided it was better not to share it. Another hurdle was the unfortunate loss of the teammate I announced two weeks prior, simply because he prioritized projects with budgets more appealing to him. I dwelled on this for a while, as I admired his plant modeling work a lot. With the loss of that collaborator, and weighing a few other factors, I’ve decided to pursue an art style much akin to that of Virtual Virtual Reality or Superhot: less geometry, all created in VR. I’ll do most of this via Google Blocks and a workflow for pushing the created environments to Unity, which is pretty straightforward. After you have created your model in Google Blocks, visit the Blocks site in a browser with WebGL-friendly settings and download your model. From there, unzip that file and drag it into a Unity folder such as Assets > Blocks Import, which I recommend you create as a way of staying organized. You’ll note that a Blocks export usually comprises a .mtl file, a Materials folder, and a .obj model. In order for your Google Blocks model to show through as intended, you need to change one setting called “Material Naming” after you’ve clicked on your .obj: change it to “By Base Texture Name”, and Material Search can be set to “Recursive-Up”.
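If you’re importing a lot of Blocks models, those two import settings can also be applied automatically. A minimal editor sketch (mine; the Assets/Blocks Import path is just the folder convention described above):

```csharp
using UnityEditor;

// Editor-only: applies the Blocks-friendly material settings to every model
// imported under Assets/Blocks Import. Place this script in an Editor folder.
public class BlocksImportSettings : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        if (!assetPath.StartsWith("Assets/Blocks Import")) return;

        var importer = (ModelImporter)assetImporter;
        importer.materialName   = ModelImporterMaterialName.BasedOnTextureName; // "By Base Texture Name"
        importer.materialSearch = ModelImporterMaterialSearch.RecursiveUp;      // "Recursive-Up"
    }
}
```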
Here’s a look at the artwork for a studio apartment in SF for the app, as viewed from above. It’s a public bedroom model that I’m remixing; you can see I’ve added a triangular floor space for a kitchen, and this is likely where the windowsill variety of hydroponic crop equipment will go. Modeling one such piece is going to be really fun.

View from Above

Angle View
In the past weeks, I’ve dedicated myself to learning about gardening and farming practices via readings, podcasts, and talking to people in business ecosystems involving food product suppliers. I learned about growing shiitake mushrooms and broccoli sprouts in the home and got hands-on with these. I learned about the technology evolution behind rice cookers and about relevant policy for farmers on the west coast over the last dozen years. In the industry, there are a number of effective farming methods that I’m planning to draw on (indoor hydroponic and aeroponic) that I can see working in some capacity in the home, as well as settings I will highlight such as a legitimate vertical indoor farm facility (https://techcrunch.com/2017/07/19/billionaires-make-it-rain-on-plenty-the-indoor-farming-startup/).
I have asked someone who works at Local Bushel for help from a design-consultant standpoint.
To expound on why Local Bushel is a helpful reference point: Local Bushel is a community of individuals dedicated to increasing our consumption of responsibly raised food. Their values align well with educating me (the creator) about the marketplace whose future I want to project. Those values are:
- Fostering Community
- Being Sustainable and Responsible
- Providing High Quality, Fresh Ingredients
——
For interactions, I can start simple and use info cards / scene changes driven by the orientation of the user’s head via raycasts, working toward the Oculus Gear VR Controller eventually.
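A minimal sketch of that head-orientation raycast (mine; the “Hotspot” tag and the info-card idea are placeholders):

```csharp
using UnityEngine;

// Cast a ray along the user's gaze each frame; if it lands on an object
// tagged "Hotspot", react (e.g. show an info card or load that location's scene).
public class GazeInteractor : MonoBehaviour
{
    public float maxDistance = 10f;

    void Update()
    {
        var cam = Camera.main.transform;
        if (Physics.Raycast(cam.position, cam.forward, out RaycastHit hit, maxDistance)
            && hit.collider.CompareTag("Hotspot"))
        {
            Debug.Log($"Gazing at {hit.collider.name}");
        }
    }
}
```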
Reblog: Pay attention: Practice can make your brain better at focusing
Practicing paying attention can boost performance on a new task, and change the way the brain processes information, a new study says.
In my first blog post on the Oculus forums, I write:
“Boiling things down, I realized a few tenets of virtual reality to highlight 1) is that one is cut off from the real world if settings are in accordance (i.e. no mobile phone notifications) and therefore undivided attention is made. 2) Immersion and presence can help us condense fact from the vapor of nuance. The nuance being all of the visual information you will automatically gather from looking around that you would otherwise not necessarily have with i.e. a textbook.”
What would you leverage VR’s innate ability to funnel our attention and focus for?
via Pocket. Read the full version of this article on the author’s website.

OLP Day 2: Chris Pruett Unity Session
The following are my notes from a day at Oculus Launch Pad 2017 with Director of Mobile Engineering @ Oculus, Chris Pruett.
Chris Talking about Unity Workflow and Areas of OLP Interest
Loading Scene and other tricks and tips for optimizing load-in
- Put scene assets in an asset bundle, use the OVROverlay script, load synchronously, and when complete turn the cubemap off (see the loading sketch below)
- You could decide that a one-time level load is better than multiple level loads. As long as your session time is fairly long, you’ve paid all the costs at one time, and now you have a memory buffer for the experience.
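Here’s a hedged sketch of that loading pattern (my own illustrative script; the loadingOverlay object stands in for whatever carries the OVROverlay cubemap):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Illustrative loading-screen pattern: keep an overlay object (e.g. one using
// OVROverlay to show a cubemap via timewarp) alive across the load, enable it,
// load the scene, then disable it when the new scene is up.
public class SceneLoader : MonoBehaviour
{
    public GameObject loadingOverlay;   // assumed: a root object carrying the OVROverlay component

    void Awake()
    {
        // Survive the scene change so we can turn the overlay off afterwards.
        DontDestroyOnLoad(gameObject);
        DontDestroyOnLoad(loadingOverlay);
    }

    public void Load(string sceneName)
    {
        loadingOverlay.SetActive(true);          // timewarp keeps compositing this layer
        SceneManager.sceneLoaded += OnLoaded;
        SceneManager.LoadScene(sceneName);       // synchronous load, as in the notes above
    }

    void OnLoaded(Scene scene, LoadSceneMode mode)
    {
        SceneManager.sceneLoaded -= OnLoaded;
        loadingOverlay.SetActive(false);         // "turn the cubemap off"
    }
}
```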
What’s notable inside OVR Utilities
- High-level VR camera/headset rig (e.g. LeftEyeAnchor, RightEyeAnchor, CenterEyeAnchor)
- Controllers API for higher-end hand controllers (Touch) and the lower-end hand controller (Gear VR)
- The user gets to choose the left/right handedness setting for the Gear controller
- OVRInput.cs is abstracted in a way that allows for input from any controller (e.g. LTrackedRemote or RTrackedRemote)
OVROverlay
- Built into the Oculus SDK – it’s a texture that is rendered not by the game engine but by timewarp, which is something similar to asynchronous reprojection
- Your engine renders a left and right eye buffer and submits it to the Oculus SDK
- The basic thing it does is project images, warping the edges of images in the right way for the specific hardware in use
- Timewarp – tries to alleviate judder. It takes a previous frame and re-shows it; in practice it only knows about orientation information, so it’s not going to help with the camera moving forward. It will also render some overlays for you. First of all, timewarp has an opportunity to render faster than Unity. Timewarp composites in the layers that you submit, which are essentially “quads” – particularly good if you’re rendering video. It was made initially for mobile, but there’s now an additional buffer for Rift that you have to… Upshot: you can get higher fidelity by pushing certain textures through timewarp.
VR Compositor layer

OVRInput.cs
This section could use some filling out
- Check the public enum Button for more interesting mappings
- Check out the public enum Controller for both Oculus Touch and Gear VR Controller orientation info, e.g. LTrackedRemote or RTrackedRemote –– these will give you back a quaternion (see the sketch below)
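A small illustrative use of OVRInput along those lines (my own sketch, not from the session):

```csharp
using UnityEngine;

// Reads a button press and the tracked remote's orientation through OVRInput.
public class InputProbe : MonoBehaviour
{
    void Update()
    {
        // True on the frame the primary button is pressed on the active controller.
        if (OVRInput.GetDown(OVRInput.Button.One))
            Debug.Log("Primary button pressed");

        // Orientation of the Gear VR tracked remote (right-hand mapping) as a quaternion.
        Quaternion rot =
            OVRInput.GetLocalControllerRotation(OVRInput.Controller.RTrackedRemote);
        Debug.Log($"Remote rotation: {rot.eulerAngles}");
    }
}
```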
Potentially To Come: OculusDebugTool, right now it’s only on the Rift
Fill Cost
- Today the eye buffers aren’t rendered at the same resolution as the device, but rather at 1024 x 1024, which is about a third of the resolution of the Gear VR display
- No one comes fill bound, but the buffers are 1400 x 1400
- The way Chris thinks about this is the total number of pixels that will get touched for a computation, and the number of times each will be computed/touched
Draw Calls
- Draw calls are organized around a mesh
- Batches are “when you take like five meshes of the same material and collect them up and issue draw calls in succession (this is because the real-time cost comes with loading in info about the draw)”. In Stats this is the total number of draw calls (you want to keep it under 150); “Saved by Batching” refers to the number of draw calls that batching eliminated
- Static Batched objects are objects that you mark as “Static” in the Inspector on the right side, saying that this object isn’t going to move or scale (see the combine sketch below)
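For reference, static batching can also be requested from script; a hedged sketch using Unity’s StaticBatchingUtility (the root object is illustrative):

```csharp
using UnityEngine;

// Combines all meshes under 'environmentRoot' into static batches at startup,
// so they can be drawn with fewer draw calls. The objects must not move afterwards.
public class BatchEnvironment : MonoBehaviour
{
    public GameObject environmentRoot;   // parent of the level geometry

    void Start()
    {
        StaticBatchingUtility.Combine(environmentRoot);
    }
}
```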
Movie Textures
Optimization: Dynamic Versions of Interactive Objects
Frame Debugger
Use this to walk through all your draw calls. Can be very helpful to understand how Unity draws your scene. Opaque geometry comes first, followed by more transparent objects.
Expand on how one can open this up in the Unity Editor, please.
Lightmap and Lightmap Index
- Window > Lighting > Settings – you can basically bake your lighting here under “Lightmapping Settings”
- Please fill out this section more if you have other notes
- If you wanted to have a crack of lightning or something, the way to do that is to write your own shader that lights all surfaces, e.g. by increasing the saturation of every object
- In the past, Chris has found it edifying to delve into the code for Unity shaders such as Mobile Diffuse or Standard, which are all available publicly

You can barely see it, but on the far right, highlighted in the yellow box, there is a Source setting set to “Skybox”.
Oculus SDK for Multiplayer
- Rooms: once players are in a room they can share info
- Hard-code a roomID – this helps with info transfer across multiple instances of Unity running (i.e. two different Gear VRs running with the same app open can share info)
- Specular – computes the same simple (Lambertian) lighting as Diffuse, plus a viewer-dependent specular highlight.
- In some cases, Unity will take meshes that share the same material and batch them (two versions: static mesh batching and dynamic batching); it works based on material pointers
- At play time/build time, Unity will load a bunch of stuff into the same static combined mesh
- Colliders get expensive when you start to move them
- Progressive Lightmapper
- Combined meshes can be viewed, which is cool
- Unity isn’t going to batch across textures; that’s why he (well, a very talented artist) made the atlas. You can try using Unity’s API for atlas creation or MeshBaker for similar effects
- Set Pass Call – a pass within a shader (some shaders require multiple passes)
- Unity 5.6 – single-pass stereo rendering halves your draw call count, and in practice it’s about a third of all of this
- If you want to use Oculus Touch to do pointing or a thumbs-up (as in Facebook Spaces), there is a function in one of the scripts called GetNearTouch() which lets you check the capacitive sensors on the Touch controllers and toggle a hand-model point/thumbs-up on and off
- Mipmaps – further reading
- Occlusion Culling – determines what you’re able to see or not see at any given second (e.g. Horizon Zero Dawn, below) – Window > Occlusion Culling
- Android Debug Bridge (adb) – may help with debugging without having to put your headset on