Reblog: Can gaming & VR help you with combatting traumatic experiences?

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community. The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.


Trauma affects a great many people in a variety of ways. Some suffer from deep-seated trauma such as post-traumatic stress disorder (PTSD) caused by war or abuse, while others suffer from anxiety and phobias caused by traumatic experiences such as an accident, a loss, or an attack.

Each needs its own unique, tailored regimen to lessen the effects and to help the individual regain some normalcy in their life. Often these customized treatments are very expensive and difficult to obtain.

In a world of ubiquitous technology and rapid advances in visually based treatments, these personalized therapies are becoming more accessible to the average sufferer.

What I would like to do is take you through some of the beneficial effects that gaming and VR can have on those suffering from trauma, what these treatments sometimes look like and what the pitfalls can be when using them.
I am not a specialist in psychology or trauma treatment, but I feel that increasing awareness of what is out there is beneficial to everyone, and perhaps can help those suffering from trauma to take the first step in seeking help.

Games & VR as a positive mental activity

To date, only a few studies have examined the effectiveness of gaming and virtual reality gaming in therapeutic treatments, and given the brief history of both, a long-term study has yet to be completed. What we can be sure of, though, are the first-hand accounts of those who have experienced the benefits of these experiences.

A very basic exercise for those suffering from trauma is to engage in mindfulness or meditation exercises. Meditation guided through a VR system can have very positive effects on an individual’s disposition. Due to the immersive nature of VR, you can let yourself fall away into another world and detach yourself from the real world. It is as though you are “experiencing a virtual Zen garden” dedicated entirely to you.

This effect of letting go and identifying with an external locus is probably one of the most effective attributes of gaming and VR. It is the act of not focusing on yourself, on the memories and cues that cause the underlying trauma, but focusing on and engaging with another character, an avatar, on-screen who for all intents and purposes has led and now leads (through you) another life. This character has its own sense of agency to complete a quest or goal, totally independent from you.

The most effective way that games allow you to let go is to offer you a challenge that requires your entire focus. To enhance this, most games offer group challenges. These are two core drivers in improving positive emotions, personal empowerment, and social relatedness. For individuals who suffer from PTSD or other deep traumas, being given a vehicle that allows easier connections with others helps them cope with their own trauma much better. It takes their mind off what is troubling them and, through repetition, can even lead to a lessening of symptoms.
Did you enjoy this article? Then read the full version from the author’s website.

For a more behind-the-scenes look at how this manifests in practice, check out this PBS Frontline documentary, in which Master Sgt. Robert Butler, a Marine combat cameraman, recounts his struggle with PTSD and how Virtual Iraq helped.

Reblog: Virtual Reality Installations to Start Arriving at AMC Movie Theaters Next Year

 

Hey everyone! Today I read that America’s biggest movie theater chain, AMC Entertainment Holdings Inc., is putting $20 million behind a Hollywood virtual reality startup and plans to begin installing its technology at cinemas starting next year. That startup, Dreamscape Immersive, is said to be backed by Steven Spielberg and to offer experiences that allow six people to participate at the same time.

As a Southern California native, I’m excited by what Dreamscape’s CEO, Bruce Vaughn, said: “[I]ts first location will be at a Los Angeles mall run by Westfield Corp., [a] series A investor. It is expected to launch there in the winter of 2018.”

Experiences that build on traditional movie-going won’t be the only offerings. Think of John Wick Chronicles, an immersive FPS released earlier this year that let people play as John Wick and travel into the world of hired guns in the lead-up to John Wick 2. But the WSJ article says that with Dreamscape Immersive you can expect to be able to attend, for instance, sporting events virtually. That’s an interesting appeal, given that we don’t really associate a trip to the theater with sports-fan viewing experiences.

I’m curious to see how these Dreamscape Immersive locations will be outfitted. A useful comparison might be The Void’s Ghostbusters Dimensions, which brings the cinematic experience to life at Madame Tussauds in New York for you and three others. That experience, built on custom hardware, highlighted dynamism and complete immersion as you walked around an expansive physical space.

 


Here’s a glimpse at how their setup looked in July 2016 when I went

 

The article goes on to say that “the VR locations may be in theater lobbies or auditoriums or locations adjacent to cinemas.” Last September, for example, we saw Emax, a Shenzhen-based startup, execute the adjacent layout. In my humble opinion, the open layout was nice, though there are charms to giving folks privacy a la the VR booths one might find at large conferences. An open layout shows how much fun the people inside the virtual experience are having and gives onlooking friends the chance to share their reactions.

 


Kiosk situated across from a cinema inside of a mall in Shenzhen

 

On that topic, creative VR applications like Tiltbrush and Mindshow yield some kind of shareable content innately. In the former, when you’re finished with your painting you can export the work of art as a model, scene, or perhaps just the creation video and view it later online. In the latter, you are essentially creating a show for others to watch.

But if the experience is a bit more passive, as in watching a sporting event, are there ways to share what you experienced with others? Definitely: via green-screen infrastructure and video content. The LA-based company LIV has been striving toward productizing the infrastructure needed to seamlessly capture guests in a better way. Succinctly put, LIV “think[s] VR is amazing to be inside, but rather underwhelming to spectate….” Perhaps Dreamscape Immersive will leverage similar infrastructure to expand the digital footprint of these location-based experiences.

What do you think are the most salient points about this announcement?

Read the original WSJ article by clicking here

 

Reblog: Google creates coffee making sim to test VR learning

Most VR experiences so far have been games and 360-degree videos, but Google is exploring the idea that VR can be a way to learn real life skills. The skill it chose to use as a test of this hypothesis is making coffee. So of course, it created a coffee making simulator in VR.

As author Ryan Whitwam explains, the VR simulation proved more effective than the video primer on coffee-making technique that the other group in the study received.

Participants were allowed to watch the video or do the VR simulation as many times as they wanted, and then the test—they had to make real espresso. According to Google, the people who used the VR simulator learned faster and better, needing less time to get confident enough to do the real thing and making fewer mistakes when they did.

As you all know, I have the Future of Farming project going right now with Oculus Launch Pad. It is my ambition to impart some knowledge about farming/gardening to users of that experience, so I found this article quite intriguing. How fast could we all learn to tend crops using novel equipment if we were primed first by an interactive experience/tutorial? This is what I’d call ‘environment transferable learning,’ or ETL: the idea that in one environment you can learn core concepts or skills that transcend the tactical elements of that environment. A skill learned in VR that translates into a real-world environment would then be an ‘environment transferable skill,’ or ETS.

A fantastic alternate example also comes from Google, with Google Blocks. This application lets Oculus Rift and HTC Vive users craft 3D models with their controllers, and the tutorial walks users through how to use their virtual apparatuses. This example doesn’t use ETL, but we can nonetheless learn from the tutorial’s design for ETL applications. For instance, when Blocks teaches the 3D shape tool, it focuses on showing outlines of 3D models that it wants the user to place. The correct button is colored differently relative to the other touch controller buttons, signaling to the user the constraint that this is the button to use. With the sensors found in the Oculus Touch controllers, one could also force the constraint of pointing with the index finger or grasping. In the example of farming, if there is a button interface in both the real and virtual worlds (the latter modeled closely to mimic the real world), I can then show a user how to push the correct buttons on the equipment to get started.
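To make that pattern concrete, here is a minimal sketch of the “constrain by highlighting” tutorial design. All the names below are illustrative, not taken from Blocks itself:

```python
# A minimal sketch of a tutorial step that constrains input by highlighting
# the one permitted control. Names are hypothetical, not from Blocks.

class TutorialStep:
    def __init__(self, prompt: str, allowed_button: str):
        self.prompt = prompt                  # what the user is asked to do
        self.allowed_button = allowed_button  # the one permitted control

    def button_color(self, button: str) -> str:
        # The permitted button is drawn in a distinct color; every other
        # control stays neutral, signaling the constraint to the user.
        return "highlight" if button == self.allowed_button else "neutral"

    def handle_press(self, button: str) -> bool:
        # Presses on any other control are simply ignored during this step.
        return button == self.allowed_button


steps = [
    TutorialStep("Place the outlined cube with the shape tool", "shape_tool"),
    TutorialStep("Paint the cube you just placed", "paint_tool"),
]
```

The same gating idea extends beyond buttons: with the capacitive sensors on Oculus Touch, `allowed_button` could just as well stand for “index finger pointed” or “grip closed.”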

What I want to highlight is that this is essentially a re-engineering of having someone walk you through your first time exercising a skill (e.g. espresso-making). It’s cool that the tutorial can animate a sign pointing your hands to the correct locations, etc. It may not be super useful for complicated tasks, but for instructing anything that requires basic motor skills, VR ETL can be very interesting.

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Pay attention: Practice can make your brain better at focusing

Practicing paying attention can boost performance on a new task, and change the way the brain processes information, a new study says.

 

In my first blog post on the Oculus forums, I write:

“Boiling things down, I realized a few tenets of virtual reality to highlight 1) is that one is cut off from the real world if settings are in accordance (i.e. no mobile phone notifications) and therefore undivided attention is made. 2) Immersion and presence can help us condense fact from the vapor of nuance. The nuance being all of the visual information you will automatically gather from looking around that you would otherwise not necessarily have with i.e. a textbook.”

What would you leverage VR’s innate ability to funnel our attention and focus for?


Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Entropy: Why Life Always Seems to Get More Complicated

This pithy statement, Murphy’s Law, references the annoying tendency of life to cause trouble and make things difficult. Problems seem to arise naturally on their own, while solutions always require our attention, energy, and effort. Life never seems to just work itself out for us. If anything, our lives become more complicated and gradually decline into disorder rather than remaining simple and structured.

Why is that?

Murphy’s Law is just a common adage that people toss around in conversation, but it is related to one of the great forces of our universe. This force is so fundamental to the way our world works that it permeates nearly every endeavor we pursue. It drives many of the problems we face and leads to disarray. It is the one force that governs everybody’s life: Entropy.

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Doob 3D-High Fidelity Partnership Promo: First 100 Scans Free

Today, High Fidelity and doob™ announce a partnership to enable the importing of doob™ full body scans into High Fidelity virtual reality environments as fully-rigged user avatars.

I’ve made an appointment myself and will most likely post later about how my model comes out, by uploading it to Sketchfab.


Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Refining Images using Convolutional Neural Networks (CNN) as Proxies by Jasmine L. Collins

For my Machine Learning class project, I decided to look at whether or not we can refine images by using techniques for visualizing what a Convolutional Neural Network (CNN) trained on an image recognition task has learned.


Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Football: A deep dive into the tech and data behind the best players in the world

S.L. Benfica—Portugal’s top football team and one of the best teams in the world—makes as much money from carefully nurturing, training, and selling players as actually playing football.

Football teams have always sold and traded players, of course, but Sport Lisboa e Benfica has turned it into an art form: buying young talent; using advanced technology, data science, and training to improve their health and performance; and then selling them for tens of millions of pounds—sometimes as much as 10 or 20 times the original fee.

Let me give you a few examples. Benfica signed 17-year-old Jan Oblak in 2010 for €1.7 million; in 2014, as he blossomed into one of the best goalies in the world, Atlético Madrid picked him up for a cool €16 million. In 2007 David Luiz joined Benfica for €1.5 million; just four years later, Luiz was traded to Chelsea for €25 million and player Nemanja Matic. Then, three years after that, Matic returned to Chelsea for another €25 million. All told, S.L. Benfica raised more than £270 million (€320m) from player transfers over the last six years.

At Benfica’s Caixa Futebol Campus there are seven grass pitches, two artificial fields, an indoor test lab, and accommodation for 65 youth team members. With three top-level football teams (SL Benfica, SL Benfica B, and SL Benfica Juniors) and other youth levels below that, there are over 100 players actively training at the campus—and almost every aspect of their lives is tracked, analyzed, and improved by technology. How much they eat and sleep, how fast they run, tire, and recover, their mental health—everything is ingested into a giant data lake.

With machine learning and predictive analytics running on Microsoft Azure, combined with Benfica’s expert data scientists and the learned experience of the trainers, each player receives a personalized training regime where weaknesses are ironed out, strengths enhanced, and the chance of injury significantly reduced.

Sensors, lots of sensors

Before any kind of analysis can occur, Benfica has to gather lots and lots of data—mostly from sensors, but some data points (psychology, diet) have to be surveyed manually. Because small, low-power sensors are a relatively new area with lots of competition, there’s very little standardization to speak of: every sensor (or sensor system) uses its own wireless protocol or file format. “Hundreds of thousands” of data points are collected from a single match or training session.

Processing all of that data wouldn’t be so bad if there were just three or four different sensors, but we counted almost a dozen disparate systems—Datatrax for match day tracking, Prozone, Philips Actiware biosensors, StatSports GPS tracking, OptoGait gait analysis, Biodex physiotherapy machines, the list goes on—and each one outputs data in a different format, or has to be connected to its own proprietary base station.

Benfica uses a custom middleware layer that sanitises the output from each sensor into a single format (yes, XKCD 927 is in full force here). The sanitised data is then ingested into a giant SQL data lake hosted in the team’s own data centre. There might even be a few Excel spreadsheets along the way, Benfica’s chief information officer Joao Copeto tells Ars (“they exist in every club,” he says with a laugh), but they are in the process of moving everything to the cloud with Dynamics 365 and Microsoft Azure.
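We don’t know what Benfica’s middleware actually looks like, but the sanitisation step it performs can be sketched as a set of per-sensor adapters that each map a vendor’s own layout onto one shared record type. The field names and layouts below are invented for illustration; the real formats are proprietary:

```python
# Hypothetical sanitisation layer: one adapter per sensor system, all
# emitting the same normalized record type for ingestion.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Reading:
    player_id: str
    sensor: str
    metric: str
    value: float
    timestamp: datetime


def from_gps_csv(row: dict) -> Reading:
    # e.g. a row from a csv.DictReader over a GPS vest export:
    # {"player": "p7", "speed_kmh": "31.4", "epoch_s": "1500000000"}
    return Reading(
        player_id=row["player"],
        sensor="gps",
        metric="speed_kmh",
        value=float(row["speed_kmh"]),
        timestamp=datetime.fromtimestamp(int(row["epoch_s"]), tz=timezone.utc),
    )


def from_sleep_json(obj: dict) -> Reading:
    # e.g. a record from a wrist-worn actigraphy device's JSON export:
    # {"athlete": "p7", "sleep_hours": 7.5, "date": "2017-08-01"}
    return Reading(
        player_id=obj["athlete"],
        sensor="actigraphy",
        metric="sleep_hours",
        value=float(obj["sleep_hours"]),
        timestamp=datetime.fromisoformat(obj["date"]).replace(tzinfo=timezone.utc),
    )
```

Once every sensor system has such an adapter, downstream analytics only ever see one schema, which is exactly what makes a single data lake workable.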

Once everything is floating around in the data lake, maintaining the security and privacy of that data is very important. “Access to the data is segregated, to protect confidentiality,” says Copeto. “Detailed information is only available to a very restricted group of professionals.” Benfica’s data scientists, who are mostly interested in patterns in the data, only have access to anonymised player data—they can see the player’s position, but not much else.

Players have full access to their own data, which they can compare to team or position averages to see how they’re doing in the grand scheme of things. Benfica is very careful to comply with existing EU data protection laws and is ready to embrace the even-more-stringent General Data Protection Regulation (GDPR) when it comes into force in 2018.

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Why Voxels Are the Future of Video Games, VR, and Simulating Reality

At VRLA this past month, I had the opportunity to see first-hand how the technology gap is closing in terms of photorealistic rendering in virtual reality. Using the ODG R-9 smartglasses, Otoy was showing a CG scene rendered using OctaneRender that was so realistic I couldn’t tell whether or not it was real. The ORBX VR media file that results when you build a scene using Octane can be played back at 18K on the Gear VR. Unity and Otoy are actively working to integrate Otoy’s rendering pipeline into the Unity 2017 release of the engine. In short, with a light-field render option, you can move your head around if the device’s positional tracking allows for it.


Octane is an unbiased renderer: in computer graphics, unbiased rendering is a method that does not introduce systematic errors or distortions in the estimation of illumination. In the early 2010s, Octane became a pipeline mainly used for visualization work, everything from a tree to a building for architects. Meanwhile, VFX knowledge built up over the past 13 years or so at studios like Sony Pictures Imageworks is now coming to VR content through Magnopus (see the Foundry talk “How VR, AR, & MR Are Driving True Pipeline Convergence”).

A change of pace comes with the voxel. So named as a shortened form of “volume element,” a voxel is kind of like an atom: it represents a value on a regular grid in three-dimensional space.

This is analogous to a texel, which represents 2D image data in a bitmap (which is sometimes referred to as a pixmap). As with pixels in a bitmap, voxels themselves do not typically have their position (their coordinates) explicitly encoded along with their values. Instead, the position of a voxel is inferred based upon its position relative to other voxels (i.e., its position in the data structure that makes up a single volumetric image). In contrast to pixels and voxels, points and polygons are often explicitly represented by the coordinates of their vertices. A direct consequence of this difference is that polygons are able to efficiently represent simple 3D structures with lots of empty or homogeneously filled space, while voxels are good at representing regularly sampled spaces that are non-homogeneously filled. [1]
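The implicit-coordinates idea from the passage above is easy to see in code. Here is a toy dense voxel grid sketched with NumPy: the array stores only values, positions come from the array indices, and explicit coordinates are recovered only when needed:

```python
import numpy as np

# A dense voxel grid: coordinates are implicit in the array indices,
# exactly as the quoted passage describes.
grid = np.zeros((16, 16, 16), dtype=np.uint8)

# "Carve" a solid sphere of radius 5 voxels centred in the grid.
z, y, x = np.indices(grid.shape)
center = np.array(grid.shape) / 2 - 0.5
inside = (z - center[0])**2 + (y - center[1])**2 + (x - center[2])**2 <= 5**2
grid[inside] = 1

# Recover explicit coordinates only when needed, e.g. to export a point
# cloud: argwhere turns occupied cells back into (z, y, x) index triples.
coords = np.argwhere(grid == 1)
```

Note the contrast with a polygon mesh: here the empty space still costs memory (every zero is stored), which is why dense voxel grids shine for regularly sampled, non-homogeneous volumes rather than for mostly empty scenes.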

Within the last year, I’ve seen Google’s Tango launch in the Lenovo Phab 2 Pro (and soon the Asus ZenFone AR), Improbable’s SpatialOS demoed live, and, on the ODG R-9 at Otoy’s booth at VRLA, an incredibly realistic scene rendered completely using some form of point cloud data.

“The ability to more accurately model reality in this manner should come as no surprise, given that reality is also voxel based. The difference being that our voxels are exceedingly small, and we call them subatomic particles.”

[1] Wikipedia: Voxel

Did you enjoy this article? Then read the full version from the author’s website.