Reblog: Google creates coffee making sim to test VR learning

Most VR experiences so far have been games and 360-degree videos, but Google is exploring the idea that VR can be a way to learn real-life skills. The skill it chose to test this hypothesis is making coffee. So of course, it created a coffee-making simulator in VR.

As author Ryan Whitwam explains, the group that trained in the simulation proved more effective than the group in the study that got only a video primer on the coffee-making technique.

Participants were allowed to watch the video or do the VR simulation as many times as they wanted, and then the test—they had to make real espresso. According to Google, the people who used the VR simulator learned faster and better, needing less time to get confident enough to do the real thing and making fewer mistakes when they did.

As you all know, I have the Future of Farming project going right now with Oculus Launch Pad. It is my ambition to impart some knowledge about farming/gardening to users of that experience, so I found this article quite intriguing. How fast could we all learn to tend crops with novel equipment if we were primed first by an interactive experience or tutorial? This is what I’d call ‘environment transferable learning,’ or ETL: the idea that in one environment you can learn core concepts or skills that transcend the tactical elements of that environment. A skill learned in VR that translates into a real-world environment might then be an “Environment Transferable Skill,” or ETS.

A fantastic alternate example also comes from Google: Google Blocks. This application lets Oculus Rift and HTC Vive users craft 3D models with their controllers, and its tutorial walks users through how to use their virtual tools. This example doesn’t involve ETL, but we can still learn from the tutorial’s design for ETL applications. For instance, when Blocks teaches the 3D shape tool, it shows outlines of the 3D models it wants the user to place. The correct button is colored differently from the other Touch controller buttons, which signals a constraint to the user: this is the button to use. With the sensors in the Oculus Touch controllers, one could also enforce a constraint of pointing with the index finger or grasping. In the farming example, if there is a button interface in both the real and virtual worlds (the latter modeled closely on the real equipment), I can show a user how to push the correct buttons on the equipment to get started.

What I want to highlight is that this is essentially a re-engineering of having someone walk you through your first time exercising a skill (e.g. espresso-making). It’s cool that the tutorial can animate a sign pointing your hands to the correct locations, and so on. It may not be terribly useful for complicated tasks, but for teaching anything that requires basic motor skills, VR ETL can be very interesting.

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Pay attention: Practice can make your brain better at focusing

Practicing paying attention can boost performance on a new task, and change the way the brain processes information, a new study says.

In my first blog post on the Oculus forums, I wrote:

“Boiling things down, I realized a few tenets of virtual reality to highlight 1) is that one is cut off from the real world if settings are in accordance (i.e. no mobile phone notifications) and therefore undivided attention is made. 2) Immersion and presence can help us condense fact from the vapor of nuance. The nuance being all of the visual information you will automatically gather from looking around that you would otherwise not necessarily have with i.e. a textbook.”

What would you leverage VR’s innate ability to funnel our attention and focus for?

from Pocket

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Football: A deep dive into the tech and data behind the best players in the world

S.L. Benfica—Portugal’s top football team and one of the best teams in the world—makes as much money from carefully nurturing, training, and selling players as it does from actually playing football.

Football teams have always sold and traded players, of course, but Sport Lisboa e Benfica has turned it into an art form: buying young talent; using advanced technology, data science, and training to improve their health and performance; and then selling them for tens of millions of pounds—sometimes as much as 10 or 20 times the original fee.

Let me give you a few examples. Benfica signed 17-year-old Jan Oblak in 2010 for €1.7 million; in 2014, as he blossomed into one of the best goalies in the world, Atlético Madrid picked him up for a cool €16 million. In 2007 David Luiz joined Benfica for €1.5 million; just four years later, Luiz was traded to Chelsea for €25 million and player Nemanja Matic. Then, three years after that, Matic returned to Chelsea for another €25 million. All told, S.L. Benfica raised more than £270 million (€320m) from player transfers over the last six years.

At Benfica’s Caixa Futebol Campus there are seven grass pitches, two artificial fields, an indoor test lab, and accommodation for 65 youth team members. With three top-level football teams (SL Benfica, SL Benfica B, and SL Benfica Juniors) and other youth levels below that, there are over 100 players actively training at the campus—and almost every aspect of their lives is tracked, analyzed, and improved by technology. How much they eat and sleep, how fast they run, tire, and recover, their mental health—everything is ingested into a giant data lake.

With machine learning and predictive analytics running on Microsoft Azure, combined with Benfica’s expert data scientists and the learned experience of the trainers, each player receives a personalized training regime where weaknesses are ironed out, strengths enhanced, and the chance of injury significantly reduced.
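The article doesn’t detail the models themselves, so here is only a minimal, hypothetical sketch of what one slice of that predictive analytics might look like: a classifier trained to flag elevated injury risk from workload features. The feature set, the synthetic data, and the library choice (scikit-learn) are all my assumptions, not Benfica’s actual system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per player-week.
# Columns: minutes played, sprint count, mean sleep hours, prior injuries.
X = np.column_stack([
    rng.normal(300, 60, 2000),   # minutes
    rng.normal(120, 30, 2000),   # sprints
    rng.normal(7.5, 1.0, 2000),  # sleep
    rng.integers(0, 4, 2000),    # prior injuries
])
# Toy label: risk rises with workload and injury history, falls with sleep.
logit = 0.01 * X[:, 0] + 0.02 * X[:, 1] - 0.8 * X[:, 2] + 0.9 * X[:, 3] - 2.0
y = (rng.random(2000) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Flag a hypothetical player-week for the coaching staff.
sample = np.array([[380, 160, 6.2, 2]])
print("estimated injury risk:", model.predict_proba(sample)[0, 1])
```

In practice the interesting part is less the model than the features, which is exactly why the data collection described next matters so much.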

Sensors, lots of sensors

Before any kind of analysis can occur, Benfica has to gather lots and lots of data—mostly from sensors, but some data points (psychology, diet) have to be surveyed manually. Because small, low-power sensors are a relatively new area with lots of competition, there’s very little standardization to speak of: every sensor (or sensor system) uses its own wireless protocol or file format. “Hundreds of thousands” of data points are collected from a single match or training session.

Processing all of that data wouldn’t be so bad if there were just three or four different sensors, but we counted almost a dozen disparate systems—Datatrax for match day tracking, Prozone, Philips Actiware biosensors, StatSports GPS tracking, OptoGait gait analysis, Biodex physiotherapy machines, the list goes on—and each one outputs data in a different format, or has to be connected to its own proprietary base station.

Benfica uses a custom middleware layer that sanitises the output from each sensor into a single format (yes, XKCD 927 is in full force here). The sanitised data is then ingested into a giant SQL data lake hosted on the team’s own data centre. There might even be a few Excel spreadsheets along the way, Benfica’s chief information officer Joao Copeto tells Ars—”they exist in every club,” he says with a laugh—but they are in the process of moving everything to the cloud with Dynamics 365 and Microsoft Azure.
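To make the middleware idea concrete, here is a minimal sketch of what such a sanitising layer could look like: one small adapter per sensor format, all feeding a single normalized schema. The sensor names, field mappings, and NormalizedSample schema are illustrative assumptions, not Benfica’s actual pipeline.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Callable, Dict, List

# Illustrative target schema -- not Benfica's actual format.
@dataclass
class NormalizedSample:
    player_id: str
    source: str          # which sensor system produced the reading
    metric: str          # e.g. "distance_m", "heart_rate_bpm"
    value: float
    timestamp: datetime

# Each adapter turns one vendor's raw record into normalized samples.
def from_gps_tracker(raw: Dict[str, Any]) -> List[NormalizedSample]:
    ts = datetime.fromisoformat(raw["time"])
    return [NormalizedSample(raw["player"], "gps", "distance_m", raw["dist"], ts),
            NormalizedSample(raw["player"], "gps", "top_speed_ms", raw["vmax"], ts)]

def from_biosensor(raw: Dict[str, Any]) -> List[NormalizedSample]:
    ts = datetime.fromtimestamp(raw["epoch"])
    return [NormalizedSample(raw["athlete_id"], "biosensor", "heart_rate_bpm", raw["hr"], ts)]

# The "middleware": a registry of adapters keyed by sensor system.
ADAPTERS: Dict[str, Callable[[Dict[str, Any]], List[NormalizedSample]]] = {
    "gps": from_gps_tracker,
    "biosensor": from_biosensor,
}

def sanitise(source: str, raw: Dict[str, Any]) -> List[NormalizedSample]:
    """Route a raw record through the right adapter; the result is ready for the data lake."""
    return ADAPTERS[source](raw)

if __name__ == "__main__":
    print(sanitise("gps", {"player": "p42", "time": "2017-05-20T10:15:00",
                           "dist": 10432.0, "vmax": 8.9}))
```

A production version would add validation, unit conversion, and bulk loading into the lake, but the shape (one adapter per vendor format, one common output) is the gist of the approach described above.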

Once everything is floating around in the data lake, maintaining the security and privacy of that data is very important. “Access to the data is segregated, to protect confidentiality,” says Copeto. “Detailed information is only available to a very restricted group of professionals.” Benfica’s data scientists, who are mostly interested in patterns in the data, only have access to anonymised player data—they can see the player’s position, but not much else.

Players have full access to their own data, which they can compare to team or position averages, to see how they’re doing in the grand scheme of things. Benfica is very careful to comply with existing EU data protection laws and is ready to embrace the even-more-stringent General Data Protection Regulation (GDPR) when it comes into force in 2018.

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Life-like realism, a Pixel AR and going mainstream: catching up with Project Tango

Project Tango has been around for a while, from the developer tablet and demos at MWC and Google I/O to the Lenovo Phab2 Pro and the new Asus Zenfone AR. While we’ve seen it progress quite steadily in that time, we’ve never seen it look as convincing as it did at I/O this week.

from Pocket

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Using Machine Learning to Explore Neural Network Architecture

At Google, we have successfully applied deep learning models to many applications, from image recognition to speech recognition to machine translation. Typically, our machine learning models are painstakingly designed by a team of engineers and scientists. This process of manually designing machine learning models is difficult because the search space of all possible models can be combinatorially large — a typical 10-layer network can have ~10^10 candidate networks! For this reason, the process of designing networks often takes a significant amount of time and experimentation by those with significant machine learning expertise.

Our GoogleNet architecture. Design of this network required many years of careful experimentation and refinement from initial versions of convolutional architectures.

To make this process of designing machine learning models much more accessible, we’ve been exploring ways to automate the design of machine learning models. Among many algorithms we’ve studied, evolutionary algorithms [1] and reinforcement learning algorithms [2] have shown great promise. But in this blog post, we’ll focus on our reinforcement learning approach and the early results we’ve gotten so far.

In our approach (which we call “AutoML”), a controller neural net can propose a “child” model architecture, which can then be trained and evaluated for quality on a particular task. That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from. Eventually, the controller learns to assign a high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly.
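As a rough illustration of that loop (and not Google’s actual AutoML code), here is a toy REINFORCE-style controller over a few discrete architecture choices. The choice lists are arbitrary, and the reward function is a made-up stand-in for “train the child network and measure its validation accuracy.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete architecture choices the controller picks from (illustrative only).
CHOICES = {
    "layers":      [2, 4, 8, 16],
    "width":       [32, 64, 128, 256],
    "kernel_size": [1, 3, 5, 7],
}

# One softmax distribution (parameterized by logits) per decision.
logits = {name: np.zeros(len(opts)) for name, opts in CHOICES.items()}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def reward(arch):
    """Stand-in for training the child model and returning validation accuracy."""
    return (0.5 + 0.1 * np.log2(arch["layers"]) / 4
                + 0.1 * np.log2(arch["width"] / 32 + 1) / 3
                - 0.05 * abs(arch["kernel_size"] - 3) / 4)

baseline = 0.0
for step in range(2000):
    # 1. Controller samples a child architecture.
    idx = {n: rng.choice(len(o), p=softmax(logits[n])) for n, o in CHOICES.items()}
    arch = {n: CHOICES[n][i] for n, i in idx.items()}
    # 2. "Train and evaluate" the child; its score is the reward signal.
    r = reward(arch)
    baseline = 0.95 * baseline + 0.05 * r  # moving-average baseline reduces variance
    # 3. REINFORCE update: shift probability toward sampled choices when r beats the baseline.
    for n, i in idx.items():
        grad = -softmax(logits[n])
        grad[i] += 1.0
        logits[n] += 0.1 * (r - baseline) * grad

best = {n: CHOICES[n][int(np.argmax(logits[n]))] for n in CHOICES}
print("controller's preferred architecture:", best)
```

In the real system each reward is a full training run of a child network, which is why the loop runs thousands of times and needs significant compute.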

We’ve applied this approach to two heavily benchmarked datasets in deep learning: image recognition with CIFAR-10 and language modeling with Penn Treebank.

from Pocket

Did you enjoy this article? Then read the full version from the author’s website.

Musings about when and how this research might manifest in consumer products are welcome. Tweet me @dilan_shah.