My Experience Working Out At-Home During the Global COVID-19 Outbreak

At first glance, this post might sound pedantic; for comprehensive information on the coronavirus, visit the WHO Q&As or the CDC. This post is about immunological fitness, how the virus spreads, and my personal method of using virtual reality as an additional form of exercise:

The disease can spread from person to person through small droplets from the nose or mouth which are spread when a person with COVID-19 coughs or exhales. These droplets land on objects and surfaces around the person. Other people then catch COVID-19 by touching these objects or surfaces, then touching their eyes, nose or mouth.


source: https://www.who.int/news-room/q-a-detail/q-a-coronaviruses

For healthy populations, exercise is still a key part of staying healthy during this time (more on this further down). For this reason, gyms and other typically crowded workout facilities are out. I’ve been using an at-home workout strategy built around virtual reality for over two weeks, and I’d like to share why it is working for me.

TLDR

If you own a VR headset, some titles that could be used for cardio are:

- Beat Saber
- Box VR
- OhShape
- Thrill of the Fight
- Synth Riders
- Creed: Rise to Glory

Active titles that can be modified to be more of a workout:

- Rec Room
- RacketNX
- Pistol Whip
- Lone Echo
- Superhot VR

For general standing activity that affords you some low-intensity movement:

- Racket Fury
- Until You Fall
- Sports Scramble
- VRChat

Virtual reality is a little-known option for fitness, but at my company YUR we now know that thousands of people use VR games daily to work out in a fun and efficient way. The big difference is that while wearing a VR headset you are completely immersed in the game you are playing. It’s worth noting that this trend toward immersive fitness is also visible with Peloton, Les Mills, and other fitness names.

YUR monthly view

As you can see, my month so far has been characterized by workouts of between 250 and 750 kcal every day (except for March 4th). I tend to use games such as Box VR or Beat Saber, and the cool part is that with YUR any game can be played and tracked, which allows for constant novelty the moment you feel bored of your current exercise regimen. This doubles as a benefit if you are feeling cooped up at home.

… with YUR any game can be played and tracked, which allows for constant novelty the moment you feel bored of your current exercise regimen

I would characterize the workouts I do in VR as plyometric and explosive in nature, similar to a HIIT workout. However, this is up to your personal preference.

As a perennial gym-goer, I have to point out what VR workouts are not providing me and others: the hypertrophic and strength benefits of lifting weights, as well as the distinct conditioning of cycling, rowing, calisthenics, and running.

So how does staying immunologically fit factor into this, and into COVID-19? By working out at home, I’m not posing a risk to others (as long as I am the only one using my VR headset), and I’m still participating in a community.

To be immunologically fit, you need to be physically fit. “White blood cells can be quite sedentary,” says Akbar. “Exercise mobilises them by increasing your blood flow, so they can do their surveillance jobs and seek and destroy in other parts of the body.” The NHS says adults should be physically active in some way every day, and do at least 150 minutes a week of moderate aerobic activity (hiking, gardening, cycling) or 75 minutes of vigorous activity (running, swimming fast, an aerobics class).

source: https://www.theguardian.com/lifeandstyle/2020/mar/08/how-to-boost-your-immune-system-to-avoid-colds-and-coronavirus

So basically, in the middle of my day, between 1 pm and 6 pm, I throw my Oculus Quest on and work out for half an hour or so. I hope this has been insightful, and if you have a VR headset, perhaps it can factor into your virus response.

This post initially appeared on my LinkedIn.

Oculus Connect 6 Takeaways

Ahead of Oculus Connect 6 (OC6), I attended the Oculus Launchpad and Start dinner tonight. I saw a ton of vibrant communication and high hopes for the next few days. Notably, many developers were internationally based, from places such as Canada and New Zealand. I noticed a pattern of developers holding full-time jobs all the while pursuing publishing an app to the Oculus Store.

RealityKit Motion Capture and Apple’s future iPhone including a time-of-flight camera

Apple analyst Ming-Chi Kuo claims in his latest report that two of the 2020 iPhones will feature a rear time-of-flight (ToF) 3D depth sensor for better augmented reality features and portrait shots, via MacRumors.

“It’s not the first we’ve heard of Apple considering a ToF camera for its 2020 phones, either. Bloomberg reported a similar rumor back in January, and reports of a 3D camera system for the iPhone have existed since 2017. Other companies have beaten Apple to the punch here, with several phones on the market already featuring ToF cameras. But given the prevalence of Apple’s hardware and the impact it tends to have on the industry, it’s worth taking a look at what this camera technology is and how it works.

What is a ToF sensor, and how does it work?

Time-of-flight is a catch-all term for a type of technology that measures the time it takes for something (be it a laser, light, liquid, or gas particle) to travel a certain distance.

In the case of camera sensors, specifically, an infrared laser array is used to send out a laser pulse, which bounces off the objects in front of it and reflects back to the sensor. By calculating how long it takes that laser to travel to the object and back, you can calculate how far it is from the sensor (since the speed of light in a given medium is a constant). And by knowing how far all of the different objects in a room are, you can calculate a detailed 3D map of the room and all of the objects in it.

The technology is typically used in cameras for things like drones and self-driving cars (to prevent them from crashing into stuff), but recently, we’ve started seeing it pop up in phones as well.”
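To make the round-trip arithmetic the quote describes concrete, here is a minimal sketch in Swift; the function name is my own and the nanosecond figure is purely illustrative, not taken from any particular sensor.

```swift
import Foundation

// Round-trip time of flight: distance = (speed of light * time) / 2.
// Dividing by 2 accounts for the pulse traveling out and back.
let speedOfLight = 299_792_458.0   // meters per second

func distance(fromRoundTripTime seconds: Double) -> Double {
    speedOfLight * seconds / 2.0
}

// A pulse returning after ~6.67 nanoseconds bounced off something
// roughly one meter away.
print(distance(fromRoundTripTime: 6.67e-9))   // ≈ 1.0 meter
```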

The current state of ARKit 3 and an observation


ARKit 3 has an ever-increasing scope, and of particular interest to me are those AR features which under the hood rely upon machine learning, namely Motion Capture.

Today, ARKit 3 uses raycasting as well as ML-based plane detection on launch (when the app using ARKit 3 is first opened) in order to place the floor, for example.
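As a rough sketch of that flow (assuming a RealityKit ARView; the FloorFinder class is my own illustrative scaffolding around real ARKit calls):

```swift
import ARKit
import RealityKit

// Illustrative wrapper: enable horizontal plane detection, then
// raycast from the screen center to find a floor-like plane.
final class FloorFinder {
    let arView = ARView(frame: .zero)

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]   // ML-assisted plane detection
        arView.session.run(config)
    }

    // The first horizontal hit is ARKit's current best guess at the
    // floor, or, as in the video below, possibly a tabletop.
    func estimatedFloorY() -> Float? {
        let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
        return arView.raycast(from: center,
                              allowing: .estimatedPlane,
                              alignment: .horizontal)
            .first?.worldTransform.columns.3.y
    }
}
```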

Check the video below. In it, I’m standing in front of my phone, which is propped up on a table.

In this video, I’m using motion capture via an iPhone XR. My phone is sitting on a surface (namely, the table) that ARKit has determined is the floor plane, and as a result you’ll notice that our avatar, once placed into the scene, has an incorrect notion of where the ground is.
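For reference, a minimal body-tracking setup looks roughly like this (a sketch; the controller class and placement logic are mine, not the exact code behind the video):

```swift
import ARKit
import RealityKit

// Sketch: run body tracking and move a placeholder anchor to the
// tracked body's root transform each frame.
final class BodyTrackingController: NSObject, ARSessionDelegate {
    let arView = ARView(frame: .zero)
    let characterAnchor = AnchorEntity()

    func start() {
        arView.session.delegate = self
        arView.scene.addAnchor(characterAnchor)
        arView.session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // The avatar is positioned at the body's root transform;
            // where that sits relative to the detected "floor" plane is
            // exactly where the mistake in the video shows up.
            characterAnchor.position = simd_make_float3(bodyAnchor.transform.columns.3)
        }
    }
}
```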


My hope is that the new ToF sensor technology will allow for a robust and complete understanding of the layout of objects in the room and of the floor, such that, in the same scenario, the device can tell that it is sitting on a table and that the floor is not that plane but the one farther away in the real-world scene before it.

 

source: The Verge, “Apple’s future iPhone might add a time-of-flight camera — here’s what it could do”

Reblog: Thoughts on SwiftUI from WWDC 19

SwiftUI

So what’s the big deal with SwiftUI? Well here’s why I think it’s great.

  1. One UI framework for all platforms. It has always baffled me why Apple never made UIKit work on the Mac. If it worked for iOS and tvOS, it could certainly also work on the Mac (which it does now, thanks to Project Catalyst). For me, this means double the work on many parts of the UI for Secrets for Mac and iOS. Now and then you would see rumors that would give you hope. “Maybe next year,” you’d think… but the years passed and nothing. Looking back, I can’t help but wonder if this was Apple’s plan all along. SwiftUI is certainly a multi-year effort. The underpinnings of the Combine framework are at least 5 years old:

    Joe Groff@jckarter

    Combine goes back before even Swift existed. I’ve been helping the SwiftUI folks for at least three years, and they were probably working on stuff before I knew about it

    David Smith@Catfish_Man

    I was curious what the earliest Combine-related file I have on my computer is, and it turns out it’s August 14th 2013. I filed the radar it references on 10/23/2012.

    Also apparently yet another short-lived project name I forgot about?? pic.twitter.com/FR6NADWrs5


    And although I haven’t played much with it yet, there will certainly be a lot of bugs and shortcomings to iron out over the next few years.

  2. Declarative. To put it succinctly, this means that instead of telling the framework what to do, you tell it what you want, and the framework figures out how to achieve it. You’ve seen this style of coding already with Auto Layout. It offloads much of the complexity to the framework. By introducing this abstraction and letting the framework do the job of composing the UI for you (see the sketch after this list), we get:
    • Automatic support for many of the system features: dynamic type, accessibility, dark mode, etc.;
    • Adaptive layouts on different platforms (a switch on the iPhone becomes a checkbox on the Mac);
    • Freedom from having to adapt our UI whenever Apple needs to evolve it (what SwiftUI uses to satisfy a Text element may change in the next release).
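    A minimal sketch of that declarative style (a toy view of my own, not from the original post):

    ```swift
    import SwiftUI

    // We declare the UI we want; the framework builds and updates it.
    struct SettingsView: View {
        @State private var isEnabled = false   // state change redraws the view

        var body: some View {
            VStack {
                Text(isEnabled ? "Enabled" : "Disabled")
                // Renders as a switch on iOS and a checkbox on the Mac.
                Toggle("Enable feature", isOn: $isEnabled)
            }
            .padding()
        }
    }
    ```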

    I had a professor that used to say:

    All problems in CS can be solved with one more level of indirection.

    It still holds.

  3. Reactive. I’ve never invested much time in any of the reactive frameworks out there. I definitely appreciate the principles behind them, but I’ve always been very critical of frameworks or technologies that are too invasive. With what I’ve already seen in the sessions and demos, I’m just about ready to forgive Apple for abandoning development of the controversial Cocoa Bindings.

    We write so much glue code that I’ve got no problem accepting the learning curve of all the new stuff that is driving this, both in the Swift language and the new Combine framework.
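    A taste of the Combine style that replaces that glue code (my own toy example, not from the post):

    ```swift
    import Combine

    // A published property and a subscriber stand in for hand-written
    // glue between model and UI.
    final class Counter {
        @Published var count = 0
    }

    let counter = Counter()
    let subscription = counter.$count
        .map { "Count is \($0)" }
        .sink { print($0) }   // receives the current value, then every change

    counter.count += 1        // prints "Count is 1"
    ```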

I’m cautiously excited about SwiftUI and sincerely hope it will live up to expectations.
Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Suffering-oriented programming

Suffering-oriented programming can be summarized like so: don’t build technology unless you feel the pain of not having it. It applies to the big, architectural decisions as well as the smaller everyday programming decisions. Suffering-oriented programming greatly reduces risk by ensuring that you’re always working on something important, and it ensures that you are well-versed in a problem space before attempting a large investment.

[Nathan Marz has] a mantra for suffering-oriented programming: “First make it possible. Then make it beautiful. Then make it fast.”

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Adventure at the 5th Oculus Connect Conference


The following is a write-up from a friend, Kathryn Hicks, on the Danse blog. The link to the original is at the bottom.

Last week I attended the 5th Oculus Connect Conference, held at the San Jose McEnery Convention Center. This two-day conference, held annually in the fall, showcases the new virtual reality technology from Oculus. It was my second time attending, and it felt even better than the last one.

During the keynote address, Zuckerberg announced a wireless headset that doesn’t need a cell phone or an external computer: the Quest, a standalone headset with six degrees of freedom and Touch controllers, and a potential game-changer for the VR industry. If you are familiar with the Rift and the Oculus Go, the Quest would be a marriage of the two. The Quest is scheduled to come out this spring at $399, and a lot of the Rift titles will be available on it. While, unfortunately, I was not able to try it, the feedback I heard from others was positive. The tetherless aspect of the headset creates a more immersive experience and doesn’t feel confining. While the graphics capabilities of the headset are not as high as the Rift’s, they are good enough and don’t hinder the experience. Plus, the optics as well as the sound have improved from the Oculus Go. On the downside, the Quest is reportedly top-heavy and denser than the Go, which I already find more substantial than the lightweight Rift. Since the Quest’s four inside-out cameras are on the front of the headset, if you move the controllers behind you, you could potentially lose tracking. Hopefully, they will make these adjustments before it launches in the spring and add tracking on the strap. I can see much potential for the Quest in eSports, education, business, medicine, engineering, set design; the list goes on. The possibilities are endless, and at that price point it could substantially increase the number of VR users, considering that the Quest will cost the same as most gaming consoles without the need for a television or home setup.

Walking around the conference was lovely; I felt like a kid in a candy store seeing people putting their full bodies into the Quest. The well-orchestrated design layouts and themes of the different experiences were terrific. It was a pleasure hearing eSports commentary and cheers as competitors went head to head playing Echo Arena and Onward. Seeing the VR community connect, share laughs, smile, and have a good time warmed my heart. I enjoyed watching people play the Dead & Buried Quest experience in a large arena and seeing their digital avatars battle each other on screen. I can see more VR arenas being built specifically for the Quest, kind of like skate parks or soccer parks, but with a sports-stadium vibe.

While I was at the conference, I tried a few experiences like The Void – Star Wars: Secrets of the Empire, a full sensory VR experience. You are an undercover Rebel fighter disguised as a Stormtrooper, and you get to fully interact with your teammates and feel and smell the environment around you. It was a fantastic experience, and I would encourage others to try it at one of the nine locations.

Another experience I tried was Wolves in the Walls, a VR adaptation of Neil Gaiman’s book created by the company Fable. The audience explores parts of Lucy’s house to try and find the hidden wolves in the walls. It was a more intimate experience, and Lucy’s performance felt quite lifelike. The environments and character designs were beautifully portrayed. Overall, it was an enjoyable VR experience.

I also played a multiplayer combat experience called Conjure Strike by The Strike Team. It’s an engaging multiplayer experience in which you play as rock-like characters of different classes, such as Elementalist, Mage Hunter, Earth Warden, and more. The multiplayer session I played was similar to a capture-the-flag game: one player has to push a box toward the other side while the opposing player tries to stop them. It was a fun experience, similar to Overwatch but in VR. The multiplayer mechanics were excellent, but some of the controls felt foreign to me. Overall, it’s an engaging game that seems like it would be popular among most VR users.

While I didn’t get to play as many demos as I would have liked, I enjoyed the ones I experienced, especially The Void. It was the most immersive experience I tried; the few things I would change are updating the headset and enhancing the outside temperature and wind strength.

I’m looking forward to more development put toward the Quest, and I’m optimistic about the future of VR. As a team member at The Danse, I am excited to work on projects utilizing immersive technology such as virtual and augmented reality, and to work in an industry that is ever changing and improving. It’s nice coming back to the Oculus Connect Conference and seeing the community excited about the future of VR.