Snap Lens Studio

Hello World: Building Augmented Reality for Snapchat

Fun fact – 30,000+ lenses have been created by Snapchatters, leading to over a billion views of lens content
Table of Contents

  • Lens Studio
  • Hello World! Lens Studio Live Demo
  • High-Quality Rendering with Allegorithmic
  • Chester Fetch with Klei Entertainment
  • Cuphead and Mugman with Studio MDHR

Travis Chen

  • Worked at Bad Robot, Neversoft, and Blizzard

Lens Studio

  • Snapchat has always opened to the camera, and that camera-first design has been a strong driver of engagement
  • Pair your phone with Lens Studio
  • The site hosts a community forum where developers ask and answer questions
  • Lens Studio has been out for less than four months, and its lenses have already resulted in over a billion experiences
  • The tool has been used for a variety of things: hamburger photogrammetry, full-screen 2D experiences,
    and community showcases like r/snaplenses
  • Distributing your lenses is really easy

  • Within Snapchat, you can discover a lens and see more lenses by the same creator, or you
    can pull up on the base of a story to find out which lens was used.

Lens Boost – all users see the Snapchat carousel; a Lens Boost gets your lens into this carousel

Find which template best fits your creative intent

Templates

  • Static object
  • Animated object
  • Interactive templates (tap, approach, look at)
  • Immersive (look around, window)
  • For 2D creators (cutout, picture frame, fullscreen, soundboard)
  • Interactive path (idle, walk, and arrival states necessary) coming soon

Examples

  • Brian Garcia, Neon Book
  • Pinot: 2D textures with the cutout template, then Character Animator to animate
  • D*Face's D*Dog, imported into Lens Studio (from the camera reflections feature)
  • Jordan & Snapchat: the ‘88 static Jordan 3D model
  • Netflix & Snapchat, Stranger Things – turning on the TV, spelling your name out,
    awakening the Demogorgon

Hello World

Lens Studio is made up of panels:

  • Live Preview: shows what the lens will look like, including tracked content and interaction support
  • Objects panel: like the Unity scene view, it shows you what is in the preview
  • Resources panel: holds all your resources and is where you import assets

Workflow

  • Start with the animated object template
  • Select an object in the Resources panel and move it to the Objects panel
  • Google Blocks + Mixamo: export free animations from Adobe's Mixamo and import your
    animated character
  • File > Import the monitor and astronaut assets
  • Child the imported 3D model to the fox, then delete the fox placeholder
  • Add a shadow (a sprite)

High-Quality Rendering

Substance Painter is an app to apply materials or paint textures for 2D or 3D.
Any material you bring in can be applied to an object. Plain materials apply uniformly, but there are
also smart materials, which apply intelligently based on the geometry (the rust example).

The Layers tab is like the scene view – the place to drag and drop.

Alphas provide a cutout; you can apply materials to the cutout.

Upon clicking export, you can select lensstudio as the export target.
Challenge: Rubber Ducky

Chester Fetch with Klei Entertainment

Games studio since 2005

Why is AR interesting for Klei?

  • AR is about bringing the virtual world out to the player.
  • Shareable

Initial reservations:

  • Limited bandwidth
  • Seems hard
  • Would require too much time from others at the studio

Cuphead and Mugman with Studio MDHR

  • For Cuphead and Mugman, Studio MDHR and Snap wanted to build a boss battle
  • All of the assets used in the Cuphead lens came directly from the game
  • The lens chains together five 2D animations


Questions


  • Within the Snap app, I noticed you can rent/create a lens “as a service” – how does this pertain to Lens Studio?
  • Looking forward to a day when you can use targets like people for more interactive and shareable content (like the Mugman example), when will person/object recognition be available to Snap developers and users?
  • What is the github account for Snap?

SVVR #49 Summary Notes

SVVR Meetup #49

SVVR Passport – A membership program

This is SVVR's new shared co-working space for demos, offering:

  • 24 hour access
  • Demo equipment and library
  • Digital benefits

SVVR VR Mixer 2018

  • March 21st, 2018

Lumus Optics

  • An Israeli company doing reflective waveguide optics; their mission is to be the
    leader in AR displays (more than 60 patents)
  • Highest performance for smallest form factor
  • What is waveguide tech? An image from a tiny projector is injected into a thin slab of
    glass, travels along it via internal reflections, and is reflected out toward the eye.

Their waveguide tech boasts:

  • Wide FOV (40˚ – 55˚)
  • Compact 1.7mm Form Factor
  • Life-like Image
  • True See-Through
  • Daylight Readable

Founded in 2000.
Partnered with Quanta Computer (which will produce the optics engine),
Flex (an OEM using the Lumus reference design), and Deepoptics (vergence accommodation).

They debuted their new prototype at CES – it looks like Dragon Ball equipment.

Developers

Developers will be able to deploy Vuforia applications, or demos using other AR libraries, on this hardware.

siram@lumus-optical.com

High Fidelity – Philip Rosedale

  • Probably gonna need
    • Identity
    • IP rights
    • Proof-of-purchase
    • Payment system

Full Decentralization?

Transaction expenses are high (everything must sync, VR transactions need to happen quickly,
and you must pay gas).
Federated consensus – near-zero transaction fees, etc.

Bad Monetary Policy

  • Bitcoin isn't viable as a usable currency, because its price keeps rising
    and its circulation is fixed.
  • Increase circulation as people join
  • Second Life – minted more money at roughly the same rate that people came online
    • Use a smart contract to create exchange-rate scaling (see the sketch below)
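
As a rough illustration of that idea, here is a minimal sketch in Python. Everything here (names, constants, the minting rule) is invented for illustration; it is not High Fidelity's actual mechanism or contract code.

```python
# Toy model: mint new currency as users join so the per-user supply
# (and hence a naive exchange rate) stays roughly flat.
# All names and constants are illustrative, not from High Fidelity.

SUPPLY_PER_USER = 100.0  # hypothetical target coins in circulation per user

def target_supply(active_users: int) -> float:
    """Total coins that should circulate for this many users."""
    return SUPPLY_PER_USER * active_users

def mint_on_join(current_supply: float, active_users: int) -> float:
    """Coins to mint when the user count grows; a smart contract could
    apply the same rule on-chain so issuance tracks adoption."""
    return max(0.0, target_supply(active_users) - current_supply)

# Example: 1,000 users join an economy previously sized for 10,000 users.
supply = target_supply(10_000)
print(mint_on_join(supply, 11_000))  # 100000.0 -> supply scales with users
```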

High Fidelity Coin (HFC)

  • Stable
  • Blockchain cryptocurrency
  • Easy to use, initial grants for proof-of-identity, and multiple currency exchange

Philip, waxing about tech, said something to the effect of: “with any big shift in technology,
what often proves compelling are things that are already done in the world today, i.e. payment.”

Philip also mentioned, via smart contracts, allowing duels to occur over identity.

Cymatic Bruce

Currently at 6D.AI

He came back from Japan, where he saw so many VR experiences, met a ton of the community, and
discovered VTubing (which can also be found through IMVU, etc.).

He learned a lot; notably, PC VR isn't a thing in Japan due to space constraints and culture.

The PSVR and location-based VR situation

VR Zone Shinjuku – a world-class location-based experience. Tickets range from $10-15 per experience.
They take first-timers very seriously and, interestingly, take matters of safety really seriously too.
Bruce really admired their standard of using motion platforms for everything, tying everything into an
IP, and executing it all well. He did a DBZ experience and learned how to shoot a
Kamehameha; Mario Kart was on a motion platform, etc.

How long is the experience?
How much of what you saw in Japan will likely find an English-speaking audience?
How is FOVE doing?
VR experiences dealing with food?


Developer Blog Post: ARKit #1

When developing AR applications for Apple phones, there are two cameras that we speak about. One is the physical camera on the back of the phone. The other is the virtual camera in your Unity scene, which in turn matches the position and orientation of the real-world camera.
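
To make the pose-matching concrete, here is an illustrative NumPy sketch (not the actual ARKit or Unity API; the pose values and the `VirtualCamera` class are invented): each frame the tracker reports the physical camera's pose as a 4x4 transform, and you copy it onto the virtual camera before rendering.

```python
import numpy as np

def tracked_camera_pose() -> np.ndarray:
    """Stand-in for the pose ARKit reports each frame (rotation + translation)."""
    pose = np.eye(4)
    pose[:3, 3] = [0.0, 1.5, -0.2]  # e.g., phone held 1.5 m off the ground
    return pose

class VirtualCamera:
    """Hypothetical scene camera; in Unity this would be your main Camera."""
    def __init__(self) -> None:
        self.world_from_camera = np.eye(4)

    def match(self, physical_pose: np.ndarray) -> None:
        # Copy the physical camera's position/orientation every frame so
        # virtual objects stay registered to the real world.
        self.world_from_camera = physical_pose.copy()

cam = VirtualCamera()
cam.match(tracked_camera_pose())
print(cam.world_from_camera[:3, 3])  # the virtual camera now sits at the phone
```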

A virtual camera in Unity has a setting called Clear Flags, which determines which parts of the screen will be cleared. Setting this to “Depth Only” on your main virtual camera instructs the renderer to clear only the depth buffer rather than the color buffer, so the physical camera feed drawn behind it remains visible as a seamless backdrop for your virtual objects.
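
Conceptually, “Depth Only” leaves the virtual layer transparent wherever no geometry was drawn, and the camera feed shows through underneath. Here is a toy NumPy sketch of that compositing step (illustrative only; Unity performs this inside its renderer, and all shapes and values below are made up):

```python
import numpy as np

H, W = 4, 4
camera_frame = np.full((H, W, 3), 0.5)   # the physical camera feed (gray)
virtual_rgb = np.zeros((H, W, 3))        # the virtual render target
virtual_alpha = np.zeros((H, W, 1))      # alpha = 0 where nothing was drawn

# Pretend a virtual object covers the top-left 2x2 pixels.
virtual_rgb[:2, :2] = [1.0, 0.0, 0.0]    # a red object
virtual_alpha[:2, :2] = 1.0

# Standard "over" compositing: the virtual layer on top of the feed.
composite = virtual_alpha * virtual_rgb + (1 - virtual_alpha) * camera_frame
print(composite[0, 0])  # [1. 0. 0.]     -> virtual object pixel
print(composite[3, 3])  # [0.5 0.5 0.5]  -> camera feed shows through
```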

More to come on differences between hit testing and ray casting in the context of ARKit and a broader look at intersection testing approaches in the next post.

Reblog: The Light Field Stereoscope | SIGGRAPH 2015

Inspired by Wheatstone’s original stereoscope and augmenting it with modern factored light field synthesis, [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] present a new near-eye display technology that supports focus cues. These cues are critical for mitigating the visual discomfort experienced in commercially available head-mounted displays and for providing comfortable, long-term immersive experiences.


ABSTRACT

Over the last few years, virtual reality has re-emerged as a technology that is now feasible at low cost via inexpensive cellphone components. In particular, advances of high-resolution micro displays, low-latency orientation trackers, and modern GPUs facilitate extremely immersive experiences. To facilitate comfortable long-term experiences and widespread user acceptance, however, the vergence-accommodation conflict inherent to all stereoscopic displays will have to be solved. [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] present the first factored near-eye display technology supporting high image resolution as well as focus cues: accommodation and retinal blur. To this end, [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] build on Wheatstone’s original stereoscope but augment it with modern factored light field synthesis via stacked liquid crystal panels. The proposed light field stereoscope is conceptually closely related to emerging factored light field displays, but it has very unique characteristics compared to the television-type displays explored thus far. Foremost, the required field of view is extremely small – just the size of the pupil – which allows for rank-1 factorizations to produce correct or nearly-correct focus cues. [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] analyze distortions of the lenses in the near-eye 4D light fields and correct them using the high-dimensional image formation afforded by their display. [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] demonstrate significant improvements in resolution and retinal blur quality over previously-proposed near-eye displays. Finally, [Fu-Chung Huang, Kevin Chen, Gordon Wetzstein] analyze diffraction limits of these types of displays along with fundamental resolution limits.
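
For intuition about the rank-1 factorization mentioned above: with two stacked LC panels, the emitted light field is (to first approximation) the product of the two panels' transmittances, so synthesizing a target light field becomes a nonnegative low-rank factorization problem. The sketch below is my own simplified illustration in Python/NumPy (a plain NMF-style rank-1 fit on toy data, not the authors' code; real panels would also clamp transmittance to [0, 1]):

```python
import numpy as np

# Target light field, flattened to a matrix L[i, j]: intensity of the ray
# through front-panel pixel i and rear-panel pixel j (toy, nearly rank-1 data).
rng = np.random.default_rng(0)
a, b = rng.random(64), rng.random(64)
L = np.outer(a, b) + 0.01 * rng.random((64, 64))

# Rank-1 model: L ~= outer(f, g), with nonnegative panel patterns f and g.
f = rng.random(64)
g = rng.random(64)

# Multiplicative updates (NMF-style) keep f and g nonnegative while
# reducing the Frobenius error ||L - f g^T||.
for _ in range(200):
    f *= (L @ g) / (f * (g @ g) + 1e-9)
    g *= (L.T @ f) / (g * (f @ f) + 1e-9)

approx = np.outer(f, g)
err = np.linalg.norm(L - approx) / np.linalg.norm(L)
print(f"relative reconstruction error: {err:.3f}")  # small: L is nearly rank-1
```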

FILES

  • technical paper (pdf)
  • technical paper supplement (zip)
  • presentation slides (slideshare)