Useful Resources for AI

Newsletters/blogs:
– TLDR AI (https://tldr.tech/ai) – Andrew Tan
– Ben’s Bites (https://lnkd.in/gNY8Dmme)
– The Information (paid subscription required) (https://lnkd.in/gbkaFbvf)
– Last Week in AI (https://lastweekin.ai/)
– Eric Newcomer (https://www.newcomer.co/)

Podcasts:
– No Priors, with Sarah Guo + Elad Gil (https://lnkd.in/g7Wmr6XT)
– All-In Podcast – not AI-specific, but they talk a lot about it (https://lnkd.in/gH35UeUy)
– Lex Fridman (https://lnkd.in/gjw7zsWX)

Online courses:
– DeepLearning.AI, by Andrew Ng (https://lnkd.in/gWcn5UTK)

Institutional VC writing: 
– Sequoia (https://lnkd.in/g-cKpn8Y)
– A16z (https://lnkd.in/g6JxqwZA)
– Lightspeed (https://lnkd.in/gczzdEcd)
– Bessemer (https://www.bvp.com/ai)
– Radical Ventures (https://lnkd.in/guCe5Mnt); Rob Toews (https://lnkd.in/ggH8HfT8) and Ryan Shannon (https://lnkd.in/gRrBzePx)
– Madrona (https://lnkd.in/gy5D8yNG)

Industry Conferences:
– Databricks Data + AI Summit (https://lnkd.in/gF5QyXYv)
– Snowflake Summit (https://lnkd.in/gavqzw65)
– Salesforce Dreamforce (https://lnkd.in/gJk4r58N)

Academic Conferences:
– NeurIPS (https://neurips.cc/)
– CVPR (https://cvpr.thecvf.com/)
– ICML (https://icml.cc/)
– ICLR (https://iclr.cc/)

Books:
– Genius Makers, by Cade Metz (https://lnkd.in/gr_78MB9)
– A Brief History of Intelligence, by Max Bennett (https://lnkd.in/g2uCrPzS)
– The Worlds I See, by Fei-Fei Li (https://lnkd.in/gY8Qsvis)
– Chip War, by Chris Miller (https://lnkd.in/g6ZAZSCG)

The original author of this post was Kelvin Mu on LinkedIn.

Traversal of Immersive Environments | HoloTile Floor from Disney

If you’re new to The Latent Element, I write about future market-development narratives and other things of interest to me, hence the “latent” in the name. These views are not representative of any company, nor do they contain privileged information.

More details and contact info are in the about section.

Post Details

Read time: 3–4 minutes

Table of Contents:

  1. The Challenge
  2. Early Solutions
  3. Disney Research HoloTile Floor
  4. Closing Thoughts
  5. Sources

Reblog: Life-like realism, a Pixel AR and going mainstream: catching up with Project Tango

Project Tango has been around for a while, from the developer tablet and demos at MWC and Google I/O to the Lenovo Phab2 Pro and the new Asus Zenfone AR. While we’ve seen it progress quite steadily in that time, we’ve never seen it look as convincing as it did at I/O this week.

from Pocket

Did you enjoy this article? Then read the full version from the author’s website.

Reblog: Using Machine Learning to Explore Neural Network Architecture

At Google, we have successfully applied deep learning models to many applications, from image recognition to speech recognition to machine translation. Typically, our machine learning models are painstakingly designed by a team of engineers and scientists. This process of manually designing machine learning models is difficult because the search space of all possible models can be combinatorially large: with on the order of ten options per layer, a typical 10-layer network can have ~10^10 candidate networks! For this reason, the process of designing networks often takes a significant amount of time and experimentation by those with significant machine learning expertise.

Figure: our GoogLeNet architecture. Design of this network required many years of careful experimentation and refinement from initial versions of convolutional architectures.

To make this process of designing machine learning models much more accessible, we’ve been exploring ways to automate the design of machine learning models. Among many algorithms we’ve studied, evolutionary algorithms [1] and reinforcement learning algorithms [2] have shown great promise. But in this blog post, we’ll focus on our reinforcement learning approach and the early results we’ve gotten so far.

In our approach (which we call “AutoML”), a controller neural net can propose a “child” model architecture, which can then be trained and evaluated for quality on a particular task. That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times, generating new architectures, testing them, and giving that feedback to the controller to learn from. Eventually, the controller learns to assign a high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly.
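To make that loop concrete, here is a minimal, self-contained sketch of the idea in Python. To be clear, this is not Google’s AutoML implementation: the search space, the CHOICES and evaluate_child names, and the scoring function are all hypothetical stand-ins, the controller is a plain categorical distribution rather than a neural net, and the update is a generic REINFORCE-style rule. Only the overall feedback cycle (sample an architecture, evaluate it, update the controller) mirrors the description above.

import math
import random

# Hypothetical search space: a few discrete architecture choices the
# controller samples from. Real AutoML searches a far richer space.
CHOICES = {
    "num_layers": [4, 6, 8, 10],
    "filter_size": [3, 5, 7],
    "width": [32, 64, 128],
}

# Controller state: one preference score (logit) per option.
logits = {key: [0.0] * len(opts) for key, opts in CHOICES.items()}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample_architecture():
    """Sample a child architecture; remember which options were picked."""
    arch, picks = {}, {}
    for key, options in CHOICES.items():
        probs = softmax(logits[key])
        idx = random.choices(range(len(options)), weights=probs)[0]
        arch[key], picks[key] = options[idx], idx
    return arch, picks

def evaluate_child(arch):
    """Stand-in for 'train the child, measure validation accuracy'.
    A made-up score that peaks at one configuration, plus noise to
    mimic the run-to-run variance of real training."""
    score = 1.0
    score -= 0.05 * abs(arch["num_layers"] - 8)
    score -= 0.04 * abs(arch["filter_size"] - 5)
    score -= 0.001 * abs(arch["width"] - 64)
    return score + random.gauss(0.0, 0.01)

baseline, lr = 0.0, 0.5
for step in range(2000):
    arch, picks = sample_architecture()
    reward = evaluate_child(arch)
    baseline = 0.9 * baseline + 0.1 * reward   # moving-average baseline
    advantage = reward - baseline
    # REINFORCE-style update: push up the logit of each sampled option
    # in proportion to how much better than average this child scored.
    for key, idx in picks.items():
        probs = softmax(logits[key])
        for j in range(len(probs)):
            grad = (1.0 if j == idx else 0.0) - probs[j]
            logits[key][j] += lr * advantage * grad

best = {}
for key, opts in CHOICES.items():
    probs = softmax(logits[key])
    best[key] = opts[probs.index(max(probs))]
print("controller now favors:", best)

The moving-average baseline is a standard variance-reduction trick in policy-gradient methods: subtracting it means only children that score better than the recent average reinforce their choices.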

We’ve applied this approach to two heavily benchmarked datasets in deep learning: image recognition with CIFAR-10 and language modeling with Penn Treebank.

from Pocket

Did you enjoy this article? Then read the full version from the author’s website.

Musings about when and how this research might manifest in human products are welcome. Tweet to me @dilan_shah.