BIT OF A TANGENT

012 | How Deep Learning Does Magic

Episode 12, released 2019-08-26

This is a discussion about why deep neural nets are unreasonably effective. Gianluca and Jared examine the relationships between neural architectures and the laws of physics that govern our Universe, exploring brains, human language, and linear functions along the way. Nothing could have prepared them for the territories this episode expanded into, so strap yourself in!

Duration: 01:33:49

Authors: Gianluca Truda and Jared Tumiel

Shownotes

AlphaGo beating Lee Sedol at Go: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol

OpenAI Five: https://openai.com/blog/openai-five/

Taylor series/expansions video from 3Blue1Brown: https://www.youtube.com/watch?v=3d6DsjIBzJ4
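
As a quick taste of the Taylor-series idea from the video (a toy sketch of our own, not from the episode; the input value and number of terms are arbitrary choices), the partial sums of the expansion of cos(x) around 0 converge rapidly:

# Taylor partial sums: cos x ~ sum_k (-1)^k x^(2k) / (2k)!
# Illustrative only; x = 1.0 and six terms are arbitrary toy choices.
import math

x = 1.0
approx = 0.0
for k in range(6):  # first six nonzero terms of the series
    approx += (-1) ** k * x ** (2 * k) / math.factorial(2 * k)
print(approx, math.cos(x))  # the partial sum lands very close to cos(1)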

Physicist Max Tegmark: https://en.wikipedia.org/wiki/Max_Tegmark

Tegmark’s great talk on connections between physics and deep learning (which formed much of the inspiration for this conversation): https://www.youtube.com/watch?v=5MdSE-N0bxs

Universal Approximation Theorem: https://en.wikipedia.org/wiki/Universal_approximation_theorem
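
To give a concrete feel for what the theorem claims (a minimal sketch of our own, assuming nothing beyond NumPy; the width, learning rate, and step count are toy choices, not a canonical recipe), a single hidden layer of tanh units can be fit by plain gradient descent to approximate sin(x):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 32  # width of the single hidden layer (toy choice)
lr = 0.05    # learning rate (hand-tuned for this toy problem)
W1 = rng.normal(size=(1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 1))
b2 = np.zeros(1)

for step in range(5000):
    # Forward pass: one hidden tanh layer, linear output.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y

    # Backward pass: gradients of the mean-squared error, by hand.
    n = len(x)
    dW2 = h.T @ err / n
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = x.T @ dh / n
    db1 = dh.mean(axis=0)

    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print("final MSE:", float((err ** 2).mean()))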

A refresher on “Map vs. Territory”: https://fs.blog/2015/11/map-and-territory/

Ada Lovelace (who worked on Babbage’s Analytical Engine): https://en.wikipedia.org/wiki/Ada_Lovelace

Manifolds and their topology: http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/

Binary trees: https://en.wikipedia.org/wiki/Binary_tree

Markov process: http://mathworld.wolfram.com/MarkovProcess.html
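
For a hands-on sense of the Markov property discussed in the episode, here is a toy word-level chain of our own (the corpus and seed are arbitrary, and this is of course vastly simpler than GPT-2, linked below):

import random
from collections import defaultdict

corpus = "the map is not the territory and the territory is not the map".split()

# Count which words follow each word: the next state depends
# only on the current word, which is the Markov property.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

# Generate text by repeatedly sampling a successor of the current word.
random.seed(0)
word = "the"
output = [word]
for _ in range(10):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))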

OpenAI's GPT-2: https://openai.com/blog/better-language-models/

Play with GPT-2 in your browser here: https://talktotransformer.com/

Lex Fridman’s MIT Artificial Intelligence podcast: https://lexfridman.com/ai/

The Scientific Odyssey podcast: https://thescientificodyssey.libsyn.com/