012 | How Deep Learning Does Magic
Episode 12, released 2019-08-26
Duration: 01:33:49
Authors: Gianluca Truda and Jared Tumiel
Shownotes
AlphaGo beating Lee Sedol at Go: https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol
OpenAI Five: https://openai.com/blog/openai-five/
Taylor series/expansions video from 3Blue1Brown: https://www.youtube.com/watch?v=3d6DsjIBzJ4
Physicist Max Tegmark: https://en.wikipedia.org/wiki/Max_Tegmark
Tegmark’s great talk on connections between physics and deep learning (which formed much of the inspiration for this conversation): https://www.youtube.com/watch?v=5MdSE-N0bxs
Universal Approximation Theorem: https://en.wikipedia.org/wiki/Universal_approximation_theorem
A refresher on “Map vs. Territory”: https://fs.blog/2015/11/map-and-territory/
Ada Lovelace (who worked on Babbage’s Analytical Engine): https://en.wikipedia.org/wiki/Ada_Lovelace
Manifolds and their topology: http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/
Binary trees: https://en.wikipedia.org/wiki/Binary_tree
Markov process: http://mathworld.wolfram.com/MarkovProcess.html
OpenAI's GPT-2: https://openai.com/blog/better-language-models/
Play with GPT-2 in your browser here: https://talktotransformer.com/
Lex Fridman’s MIT Artificial Intelligence podcast: https://lexfridman.com/ai/
The Scientific Odyssey podcast: https://thescientificodyssey.libsyn.com/