Learning Happens in Layers
-
Reading Skin in the Game for the second time. Inspired by someone else reading it for the third time and finding he was in the acknowledgments.
-
Naval Ravikant says rereading the classics is sometimes the best thing.
-
Going through the Tang Dynasty Poetry Project. Layers: 1. Initial read-through. 2. Blog post. 3. Supplement blog post with active recall (e.g., recitation, adding layered thoughts).
"For those familiar with the idea of nonlinear effects from Antifragile, learning is rooted in repetition and convexity, meaning that the reading of a single text twice is more profitable than reading two different things once, provided of course that the text has some depth of content." (Skin in the Game, p. 44)
-
Literal sense: Deep Learning literally has layers
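Taking the analogy literally, here is a minimal sketch (my own illustration, not from any of the sources above) of a two-layer network in NumPy: each layer transforms the previous layer's output, so later layers build on earlier ones, the same way a second reading builds on the first. The weights are untrained toy values.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity between layers
    return np.maximum(0, x)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # layer 1: 4 inputs -> 3 hidden units
W2 = rng.normal(size=(3, 2))   # layer 2: 3 hidden units -> 2 outputs

def forward(x):
    h = relu(x @ W1)   # first layer: a first pass over the raw input
    return h @ W2      # second layer: builds on what the first produced

x = np.ones(4)
y = forward(x)
print(y.shape)  # (2,)
```

Each "layer" only ever sees the representation the previous one produced, which is the structural point of the analogy: depth comes from repeated passes, not from a single wide one.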
-
Aviation: Repetition for students.
-
Classical Chinese: Going back to read the Elementary Classical Textbook
-
Tang Dynasty Poetry Project: Going back through posts as review.