Study log

# | Link | Cat | Sc/Read | Date begin | Date end | Len | Title | Notes
2 | https://probmods.org/inference-about-inference.html | AI | 1060 | 2016-10-01 | | | Probmods
3 | https://youtu.be/CQzQJBpvyts?t=5m33s | AI | 10100 | 2016-10-01 | 2016-10-30 | | NIPS 2011 Learning Semantics Workshop: Towards More Human-like Machine Learning of Word Meanings
4 | https://arxiv.org/pdf/1604.00289.pdf | AI | 11250 | 2016-10-01 | 2016-10-10 | | Building Machines That Learn and Think Like People
5 | http://web.mit.edu/cocosci/Papers/gt-grammar.pdf | AI | 1010 | 2016-10-22 | | | Two Proposals for Causal Grammars
6 | http://dippl.org/chapters/04-factorseq.html | AI | 9 | 2016-10-01 | | | dippl
7 | https://www.semanticscholar.org/paper/Theory-based-Bayesian-Models-of-Inductive-Learning-Tenenbaum-Griffiths/5d8d5a767704d69ab81b26ce97c2f3065cd139bb/pdf | AI | 9 | 2016-10-01 | | | Theory-based Bayesian models of inductive learning and reasoning
8 | https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/lecture-videos/lecture-17-learning-boosting/ | AI | 9 | 2016-10-01 | | | MIT 6.034 Artificial Intelligence, Fall 2010
9 | https://www.youtube.com/playlist?list=PLLvH2FwAQhnpj1WEB-jHmPuUeQ8mX-XXG | AI | 9100 | 2016-10-01 | 2016-11-25 | | CS231n Winter 2016 Stanford
10 | https://news.ycombinator.com/item?id=12713056 | AI | 910 | 2016-10-01 | | | Intro to ML
11 | https://www.princeton.edu/~yael/Publications/GershmanNiv2010.pdf | AI | 9100 | 2016-10-01 | 2016-10-21 | | Learning latent structure: carving nature at its joints
12 | http://arxiv.org/pdf/1608.08225v1.pdf | AI | 8 | 2016-10-01 | | | Why does cheap learning work so well?
13 | http://www.pnas.org/content/110/45/18327.full | AI | 8 | 2016-10-01 | | | Simulation as an engine of physical scene understanding
14 | http://www.utstat.toronto.edu/~rsalakhu/papers/LakeEtAl2011CogSci.pdf | AI | 8 | 2016-10-01 | | | One shot learning of simple visual concepts
15 | https://youtu.be/jE9zfj3TghQ?t=26m50s | AI | 8 | 2016-10-01 | | | Compositional Models: Complexity of Representation and Inference
16 | https://www.semanticscholar.org/paper/The-Large-Scale-Structure-of-Semantic-Networks-Steyvers-Tenenbaum/231e89e8d8917706b4797f7e21c7e9d4b12f439a | AI | 8 | 2016-10-01 | | | The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth
17 | https://pdfs.semanticscholar.org/0f13/92c1180582a45b42e621e1526f03cc6e9ca6.pdf | AI | 8 | 2016-10-01 | | | Learning with Hierarchical-Deep Models
18 | https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf | AI | 8 | 2016-10-01 | | | Distributed Representations of Words and Phrases and their Compositionality
19 | https://github.com/clojure-emacs/cider, https://github.com/magomimmo/modern-cljs/blob/master/doc/second-edition/tutorial-01.md | emacs | 8 | 2016-10-01 | | | cider
20 | http://xyclade.github.io/MachineLearning/#the-global-idea-of-machine-learning | AI | 7 | 2016-10-01 | | | Machine Learning for Developers
21 | http://fastml.com/bayesian-machine-learning/ | AI | 7 | 2016-10-01 | | | Bayesian machine learning
22 | https://www.semanticscholar.org/paper/Learning-Systems-of-Concepts-with-an-Infinite-Kemp-Tenenbaum/2452d5ce9dc467f44676893a99d14ee9f8a0da84 | AI | 7 | 2016-10-01 | | | Learning Systems of Concepts with an Infinite Relational Model
23 | https://www.semanticscholar.org/paper/Modelling-Relational-Data-using-Bayesian-Clustered-Sutskever-Salakhutdinov/8f5450037cba1ba1f5c2f73fa4ffa66558eae5bd | AI | 7 | 2016-10-01 | | | Modelling Relational Data using Bayesian Clustered Tensor Factorization
24 | https://www.semanticscholar.org/paper/Learning-Physical-Intuition-of-Block-Towers-by-Lerer-Gross/b9dd7a59a101fcecc6fe0e7aed517e84a7df7d2e | AI | 7 | 2016-10-01 | | | Learning Physical Intuition of Block Towers by Example
25 | https://arxiv.org/pdf/1602.06822.pdf | AI | 7 | 2016-10-01 | | | Understanding Visual Concepts with Continuation Learning
26 | https://youtu.be/qtm4JgbxuEc?t=27m35s, https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf | | 7 | 2016-10-01 | | | word2vec / semantic hashing
27 | https://github.com/songrotek/Deep-Learning-Papers-Reading-Roadmap | DL | 75 | 2016-10-01 | | | Deep learning papers reading roadmap
28 | https://galeascience.wordpress.com/2016/04/27/markov-chain-monte-carlo-sampling/ | AI | 6 | 2016-10-01 | | | Markov Chain Monte Carlo sampling
29 | https://pgexercises.com/questions/recursive/getupwardall.html | DB | 695 | 2016-10-01 | | | pg exercises
30 | http://www.piupiano.com/dechiffrage-lectures.html#lectures_simples | Music | 5 | 2016-10-01 | | | Piano exercises
31 | http://zty.pe/ | Keyb | 5 | 2016-10-01 | | | ztype
32 | https://pdfs.semanticscholar.org/b54a/54a2f33c24123c6943597462ef02928ec99f.pdf?_ga=1.226993885.1362895030.1463697430 | AI | 5 | 2016-10-01 | | | Single Image 3D Interpreter Network | Good references
33 | http://cbmm.mit.edu/sites/default/files/publications/Intuitive%20Theories%20(Gerstenberg,%20Tenenbaum,%202016.pdf | AI | 4 | 2016-10-01 | | | Intuitive Theories (Gerstenberg & Tenenbaum, 2016)
34 | http://cims.nyu.edu/~brenden/LakeEtAl2015Science.pdf | AI | 10120 | 2016-10-22 | 2017-03-15 | 8 | Human-level concept learning through probabilistic program induction
35 | https://research.googleblog.com/2016/10/supercharging-style-transfer.html | DL | 680 | 2016-10-26 | 2016-10-26 | | Supercharging style transfer
36 | https://www.quora.com/What-deep-learning-ideas-have-you-tried-that-didnt-work/answer/Benoit-Essiambre | AI | 10100 | 2016-10-24 | | | What deep learning ideas have you tried that didn't work?
37 | http://www.pnas.org/content/105/31/10687.full.pdf | AI | 10150 | 2016-10-27 | 2016-10-27 | | The discovery of structural form
38 | https://colala.bcs.rochester.edu/papers/piantadosi2012modeling.pdf | AI | 850 | 2016-10-30 | 2016-10-30 | | Modeling the acquisition of quantifier semantics: a case study in function word learnability
39 | http://www.csee.umbc.edu/~sdoshi1/Inferring%20Graph%20Grammars.pdf | | | 2016-10-30 | | | Inferring Graph Grammars
40 | http://www.stats.ox.ac.uk/~doucet/anjum_doucet_holmes_boostingraphs.pdf | | | 2016-10-30
41 | https://people.eecs.berkeley.edu/~bodik/cs294fa12 | PS | 9 | 2016-10-30
42 | https://arxiv.org/pdf/1605.07146v1.pdf | DL | 8100 | 2016-11-26 | 2016-11-26 | | Wide residual networks
43 | https://arxiv.org/pdf/1611.05763v1.pdf | RL | 85 | 2016-11-20 | | | Learning to RL
44 | https://arxiv.org/pdf/1604.06057.pdf | RL | 70 | 2016-11-20 | | | Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation
45 | http://openreview.net/pdf?id=B1M8JF9xx | GM | 60 | 2016-11-20 | | | On the Quantitative Analysis of Decoder-Based Generative Models
46 | http://openreview.net/pdf?id=Hk4_qw5xe | GAN | 60 | 2016-11-20 | | | Towards Principled Methods for Training Generative Adversarial Networks
47 | http://openreview.net/pdf?id=S1X7nhsxl | GAN | 60 | 2016-11-20 | | | Improving Generative Adversarial Networks with Denoising Feature Matching
48 | https://amundtveit.com/2016/11/12/unsupervised-deep-learning-iclr-2017-discoveries/ | UL | 820 | 2016-11-10 | | | Unsupervised Deep Learning – ICLR 2017 Discoveries
49 | https://www.youtube.com/watch?v=uXt8qF2Zzfo | NN | 660 | 2016-11-01 | 2016-11-26 | | Neural nets - MIT course
50 | https://arxiv.org/pdf/1606.04080v1.pdf | NN | 90 | 2016-11-01 | | | Matching Networks for One Shot Learning
51 | https://cocolab.stanford.edu/papers/GoodmanUllmanTenenbaum2009-Cogsci.pdf | AI | 1060 | 2016-10-20 | | | Learning a Theory of Causality
52 | http://www.jair.org/media/731/live-731-1898-jair.pdf | AI | 80 | 2016-10-20 | | | A Model of Inductive Bias Learning
53 | https://www.microsoft.com/en-us/research/publication/slicing-probabilistic-programs/ | PP | 70 | 2016-11-29 | | | Slicing Probabilistic Programs
54 | ftp://ftp.idsia.ch/pub/juergen/icml2006.pdf | RNN | 70 | 2016-11-29 | | | Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks
55 | http://torch.ch/blog/2016/02/04/resnets.html | | | | | | Training and investigating Residual Nets
56 | http://gabgoh.github.io/ThoughtVectors/ | DL | 8100 | 2016-12-02 | 2016-12-02 | | Thought vectors
57 | http://mbmlbook.com/index.html | ML | 70 | 2016-12-04 | | | Model based machine learning
58 | https://www.youtube.com/watch?v=ZyoOvmHuVRI | NS | 7100 | 2016-12-16 | 2016-12-16 | | Sequential event memory formation and reactivation in the hippocampus and beyond
59 | https://pdfs.semanticscholar.org/3cf6/9ec946048193d05273be6b16b6597bcf6e0d.pdf | HM | 9100 | 2016-12-17 | 2016-12-17 | | Explaining monkey face patch system as deep inverse graphics
60 | https://blog.acolyer.org/2016/10/12/towards-deep-symbolic-reinforcement-learning/ | NN | 9100 | 2016-12-27 | 2016-12-28 | | Towards deep symbolic reinforcement learning - review
61 | https://arxiv.org/abs/1609.05518v2 | NN | 910 | 2016-12-27 | 2016-12-28 | | Towards deep symbolic reinforcement learning
62 | https://blog.acolyer.org/2016/04/21/the-amazing-power-of-word-vectors/ | NLP | 710 | 2016-12-28 | | | The amazing power of word vectors
63 | https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html | NN | 710 | 2016-12-20 | | | Building powerful image classification models using very little data
64 | https://blog.acolyer.org/2016/04/21/the-amazing-power-of-word-vectors/ | NLP | 8100 | 2016-12-17 | 2016-12-28 | | The amazing power of word vectors
65 | https://arxiv.org/pdf/1607.00662v1.pdf | HM | 860 | 2016-12-29 | 2016-12-29 | | Unsupervised Learning of 3D Structure from Images
66 | https://arxiv.org/pdf/1612.07828.pdf | UL | 740 | 2016-12-30 | 2016-12-30 | | Learning from Simulated and Unsupervised Images through Adversarial Training
67 | https://www.youtube.com/watch?v=oYRnRoGcM2U&t=0s | AI | 7100 | 2016-12-31 | 2016-12-31 | | Marvin Minsky & Noam Chomsky: Brains - Minds - Machines
68 | http://gershmanlab.webfactional.com/pubs/terwo16.pdf | AI | 8100 | 2016-12-29 | 2016-12-31 | | Toward the neural implementation of structure learning
69 | http://www.psy.cmu.edu/~ckemp/papers/Kemp07thesis.pdf | AI | 7 | 2016-12-31 | | | The acquisition of inductive constraints
70 | https://arxiv.org/pdf/1511.06464v4.pdf | RNN | 70 | 2017-01-01 | | | Unitary Evolution Recurrent Neural Networks
71 | https://papers.nips.cc/book/advances-in-neural-information-processing-systems-29-2016 | AI | 60 | 2017-01-01 | | | Advances in Neural Information Processing Systems 29 (NIPS 2016) pre-proceedings
72 | https://arxiv.org/pdf/1611.00035v1.pdf | RNN | 70 | 2017-01-01 | | | Full-Capacity Unitary Recurrent Neural Networks
73 | https://papers.nips.cc/paper/6096-learning-a-probabilistic-latent-space-of-object-shapes-via-3d-generative-adversarial-modeling.pdf | AI | 70 | 2017-01-01 | | | Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling
74 | https://papers.nips.cc/paper/6082-sampling-for-bayesian-program-learning.pdf | AI | 70 | 2017-01-01 | | | Sampling for Bayesian Program Learning
75 | https://papers.nips.cc/paper/6233-hierarchical-deep-reinforcement-learning-integrating-temporal-abstraction-and-intrinsic-motivation.pdf | AI | 70 | 2017-01-01 | | | Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation
76 | https://papers.nips.cc/paper/6230-attend-infer-repeat-fast-scene-understanding-with-generative-models.pdf | AI | 70 | 2017-01-01 | | | Attend, Infer, Repeat: Fast Scene Understanding with Generative Models
77 | https://cocolab.stanford.edu/papers/StuhlmullerEtAl2010-Cogsci.pdf | AI | 7100 | 2017-01-07 | 2017-01-07 | | Learning Structured Generative Concepts
78 | http://web.mit.edu/cocosci/Papers/cogsci00_FINAL.pdf | AI | 7100 | 2017-01-07 | 2017-01-07 | | Word learning as Bayesian inference
79 | http://lsa.colorado.edu/papers/plato/plato.annote.html | w2v | 895 | 2017-01-08 | 2017-01-08 | | A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction and Representation of Knowledge
80 | http://www.cs.cmu.edu/~tom/pubs/science2008.pdf | neuro | 9100 | 2017-01-08 | 2017-01-08 | | Predicting Human Brain Activity Associated with the Meaning of Nouns
81 | https://www.youtube.com/watch?v=pRBf8BWAG3k | neuro | 9100 | 2017-01-08 | 2017-01-08 | | Neural Representations of Language Meaning
82 | https://www.youtube.com/watch?v=8zcBr6bFk1A&t=176s | neuro | 95 | 2017-01-08 | 2017-01-08 | | Neural Representations of Language Meaning
83 | https://news.ycombinator.com/item?id=13346104 | w2v | 840 | 2017-01-09 | 2017-01-09 | | Vector arithmetic
84 | http://norvig.com/chomsky.html | AI | 7100 | 2017-01-09 | 2017-01-09 | | On Chomsky and the Two Cultures of Statistical Learning
85 | http://www.offconvex.org/2015/12/12/word-embeddings-1/ | w2v | 10100 | 2017-01-09 | 2017-01-09 | | Semantic Word Embeddings
86 | http://lsa.colorado.edu/papers/JASIS.lsi.90.pdf | LSA | 70 | 2017-01-09 | | | Indexing by Latent Semantic Analysis
87 | https://www.cs.toronto.edu/~hinton/absps/NatureDeepReview.pdf | DL | 9100 | 2017-01-15 | 2017-01-15 | | Deep learning (review)
88 | https://news.ycombinator.com/item?id=13346104 | w2v | 820 | 2017-01-15 | | | King – man + woman is queen; but why? (p.migdal.pl)
89 | http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html | w2v | 810 | 2017-01-15 | | | King - man + woman is queen; but why?
90 | http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/ | w2v | 9100 | 2017-01-15 | 2017-01-15 | | Deep Learning, NLP, and Representations
91 | https://arxiv.org/pdf/1502.03520v7.pdf | w2v | 60 | 2017-01-15 | 2017-01-15 | | RAND-WALK: A latent variable model approach to word embeddings
92 | http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/ | DL | 7100 | 2017-01-15 | 2017-01-15 | | Neural Networks, Manifolds, and Topology
93 | http://cs.nyu.edu/~zaremba/docs/understanding.pdf | DL | 7100 | 2017-01-15 | 2017-01-15 | | Intriguing properties of neural networks
94 | http://colah.github.io/posts/2015-09-NN-Types-FP/ | DL | 5100 | 2017-01-15 | 2017-01-15 | | Neural Networks, Types, and Functional Programming
95 | http://web.eecs.umich.edu/~honglak/nips2015-analogy.pdf | DL | 75 | 2017-01-15 | 2017-01-15 | | Deep Visual Analogy-Making
96 | https://www.youtube.com/watch?v=c_hUBLsicSY | AI | 9200 | 2017-01-15 | 2017-01-15 | | Modeling human intelligence with Probabilistic Programs and Program Induction
97 | http://gershmanlab.webfactional.com/pubs.html | AI | 101 | 2017-01-17 | 2017-01-17 | | GershmanLab
98 | http://gershmanlab.webfactional.com/pubs/Tsividis17.pdf | AI | 10100 | 2017-01-17 | 2017-01-17 | | Human Learning in Atari
99 | https://www.youtube.com/watch?v=Rte-y6ThwAQ | PPL | 7100 | 2017-01-19 | 2017-01-19 | | Probabilistic Programming for Augmented Intelligence
100 | http://web.mit.edu/cocosci/Papers/devsci07_gopnik_tenenbaum.pdf | AI | 8100 | 2017-01-19 | 2017-01-19 | | Bayesian networks, Bayesian learning and cognitive development