CS 6601: Final Study Guide
Note: R&N = Artificial Intelligence: A Modern Approach, by Russell & Norvig
- Game playing (R&N Chapter 5)
  - Observable games (e.g. isolation)
  - Minimax
  - Alpha-beta pruning (see the sketch after this list)
  - Utility and evaluation functions
  - Iterative deepening
  - Multiplayer games
  - Probabilistic games
  - Partially observable games (e.g. poker)
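Since minimax and alpha-beta pruning are the core algorithms here, a minimal depth-limited sketch may help; the `game` interface (`moves`, `result`, `terminal`, `evaluate`) is a hypothetical stand-in, not course-supplied code:

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Depth-limited minimax with alpha-beta pruning."""
    if depth == 0 or game.terminal(state):
        return game.evaluate(state)          # evaluation function at the horizon
    if maximizing:
        value = -math.inf
        for move in game.moves(state):
            child = game.result(state, move)
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:                # beta cutoff: MIN would avoid this branch
                break
        return value
    value = math.inf
    for move in game.moves(state):
        child = game.result(state, move)
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True, game))
        beta = min(beta, value)
        if beta <= alpha:                    # alpha cutoff: MAX would avoid this branch
            break
    return value
```

Iterative deepening simply calls this with depth 1, 2, 3, ... until time runs out, keeping the best move from the last fully completed depth.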
- Search (R&N Chapter 3)
  - Breadth-first search
  - Depth-first search
  - Depth-limited search
  - Iterative deepening depth-first search
  - Uniform-cost search
  - Greedy search
  - A* search (see the sketch after this list)
  - Heuristics
    - Consistency/admissibility
    - Dominance
    - Derivation by relaxation
  - Bidirectional search
  - Tridirectional search
  - Tree vs. graph search
  - Completeness, space/time complexity, path optimality
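A compact A* graph-search sketch, assuming hypothetical callables `neighbors(n)` (yielding `(successor, step_cost)` pairs) and an admissible heuristic `h`:

```python
import heapq, itertools

def a_star(start, goal, neighbors, h):
    """A* graph search: expand nodes in order of f = g + h."""
    tie = itertools.count()                      # tie-breaker so the heap never compares nodes
    frontier = [(h(start), 0, next(tie), start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                          # first goal pop is optimal for admissible h
        if g > best_g.get(node, float("inf")):
            continue                             # stale entry: a cheaper path was already found
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(frontier, (g2 + h(succ), g2, next(tie), succ, path + [succ]))
    return None
```

Ordering the frontier by g alone gives uniform-cost search; ordering by h alone gives greedy search.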
- Agent design (R&N Chapter 2)
  - Observability
  - Deterministic/stochastic
  - Episodic/sequential
  - Static/dynamic
  - Discrete/continuous
  - Single/multi-agent
  - Reflex
  - Reflex with state (see the sketch after this list)
  - Goal-based
  - Utility-based
  - Learning
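R&N Chapter 2 illustrates the reflex vs. reflex-with-state distinction with the two-square vacuum world; here is a rough rendering of that textbook example, not course-supplied code:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: acts on the current percept (location, status) alone."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

class ModelBasedVacuumAgent:
    """Reflex agent with state: remembers which squares it has seen clean."""
    def __init__(self):
        self.model = {"A": None, "B": None}      # internal model of the world
    def act(self, percept):
        location, status = percept
        self.model[location] = status
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"                        # everything known clean: stop
        return "Right" if location == "A" else "Left"
```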
- Random algorithms (part of R&N Chapter 4)
  - Hill-climbing
  - Beam search
  - Iterative improvement
  - Simulated annealing (see the sketch after this list)
  - Genetic algorithms
  - Local vs. global maximum
  - Local stochastic search
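A minimal simulated-annealing sketch in the spirit of R&N's SIMULATED-ANNEALING; `value`, `neighbor`, and `schedule` are assumed problem-specific callables:

```python
import math, random

def simulated_annealing(initial, value, neighbor, schedule, steps=10_000):
    """Take uphill moves always; take downhill moves with probability
    exp(delta / T), where T cools over time via `schedule`."""
    current = initial
    for t in range(1, steps + 1):
        T = schedule(t)
        if T <= 0:
            return current                       # frozen: behave like hill-climbing
        nxt = neighbor(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
    return current
```

Hill-climbing is the zero-temperature special case: accept only strictly uphill moves, and get stuck at the first local maximum.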
- Constraint satisfaction problems (R&N Chapter 6)
  - Variables, domains, constraints
  - Standard search
  - Backtracking (see the sketch after this list)
  - Heuristics
    - Minimum remaining values
    - Least constraining value
  - Forward-checking
  - Arc consistency
  - Path consistency
  - Problem re-structuring
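A backtracking sketch that combines the minimum-remaining-values heuristic with forward checking, assuming a hypothetical binary-constraint predicate `constraints(var1, val1, var2, val2)`:

```python
def backtrack(assignment, domains, constraints):
    """Backtracking search with MRV variable ordering and forward checking."""
    if len(assignment) == len(domains):
        return assignment
    # MRV: pick the unassigned variable with the fewest remaining legal values.
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in list(domains[var]):
        # Forward checking: find values this choice would rule out elsewhere.
        pruned = {}
        for other in domains:
            if other in assignment or other == var:
                continue
            bad = [w for w in domains[other] if not constraints(var, value, other, w)]
            if len(bad) == len(domains[other]):
                break                            # domain wipeout: try the next value
            pruned[other] = bad
        else:
            for other, bad in pruned.items():    # commit the pruning
                domains[other] = [w for w in domains[other] if w not in bad]
            result = backtrack({**assignment, var: value}, domains, constraints)
            if result is not None:
                return result
            for other, bad in pruned.items():    # undo the pruning on failure
                domains[other] = domains[other] + bad
    return None

# Example: 3-coloring a triangle (every pair of variables must differ).
doms = {v: ["R", "G", "B"] for v in "XYZ"}
print(backtrack({}, doms, lambda a, x, b, y: x != y))
```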
- Probability (R&N Chapters 13 and 14)
  - Independence/dependence
  - Discrete/continuous variables
  - Joint distribution
  - Central Limit Theorem
  - Conditional probabilities
  - Bayes’ Rule
  - Chain Rule
  - Conditional independence
  - Bayes nets
    - How to construct
    - Local independence
    - Inference
      - Enumeration
      - Variable elimination
      - Rejection sampling (see the sketch after this list)
      - Stochastic simulation
      - MCMC simulation
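For rejection sampling, a sketch on a made-up two-node network (Cloudy -> Rain); the exact answer from Bayes’ Rule is shown for comparison:

```python
import random

P_CLOUDY = 0.5
P_RAIN_GIVEN = {True: 0.8, False: 0.1}           # P(Rain | Cloudy)

def estimate_p_cloudy_given_rain(n=100_000):
    """Estimate P(Cloudy | Rain) by rejection sampling: draw from the prior,
    discard samples inconsistent with the evidence Rain = true."""
    kept = cloudy_and_rain = 0
    for _ in range(n):
        cloudy = random.random() < P_CLOUDY
        rain = random.random() < P_RAIN_GIVEN[cloudy]
        if not rain:
            continue                             # reject: evidence not matched
        kept += 1
        cloudy_and_rain += cloudy
    return cloudy_and_rain / max(kept, 1)

# Exact answer via Bayes' Rule:
# P(C|R) = P(R|C)P(C) / (P(R|C)P(C) + P(R|~C)P(~C)) = 0.4 / 0.45 ~ 0.889
print(estimate_p_cloudy_given_rain())
```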
- Machine learning (part of R&N Chapters 18 and 20)
  - Classification problems
  - Gaussian distribution
  - Maximum likelihood estimate
  - Decision boundaries
  - Distance metrics: Euclidean vs. Mahalanobis
  - Training and testing
  - Cross-validation, LOOCV
  - Over/underfitting
  - k-nearest neighbors (see the sketch after this list)
  - Naive Bayes
  - Mixture of Gaussians
  - No free lunch
  - Entropy
  - Information gain
  - C4.5
  - Random forests
  - Decision “stumps”
  - Boosting
  - Clustering (i.e. unsupervised learning)
    - k-means
    - Inter/intra-class variance
    - Descriptive length
    - Gaussian mixture models
    - Expectation Maximization
  - Single-layer perceptron
  - Multilayer perceptron
  - Backpropagation
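A k-nearest-neighbors sketch with leave-one-out cross-validation used to pick k; the 2-D data points are invented for illustration:

```python
import math
from collections import Counter

def knn_predict(train, query, k):
    """Classify `query` by majority vote of its k nearest neighbors
    under Euclidean distance. `train` is a list of (point, label) pairs."""
    nearest = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def loocv_error(data, k):
    """Leave-one-out cross-validation: hold out each point in turn,
    train on the rest, and average the classification error."""
    errors = 0
    for i, (point, label) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        errors += knn_predict(rest, point, k) != label
    return errors / len(data)

data = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
        ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print({k: loocv_error(data, k) for k in (1, 3, 5)})  # k=5 underfits this toy set
```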
- Reasoning over time (R&N Chapter 15)
  - N-gram models (see the sketch after this list)
  - State tying
  - Pruning, beam search
  - Context-Free Grammars
  - Segmentally-boosted HMMs
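A maximum-likelihood bigram sketch (the simplest n-gram model); the toy corpus is made up:

```python
from collections import Counter

def train_bigram(tokens):
    """MLE bigram model: P(w2 | w1) = count(w1 w2) / count(w1)."""
    unigrams = Counter(tokens[:-1])              # counts of first-word positions
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

corpus = "the dog ran and the cat ran and the dog slept".split()
model = train_bigram(corpus)
print(model[("the", "dog")])   # 2 of the 3 occurrences of "the" precede "dog"
```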
- Complex decisions (R&N Chapters 17 and 21)
  - Utility functions
  - Policy (vs. path)
  - Finding the optimal policy
  - Bellman equation
  - Value iteration (see the sketch after this list)
  - Policy iteration
  - Q-learning
  - SARSA
  - Local consistency vs. global optimality
  - Partially observable Markov decision processes (POMDPs)
    - Belief state
    - Particle filter
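A value-iteration sketch of the Bellman update; `actions` and `transition` are assumed callables, and every state is assumed to have at least one action (terminal states can get a no-op self-loop):

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """Repeatedly apply the Bellman update
    U(s) <- max_a sum_s' P(s'|s,a) [R(s,a,s') + gamma * U(s')]
    until the largest change falls below eps."""
    U = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * U[s2])
                    for p, s2 in transition(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - U[s]))
            U[s] = best
        if delta < eps:
            return U

# Tiny two-state chain: from s0, "go" usually reaches s1 (reward 1).
S = ["s0", "s1"]
A = lambda s: ["go"]
T = lambda s, a: [(0.9, "s1"), (0.1, "s0")] if s == "s0" else [(1.0, "s1")]
R = lambda s, a, s2: 1.0 if (s == "s0" and s2 == "s1") else 0.0
print(value_iteration(S, A, T, R))
```

The optimal policy is then greedy with respect to U: one step of lookahead picks the action maximizing the same sum.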
- Logic (R&N Chapters 7, 8, and 9)
  - Propositional logic
  - First-order logic
  - Operators
  - Existential quantifiers
  - Knowledge base
  - Entailment
  - Inference
  - Soundness vs. completeness
  - Enumeration
  - Logical equivalence
  - Validity vs. satisfiability
  - Forward/backward chaining (see the sketch after this list)
  - Horn clauses
  - Conjunctive Normal Form
  - Resolution
  - Universal/existential instantiation
  - Reduction
  - Unification
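A forward-chaining sketch for propositional Horn clauses, in the spirit of R&N's PL-FC-ENTAILS?; the rules here are hypothetical `(premises, conclusion)` pairs:

```python
def forward_chain(rules, facts, query):
    """Return True if the Horn knowledge base entails `query`.
    Forward chaining is sound and complete for Horn clauses."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)            # all premises hold: fire the rule
                changed = True
    return query in known

# Example: A, B, (A ^ B => C), and (C => D) together entail D.
rules = [({"A", "B"}, "C"), ({"C"}, "D")]
print(forward_chain(rules, {"A", "B"}, "D"))     # True
```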
- Planning (R&N Chapter 11 plus this)
  - States
  - Conditions
  - Operators (see the sketch after this list)
  - Goals
  - Actions
  - Plan
  - Partially-ordered plans
  - Conditional planning
  - Monitoring
    - Execution vs. action monitoring
  - Clobbering
    - Demotion/promotion
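A STRIPS-style sketch of operators with preconditions and add/delete lists; the milk-buying operators are invented for illustration:

```python
from typing import NamedTuple, FrozenSet

class Operator(NamedTuple):
    name: str
    preconditions: FrozenSet[str]
    add_list: FrozenSet[str]
    delete_list: FrozenSet[str]

def applicable(state, op):
    """An operator applies when all of its preconditions hold in the state."""
    return op.preconditions <= state

def apply_op(state, op):
    """Successor state: remove the delete list, then add the add list."""
    return (state - op.delete_list) | op.add_list

go_store = Operator("Go(Store)", frozenset({"At(Home)"}),
                    frozenset({"At(Store)"}), frozenset({"At(Home)"}))
buy_milk = Operator("Buy(Milk)", frozenset({"At(Store)"}),
                    frozenset({"Have(Milk)"}), frozenset())

# A totally ordered plan that achieves the goal Have(Milk).
state = frozenset({"At(Home)"})
for op in [go_store, buy_milk]:
    assert applicable(state, op)
    state = apply_op(state, op)
print("Have(Milk)" in state)                     # True
```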