Dayton AI Class 3

In which we describe agents that can improve their behavior through diligent study of their own experiences.

The last unit was an introduction to Bayes networks. This week covers Machine Learning, which the course frames as the problem of learning the structure of such networks when it isn't known to begin with. There are two main sub-categories of machine learning:

  • Supervised Learning
  • Unsupervised Learning

Learning models from data is the area of AI that has had the most commercial success. Examples given are Google (web mining), Netflix (customer preferences), and Amazon (product placement). The video for the self-driving car example is pretty neat. The lecture breaks the learning problem down along several dimensions:
  • What is learned? parameters, structure, hidden concepts
  • What from? target labels, replacement principles, feedback (reinforcement)
  • What for? prediction, diagnostics, summarization, etc.
  • How? passive (just observations), active (agent changes environment), online, off-line
  • Outputs. classification, regression
  • Details. generative, discriminative

Occam's Razor: everything else being equal, choose the less complex hypothesis. Or: make things as simple as possible, but not simpler. The goal is to minimize the generalization error, not the error on the training data.
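
To make that concrete, here's a quick sketch I put together (my own toy example, not from the lecture) showing how a higher-degree polynomial drives the training error down while the held-out error can get worse:

    import numpy as np

    rng = np.random.default_rng(0)

    # Noisy samples of an underlying linear trend (made-up toy data).
    x = np.linspace(0, 1, 20)
    y = 2.0 * x + rng.normal(scale=0.2, size=x.size)

    # Hold out every other point to estimate generalization error.
    x_train, y_train = x[::2], y[::2]
    x_test, y_test = x[1::2], y[1::2]

    for degree in (1, 3, 7):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")

The degree-7 fit scores best on the training points but should generalize worse than the simple line, which is exactly the Occam's Razor trade-off.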

Unsupervised learning is mainly about density estimation. There are several approaches, like clustering or dimensionality reduction. Blind signal (or source) separation is another interesting application of unsupervised learning; the example given is separating a recording of two speakers into two separate streams. The lecture also gave some practical tips about choosing k (the number of clusters) in k-means learning:

  • Add some constant penalty per k to the log-likelihood
  • Guess initial k
  • Run Expectation Maximization
  • Remove unnecessary clusters
  • Create new random clusters near poorly represented data
  • Repeat from EM step

This approach helps avoid getting trapped in local minima, since you randomly add clusters and run EM multiple times.
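
As a rough sketch of the penalty idea (using scikit-learn's GaussianMixture for the EM step rather than anything from the lecture, and a penalty constant I made up), you can score each candidate k by its log-likelihood minus a per-cluster cost and keep the best:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Toy data: two well-separated 2-D Gaussian blobs.
    X = np.vstack([
        rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
        rng.normal(loc=5.0, scale=0.5, size=(100, 2)),
    ])

    PENALTY = 25.0  # made-up cost per cluster, roughly BIC-sized for this data

    best_k, best_score = None, -np.inf
    for k in range(1, 7):
        gm = GaussianMixture(n_components=k, random_state=0).fit(X)  # EM
        log_likelihood = gm.score(X) * len(X)  # score() is mean per-sample log-likelihood
        penalized = log_likelihood - PENALTY * k
        if penalized > best_score:
            best_k, best_score = k, penalized

    print(f"best k = {best_k}")  # expect 2 for this toy data

The lecture's version is more adaptive (it drops and respawns clusters inside the loop), but the scoring rule is the same: the penalty keeps the likelihood gain from extra clusters from always winning.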

I did actually use Python to answer one of the quiz questions (how many unique words in a "bag of words"), and like last time I did arithmetic for Bayes rule in a spreadsheet. I haven't done any Lisp programming yet.
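
For what it's worth, the unique-word count is a one-liner in Python (the sentence below is a placeholder, not the actual quiz text):

    # Count the unique words in a bag-of-words representation of a sentence.
    sentence = "the quick brown fox jumps over the lazy dog"  # placeholder
    unique_words = len(set(sentence.split()))
    print(unique_words)  # 8 distinct words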

2 comments:

  1. Sebastian Thrun, one of the instructors for this course, is the guy behind the winning DARPA Grand Challenge self-driving vehicle featured in the video linked in this post, and also the Google self-driving cars.

  2. An interesting talk given by Victoria Stodden (by way of Nuit Blanche) on reproducibility in science, which includes some discussion of a survey she did of the machine learning research community.

    Her main conjecture is a bit over the top though: "Today's academic scientist probably has more in common with a large corporation's information technology manager than with a philosophy or English professor at the same university."

