Artificial Intelligence

Mathematical formulation of the inductive learning problem

  • Extrapolate from a given set of examples so that we can make accurate predictions about future examples.
  • Supervised versus unsupervised learning: we want to learn an unknown function f(x) = y, where x is an input example and y is the desired output. Supervised learning implies we are given a set of (x, y) pairs by a "teacher"; unsupervised learning means we are given only the x values. In either case, the goal is to estimate f (see the sketch below).
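The contrast between the two settings can be made concrete with a small sketch. The data, the 1-nearest-neighbour estimate of f, and the crude grouping of the unlabelled x values are illustrative assumptions, not part of the original notes.

```python
# Minimal illustration (assumed data): supervised learning sees (x, y) pairs,
# unsupervised learning sees only the x values.

# Supervised: a "teacher" supplies (x, y) pairs; we estimate the unknown f.
pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]    # illustrative (x, y) examples

def f_hat(x_new):
    """Predict y for x_new using the nearest training x (a 1-NN estimate of f)."""
    _, y_nearest = min(pairs, key=lambda p: abs(p[0] - x_new))
    return y_nearest

print(f_hat(2.4))                               # -> 4.0 for an unseen example

# Unsupervised: only the x values are available; we can still look for
# structure (here, a crude split around the mean) but there is no target y.
xs = [x for x, _ in pairs]
mean_x = sum(xs) / len(xs)
groups = {"low":  [x for x in xs if x < mean_x],
          "high": [x for x in xs if x >= mean_x]}
print(groups)                                   # -> {'low': [1.0], 'high': [2.0, 3.0]}
```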

Inductive Bias

  • Inductive learning is an inherently conjectural process because any knowledge created by generalization from specific facts cannot be proven true; it can only be proven false. Hence, inductive inference is falsity preserving, not truth preserving.
  • To generalize beyond the specific training examples, we need constraints or biases on which f is best. That is, learning can be viewed as searching the Hypothesis Space H of possible f functions.
  • A bias allows us to choose one f over another.
  • A completely unbiased inductive algorithm could only memorize the training examples and could not say anything more about other unseen examples.
  • Two types of biases are commonly used in machine learning:
  1. Restricted Hypothesis Space Bias: allow only certain types of f functions, not arbitrary ones.
  2. Preference Bias: define a metric for comparing candidate f functions, so as to determine whether one is better than another (a sketch of both biases follows this list).
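To make the two biases concrete, the sketch below fits a tiny hand-made dataset with polynomial hypotheses: confining the search to linear functions illustrates a restricted hypothesis space, while preferring the lowest-degree fit whose training error falls under a tolerance illustrates a preference (Occam-style) bias. The data, the candidate degrees, and the tolerance are illustrative assumptions.

```python
import numpy as np

# Tiny illustrative dataset generated by a linear function (assumed, not from the text).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

# 1. Restricted hypothesis space bias: H contains only linear functions a*x + b,
#    so the search is confined to degree-1 fits.
a, b = np.polyfit(x, y, deg=1)
print(f"linear hypothesis: f(x) = {a:.2f}x + {b:.2f}")

# 2. Preference bias: H allows degrees 1..3, but we prefer the simplest
#    hypothesis whose training error is below a tolerance (an Occam-style metric).
def training_error(coeffs):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

candidates = {d: np.polyfit(x, y, deg=d) for d in (1, 2, 3)}
best_degree = min(d for d, c in candidates.items() if training_error(c) < 1e-6)
print(f"preferred hypothesis degree: {best_degree}")   # -> 1 (the simplest adequate fit)
```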

Inductive Learning Framework

  • Raw input data from sensors are preprocessed to obtain a feature vector, x, that adequately describes all of the relevant features for classifying examples.
  • Each x is a list of (attribute, value) pairs. For example,

x = (Person = Sue, Eye-Color = Brown, Age = Young, Sex = Female)

  • The number of attributes (also called features) is fixed (positive, finite).
  • Each attribute has a fixed, finite number of possible values.
  • Each example can be interpreted as a point in an n-dimensional feature space, where n is the number of attributes (a small encoding sketch follows).
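As a concrete illustration of this representation, the sketch below encodes the example above as a point in feature space. The attribute domains and the ordinal encoding are assumptions made for the example, not part of the text; the Person identifier is left unencoded since it names the example rather than describing it.

```python
# The example from the text, as a list of (attribute, value) pairs.
x = [("Person", "Sue"), ("Eye-Color", "Brown"), ("Age", "Young"), ("Sex", "Female")]

# Each attribute has a fixed, finite set of possible values (assumed domains).
domains = {
    "Eye-Color": ["Brown", "Blue", "Green"],
    "Age":       ["Young", "Middle", "Old"],
    "Sex":       ["Female", "Male"],
}

# Map each attribute value to its index in the attribute's domain, giving a
# point in an n-dimensional space (n = number of encoded attributes).
point = [domains[attr].index(val) for attr, val in x if attr in domains]
print(point)    # -> [0, 0, 0]: one coordinate per encoded attribute
```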