Composite Methods for Machine Learning

Thursday, April 20, 2006

Occam's razor is present everywhere, even in machine learning: when several models fit the training data, we usually select the simplest one as the model we are going to use. As noted in [1], the Greek philosopher Epicurus defended a thesis far from Occam's razor: if two or more hypotheses fit the data, we should keep all of them, not only one. Composite methods are machine learning techniques/algorithms where we use more than one model/hypothesis/classifier to obtain a better result in our machine learning task (see the sketch after the list below). Composite methods cover very different approaches, summarized in the following more or less general techniques:

1.- Bayesian Model Averaging

2.- Ensemble Methods
2.1.- Bagging
2.2.- Boosting
2.3.- Other Fusion Methods

3.- Mixture of Experts
3.1.- Gating Networks

4.- Local Learning Algorithms
4.1.- RBF
4.2.- KNN

5.- Hybrid Methods
5.1.- Cascade
5.2.- Stacking

6.- Composite Methods for Scaling or Distributing ML algorithms
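Before those posts, here is a minimal sketch of the general idea using one concrete composite method, bagging (item 2.1 above): train several hypotheses on bootstrap samples of the data and combine them by majority vote, in the Epicurean spirit of keeping all of them instead of the single simplest one. The decision trees, the iris dataset and the parameters are placeholder assumptions for illustration only, not the specifics of any technique described later.

    # A minimal sketch of the composite idea: instead of keeping only the
    # single simplest model, train several hypotheses and combine their
    # votes. Shown here as bagging (bootstrap aggregating) over decision
    # trees; scikit-learn and the iris data are placeholder assumptions.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    rng = np.random.default_rng(0)

    n_models = 25
    models = []
    for _ in range(n_models):
        # Each model sees a bootstrap sample (drawn with replacement),
        # so the individual hypotheses differ from one another.
        idx = rng.integers(0, len(X), size=len(X))
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

    def predict(X_new):
        # Epicurus-style combination: ask every hypothesis and take the
        # majority vote instead of trusting one model alone.
        votes = np.array([m.predict(X_new) for m in models])
        return np.apply_along_axis(
            lambda col: np.bincount(col).argmax(), axis=0, arr=votes)

    print(predict(X[:5]), y[:5])

The vote over many overfitted trees is typically more accurate than any single tree, which is the whole point of combining hypotheses rather than picking one.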

I'm going to spend some posts describing, in a general way, all these techniques.

[1] "Introducción a la Minería de Datos" José Hernández Orallo, M.José Ramírez Quintana, Cèsar Ferri Ramírez, Pearson, 2004. ISBN: 84 205 4091 9

1 comment:

Anonymous said...

I think that, at least, mixture of experts and RF can be seen as a special case of Bayesian model averaging. What about the others?