Mixture of Experts

Saturday, June 10, 2006

Mixture of Experts is based on the "divide and conquer" principle: the problem is divided into manageable pieces, each expert learns locally from its part of the problem domain, and the outputs of the experts are then combined to produce a global output.
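
The simplest form of this idea is a hard split of the input domain. Here is a toy sketch of my own (not from the post or its references): fit one simple "expert" per region and answer a query with the expert responsible for that region.

```python
# Divide-and-conquer regression sketch: fixed regions, one expert each.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, 400)
y = np.abs(X) + 0.05 * rng.standard_normal(400)   # piecewise-linear target

# Divide: two regions of the problem domain.
regions = [X < 0, X >= 0]

# Conquer: each expert learns only from its own region
# (here a degree-1 polynomial stands in for a local expert).
experts = [np.polyfit(X[m], y[m], deg=1) for m in regions]

def predict(x):
    """Combine: route the query to the expert for its region."""
    idx = 0 if x < 0 else 1
    return np.polyval(experts[idx], x)

print(predict(-0.5), predict(0.5))   # each side is handled by its own expert
```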

Mixture of Experts is usually presented in neural network terms: each expert is a neural network that learns only a part of the problem, and their outputs are combined either by human knowledge or by a gating network. But Mixture of Experts is really an abstract paradigm and could be applied to other classifiers as well.
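
To make the gating-network variant concrete, here is a minimal sketch (my own, not taken from the references): two linear experts whose outputs are blended by a softmax gating network, all trained jointly by gradient descent on squared error for a toy piecewise-linear target.

```python
# Minimal mixture of experts with a softmax gating network, in NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem: y = |x| plus a little noise.
X = rng.uniform(-1.0, 1.0, size=(500, 1))
y = np.abs(X[:, 0]) + 0.05 * rng.standard_normal(500)

K = 2                                   # number of experts
W = 0.1 * rng.standard_normal((K, 1))   # expert weights (linear experts)
b = np.zeros(K)                         # expert biases
V = 0.1 * rng.standard_normal((K, 1))   # gating-network weights
c = np.zeros(K)                         # gating-network biases
lr = 0.1

for step in range(3000):
    # Forward pass: expert outputs and softmax gate over experts.
    E = X @ W.T + b                     # (N, K) expert predictions
    Z = X @ V.T + c                     # (N, K) gating logits
    G = np.exp(Z - Z.max(axis=1, keepdims=True))
    G /= G.sum(axis=1, keepdims=True)   # (N, K) gate probabilities
    y_hat = (G * E).sum(axis=1)         # gated combination

    # Backward pass for (averaged) squared error.
    r = (y_hat - y) / len(y)                        # residual term
    dE = r[:, None] * G                             # grad w.r.t. expert outputs
    dZ = r[:, None] * G * (E - y_hat[:, None])      # grad w.r.t. gate logits

    W -= lr * dE.T @ X
    b -= lr * dE.sum(axis=0)
    V -= lr * dZ.T @ X
    c -= lr * dZ.sum(axis=0)

print("training MSE (last step):", np.mean((y_hat - y) ** 2))
# After training, the gate should tend to favor a different expert
# on each side of x = 0, where the target changes slope.
print(G[np.argsort(X[:, 0])[[0, -1]]])
```

The gate here is just a softmax over linear logits; in the hierarchical mixtures of [2] both experts and gates can be richer models and are fitted with EM instead of plain gradient descent.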

Basic References
[1] Jong-Hoon Oh and Kukjin Kang, "Experts or an Ensemble? A Statistical Mechanics Perspective of Multiple Neural Network Approaches"
[2] Jordan, M. I. and Jacobs, R. A., "Hierarchical Mixtures of Experts and the EM Algorithm", Neural Computation 6, 181-214, 1994
