MULTIPLE CLASSIFIERS AND HYBRID LEARNING PARADIGMS
(KES2011 invited session IS30)

12, 13, and 14 September 2011

Kaiserslautern, Germany

Joint event: PSL 2011 (Workshop on Partially Supervised Learning)
Ulm (Germany), September 15-16, 2011 (SEE BELOW)


SESSION DESCRIPTION

When facing difficult real-world applications, it is often unlikely that an individual learning paradigm can actually yield the solution sought (in spite of its theoretical generality) without strong co-operation with other, profoundly different modules building up the overall system. For instance, artificial neural networks are known to be mathematically "universal" machines, but satisfactory solutions to complex tasks can hardly be achieved with a single feed-forward connectionist architecture. Historically, this led to the development of multiple neural network systems, namely mixtures of experts or neural ensembles, which benefit from the specialization of individual nets over specific regions of the feature space, according to a divide-and-conquer strategy. As an alternative, multiple classifier systems were proposed, aiming at combining models of different nature (e.g., generalized linear discriminants, parametric probabilistic models, neural nets) or aim (e.g., estimating a discriminant function, a class-posterior probability, or a likelihood). In other circumstances, as in hybrid hidden Markov model/connectionist approaches, the combination of the underlying paradigms relies on the idea of coupling certain general properties of one of them (e.g., the capability of HMMs to model long-term dependencies) with the strength of the other at accomplishing local, specific tasks that arise within the former (e.g., the flexible, discriminative modeling of the HMM emission probabilities via neural nets). Along a similar direction, hybrid random fields were introduced recently: they combine the overall, general structure of a Markov random field with the optimal fit of the conditional probabilities of individual variables given their Markov blanket, as obtained via Bayesian networks.
Similarly, maximum echo-state likelihood networks (MESLiN) were proposed for sequence processing, relying on the combination of the reservoir of an echo-state architecture with a parametric model of the probability density function of the states of the reservoir. Strictly related areas concern the integration of symbolic and sub-symbolic learning machines, and so-called information fusion. In all these scenarios, researchers are mostly concerned with the development and investigation of plausible, mathematically sound techniques for combining the different learners in a feasible, robust manner (instead of just piling up the different modules onto one another heuristically). Such research efforts are leading to training algorithms that properly split the original learning problem over the component machines, training them according to a joint, global criterion which fits the solution of the original, overall problem.
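As a minimal illustration of the divide-and-conquer combination described above, the following sketch computes the output of a mixture of experts: a softmax gating network softly assigns the input to several linear experts and forms a convex combination of their predictions. All weights here are random placeholders (not a trained model), and the function names are illustrative only.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: outputs are positive and sum to 1
    e = np.exp(z - z.max())
    return e / e.sum()

def mixture_of_experts(x, expert_weights, gate_weights):
    """Combine linear experts via a softmax gating network.

    x              : input vector, shape (d,)
    expert_weights : one weight vector per expert, shape (k, d)
    gate_weights   : gating weight matrix, shape (k, d)
    """
    expert_outputs = expert_weights @ x   # each expert's scalar prediction
    gate = softmax(gate_weights @ x)      # soft assignment of x to the experts
    return gate @ expert_outputs          # convex combination of predictions

# Hypothetical usage with random (untrained) parameters
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W_experts = rng.normal(size=(3, 4))
W_gate = rng.normal(size=(3, 4))
y = mixture_of_experts(x, W_experts, W_gate)
```

In an actual mixture of experts, the gating and expert parameters would be trained jointly under a single global criterion (e.g., maximum likelihood via EM or gradient descent), which is precisely the kind of joint, global training the session description refers to.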

The aim of this Invited Session is to bring together researchers working in any area of pattern recognition and machine learning related to these issues. Fellow scientists are invited to submit their paper(s) to this Session, following the guidelines for Authors and the reviewing procedures that hold for the KES Conference hosting this Session. Novel, fresh ideas are particularly welcome (even if in preliminary form), and strong experimental analyses of established approaches to severe real-world tasks are encouraged as well.

Topics of interest include (but are not limited to):

Submission:

Page formatting: For formatting information, please see the Springer Information for LNCS Authors (see "Proceedings and Other Multiauthor Volumes - Using Microsoft Word", etc.).

Please note that papers should be no longer than 10 pages in LNCS format. Papers longer than this will be subject to an additional page charge. All oral and poster papers must be presented by one of the authors who must register within the KES Early Registration deadline.

Please submit your paper through the KES submission system (PROSE), making sure you select the IS30 Invited Session item from the menu (NOTE: this item is listed in the "Invited Sessions" table, not in the "General Sessions" list).

Important Dates:

Review process:

All submissions will be reviewed on the basis of relevance, originality, significance, soundness and clarity. At least two referees will review each submission independently.

Publication:

All accepted papers will be published in the KES2011 Proceedings (LNCS/LNAI, Springer-Verlag).
Extended versions of selected papers will be considered for publication in the KES Journal (International Journal of Knowledge-Based and Intelligent Engineering Systems) published by IOS Press, and other selected journals.

Joint event:

We are organizing PSL 2011 (Workshop on Partially Supervised Learning) in Ulm, Germany, on September 15-16, 2011. Submission deadline: May 6, 2011. If you are planning to attend KES, please consider taking advantage of the close proximity of these two events: your submissions to each of them are welcome!

Session Chairs:

Edmondo Trentin
Dipartimento di Ingegneria dell'Informazione
Università di Siena, I-53100 Siena (Italy)
E-mail: trentin AT dii DOT unisi DOT it

Friedhelm Schwenker
Department of Neural Information Processing
University of Ulm, D-89069 Ulm (Germany)
E-mail: friedhelm DOT schwenker AT uni-ulm DOT de