
Title: Effective Voting of Heterogeneous Classifiers
Author(s): G. Tsoumakas, I. Katakis, I. Vlahavas.
Availability: PDF file (12 pages).
Keywords: Voting, Multiple Classifier Systems, Ensemble Methods, Classification.
Appeared in: Proc. European Conference on Machine Learning (ECML 2004), Jean-François Boulicaut, Floriana Esposito, Fosca Giannotti, Dino Pedreschi (Eds.), LNAI 3201, pp. 465-476, Pisa, Italy, 2004.
Abstract: This paper deals with the combination of classification models that have been derived from running different (heterogeneous) learning algorithms on the same data set. We focus on the Classifier Evaluation and Selection (ES) method, which evaluates each of the models (typically using 10-fold cross-validation) and selects the best one. We examine the performance of this method in comparison with the Oracle that selects the best classifier for the test set, and show that 10-fold cross-validation has problems in detecting the best classifier. We then extend ES by applying a statistical test to the 10-fold accuracies of the models and combining the most significant ones through voting. Experimental results show that the proposed method, Effective Voting, performs comparably to the state-of-the-art method of Stacking with Multi-Response Model Trees, without the additional computational cost of meta-training.
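
The selection-and-voting scheme summarized in the abstract can be illustrated with a short, hedged sketch. The following Python snippet is not the authors' code; the base learners, the data set, and the 0.05 significance threshold are illustrative assumptions. It evaluates a pool of heterogeneous classifiers with 10-fold cross-validation, keeps every model whose per-fold accuracies are not significantly worse than the best model's according to a paired t-test, and combines the survivors by majority voting:

# Minimal sketch of the Effective Voting idea (illustrative, not the paper's code).
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in data set
models = {
    "tree": DecisionTreeClassifier(random_state=0),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
}

# Per-fold accuracies from 10-fold cross-validation for each heterogeneous model.
fold_acc = {name: cross_val_score(m, X, y, cv=10) for name, m in models.items()}
best = max(fold_acc, key=lambda n: fold_acc[n].mean())

# Keep the best model plus any model whose fold accuracies do not differ
# significantly from it (paired t-test over the 10 folds, alpha = 0.05 assumed).
selected = [best]
for name, acc in fold_acc.items():
    if name != best:
        _, p = ttest_rel(fold_acc[best], acc)
        if p > 0.05:
            selected.append(name)

# Combine the selected classifiers with simple majority (hard) voting.
ensemble = VotingClassifier([(n, models[n]) for n in selected], voting="hard")
ensemble.fit(X, y)
print("selected:", selected, "training accuracy:", ensemble.score(X, y))

The paper's statistical treatment of the fold accuracies may differ in detail; the point of the sketch is that models are selected by significance relative to the best cross-validated model rather than by raw mean accuracy alone, and the selected models are then combined by voting.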


        This paper has been cited by the following:

1 J. Sylvester and N. V. Chawla, "Evolutionary ensembles: Combining learning agents using genetic algorithms", In AAAI Workshop on Multiagent Learning, pp. 46-51, 2005.
2 C. Yang, S. Létourneau, "Learning to predict train wheel failures", In Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, USA, August 21-24, 2005.
3 A. Lallouet, A. Legtchenko, "Two Contributions of Constraint Programming to Machine Learning", In Proceedings of the 16th European Conference on Machine Learning (ECML 2005), Porto, Portugal, October 3-7, pp. 617-624, 2005.
4 Z. Wu, C. Li, "Learning Regularized Optimal Ensembles of Classifiers", In Proceedings of the Second HKBU-CSD Postgraduate Research Symposium, 2005, pp. 40-44.
5 Mitchell, S. “Machine assistance in collection building: New tools, research, issues, and reflections” (2006) Information Technology and Libraries, 25 (4), pp. 190-216.
6 Bian, S., Wang, W. (2006) Investigation on diversity in homogeneous and heterogeneous ensembles, Proceedings 2006 IEEE International Conference on Neural Networks, pp. 3078-3085
7 Bian, S., Wang, W. (2007) On diversity and accuracy of homogeneous and heterogeneous ensembles, International Journal of Hybrid Intelligent Systems 4, pp. 103–128.
8 P. Luo, H. Xiong, K. Lau and Z. Shi, "Distributed Classification in Peer-to-Peer Networks", Proc. KDD 2007, pp. 968-976.
9 Lallouet, A., Legtchenko, A. “Building consistencies for partially defined constraints with decision trees and neural networks” (2007) International Journal on Artificial Intelligence Tools, 16 (4), pp. 683-706.
10 Saha, S., Murthy, C.A., Pal, S.K. “Rough set based ensemble classifier for Web page classification” (2007) Fundamenta Informaticae, 76 (1-2), pp. 171-187.
11 W. Pedrycz, P. Rai, J. Zurada, "Experience-consistent modeling for radial basis function neural networks", International Journal of Neural Systems (IJNS), 18(4), pp. 279-292, August 2008.
12 C. Yang, S. Létourneau, (2009) "Two-stage classifications for improving time-to-failure estimates: a case study in prognostic of train wheels", Applied Intelligence, 31 (3) pp. 255-266
13 Martínez-Muñoz, G., Hernández-Lobato, D., Suárez, A. (2009) An Analysis of Ensemble Pruning Techniques Based on Ordered Aggregation, IEEE Transactions on Pattern Analysis and Machine Intelligence, February 2009 (vol. 31, no. 2), pp. 245-259.
14 Hernandez-Lobato, D. (2009) Prediction Based on Averages over Automatically Induced Learners: Ensemble Methods and Bayesian Techniques, PhD Thesis, Computer Science Department, Autonomous University of Madrid.
15 Gifford, C.M., (2009) Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems, PhD Thesis, University of Kansas.
16 Yang, C., Létourneau, S., Zaluski, M., Scarlett, E. (201) APU FMEA validation and its application to fault identification, Proceedings of the ASME Design Engineering Technical Conference, 3 (PARTS A AND B), pp. 959-967.
17 Raeder, T., Hoens, T.R., Chawla, N.V. (2010) "Consequences of Variability in Classifier Performance Estimates", In Proceedings of the 2010 IEEE International Conference on Data Mining (ICDM 2010), pp. 421-430.
18 Troc, M., Unold, O. (2010) "Self-Adaptation of Parameters in a Learning Classifier System Ensemble Machine", Int. J. Appl. Math. Comput. Sci., Vol. 20, No. 1, pp. 157-174.
19 Lu, Z., Wu, X., Bongard, J. (2010) Adaptive Informative Sampling for Active Learning, The 2010 SIAM Conference on Data Mining (SDM 2010), Columbus, Ohio, USA, 2010
20 Peteiro-Barral, D., Guijarro-Berdiñas, B., Pérez-Sánchez, B. (2011) Dealing with "very large" datasets: An overview of a promising research line: Distributed learning, ICAART 2011 - Proceedings of the 3rd International Conference on Agents and Artificial Intelligence, 1, pp. 476-481.
21 Zaluski, M., Létourneau, S., Bird, J., Yang, C. (2011) Developing data mining-based prognostic models for CF-18 aircraft, Journal of Engineering for Gas Turbines and Power, 133 (10), art. no. 101601.
22 Snidaro, L., Visentini, I., Foresti, G.L. (2011) Data fusion in modern surveillance, Studies in Computational Intelligence, 336, pp. 1-21.
23 Zhang, L., Zhou, W.-D. (2011) Sparse ensembles using weighted combination methods based on linear programming, Pattern Recognition, 44 (1), pp. 97-106.
24 Kang, P., Cho, S., MacLachlan, D.L. Improved response modeling based on clustering, under-sampling, and ensemble (2012) Expert Systems with Applications, 39 (8), pp. 6738-6753.
25 Curiac, D.-I., Volosencu, C. Ensemble based sensing anomaly detection in wireless sensor networks (2012) Expert Systems with Applications, 39 (10), pp. 9087-9096.
26 Re, M., Valentini, G. (2012) Ensemble methods: a review, In: Advances in Machine Learning and Data Mining for Astronomy, Chapman and Hall Data Mining and Knowledge Discovery Series, Chap. 26, pp. 563-594.
27 Zonglei, L., Tao, X. (2012) A new method to predict probability distribution based on heterogeneous ensemble learning, International Journal of Advancements in Computing Technology, 4 (14), pp. 17-24.
28 Buza, K., Nanopoulos, A., Horváth, T., Schmidt-Thieme, L. (2012) GRAMOFON: General model-selection framework based on networks, Neurocomputing, 75, pp. 163-170.

