Title:
Effective Stacking of Distributed Classifiers
Author(s):
G. Tsoumakas and I. Vlahavas
Availability:
PDF available for download (4 pages).
Appeared in:
Proc. 15th European Conference on Artificial Intelligence (ECAI '02), Frank van Harmelen (Ed.), IOS Press, pp. 340-344, 2002.
Abstract:
One of the most promising lines of research towards discovering global
predictive models from physically distributed data sets is local learning and
model integration. Local learning avoids moving raw data among the distributed
nodes and minimizes communication, coordination and synchronization costs.
However, the integration of local models is not a straightforward process.
Majority Voting is a simple solution that works well in some domains, but it
does not always offer the best predictive performance. Stacking, on the other
hand, offers flexibility in modelling, but raises the problem of how to train
on sufficient yet independent data without the cost of moving raw data among
the distributed nodes. In addition, the scalability of Stacking with respect
to the number of distributed nodes is another important issue that has not yet
been substantially investigated. This paper presents a framework for
constructing a global predictive model from local classifiers that does not
require moving raw data around, achieves high predictive accuracy and scales
up efficiently to large numbers of distributed data sets.
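
To make the contrast concrete, the sketch below (an illustration written for
this page, not the paper's actual framework) trains local decision trees on
disjoint partitions of the data, then integrates them first by majority voting
and then by stacking with a logistic-regression meta-learner trained on
held-out predictions. scikit-learn, the synthetic dataset, the number of
nodes and the choice of base and meta-learners are all illustrative
assumptions.

# A minimal sketch, assuming scikit-learn: majority voting versus stacking
# over classifiers trained on disjoint ("distributed") partitions of the data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
# Hold out a validation set for the meta-learner, so it is trained on data
# the local models have not seen (the "independent data" issue noted above).
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest,
                                                  test_size=0.25,
                                                  random_state=0)

# Simulate distributed nodes: each local classifier sees only its own slice.
n_nodes = 5
parts = np.array_split(np.arange(len(X_train)), n_nodes)
local_models = [DecisionTreeClassifier(random_state=0).fit(X_train[idx],
                                                           y_train[idx])
                for idx in parts]

# Majority voting: each local model casts one vote per test instance.
votes = np.stack([m.predict(X_test) for m in local_models], axis=1)
voted = np.apply_along_axis(lambda row: np.bincount(row).argmax(), 1, votes)
print("majority voting accuracy:", (voted == y_test).mean())

# Stacking: a meta-learner combines the local models' predictions; only
# predictions, never raw data, would need to cross node boundaries.
meta_train = np.stack([m.predict(X_val) for m in local_models], axis=1)
meta = LogisticRegression().fit(meta_train, y_val)
stacked = meta.predict(votes)
print("stacking accuracy:", (stacked == y_test).mean())

Note that holding out a shared validation set is only one way to obtain
independent meta-level training data; the paper's contribution is a framework
that addresses this without moving raw data among the nodes.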
This paper has been cited by the following:
1. D.P. Solomatine and M.B. Siek, "Modular learning models in forecasting natural phenomena", Neural Networks 19, pp. 215-224, 2006.
2. N. Rooney, "Stacking for Supervised Learning", Expert Update 9(3), pp. 21-25, 2007.
3. P. Luo, H. Xiong, K. Lau and Z. Shi, "Distributed Classification in Peer-to-Peer Networks", Proc. KDD 2007, pp. 968-976.