Please use this identifier to cite or link to this item: http://hdl.handle.net/10259/5766
Title
Experimental evaluation of ensemble classifiers for imbalance in Big Data
Published in
Applied Soft Computing. 2021, V. 108, 107447
Publisher
Elsevier
Publication date
2021-09
ISSN
1568-4946
DOI
10.1016/j.asoc.2021.107447
Abstract
Datasets are growing in size and complexity at a pace never seen before, forming ever larger datasets known as Big Data. A common problem for classification, especially in Big Data, is that the numbers of examples of the different classes might not be balanced. Imbalanced classification was therefore introduced some decades ago to correct the tendency of classifiers to show bias in favor of the majority class and to ignore the minority one. To date, although the number of imbalanced classification methods has increased, they continue to focus on normal-sized datasets rather than on the new reality of Big Data. In this paper, in-depth experimentation with ensemble classifiers is conducted in the context of imbalanced Big Data classification, using two popular ensemble families (Bagging and Boosting) and different resampling methods. All the experimentation was launched in Spark clusters, comparing ensemble performance and execution times and supporting the comparison with statistical tests, including the newest ones based on the Bayesian approach. One very interesting conclusion from the study was that simpler methods applied to imbalanced datasets in the context of Big Data provided better results than complex methods. The additional complexity of some of the sophisticated methods, which appears necessary to process and to reduce imbalance in normal-sized datasets, was not effective for imbalanced Big Data.
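To make the setup described in the abstract concrete, the sketch below shows one way such an experiment could look in PySpark: random undersampling of the majority class followed by a Bagging-style ensemble (a random forest). This is only an illustrative sketch, not the authors' experimental pipeline; the dataset path, the column names f1–f3 and label, and all parameter values are assumptions made for the example.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("imbalanced-ensemble-sketch").getOrCreate()

# Hypothetical imbalanced dataset with numeric features f1..f3 and a binary "label".
df = spark.read.parquet("hdfs:///data/imbalanced.parquet")  # path is illustrative

# Random undersampling: shrink the majority class to roughly the minority-class size.
counts = {r["label"]: r["count"] for r in df.groupBy("label").count().collect()}
minority = min(counts, key=counts.get)
majority = max(counts, key=counts.get)
fraction = counts[minority] / counts[majority]

majority_sampled = df.filter(df.label == majority).sample(
    withReplacement=False, fraction=fraction, seed=42)
balanced = df.filter(df.label == minority).unionByName(majority_sampled)

# Assemble features and fit a Bagging-style ensemble (random forest).
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
rf = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=100)

train, test = balanced.randomSplit([0.8, 0.2], seed=42)
model = rf.fit(assembler.transform(train))
predictions = model.transform(assembler.transform(test))

# Evaluate on the held-out split with area under the ROC curve.
auc = BinaryClassificationEvaluator(labelCol="label",
                                    metricName="areaUnderROC").evaluate(predictions)
print(f"AUC on held-out split: {auc:.3f}")
```

The same skeleton could be adapted to a Boosting ensemble (e.g. Spark's GBTClassifier) or to other resampling strategies by swapping the sampling step.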
Keywords
Unbalance
Imbalance
Ensemble
Resampling
Big Data
Spark
Subject
Informática
Computer science
Publisher's version
Appears in collections
Document(s) subject to a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license