Universidad de Burgos RIUBU

    Please use this identifier to cite or link to this item: http://hdl.handle.net/10259/6192

    Title
    When is resampling beneficial for feature selection with imbalanced wide data?
    Author
    Ramos Pérez, Ismael
    Arnaiz González, Álvar
    Rodríguez Diez, Juan José
    García Osorio, César
    Published in
    Expert Systems with Applications. 2022, V. 188, 116015
    Publisher
    Elsevier
    Publication date
    2022-02
    ISSN
    0957-4174
    DOI
    10.1016/j.eswa.2021.116015
    Abstract
    This paper studies the effects that combinations of balancing and feature selection techniques have on wide data (many more attributes than instances) when different classifiers are used. For this, an extensive study is done using 14 datasets, 3 balancing strategies, and 7 feature selection algorithms. The evaluation is carried out using 5 classification algorithms, analyzing the results for different percentages of selected features, and establishing the statistical significance using Bayesian tests. Some general conclusions of the study are that it is better to use RUS before the feature selection, while ROS and SMOTE offer better results when applied afterwards. Additionally, specific results are also obtained depending on the classifier used, for example, for Gaussian SVM the best performance is obtained when the feature selection is done with SVM-RFE before balancing the data with RUS.
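
    To make the configuration highlighted in the abstract concrete, the sketch below wires together SVM-RFE feature selection followed by random undersampling (RUS) and a Gaussian (RBF) SVM, the combination the abstract reports as best for that classifier. It is a minimal illustration using scikit-learn and imbalanced-learn, not the authors' code: the synthetic dataset, the number of selected features, and all other parameter values are assumptions chosen only to make the example runnable.

    # Minimal sketch (assumed setup, not from the paper): SVM-RFE before RUS,
    # then an RBF-kernel SVM, evaluated with cross-validation.
    from imblearn.pipeline import Pipeline              # pipeline that accepts resampling steps
    from imblearn.under_sampling import RandomUnderSampler
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Wide, imbalanced toy data: many more features than instances.
    X, y = make_classification(n_samples=60, n_features=500, n_informative=20,
                               weights=[0.85, 0.15], random_state=0)

    pipe = Pipeline([
        # SVM-RFE: recursively drop features ranked by a linear SVM's weights.
        ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=25, step=25)),
        # RUS applied after feature selection, only on the training folds.
        ("rus", RandomUnderSampler(random_state=0)),
        ("clf", SVC(kernel="rbf")),                      # Gaussian SVM
    ])

    scores = cross_val_score(pipe, X, y, cv=5, scoring="balanced_accuracy")
    print(f"balanced accuracy: {scores.mean():.3f}")

    The imbalanced-learn Pipeline is used instead of scikit-learn's because it accepts resampling steps and applies them only during fitting, so the test folds keep their original class distribution.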
    Keywords
    Feature selection
    Wide data
    High dimensional data
    Very low sample size
    Unbalanced
    Machine learning
    Subject
    Computer science
    URI
    http://hdl.handle.net/10259/6192
    Publisher's version
    https://doi.org/10.1016/j.eswa.2021.116015
    Collections
    • Artículos ADMIRABLE
    License
    Document(s) subject to a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license
    Files in this item
    Name:
    Ramos-esa_2022.pdf
    Size:
    1.593 MB
    Format:
    Adobe PDF
