RT info:eu-repo/semantics/article
T1 An experiment on animal re-identification from video
A1 Kuncheva, Ludmila I.
A1 Garrido Labrador, José Luis
A1 Ramos Pérez, Ismael
A1 Hennessey, Samuel L.
A1 Rodríguez Diez, Juan José
K1 Animal re-identification
K1 Computer vision
K1 Classification
K1 Convolutional networks
K1 Comparative study
K1 Informática
K1 Computer science
K1 Biología
K1 Biology
AB In the face of global concern about climate change and endangered ecosystems, monitoring individual animals is of paramount importance. Computer vision methods for animal recognition and re-identification from video or image collections are a modern alternative to more traditional but intrusive methods such as tagging or branding. While there are many studies reporting results on various animal re-identification databases, there is a notable lack of comparative studies between different classification methods. In this paper we offer a comparison of 25 classification methods, including linear, non-linear and ensemble models, as well as deep learning networks. Since animal databases differ vastly in characteristics and difficulty, we propose an experimental protocol that can be applied to any chosen data collection. We use a publicly available database of five video clips, each containing multiple identities (9 to 27), where the animals are typically present as a group in each video frame. Our experiment involves five data representations: colour, shape, texture, and two feature spaces extracted by deep learning. In our experiments, simpler models (linear classifiers) and the colour feature space alone gave the best classification accuracy, demonstrating the importance of running a comparative study before resorting to complex, time-consuming, and potentially less robust methods.
PB Elsevier
SN 1574-9541
YR 2023
FD 2023-05
LK http://hdl.handle.net/10259/7541
UL http://hdl.handle.net/10259/7541
LA eng
NO This work is supported by the UKRI Centre for Doctoral Training in Artificial Intelligence, Machine Learning and Advanced Computing (AIMLAC), funded by grant EP/S023992/1. This work is also supported by the Junta de Castilla y León under project BU055P20 (JCyL/FEDER, UE), and the Ministry of Science and Innovation under project PID2020-119894 GB-I00, co-financed through European Union FEDER funds. J.L. Garrido-Labrador is supported through the Consejería de Educación of the Junta de Castilla y León and the European Social Fund through a pre-doctoral grant (EDU/875/2021). I. Ramos-Perez is supported by the predoctoral grant (BDNS 510149) awarded by the Universidad de Burgos, Spain. J.J. Rodríguez was supported by mobility grant PRX21/00638 of the Spanish Ministry of Universities.
DS Repositorio Institucional de la Universidad de Burgos
RD 28-Apr-2024
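The abstract describes a comparative evaluation of classifiers over several feature representations. The sketch below is only an illustration of that kind of comparison; it is not the paper's protocol, and the data, feature dimensionality, and model list are placeholder assumptions.

```python
# Minimal illustrative sketch (NOT the paper's protocol): compare a linear
# classifier against non-linear and ensemble models on a colour-like feature
# space, using cross-validated accuracy. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 300 "frames" of 9 identities, each represented by a
# 48-bin colour histogram (a stand-in for the colour representation
# mentioned in the abstract).
X = rng.random((300, 48))
y = rng.integers(0, 9, size=300)

models = {
    "logistic regression (linear)": LogisticRegression(max_iter=1000),
    "SVM (RBF kernel)": SVC(),
    "random forest (ensemble)": RandomForestClassifier(n_estimators=100, random_state=0),
}

# 5-fold cross-validated accuracy for each model on the same feature space.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```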