Universidad de Burgos RIUBU


Please use this identifier to cite or link to this item: http://hdl.handle.net/10259/9278

Title
Federated Discrete Reinforcement Learning for Automatic Guided Vehicle Control
Author
Sierra Garcia, Jesús Enrique
Santos, Matilde
Published in
Future Generation Computer Systems. 2024, V. 150, p. 78-89
Publisher
Elsevier
Publication date
2024-01
ISSN
0167-739X
DOI
10.1016/j.future.2023.08.021
Abstract
Under the federated learning paradigm, the agents learn in parallel and combine their knowledge to build a global knowledge model. This machine learning strategy increases privacy and reduces communication costs, benefits that can be very useful for industry applications deployed at the edge. Automatic Guided Vehicles (AGVs) can take advantage of this approach since they can be considered intelligent agents, operate in fleets, and are normally managed by a central system that can run at the edge and handles the knowledge of each of them to obtain a global emerging behavioral model. Furthermore, this idea can be combined with the concept of reinforcement learning (RL). This way, the AGVs can interact with the system to learn, according to the policy implemented by the RL algorithm, to follow specified routes, and send their findings to the main system. The centralized system collects this information into a group policy and turns it back over to the AGVs. In this work, a novel Federated Discrete Reinforcement Learning (FDRL) approach is implemented to control the trajectories of a fleet of AGVs. Each industrial AGV runs the modules that correspond to an RL system: a state estimator, a rewards calculator, an action selector, and a policy update algorithm. AGVs share their policy variation with the federated server, which combines them into a group policy with a learning aggregation function. To validate the proposal, simulation results of the FDRL control for five hybrid tricycle-differential AGVs and four different trajectories (ellipse, lemniscate, octagon, and a closed 16-polyline) have been obtained and compared with a Proportional Integral Derivative (PID) controller optimized with genetic algorithms. The intelligent control approach shows an average improvement of 78% in mean absolute error, 75% in root mean square error, and 73% in terms of standard deviation. It has been shown that this approach also accelerates learning by up to 50% depending on the trajectory, with an average speed-up of 36%, while allowing precise tracking. The suggested federated learning-based technique outperforms an optimized fuzzy logic controller (FLC) for all of the measured trajectories as well. In addition, different learning aggregation functions have been proposed and evaluated. The influence of the number of vehicles (from 2 to 10) on the path-following performance and on network transmission has been analyzed too.
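The federated loop the abstract describes — each AGV updating a local discrete policy and sharing only its policy variation with a server that merges the deltas through a learning aggregation function — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state/action discretization, the weighted-mean aggregation function, and all names (`local_update`, `aggregate`, the table sizes) are assumptions for the example.

```python
import numpy as np

N_STATES, N_ACTIONS = 64, 5  # assumed sizes of the discretized spaces

def local_update(q, transitions, alpha=0.1, gamma=0.95):
    """One local round of tabular Q-learning on an AGV.

    Returns only the policy variation (delta), which is what each
    vehicle shares with the federated server.
    """
    q_new = q.copy()
    for s, a, r, s_next in transitions:
        td_target = r + gamma * q_new[s_next].max()
        q_new[s, a] += alpha * (td_target - q_new[s, a])
    return q_new - q

def aggregate(global_q, deltas, weights=None):
    """Server-side learning aggregation: a weighted mean of client deltas."""
    if weights is None:
        weights = np.ones(len(deltas)) / len(deltas)
    merged = sum(w * d for w, d in zip(weights, deltas))
    return global_q + merged

rng = np.random.default_rng(0)
global_q = np.zeros((N_STATES, N_ACTIONS))
for _round in range(3):  # federated rounds
    deltas = []
    for _agv in range(5):  # five AGVs, as in the simulated fleet
        # synthetic transitions standing in for trajectory-tracking experience
        transitions = [(rng.integers(N_STATES), rng.integers(N_ACTIONS),
                        rng.normal(), rng.integers(N_STATES))
                       for _ in range(20)]
        deltas.append(local_update(global_q, transitions))
    global_q = aggregate(global_q, deltas)  # group policy returned to the AGVs
```

Sharing deltas rather than raw transitions is what gives the privacy and bandwidth benefits the abstract mentions; swapping `aggregate` for other merge rules is how different learning aggregation functions could be compared.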
Keywords
Automated guided vehicle (AGV)
Federated learning
Industry 4.0
Intelligent control
Path following
Reinforcement learning
Subject
Electrical engineering
Vehicles
URI
http://hdl.handle.net/10259/9278
Publisher's version
https://doi.org/10.1016/j.future.2023.08.021
Appears in collections
• Artículos ARCO
Attribution-NonCommercial-NoDerivatives 4.0 International
Document(s) subject to a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license
Files in this item
Name:
Sierra-fgcs_2024.pdf
Size:
1.980 MB
Format:
Adobe PDF
