Simple item record

dc.contributor.author         Cabrejas Egea, Álvaro
dc.contributor.author         Zhang, Raymond
dc.contributor.author         Walton, Neil
dc.date.accessioned           2022-09-22T06:41:43Z
dc.date.available             2022-09-22T06:41:43Z
dc.date.issued                2021-07
dc.identifier.isbn            978-84-18465-12-3
dc.identifier.uri             http://hdl.handle.net/10259/7002
dc.description                Paper presented at: R-Evolucionando el transporte, XIV Congreso de Ingeniería del Transporte (CIT 2021), held online on 6, 7 and 8 July 2021, organised by the Universidad de Burgos [es]
dc.description.abstract       In recent years, Intelligent Transportation Systems have been leveraging increased sensory coverage and available computing power to deliver data-intensive solutions achieving higher levels of performance than traditional systems. Within Traffic Signal Control (TSC), this has allowed the emergence of Machine Learning (ML) based systems. Among this group, Reinforcement Learning (RL) approaches have performed particularly well. Given the lack of industry standards in ML for TSC, the literature exploring RL often lacks comparison against commercially available systems and straightforward formulations of how the agents operate. Here we attempt to bridge that gap. We propose three different architectures for RL-based agents, provide pseudo-code for them, and compare them against the currently used commercial systems MOVA and SurTrac as well as cyclic controllers. The agents use variations of Deep Q-Learning (Double Q-Learning, Duelling Architectures and Prioritised Experience Replay) and Actor-Critic methods, with states and rewards based on queue length measurements. Their performance is compared across different map scenarios with variable demand, assessing them in terms of the global delay generated by all vehicles. We find that the RL-based systems consistently achieve significantly lower delays than traditional and existing commercial systems. [en]
dc.description.sponsorship    This work was funded in part by EPSRC Grant EP/L015374 and in part by The Alan Turing Institute and the Toyota Mobility Foundation. The authors thank Dr. W. Chernicoff for the initial discussions and drive that made this project possible. [en]
dc.format.mimetype            application/pdf
dc.language.iso               eng [es]
dc.publisher                  Universidad de Burgos. Servicio de Publicaciones e Imagen Institucional [es]
dc.relation.ispartof          R-Evolucionando el transporte [es]
dc.relation.uri               http://hdl.handle.net/10259/6490
dc.subject                    Tráfico [es]
dc.subject                    Traffic [en]
dc.subject                    Infraestructuras [es]
dc.subject                    Infrastructures [en]
dc.subject.other              Ingeniería civil [es]
dc.subject.other              Civil engineering [en]
dc.subject.other              Transporte [es]
dc.subject.other              Transportation [en]
dc.subject.other              Tecnología [es]
dc.subject.other              Technology [en]
dc.title                      Reinforcement learning for Traffic Signal Control: Comparison with commercial systems [en]
dc.type                       info:eu-repo/semantics/conferenceObject [es]
dc.rights.accessRights        info:eu-repo/semantics/openAccess [es]
dc.relation.publisherversion  https://doi.org/10.36443/9788418465123 [es]
dc.identifier.doi             10.36443/10259/7002
dc.relation.projectID         info:eu-repo/grantAgreement/EPSRC//EP%2FL015374
dc.page.initial               2673 [es]
dc.page.final                 2692 [es]
dc.type.hasVersion            info:eu-repo/semantics/publishedVersion [es]
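
The abstract above describes agents built on Deep Q-Learning variants (Double Q-Learning, Duelling Architectures, Prioritised Experience Replay) with states and rewards derived from queue length measurements. As a rough illustration only, and not the pseudo-code published in the paper itself, a Double DQN update under those assumptions might look like the sketch below; the lane and phase counts, network sizes, negative-total-queue reward, and plain (non-prioritised) replay buffer are all assumptions made for this example.

# Minimal sketch (illustrative only): a Double DQN update for a single
# signalised junction. State = per-lane queue lengths; reward = negative
# total queue. All sizes and names here are assumptions, not the paper's.
import random
from collections import deque

import torch
import torch.nn as nn

NUM_LANES = 8    # assumed number of monitored approach lanes
NUM_PHASES = 4   # assumed number of selectable signal phases (actions)
GAMMA = 0.99     # discount factor

class QNetwork(nn.Module):
    """Small MLP mapping a queue-length vector to per-phase Q-values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_LANES, 64), nn.ReLU(),
            nn.Linear(64, NUM_PHASES),
        )

    def forward(self, x):
        return self.net(x)

online, target = QNetwork(), QNetwork()
target.load_state_dict(online.state_dict())
optimizer = torch.optim.Adam(online.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # plain replay buffer (no prioritisation)

def dqn_update(batch_size=32):
    """One Double DQN step: the online net selects the next action,
    the target net evaluates it, which curbs Q-value overestimation."""
    if len(replay) < batch_size:
        return
    states, actions, rewards, next_states = zip(*random.sample(replay, batch_size))
    s = torch.tensor(states, dtype=torch.float32)
    a = torch.tensor(actions, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(rewards, dtype=torch.float32)
    s2 = torch.tensor(next_states, dtype=torch.float32)
    with torch.no_grad():
        # Non-episodic signal control: no terminal flag, so no (1 - done) mask.
        best = online(s2).argmax(dim=1, keepdim=True)           # selection: online net
        y = r + GAMMA * target(s2).gather(1, best).squeeze(1)   # evaluation: target net
    q = online(s).gather(1, a).squeeze(1)
    loss = nn.functional.mse_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example transition: queues observed before/after holding phase 2 for one
# control interval; the reward is the negative total queue after acting.
before = [3.0, 5.0, 0.0, 1.0, 4.0, 2.0, 0.0, 6.0]
after = [2.0, 4.0, 1.0, 0.0, 3.0, 2.0, 1.0, 5.0]
replay.append((before, 2, -sum(after), after))
dqn_update()  # no-op until the buffer holds at least one full batch

In practice the transitions would come from a microsimulator or detector feed, the target network would be resynchronised with the online network at a fixed interval, and Prioritised Experience Replay would replace the uniform random.sample with priority-weighted sampling plus importance-sampling corrections.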

