Please use this identifier to cite or link to this item: http://hdl.handle.net/10259/9018
Title
Exploiting visual cues for safe and flexible cyber-physical production systems
Author
Published in
Advances in Mechanical Engineering. 2019, V. 11, n. 12
Publisher
SAGE Publications
Publication date
2019-12
ISSN
1687-8140
DOI
10.1177/1687814019897228
Abstract
Human workers are envisioned to work alongside robots and other intelligent factory modules, and to fulfill supervision tasks, in future smart factories. Technological developments in smart factory automation over the last few years have introduced the concept of cyber-physical systems, which has since been extended to cyber-physical production systems. In this context, collaborative robots play a significant role that depends largely on advanced capabilities for collision detection, impedance control, and learning new tasks through artificial intelligence. System components, collaborative robots, and humans need to communicate for collective decision-making. This requires processing shared information while taking into account the available knowledge and reasoning, and it calls for flexible systems that are resilient to real-time dynamic changes on the factory floor as well as within the communication and computer network infrastructure. This article presents an ontology-based approach to safety applications in industrial cyber-physical production systems. A case study of an industrial scenario is presented to validate the approach, in which visual cues are used to detect and react to dynamic changes in real time. Multiple scenarios are tested for simultaneous detection and prioritization, enhancing the learning surface of the intelligent production system with the goal of automating safety-based decisions.
Keywords
Cyber-physical production system
Human-robot collaboration
Smart factory
Social safety
Collaborative robot
Subject
Technology
Mechanical engineering
Publisher's version