<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="static/style.xsl"?><OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd"><responseDate>2026-04-19T12:59:08Z</responseDate><request verb="GetRecord" identifier="oai:riubu.ubu.es:10259/10992" metadataPrefix="oai_dc">https://riubu.ubu.es/oai/request</request><GetRecord><record><header><identifier>oai:riubu.ubu.es:10259/10992</identifier><datestamp>2025-10-29T12:15:14Z</datestamp><setSpec>com_10259_4219</setSpec><setSpec>com_10259_5086</setSpec><setSpec>com_10259_2604</setSpec><setSpec>col_10259_4220</setSpec></header><metadata><oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:doc="http://www.lyncode.com/xoai" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:dc="http://purl.org/dc/elements/1.1/" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:title>Identifying users of immersive virtual-reality serious games through machine-learning techniques</dc:title>
<dc:creator>Miguel Alonso, Inés</dc:creator>
<dc:creator>Rodríguez Diez, Juan José</dc:creator>
<dc:creator>Serrano Mamolar, Ana</dc:creator>
<dc:creator>Bustillo Iglesias, Andrés</dc:creator>
<dc:subject>Virtual Reality</dc:subject>
<dc:subject>Random Forest</dc:subject>
<dc:subject>Head Mounted Display</dc:subject>
<dc:subject>User identification</dc:subject>
<dc:subject>Machine Learning</dc:subject>
<dc:subject>Open-Access Datasets</dc:subject>
<dc:description>User identification is currently an open issue in immersive Virtual Reality (iVR) environments. Three main goals are usually associated with the use of tracking data and Machine Learning (ML) techniques: safeguarding privacy, user authentication, and user-experience customization. However, research to date has only involved very limited recordings of user data (e.g., from a single session and in low-interactivity situations), which are rare in real iVR environments. This paper therefore addresses the research gap between real iVR data and ML techniques for user identification. To do so, a three-session iVR experience of operating a bridge crane is considered: a simple yet highly interactive learning task in which the recorded user performance changes rapidly from one session to the next. The eye, head, and hand movements of 64 users of similar age and with comparable previous experience were recorded while they engaged with the experience. The final raw dataset comprised approximately 50 million data points with 25 attributes, mainly time-series values. Different ML algorithms were then applied to user identification: Decision Tree, Random Forest, XGBoost, k-Nearest Neighbors, Support Vector Machines, and Multilayer Perceptron. The results showed that ensemble learning techniques, particularly Random Forest, were the most suitable solutions according to different measures for predicting user identity. Additionally, the inclusion of both stress and no-stress conditions significantly enhanced model performance, highlighting the importance of data diversity. Temporal segmentation revealed that user identification was slightly more effective during the later phases of the exercise, due to increased individual variability. Finally, a minimum duration of the iVR experience was identified as a requirement to ensure high identification rates.</dc:description>
<dc:description>This study was partially funded through the ACIS project (Reference Number: INVESTUN/21/BU/0002) of the Consejería de Empleo e Industria of the Junta de Castilla y León (Spain); the REMAR Project (Reference Number: CPP2022-009724) supported by the Ministry of Science and Innovation of Spain (MCIN/AEI/10.13039/501100011033) and through "ERDF A way of making Europe" or European Union NextGenerationEU/PRTR funding; and the HumanAid Project (Reference Number: TED2021-129485B-C43) funded through the Spanish Ministry of Science and Innovation and the Ministry of Science, Innovation and Universities (FPU21/01978).</dc:description>
<dc:date>2025-10-24T07:30:48Z</dc:date>
<dc:date>2025-10-24T07:30:48Z</dc:date>
<dc:date>2025-09</dc:date>
<dc:type>info:eu-repo/semantics/article</dc:type>
<dc:type>info:eu-repo/semantics/publishedVersion</dc:type>
<dc:identifier>1359-4338</dc:identifier>
<dc:identifier>https://hdl.handle.net/10259/10992</dc:identifier>
<dc:identifier>10.1007/s10055-025-01232-y</dc:identifier>
<dc:identifier>1434-9957</dc:identifier>
<dc:language>eng</dc:language>
<dc:relation>Virtual Reality. 2025. V. 29. n. 164</dc:relation>
<dc:relation>https://doi.org/10.1007/s10055-025-01232-y</dc:relation>
<dc:rights>Attribution 4.0 International</dc:rights>
<dc:rights>http://creativecommons.org/licenses/by/4.0/</dc:rights>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:format>application/pdf</dc:format>
<dc:publisher>Springer</dc:publisher>
</oai_dc:dc></metadata></record></GetRecord></OAI-PMH>