Generosi Andrea
Research Fellow
Università Politecnica delle Marche
a.generosi@staff.univpm.it
Institutional website
SCOPUS ID: 57201216842
ORCID: 0000-0001-6173-9616
Scientific publications
Abstract: The perception of sound greatly impacts users' emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors' responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Several quantitative indicators validate the research hypothesis: the correlation between jurors' emotional responses and valence values, the accuracy of the jury tests, and the disparities between jurors' questionnaire responses and the emotions measured by facial expression recognition (FER). Specifically, analysis of attention levels across different states reveals a discernible decrease, with 70 percent of jurors exhibiting reduced attention in the 'distracted' state and 62 percent in the 'heavy-eyed' state. Regression analysis, in turn, shows that the correlation between jurors' valence and their choices in the jury test increases when only the data from attentive jurors are considered. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants' reactions to auditory stimuli.
Keywords: affective computing | attention recognition | deep learning | facial expression recognition | jury testing
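For illustration, a minimal sketch of the attention-filtered correlation step described in the abstract above, assuming per-stimulus FER valence values, an attention flag, and juror ratings; the variable names, toy data, and scipy-based computation are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: correlate FER valence with jury-test choices,
# with and without filtering for attentive observations.
# All data here are synthetic; only the computation is illustrated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
valence = rng.uniform(-1, 1, 200)                  # per-stimulus mean FER valence
choice = valence * 0.5 + rng.normal(0, 0.4, 200)   # juror rating (toy data)
attentive = rng.random(200) > 0.3                  # attention flag from FER

r_all, _ = pearsonr(valence, choice)
r_att, _ = pearsonr(valence[attentive], choice[attentive])
print(f"all jurors:     r = {r_all:.2f}")
print(f"attentive only: r = {r_att:.2f}")
```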
Abstract: This study introduces a multimodal approach for enhancing the accuracy of Driver Monitoring Systems (DMS) in detecting driver distraction. By integrating data from vehicle control units with vision-based information, the research aims to address the limitations of current DMS. The experimental setup involves a driving simulator and advanced computer vision and deep learning technologies for facial expression recognition and head rotation analysis. The findings suggest that combining various data types—behavioral, physiological, and emotional—can significantly improve DMS's predictive capability. This research contributes to the development of more sophisticated, adaptive, and real-time systems for improving driver safety and advancing autonomous driving technologies.
Keywords: Attention Recognition | Deep Learning | Driver Monitoring Systems
Abstract: This paper discusses the integration of Industry 4.0 technologies in manufacturing facilities to implement the Smart Factory paradigm, exploiting Cyber-Physical Systems, Internet of Things, Big Data and Cloud Computing as key enabling technologies (KETs). The study aims to define a methodological framework to design and implement connected machines that realize the Smart Factory model by interconnecting the KETs through the pragmatic implementation of digital twins from design to production. The study fills a literature gap by formalizing an approach for studying and implementing digital twins for value creation from a servitization perspective. The approach is illustrated through a successful case study and represents the first results of a long-term research project funded by the Italian Ministry of Economic Development. Future work will expand the approach to more case studies to provide formalized guidelines of general applicability.
Keywords: CPS | Digital Twin | Smart Factory | User Centered Design
Abstract: eXtended Reality (XR) technology can enhance visitors' experience of museums. Because the available XR technologies differ in performance, quality of the experience they provide, and cost, it is helpful to rely on evaluations of the various technologies performed through user studies to select the most suitable ones. This paper presents a set of empirical studies on XR applications for museums aimed at selecting the technologies that best meet visitors' expectations and maximise their willingness to repeat and recommend the experience. The studies provide valuable insights for developing Virtual Museum applications that increase the level of presence and the experience economy.
Keywords: Extended reality | Multisensory experience | Sense of smell | User experience | Virtual museum
Abstract: This paper aims to explore the potential offered by emotion recognition systems to provide a feasible response to the growing need for audience understanding and development in the field of arts organizations. Through an empirical study, it was investigated whether the emotional valence measured on the audience through an emotion recognition system based on facial expression analysis can be used with an experience audit to: (1) support the understanding of the emotional responses of customers toward any clue that characterizes a staged performance; and (2) systematically investigate the customer's overall experience in terms of their overall satisfaction. The study was carried out in the context of opera live shows in the open-air neoclassical theater Arena Sferisterio in Macerata, during 11 opera performances. A total of 132 spectators were involved. Both the emotional valence provided by the considered emotion recognition system and the quantitative data related to customers' satisfaction, collected through a survey, were considered. Results suggest that the collected data can be useful for the artistic director to estimate the audience's overall level of satisfaction and make choices about the specific characteristics of the performance, and that the emotional valence measured on the audience during the show can be used to predict overall customer satisfaction, as measured using traditional self-report methods.
Keywords: artificial intelligence | customer experience | customer satisfaction | emotion recognition | facial expression recognition
Abstract: Several emotion-adaptive system frameworks have been proposed to enable listeners' emotional regulation through music reproduction. However, the majority of these frameworks have been implemented only under in-lab or in-car conditions, in the latter case focusing on improving driving performance. Therefore, to the authors' best knowledge, no research has been conducted for mobility settings such as trains, planes, or yachts. Focusing on this aspect, the proposed approach reports the results obtained from studying the relationship between listeners' induced emotions and music reproduction, exploiting an advanced audio system and an innovative technology for facial expression recognition. Starting from an experiment in a university lab scenario, with 15 listeners, and a yacht cabin scenario, with 11 listeners, participants' emotional variability has been investigated in depth by reproducing 4 audio-enhanced music tracks, to evaluate the listeners' emotional "sensitivity" to music stimuli. The experimental results indicated that, during reproduction in the university lab, listeners' "happiness" and "anger" states were highly affected by the music stimuli, and highlighted a possible relationship between music and listeners' compound emotions. Furthermore, listeners' emotional engagement proved to be more affected by music stimuli in the yacht cabin than in the university lab.
Keywords: Affective Computing | Audio Enhancement | Deep Learning | Facial Emotion Recognition
Abstract: Immersive audio rendering techniques allow for generating a 3D scenario where the listener can perceive sound from all directions. An important aspect of these approaches is the listener's subjective perception: how such systems are perceived from an emotional point of view and how they can influence the listener's mood. In this context, a deep investigation of immersive sound perception is performed, considering subjective perception in terms of elicited emotions. Starting from a 4-channel immersive audio system and an emotion-aware system based on the analysis of the user's facial expressions, several experiments have been performed to investigate the correlation between immersive perception and the listener's emotions.
Keywords: emotion recognition | immersive audio | subjective perception
Abstract: The use of eXtended Reality (XR) technologies, including augmented reality (AR), virtual reality (VR), and mixed reality (MR), has become increasingly popular in museums to enhance the visitor experience. However, the impact of XR technologies on Learning Performance in the context of archeological museums needs to be better understood. This study aims to investigate the relationships between Usability, Presence and Learning Performance by developing XR experiences showcasing archeological artefacts and conducting user testing to evaluate their effectiveness. A laboratory test is conducted to compare a VR application with a mobile AR one, presenting the digital models of five archeological findings. Descriptive statistics are used to compare the two case studies, providing valuable insights into the impact of XR technologies on the visitor experience from a learning perspective. The study confirms that Usability has a more significant effect on learning than Presence and can help designers and museum managers better understand the factors contributing to a successful XR experience. The findings suggest that while Presence is an important factor in improving visitors’ experience, Usability should be the priority when designing XR experiences for museums.
Keywords: Cultural heritage | Technological Benchmarking | XR Technologies
Abstract: This paper introduces a system that enables the collection of relevant data related to the emotional behavior and attention of both students and professors during exams. It exploits facial coding techniques to collect a large amount of data from the automatic analysis of students' and professors' faces, using video analysis, advanced deep learning techniques for gaze tracking, and the technologies and principles of Affective Computing derived from Paul Ekman's research. It provides tools that facilitate the interpretation of the collected data by means of a dashboard. A preliminary experiment has been carried out to investigate whether such a system may help in assessing the evaluation setting and support reflection on evaluation processes in the light of different situations, so as to improve the adoption of inclusive approaches. Results suggest that the information provided by the proposed system can be helpful in assessing the setting and the evaluation process.
Keywords: Affective computing | Deep learning | E-learning | Emotion recognition | Gaze tracking
Abstract: This article introduces, at a conceptual level, a system based on AI technologies able to determine the customer profile, in order to support customer experience design and management according to a customer-centered approach, by extracting information from the video streams provided by the security cameras installed in a store. The system collects customer demographic and behavioral information (e.g., age, gender, time spent in determined areas of the store, time spent interacting with the salesperson) through Deep Learning algorithms, in a completely anonymous way, without saving biometric data. To predict the customer profile based on the collected data, it exploits a Bayesian Belief Network (BBN). The paper describes the overall system architecture, details the method used to model the BBN and reports, through the description of a use case scenario, some examples of insights useful to guide the choice of possible actions to improve the customer experience strategy.
Keywords: Customer experience | Customer profiling | Machine learning | Predictive models | Video analysis
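A minimal sketch of the BBN inference step described in the abstract above, using the open-source pgmpy library as an assumed toolkit; the network structure, variables, and probabilities are invented for illustration and do not reproduce the paper's actual model.

```python
# Hypothetical sketch: a toy Bayesian Belief Network that infers a
# customer profile from demographic and behavioral evidence.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy structure: age band and dwell time both influence the profile.
model = BayesianNetwork([("Age", "Profile"), ("DwellTime", "Profile")])

cpd_age = TabularCPD("Age", 2, [[0.6], [0.4]])          # 0: young, 1: senior
cpd_dwell = TabularCPD("DwellTime", 2, [[0.7], [0.3]])  # 0: short, 1: long
cpd_profile = TabularCPD(
    "Profile", 2,                                       # 0: browser, 1: buyer
    [[0.9, 0.6, 0.5, 0.2],                              # P(browser | Age, Dwell)
     [0.1, 0.4, 0.5, 0.8]],                             # P(buyer   | Age, Dwell)
    evidence=["Age", "DwellTime"], evidence_card=[2, 2],
)
model.add_cpds(cpd_age, cpd_dwell, cpd_profile)
assert model.check_model()

# Infer the profile for a senior customer with a long dwell time.
posterior = VariableElimination(model).query(
    variables=["Profile"], evidence={"Age": 1, "DwellTime": 1}
)
print(posterior)
```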
Abstract: This paper introduces a web-platform system that performs the semi-automatic computation of several risk indices, based on the chosen evaluation method (e.g., RULA—Rapid Upper Limb Assessment, REBA—Rapid Entire Body Assessment, OCRA—OCcupational Repetitive Action), to support ergonomic risk estimation, and provides augmented analytics to proactively improve ergonomic risk monitoring based on the characteristics of workers (e.g., age, gender), working tasks, and environment. It implements a marker-less, low-cost body detection system based on RGB cameras, which exploits the open-source deep learning model CMU (Carnegie Mellon University) from the tf-pose-estimation project, assuring worker privacy and data protection; the system has already been successfully assessed in standard laboratory conditions. The paper provides a full description of the proposed platform and reports the results of validation in a real industrial case study regarding a washing machine assembly line composed of 5 workstations. A total of 15 workers have been involved. Results suggest that the proposed system is able to significantly speed up the ergonomic assessment and to predict angles and perform RULA and OCRA analyses with an accuracy comparable to that obtainable from a manual analysis, even under the unpredictable conditions that can be found in a real working environment.
Keywords: Ergonomics risk assessment | Extended reality | Human-centered manufacturing | Machine learning | Motion capture
Abstract: This study introduces a new operational tool, based on the AEIOU observational framework, to support the design of adaptive human machine interfaces (HMIs) that aim to modify people's behavior and support their choices in order to improve safety, using emotional regulation techniques through the management of environmental characteristics (e.g., temperature and illumination), according to an approach based on the nudging concept within a design thinking process. The proposed approach builds on research in the field of behavioral psychology that has studied the correlations between human emotions and driving behavior, pushing towards the elicitation of those emotions judged most suitable for safe driving. The main objective is to support the ideation of scenarios and/or design features for adaptive HMIs implementing a nudging strategy to increase driving safety. Finally, the results of a collaborative workshop, organized as a case study to collect concept ideas in the context of sports cars, are presented and evaluated to highlight the validity of the proposed methodology, as well as its limitations, due to the need for prototypes to evaluate the actual effectiveness of the presented nudging strategies.
Keywords: adaptive HMI | affective computing | automotive | driving safety | emotion regulation | nudge
Abstract: Industry nowadays must deal with the so-called "fourth industrial revolution", i.e. Industry 4.0. This revolution is based on the introduction of new paradigms in the manufacturing industry such as flexibility, efficiency, safety, digitization, big data analysis and interconnection. However, the integration of human factors is usually not considered, although it is included as one of the paradigms. Among the most overlooked of these human factors are the customization of the worker's user experience and on-board safety. Moreover, the issue of integrating state-of-the-art technologies on legacy machines is of utmost importance, as it can make a considerable difference in the economic and environmental aspects of their management by extending the machine's life cycle. In response to this issue, the Retrofitting paradigm, i.e. the addition of new technologies to legacy machines, has been considered. In this paper we propose a novel modular system architecture for secure authentication and worker log-in/log-out traceability based on face recognition and on state-of-the-art Deep Learning and Computer Vision techniques, such as Convolutional Neural Networks. Starting from the proposed architecture, we developed and tested a device designed to retrofit legacy machines with such capabilities, paying particular attention in the design phase to interface usability, which is little considered in retrofitting applications, along with other Human Factors, despite being one of the pillars of Industry 4.0. The results of this research work showed a dramatic improvement in machine on-board access safety.
Keywords: Face recognition | Industrial Internet of Things | Industry 4.0 | Safety | Smart retrofitting | Usability
Abstract: Nowadays, web designers are forced to have an ever deeper perception of how users approach their products in terms of user experience and usability. Remote Usability Testing (RUT) is the most appropriate tool to assess the usability of web platforms by measuring the level of user attention, satisfaction, and productivity. RUT does not require the physical presence of users and evaluators, but for this very reason it makes data collection more difficult. To simplify data collection and analysis and help RUT moderators collect and analyze users' data in a non-intrusive manner, this research work proposes a low-cost comprehensive framework based on Deep Learning algorithms. The proposed framework, called Miora, employs facial expression recognition, gaze recognition, and analytics algorithms to capture additional information of interest for in-depth usability analysis, such as interactions with the analyzed software. It uses a comprehensive evaluation methodology to elicit information about usability metrics and presents the results in a series of graphs and statistics, so that the moderator can intuitively analyze the different trends related to the KPIs used as usability indicators. To demonstrate how the proposed framework can facilitate the collection of large amounts of data and enable moderators to conduct both remote formative and summative tests more efficiently than traditional lab-based usability testing, two case studies are presented: the analysis of an online shop and of a management platform. The obtained results suggest that the framework can be employed in remote usability testing to conduct both formative and summative tests.
Keywords: affective computing | deep learning | gaze detection | remote usability testing | usability | usability assessment
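A minimal sketch of the kind of per-task KPI aggregation such a dashboard might perform, assuming a per-frame log with task, valence, and gaze columns; the schema and metric choices are illustrative assumptions, not Miora's actual pipeline.

```python
# Hypothetical sketch: aggregate per-frame FER/gaze logs into simple
# per-task usability KPIs. Column names and data are illustrative only.
import pandas as pd

log = pd.DataFrame({
    "task":      ["checkout"] * 3 + ["search"] * 3,
    "valence":   [0.2, -0.1, 0.4, -0.3, -0.2, 0.1],      # FER output per frame
    "on_target": [True, True, False, False, True, True], # gaze on area of interest
})

kpis = log.groupby("task").agg(
    mean_valence=("valence", "mean"),
    gaze_on_target_ratio=("on_target", "mean"),
)
print(kpis)
```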
Abstract: This study aims at comparing three assembly training applications based on different XR technologies characterized by different degrees of immersion (i.e., an MR application based on HoloLens 2, a desktop AR application and a digital handbook visualized on a monitor). A total of 54 subjects, recruited among students and personnel of Università Politecnica delle Marche, have been involved. They were assigned to 3 age- and gender-matched groups. Each group was asked to complete the training related to the assembly of a LEGO commercial set (i.e., LEGO 10593), using one of the three considered applications. The results allow us to observe the effects of immersion on recall performance, assessed in terms of recall completion time, assembly mistakes, picking mistakes and sequence mistakes.
Keywords: Assembly training | Augmented Reality | Immersion | Manufacturing | Recall Performance
Abstract: This paper introduces a low-cost, computationally light, marker-less motion capture system based on the acquisition of frame images through standard RGB cameras. It exploits the open-source deep learning model CMU, from the tf-pose-estimation project. Its numerical accuracy and its usefulness for ergonomic assessment are evaluated through a purpose-designed experiment, performed to: (1) compare the data it provides with those collected from a motion capture gold standard system; (2) compare the RULA scores obtained with its data with those obtained with data provided by the Vicon Nexus system and those estimated through video analysis by a team of three expert ergonomists. Tests have been conducted in standardized laboratory conditions and involved a total of six subjects. Results suggest that the proposed system can predict angles with good consistency and give evidence of the tool's usefulness for ergonomists.
Keywords: Ergonomic risk assessment | Industrial ergonomics | Motion capture | Postural analysis | RULA
Abstract: This article reports the results of research aimed at evaluating the ability of a haptic interface to improve the user experience (UX) with virtual museum systems. In particular, two user studies have been carried out to (1) compare the experience aroused during the manipulation of a 3D printed replica of an artifact with a pen-like stylus with that aroused during the interaction (visual and tactile) with a 3D rendering application using a haptic interface and PC monitor, and (2) compare the users' perceived usability and UX among a traditional mouse-based desktop interface, a haptic interface, and a haptic gamified interface, based on the SUS scale and the AttrakDiff2 questionnaire. A total of 65 people were involved. The considered haptic application is based on the Omega 6 haptic device produced by Force Dimension, and it is a permanent attraction of the Museo Archeologico Nazionale delle Marche. Results suggest that the proposed haptic interface is suitable for people who commonly use mouse-based computer interaction but have no previous experience with haptic systems, and they provide some insights useful to better understand the role of haptic feedback and gamification in enhancing UX with virtual museums and to guide the development of similar applications in the future.
Keywords: haptic interface | user experience | Virtual museum | virtual reality
Abstract: Driver behaviour recognition is of paramount importance for in-car automation assistance. It is widely recognized that not only attentional states but also emotional ones have an impact on the safety of driving behaviour. This research work proposes an emotion-aware in-car architecture in which the driver's emotions can be mapped to the vehicle dynamics, investigating the correlations between negative emotional states and driving performance, and suggesting a system to regulate the driver's engagement through a unique user experience (e.g. using music, LED lighting) in the car cabin. The relationship between altered emotional states induced through auditory stimuli and vehicle dynamics is investigated in a driving simulator. The results confirm the need for both types of information to improve the robustness of the driver state recognition function and open up the possibility that auditory stimuli can, to some extent, modify driving performance.
Keywords: Driver monitoring system | Emotion recognition | Facial expression recognition
Abstract: This paper introduces an e-learning platform for the management of courses based on MOOCs, able to continuously monitor students' behavior through facial coding techniques, with a low client-side computational effort, and to provide useful insights for the instructor. The system exploits the most recent developments in Deep Learning and Computer Vision for Affective Computing, in compliance with the European GDPR. Taking as input the video captured by the webcam of the device used to attend the course, it: (1) performs continuous student authentication based on face recognition, (2) monitors the student's level of attention through head orientation tracking and gaze detection analysis, and (3) estimates the student's emotions during course attendance. The paper describes the overall system design and reports the results of a preliminary survey, which involved a total of 14 subjects, aimed at investigating user acceptance in terms of intention to continue using such a system.
Keywords: Affective Computing | Deep Learning | E-learning | Facial Coding | Facial Recognition
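A minimal sketch of the head-orientation side of the attention monitoring described above, assuming per-frame yaw/pitch angles already estimated by a head pose model; the thresholds and the smoothing window are illustrative assumptions.

```python
# Hypothetical sketch: flag inattentive frames from head pose angles and
# smooth the result over a rolling window. Thresholds are illustrative.
from collections import deque

YAW_MAX, PITCH_MAX = 25.0, 20.0   # assumed max deviation (degrees) from screen

def attentive(yaw: float, pitch: float) -> bool:
    """A frame counts as attentive if the head roughly faces the screen."""
    return abs(yaw) <= YAW_MAX and abs(pitch) <= PITCH_MAX

def attention_level(frames, window: int = 30):
    """Fraction of attentive frames over a rolling window (~1 s at 30 fps)."""
    recent = deque(maxlen=window)
    for yaw, pitch in frames:
        recent.append(attentive(yaw, pitch))
        yield sum(recent) / len(recent)

# Toy stream: the student looks away from the screen halfway through.
stream = [(2.0, -3.0)] * 15 + [(40.0, 5.0)] * 15
for level in attention_level(stream):
    pass
print(f"final attention level: {level:.2f}")
```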
Abstract: Nowadays, smartphones and laptops equipped with cameras have become an integral part of our daily lives. The pervasive use of cameras enables the collection of an enormous amount of data, which can be easily extracted through video image processing. This opens up the possibility of using technologies that until now had been restricted to laboratories, such as eye-tracking and emotion analysis systems, to analyze users' behavior in the wild, during the interaction with websites. In this context, this paper introduces a toolkit that takes advantage of deep learning algorithms to monitor the user's behavior and emotions, through the acquisition of facial expressions and eye gaze from the video captured by the webcam of the device used to navigate the web, in compliance with the EU General Data Protection Regulation (GDPR). Collected data are potentially useful to support user experience assessment of web-based applications in the wild and to improve the effectiveness of e-commerce recommendation systems.
Keywords: Affective Computing | Convolutional Neural Networks | Deep Learning | Gaze detection | User Experience
Abstract: The paper describes the conceptual model of an emotion-aware car interface able to: map both the driver's cognitive and emotional states with the vehicle dynamics; adapt the level of automation or support the decision-making process if emotions negatively affecting driving performance are detected; and ensure emotion regulation and provide a unique user experience by creating a more engaging atmosphere (e.g. music, LED lighting) in the car cabin. To enable emotion detection, it implements a low-cost emotion recognition system able to recognize Ekman's universal emotions by analyzing the driver's facial expressions from streaming video. A preliminary test was conducted to determine the effectiveness of the proposed emotion recognition system in a driving context. Results evidenced that the proposed system is capable of correctly classifying drivers' emotions in a driving simulation context.
Keywords: Driver Monitoring System | Emotion recognition | Facial expression recognition
Abstract: This paper presents software based on an innovative Convolutional Neural Network model to recognize the six Ekman universal emotions from photos of human faces captured in the wild. The CNN was trained using three different pre-labeled datasets, merged after making them homogeneous. A comparison among different types of CNN architectures, implemented using the Keras framework for Python, is proposed, and the evaluation results are presented.
Keywords: convolutional neural network | deep learning | emotion recognition
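For illustration, a minimal Keras model in the spirit of the architectures the paper compares; the layer sizes and the 48x48 grayscale input are assumptions, since the paper's actual architectures are not reproduced here.

```python
# Hypothetical sketch: a small CNN classifying face crops into the six
# Ekman universal emotions. Layer sizes and input shape are assumed.
from tensorflow import keras
from tensorflow.keras import layers

def build_emotion_cnn(num_classes: int = 6) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=(48, 48, 1)),          # grayscale face crop
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_emotion_cnn()
model.summary()
```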
Abstract: This paper introduces a new system capable of adaptively managing multimedia contents (e.g. music, video clips, etc.) and lighting scenarios based on the detected user's emotional state. The system captures the emotion from the user's facial expression and maps it into a 2D valence-arousal space, in which the multimedia content is also mapped, and matches both with a lighting color. Results of preliminary tests suggest that the proposed system is able to detect the user's emotional state and manage appropriate music and light colors in a symbiotic way.
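A minimal sketch of the valence-arousal matching step described above, with emotion coordinates loosely placed on Russell's circumplex and a nearest-neighbour pick of a lighting preset; all coordinates and presets are illustrative assumptions, not the paper's mapping.

```python
# Hypothetical sketch: place the detected emotion in valence-arousal
# space and pick the nearest lighting preset. Values are illustrative,
# loosely inspired by Russell's circumplex model of affect.
import math

EMOTION_VA = {                       # emotion -> (valence, arousal) in [-1, 1]
    "happiness": (0.8, 0.5),
    "sadness":   (-0.7, -0.4),
    "anger":     (-0.6, 0.7),
    "fear":      (-0.6, 0.5),
    "surprise":  (0.3, 0.8),
    "disgust":   (-0.7, 0.2),
}

LIGHT_PRESETS = {                    # preset -> (valence, arousal) target
    "warm amber":   (0.7, -0.2),
    "bright white": (0.5, 0.7),
    "soft blue":    (-0.2, -0.6),
    "deep red":     (-0.5, 0.6),
}

def pick_light(emotion: str) -> str:
    """Nearest lighting preset to the emotion's valence-arousal point."""
    v, a = EMOTION_VA[emotion]
    return min(LIGHT_PRESETS,
               key=lambda p: math.dist((v, a), LIGHT_PRESETS[p]))

print(pick_light("happiness"))       # -> "bright white" with these values
```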
Abstract: MoBeTrack (Mobile Behaviour Tracking) is a toolkit for the automated collection of the data necessary to support User Experience (UX) assessment of mobile applications. In contrast to existing frameworks, it is able to collect user demographic information (i.e., age and gender), trace any user interaction, and recognize the user's emotions during the use of an application. An SDK for iOS allows the toolkit to be easily embedded in any mobile application in a flexible and scalable way.
Abstract: The present work introduces an emotional tracking system to monitor the Shopping Experience at different touchpoints in a retail store, based on the elaboration of information extracted from biometric data and facial expressions. A preliminary test has been carried out to determine the system's effectiveness in a real context with regard to emotion detection and the discrimination of customers' sex, age and ethnicity. To this end, the information provided by the system has been compared with the results of a traditional video analysis. Results suggest that the proposed system can be effectively used to support the analysis of customer experience in a retail context.
Keywords: customer experience | emotion analysis | emotion tracking | face recognition | facial expression | shopping experience
Abstract: The present work introduces an emotional tracking system to monitor the Shopping Experience at different touchpoints in a store, based on the elaboration of information extracted from biometric data and facial expressions. Preliminary tests suggest that the proposed system can be effectively used in a retail context.
Keywords: Customer experience | Emotion analysis | Emotion tracking | Shopping experience
Abstract: This research aims to develop a system that examines and reacts to the changing behaviors and emotions of individuals in order to improve their shopping experience. The system is able to track emotions in real time at different touchpoints in a store and to control a set of networked devices to configure the sensing space, making all provided services responsive to the customers' needs. This paper describes the general approach adopted to design the overall system and illustrates in detail the module prototyped to understand users' emotions through the analysis of facial expressions.
Keywords: Context-aware computing | Emotion recognition | Methods for CX | Shopping experience
Abstract: Spatial Augmented Reality (SAR) represents a key technology for the development of smart manufacturing, as it is barrier-free, does not require the use of Head Mounted Displays or any other wearable devices, and fits most industrial constraints. The paper presents a novel SAR-based system to support manual work in future smart factories. It conveys technical instructions during assembly, provides alerts in case of risks to human safety, and identifies which postures can lead to musculoskeletal problems if repeated. Experiments with 30 participants demonstrated the effectiveness of the proposed SAR-based system as compared to a LED monitor-based system, and its overall usability. The results proved that SAR technology improves the operators' performance with respect to a LED monitor-based system and that users accept it well. We found that SAR is more effective for difficult tasks than for simple ones.
Keywords: Augmented Reality | Ergonomic assessment | In-Situ projection | Musculoskeletal Disorders evaluation | task guidance
Abstract: This study describes a User-Centered approach to the design of a User Interface (UI) to support the daily activities of people with dementia. The interface is the main hub of a home automation system able to monitor the house and remind users of relevant information when they approach the door to leave the home. In order to involve end users in UI evaluation at the end of the first stage of the design process, a specific experimental protocol, based on task analysis, structured interviews, and behavioral observation, is defined. It allows user-machine interaction to be evaluated considering aspects related both to the adequacy of product features and to the user's subjective opinion and behavior. A disposable high-fidelity prototype of the UI was realized using a touch screen tablet. Two tests, dedicated respectively to verifying the adequacy of the icons and the understandability of the interface, were performed. A total of 20 subjects with different MMSE scores were involved. Results show that people with low and medium dementia are able to understand and use the touch interface, and they provide some suggestions about how the GUI can be improved. Finally, some approaches to support future development activities and the next usability tests are discussed.
Keywords: Assistive Technology | Dementia | Human-Computer Interaction | Usability Evaluation