Ulrich Luca
Fixed-term Researcher (RTD-A)
Politecnico di Torino
luca.ulrich@polito.it
Institutional website
Scopus ID: 57195305974
ORCID: 0000-0001-8407-0660
Scientific publications
Abstract: In the last couple of decades, research on 3D facial expression recognition has been fostered by the creation of tailored databases containing prototypical expressions of different individuals and by advances in cost-effective acquisition technologies. However, most of the currently available databases consist of exaggerated facial expressions, due to the imitation principle on which they rely. This makes these databases only partially employable for real-world applications such as human-computer interaction for smart products and environments, health, and Industry 4.0, as algorithms learn from these 'inflated' data, which do not meet ecological validity requirements. In this work, we present two novel 2D + 3D spontaneous facial expression databases of young adults with different geographical origins, in which emotions have been evoked by affective images from the well-established IAPS and GAPED databases and verified with participants' self-reports. To the best of our knowledge, these are the first three-dimensional facial databases with emotions elicited by validated affective stimuli.
Keywords: 3D facial expression | Affective database | Ecological validity | Facial expression recognition | Human-computer interaction | Spontaneous expressions
Abstract: The use of CAD and 3D printing of surgical guides (SGs) for osteotomies is a widely developed practice in orthopaedic surgery, and particularly in maxillo-facial interventions, but validation studies are rare in the literature. The present study defines a methodology to validate SGs dimensionally and mechanically through geometrical analysis, tensile testing, contact simulations, and abrasion testing. Distortions between the 3D-printed SGs and the CAD model are quantified, and an average deviation error is obtained for each step of the production process. The mechanical analysis identifies a way of applying the load on the SG to measure its equivalent linear stiffness (N/mm), maximum displacement (mm), and corresponding tolerable load (N) while varying some dimensional parameters. The stress state was assessed by finite element method (FEM) analysis, and the numerical results were then compared with experimental tensile tests: stiffness, maximum displacement, and the corresponding loads were evaluated. The distribution of contact pressure on soft tissues was obtained numerically by FEM analysis. Finally, an ad hoc machine was built to engrave discoidal specimens under typical operating-room conditions. The methodology has been validated on 11 fibular and mandibular SG specimens, reporting the results obtained at each step of the procedure.
Keywords: CAD | Cutting guides | FEM | Maxillo-facial surgery | Surgical guides
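The equivalent linear stiffness (N/mm) mentioned above is the slope of the linear portion of a force-displacement curve. A minimal sketch of that estimate on synthetic tensile-test data; the linear fraction of the curve and all values are illustrative assumptions:

```python
# Hypothetical sketch: estimating equivalent linear stiffness (N/mm) from a
# tensile-test force-displacement curve, assuming the initial region is linear.
import numpy as np

def linear_stiffness(displacement_mm, force_n, linear_fraction=0.4):
    """Fit the initial (assumed linear) portion of the curve; the slope is the stiffness."""
    n = max(2, int(len(force_n) * linear_fraction))
    slope, _ = np.polyfit(displacement_mm[:n], force_n[:n], deg=1)
    return slope  # N/mm

# Example with synthetic data: a 950 N/mm specimen with measurement noise.
disp = np.linspace(0, 1.5, 100)
force = 950.0 * disp + np.random.normal(0, 5.0, disp.size)
print(f"Estimated stiffness: {linear_stiffness(disp, force):.0f} N/mm")
print(f"Max displacement: {disp.max():.2f} mm at load {force.max():.0f} N")
```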
Abstract: In orthopedic and maxillofacial surgery, there is a growing use of augmented reality (AR) as a technology to enhance the surgeon's visual perception in the operating room. The objective of this review is to analyze the state of the art in the use of AR for osteotomies, highlighting the advantages and the best-known open issues to be addressed in future research. The Scopus, Web of Science, PubMed, and IEEE Xplore databases were explored with a keyword search, setting the time limits from January 2017 to January 2023, inclusive. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were followed in this review, focusing on anatomical districts, real-virtual environment interaction, and the advantages and limitations of existing AR-based applications. Forty-nine articles met the inclusion criteria and were selected for the final analysis. For the sake of clarity, works were grouped according to the anatomical district; the real-virtual environment interaction methodology was also reported, as well as information regarding accuracy assessment. A Quality Function Deployment (QFD) was used to assess AR-based solutions against the more traditional freehand (FH) and Patient-Specific Template (PST) approaches. Finally, considering the highlighted advantages and limitations of this technology, some suggestions are drawn to foster the adoption of AR-based solutions in osteotomy treatment. AR was found to meet surgeons' needs better than the other, more traditional approaches. Among the advantages that emerged, AR can improve surgical field accessibility, offer more flexible solutions, and lower the management effort. Nonetheless, future research should address some well-known issues, including calibration time, tracking robustness, and HMD discomfort.
Keywords: Augmented reality | Cutting guides | Maxillofacial surgery | Mixed reality | Orthopedic surgery | Osteotomies | Patient specific templates | Surgical guides
Abstract: The metaverse, powered by XR technologies, enables human augmentation by enhancing physical, cognitive, and sensory capabilities. Cultural heritage sees the metaverse as a vehicle for expression and exploration, providing new methods for heritage enjoyment and preservation. This article proposes a metaverse application, inspired by the events of the Italian Resistance, that promotes interactions between multiple users in an immersive VR experience while safeguarding intangible cultural assets according to an edutainment approach. The virtual environment, based on Ivrea's town hall square, provides in-depth information about the partisans' life and the historical value of their actions for the city. Furthermore, the application allows users to meet in the same virtual place and engage with one another in real time through the Spatial SDK. Before the public presentation, a heterogeneous group of thirty users underwent usability and engagement tests to assess the experience on both VR headsets and smartphones. The tests revealed statistically significant evidence of a genuine difference in users' perceptions of usability and engagement across devices and types of interaction. This study highlights the effectiveness of adopting XR as a supporting technology to complement the real experience in cultural heritage valorization.
Keywords: cultural heritage | digital heritage | edutainment | metaverse | virtual reality
Abstract: Developing great products is not simple; users want products that can constantly adjust to their needs. The product development process should consider not only users' requirements and wishes but also their perceptions and emotions during and after the human-product interaction. Traditional studies have used self-report methods to study users' emotions; however, technological advances are making other methods capable of measuring respondents' behavior. Electroencephalography (EEG), a technique for recording and interpreting the brain's electrical activity, is becoming a valid tool to assess users' emotional states. This study explores EEG as a method to interpret emotions. To do this, we created three different VR scenarios characterized by different interior designs and automatic chromatic variations as stimuli; the research analyzes whether changes in colors and scenarios are reflected in participants' emotional responses, specifically in valence, arousal, and engagement. The findings show that EEG results are a valid aid to emotion interpretation, that color variation might influence users' emotions, and that emotional responses were more evident when changing between scenarios. We expect this study to provide more information on the potential of physiological methods for exploring users' emotions during product design and development.
Keywords: EEG | Emotional design | Engagement | Product development
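The abstract does not specify the EEG formulas used, so the sketch below uses common literature indicators as stand-ins: engagement as beta/(alpha + theta) (Pope et al.), valence as frontal alpha asymmetry, and arousal as a beta/alpha ratio. Channel choice, sampling rate, and frequency bands are assumptions:

```python
# A minimal sketch of literature EEG indicators (not necessarily the study's
# exact formulas). Bands: theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Integrate the Welch PSD over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def indicators(f3, f4, fs=256):
    """Simplified: theta/beta taken from F4 only; valence uses F3/F4 asymmetry."""
    theta = band_power(f4, fs, 4, 8)
    alpha_f3 = band_power(f3, fs, 8, 13)
    alpha_f4 = band_power(f4, fs, 8, 13)
    beta = band_power(f4, fs, 13, 30)
    engagement = beta / (alpha_f4 + theta)        # Pope et al. engagement index
    valence = np.log(alpha_f4) - np.log(alpha_f3)  # frontal alpha asymmetry
    arousal = beta / alpha_f4
    return valence, arousal, engagement

rng = np.random.default_rng(0)
f3, f4 = rng.standard_normal((2, 10 * 256))  # 10 s of synthetic data per channel
print(indicators(f3, f4))
```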
Abstract: The evaluation of hip implantation success remains one of the most relevant problems in orthopaedics. Several factors can cause implant failure, e.g., aseptic loosening and dislocation of the prosthetic joint due to implant impingement. Following a total hip arthroplasty, it is fundamental that the orthopaedist can evaluate the possible risk factors that could lead to dislocation or, in the worst cases, to implant failure. A procedure has been developed to evaluate the Range of Movement (ROM) of the implanted prosthesis, to predict whether the implant is correctly positioned or will be prone to dislocation or material wear due to the malpositioning of its components. Building on a previously patented methodology consisting in the 3D reconstruction and movement simulation of the hip joint, this work provides a more effective visualization of the simulation results through Mixed Reality (MR). The use of MR for the representation of hip kinematics and implant position can give the orthopaedic surgeon a deeper understanding of the orientation and position of the implanted components, as well as the consequences of their placement, while looking directly at the patient. To this end, an anchoring system based on a body-tracking recognition library was developed, so that both completely automatic and human-assisted options are available without additional markers or sensors. An Augmented Reality (AR) prototype has been developed in Unity 3D and used on HoloLens 2, integrating the human-assisted anchoring option.
Keywords: Computer-aided surgery | HoloLens 2 | Mixed reality | THA assessment | Total hip arthroplasty
Abstract: In the context of human–computer interaction (HCI), understanding user engagement (UE) while interacting with a product or service can provide valuable information for enhancing the design process. UE has been a priority research theme within HCI, as it assesses the user experience by studying the individual's behavioral response to a stimulus. Many studies looking to quantify UE are available; however, most use self-report methods that rely only on participants' answers. This study explores a non-traditional method, electroencephalography, to analyze users' engagement while interacting with an advergame, an interactive form of advertising in video games. We aim to understand whether a more interactive type of advertising enhances UE and whether, at the same time, it influences the user's purchase intention (UPI). To do this, we computed and compared UE during the interaction with an advergame and with a conventional TV commercial while measuring the participants' brain activity. After the interaction with both types of advertising, the UPI was also evaluated. The findings demonstrate that a more interactive advertisement increased the participants' UE and that, in most cases, a UE increment positively influenced the UPI. This study shows the potential of physiological feedback applications to explore users' perceptions during and after human–product interaction, and how physiological methods can be used alongside traditional ones to enhance UE analysis, providing helpful information about the advantages of engagement measurement in HCI applications.
Keywords: advergames | EEG | purchase intention | user engagement
Abstract: The current study presents a multi-task end-to-end deep learning model for real-time blood accumulation detection and tool semantic segmentation from laparoscopic surgery video. Intraoperative bleeding is one of the most problematic aspects of laparoscopic surgery: it is challenging to control and limits the visibility of the surgical site, so prompt treatment is required to avoid undesirable outcomes. The system exploits a shared backbone based on the encoder of the U-Net architecture and two separate branches to classify the blood accumulation event and to output the segmentation map, respectively. Our main contribution is an efficient multi-task approach that achieved satisfactory results during testing on surgical videos, although trained only with RGB images and no additional information. The proposed multi-task convolutional neural network did not employ any pre- or post-processing steps. It achieved a Dice Score of 81.89% for the semantic segmentation task and an accuracy of 90.63% for the event detection task. The results demonstrate that the concurrent tasks were properly combined, since the features extracted by the common backbone proved beneficial for both tool segmentation and event detection. Indeed, active bleeding usually happens when one of the instruments closes on or interacts with anatomical tissues, and it decreases when the aspirator begins to remove the accumulated blood. Even if different aspects of the presented methodology could be improved, this work represents a preliminary step toward an end-to-end multi-task deep learning model for real-time video understanding.
Keywords: bleeding detection | CNN | laparoscopic surgery | multi-task convolutional neural network | semantic segmentation
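A simplified sketch of the architecture pattern described above: a shared U-Net-style encoder feeding a classification head for the blood-accumulation event and a decoder with skip connections for the segmentation map. Input resolution, filter counts, and losses are illustrative assumptions, not the authors' exact network:

```python
# Multi-task CNN sketch: shared encoder, classification head + segmentation decoder.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = layers.Input(shape=(256, 256, 3))       # RGB laparoscopic frame
e1 = conv_block(inputs, 32)
p1 = layers.MaxPooling2D()(e1)
e2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D()(e2)
bottleneck = conv_block(p2, 128)                 # shared features

# Branch 1: event classification (blood accumulation present / absent)
c = layers.GlobalAveragePooling2D()(bottleneck)
event = layers.Dense(1, activation="sigmoid", name="event")(c)

# Branch 2: segmentation decoder with skip connections
d2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(bottleneck)
d2 = conv_block(layers.Concatenate()([d2, e2]), 64)
d1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(d2)
d1 = conv_block(layers.Concatenate()([d1, e1]), 32)
mask = layers.Conv2D(1, 1, activation="sigmoid", name="mask")(d1)

model = Model(inputs, [event, mask])
model.compile(optimizer="adam",
              loss={"event": "binary_crossentropy", "mask": "binary_crossentropy"})
model.summary()
```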
Abstract: Much cultural promotion and dissemination is nowadays performed through the digitization of heritage sites and museums, a necessary requirement to meet the new needs of the public. Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR) have the potential to improve the experience quality and educational effect of these sites by stimulating users' senses in a more natural and vivid way. In this respect, head-mounted display (HMD) devices allow visitors to enhance the experience of cultural sites by digitizing information and integrating additional virtual cues about cultural artifacts, resulting in a more immersive experience that engages the visitor both physically and emotionally. This study contributes to the development and incorporation of AR, MR, and VR applications in the cultural heritage domain by providing an overview of relevant studies utilizing fully immersive systems, such as headsets and CAVE systems, emphasizing the advantages they bring compared to handheld devices. We propose a framework to identify the key features of headset-based Extended Reality (XR) technologies used in the cultural heritage domain that boost immersion, sense of presence, and agency. Furthermore, we highlight core characteristics that favor the adoption of these systems over more traditional solutions (e.g., handheld devices), as well as unsolved issues that must be addressed to improve guests' experience and appreciation of cultural heritage. An extensive search of the Google Scholar, Scopus, IEEE Xplore, ACM Digital Library, and Wiley Online Library databases was conducted, including papers published from January 2018 to September 2022. To improve review reporting, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used. Sixty-five papers met the inclusion criteria and were classified depending on the study's purpose: education, entertainment, edutainment, touristic guidance systems, accessibility, visitor profiling, and management. Immersive cultural heritage systems allow visitors to feel completely immersed and present in the virtual environment, providing a stimulating and educational cultural experience that can improve the quality and learning outcomes of cultural visits. Nonetheless, the analyzed studies revealed some limitations that must be faced to give further impulse to the adoption of these technologies in the cultural heritage domain.
Keywords: Augmented reality (AR) | Cultural heritage | Digital heritage | Head-mounted display (HMD) | Mixed reality (MR) | Virtual reality (VR)
Abstract: Background: Addressing intraoperative bleeding remains a significant challenge in the field of robotic surgery. This research endeavors to pioneer a groundbreaking solution utilizing convolutional neural networks (CNNs). The objective is to establish a system capable of forecasting instances of intraoperative bleeding during robot-assisted radical prostatectomy (RARP) and promptly notify the surgeon about bleeding risks. Methods: To achieve this, a multi-task learning (MTL) CNN was introduced, leveraging a modified version of the U-Net architecture. The aim was to categorize video input as either “absence of blood accumulation” (0) or “presence of blood accumulation” (1). To facilitate seamless interaction with the neural networks, the Bleeding Artificial Intelligence-based Detector (BLAIR) software was created using the Python Keras API and built upon the PyQT framework. A subsequent clinical assessment of BLAIR’s efficacy was performed, comparing its bleeding identification performance against that of a urologist. Various perioperative variables were also gathered. For optimal MTL-CNN training parameterization, a multi-task loss function was adopted to enhance the accuracy of event detection by taking advantage of surgical tools’ semantic segmentation. Additionally, the Multiple Correspondence Analysis (MCA) approach was employed to assess software performance. Results: The MTL-CNN demonstrated a remarkable event recognition accuracy of 90.63%. When evaluating BLAIR’s predictive ability and its capacity to pre-warn surgeons of potential bleeding incidents, the density plot highlighted a striking similarity between BLAIR and human assessments. In fact, BLAIR exhibited a faster response. Notably, the MCA analysis revealed no discernible distinction between the software and human performance in accurately identifying instances of bleeding. Conclusion: The BLAIR software proved its competence by achieving over 90% accuracy in predicting bleeding events during RARP. This accomplishment underscores the potential of AI to assist surgeons during interventions. This study exemplifies the positive impact AI applications can have on surgical procedures.
Keywords: artificial intelligence | complications | prostate cancer | robotics
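The multi-task loss function mentioned above couples the two objectives. A hedged sketch, assuming a weighted sum of binary cross-entropy for event detection and a Dice term for the segmentation branch; the weights and the exact terms are assumptions, not the paper's parameterization:

```python
# Illustrative multi-task loss: weighted BCE (event) + Dice (segmentation).
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-6):
    """1 - Dice coefficient on soft masks."""
    inter = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def multi_task_loss(event_true, event_pred, mask_true, mask_pred,
                    lambda_event=1.0, lambda_seg=1.0):
    bce = tf.keras.losses.binary_crossentropy(event_true, event_pred)
    return (lambda_event * tf.reduce_mean(bce)
            + lambda_seg * dice_loss(mask_true, mask_pred))
```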
Abstract: In the last decade, museums and exhibitions have benefited from the advances in Virtual Reality technologies to create complementary virtual elements to the traditional visit. The aim is to make the collections more engaging, interactive, comprehensible and accessible. Also, the studies regarding users’ and visitors’ engagement suggest that the real affective state cannot be fully assessed with self-assessment techniques and that other physiological techniques, such as EEG, should be adopted to gain a more unbiased and mature understanding of their feelings. With the aim of contributing to bridging this knowledge gap, this work proposes to adopt literature EEG-based indicators (valence, arousal, engagement) to analyze the affective state of 95 visitors interacting physically or virtually (in a VR environment) with five handicraft objects belonging to the permanent collection of the Museo dell’Artigianato Valdostano di Tradizione, which is a traditional craftsmanship museum in the Valle d’Aosta region. Extreme Gradient Boosting (XGBoost) was adopted to classify the obtained engagement measures, which were labeled according to questionnaire replies. EEG analysis played a fundamental role in understanding the cognitive and emotional processes underlying immersive experiences, highlighting the potential of VR technologies in enhancing participants’ cognitive engagement. The results indicate that EEG-based indicators have common trends with self-assessment, suggesting that their use as ‘the ground truth of emotion’ is a viable option.
Keywords: craftsmanship | cultural heritage | EEG | user engagement | Virtual Reality | XGBoost
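A minimal sketch of the classification step, assuming one EEG-derived feature vector (valence, arousal, engagement) per visitor and binary engagement labels derived from the questionnaires; features, labels, and hyperparameters are illustrative:

```python
# Illustrative XGBoost classification of EEG-based engagement indicators.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(95, 3)          # [valence, arousal, engagement] per visitor
y = np.random.randint(0, 2, 95)    # questionnaire-derived engagement label

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```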
Abstract: Facial appearance is a prominent feature in analyzing several aspects, e.g., aesthetics and the expression of emotions, and face analysis is crucial in many fields. Face analysis requires measurements that can be performed with different technologies and typically relies on landmark identification. Recently, low-cost consumer-grade 3D cameras have entered the market, enabling a wider range of applications at affordable cost with nominally adequate performance. Such novel cameras must be thoroughly characterized metrologically to guarantee this performance. Cameras are calibrated following a standard general-purpose procedure; however, the specificity of facial measurements requires a task-based metrological characterization that includes the typical influence factors. This work outlines a methodology for the task-based metrological characterization of low-cost 3D cameras for facial analysis, consisting of: influence factor identification by ANOVA, assessment of the related uncertainty contributions, uncertainty propagation, and landmarking uncertainty estimation. The proposed methodology is then demonstrated on a state-of-the-art consumer-grade 3D camera available on the market.
Keywords: Depth cameras | Face analysis | Human-machine interaction | Machine vision | Measurement uncertainty | Soft tissue landmarks
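Two of the methodology's steps can be illustrated concretely: a one-way ANOVA to test whether an influence factor (here, a hypothetical camera-to-subject distance) affects a landmark distance measurement, and a GUM-style combination of uncertainty contributions in quadrature. All values are synthetic:

```python
# Illustrative ANOVA on an influence factor + quadrature uncertainty combination.
import numpy as np
from scipy.stats import f_oneway

# Landmark-distance measurements (mm) at three hypothetical camera distances
level_a, level_b, level_c = np.random.normal([30.0, 30.2, 30.5], 0.3, (50, 3)).T
f_stat, p_value = f_oneway(level_a, level_b, level_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Combined standard uncertainty: root-sum-of-squares of the contributions
contributions_mm = np.array([0.15, 0.08, 0.05])  # e.g., distance, pose, noise
u_combined = np.sqrt(np.sum(contributions_mm ** 2))
print(f"Combined standard uncertainty: {u_combined:.3f} mm")
```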
Abstract: In the era of the 'metaverse', virtual environments are gaining popularity among new multimedia content and are also recognized as a valuable means to deliver emotional content. This is favoured by the cost reduction, availability, and end-user acceptance of virtual reality technology. Creating effective virtual environments can be achieved by exploiting several opportunities: creating artificial worlds able to generate different stories, mixing sensory cues, and making the whole experience interactive. The design space for creating emotional virtual environments is ample, and no clear idea of how to integrate the various components exists. This paper discusses how to combine multiple design elements to elicit five distinct emotions. We developed and tested two scenarios per emotion. We present the methodology, the development of the case studies, and the results of the testing.
Keywords: Affective Virtual Reality | Design Methodology | Emotions | Metaverse | Virtual Reality
Abstract: Background: Augmented Reality (AR) is an innovative technology to improve data visualization and strengthen human perception. Among Human–Machine Interaction (HMI) domains, medicine can benefit most from the adoption of these digital technologies. In this perspective, the literature on AR-based orthopedic surgery techniques was evaluated, focusing on the limitations and challenges of AR-based healthcare applications, to support the research and development of further studies. Methods: Studies published from January 2018 to December 2021 were analyzed after a comprehensive search of the PubMed, Google Scholar, Scopus, IEEE Xplore, Science Direct, and Wiley Online Library databases. To improve review reporting, the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used. Results: Sixty-two articles met the inclusion criteria and were categorized according to the purpose of the study (intraoperative, training, rehabilitation) and to the surgical procedure involved. Conclusions: AR has the potential to improve orthopedic training and practice by providing an increasingly human-centered clinical approach. This review can direct further research toward problems related to hardware limitations, the lack of accurate registration and tracking systems, and the absence of security protocols.
Keywords: augmented reality | digital health | HoloLens | Human–Computer Interaction (HCI) | Human–Machine Interaction (HMI) | intraoperative | medical training | mixed reality | rehabilitation
Abstract: Following an oncological resection or trauma, it may be necessary to reconstruct the normal anatomical and functional mandible structures to ensure the effective and complete social reintegration of patients. In most surgical procedures, reconstruction of the mandibular shape and its occlusal relationship is performed through the free fibula flap, using a surgical guide that allows the surgeon to easily identify the location and orientation of the cutting plane. In the present work, we present a Mixed Reality (MR)-based solution to support professionals in surgical guide positioning. The proposed solution, through a Head-Mounted Display (HMD) such as the HoloLens 2, visualizes a 3D virtual model of the surgical guide positioned over the patient's real fibula, in the correct position as identified by the medical team before the procedure. The professional wearing the HMD is then assisted in positioning the real guide over the virtual one by our solution, which tracks the real guide during the whole process and computes its distance from the final position. The assessment results highlight that Mixed Reality is an eligible technology to support surgeons, combining the usability of the device with improved accuracy in fibula flap removal surgery.
Keywords: HoloLens 2 | mandibular reconstruction | maxillofacial surgery | mixed reality | surgical guide
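The distance computed while tracking the guide can be expressed as a pose error between the tracked guide and its planned position. A small geometric sketch, assuming both poses are given as 4x4 homogeneous transforms; the representation and values are illustrative:

```python
# Translation and rotation error between a tracked pose and a planned pose.
import numpy as np

def pose_error(T_tracked, T_planned):
    """Return (translation error in mm, rotation error in degrees)."""
    t_err = np.linalg.norm(T_tracked[:3, 3] - T_planned[:3, 3])
    R_rel = T_tracked[:3, :3].T @ T_planned[:3, :3]
    # Rotation angle from the relative rotation: arccos((trace(R) - 1) / 2)
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return t_err, np.degrees(np.arccos(cos_angle))

T_plan = np.eye(4)
T_track = np.eye(4)
T_track[:3, 3] = [1.0, 0.5, 0.0]          # guide still 1.12 mm off target
print(pose_error(T_track, T_plan))        # (~1.12, 0.0)
```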
Abstract: Despite the great potential of Virtual Reality (VR) to arouse emotions, no VR affective databases are available, as there are for pictures, videos, and sounds. In this paper, we describe the validation of ten affective interactive Virtual Environments (VEs) designed to be used in Virtual Reality. These environments are related to five emotions. The testing phase involved two different experimental setups to deliver the overall experience. Neither setup included any immersive VR technology, because of the ongoing COVID-19 pandemic, but the VEs were designed to run on stereoscopic visual displays. We collected measures related to the participants' emotional experience based on six discrete emotional categories plus neutrality, and we included an assessment of the sense of presence for the different experiences. The results showed that the scenarios can be differentiated according to the emotion aroused. Finally, the comparison between the two experimental setups demonstrated high reliability of the experience and strong adaptability of the scenarios to different contexts of use.
Abstract: Driver inattention is the primary cause of vehicle accidents; hence, manufacturers have introduced systems to support the driver and improve safety. Nonetheless, advanced driver assistance systems (ADAS) must be properly designed so that the feedback they provide does not become a potential source of distraction for the driver. In the present study, an experiment involving auditory and haptic ADAS was conducted with 11 participants, whose attention was monitored during the driving experience. An RGB-D camera was used to acquire the drivers' face data. These images were then analyzed using a deep learning-based approach, i.e., a convolutional neural network (CNN) specifically trained to perform facial expression recognition (FER). Possible relationships between these results and both ADAS activations and event occurrences, i.e., accidents, were then assessed. A correlation between attention and accidents emerged, whereas facial expressions and ADAS activations were found to be uncorrelated; thus, no evidence was found that the designed ADAS are a source of distraction. In addition to the experimental results, the proposed approach proved to be an effective tool to monitor the driver through non-invasive techniques.
Keywords: ADAS | CNN | DADA | Deep learning | Driver’s attention | RGB-D camera
Abstract: Background and aim of the work: Implant dislocation in total hip arthroplasty (THA) is a common concern among orthopedic surgeons and represents the most frequent complication after primary implantation. Several causes can be responsible for the dislocation, including malpositioning of the components. Conventional imaging techniques frequently fail to detect the mechanical source of dislocation, mainly because they cannot reproduce a dynamic evaluation of the components. The purpose of this study was to develop a diagnostic tool capable of virtually assessing whether the range of movement (ROM) of a THA is free from anterior and/or superior mechanical impingement. The ultimate aim is to give the surgeon the possibility of weighing the mechanical contribution to a THA dislocation. Methods: A group of patients who underwent THA revision for acute dislocation was compared with a group of non-dislocating THAs. CT scans and a virtual model of each patient were obtained. A software tool called the 'Prosthesis Impingement Simulator (PIS)' was developed to simulate the ROM of the prosthetic hip. The ROM free of mechanical impingement was compared between the two groups. Results: The PIS test detected dislocations with a sensitivity of 71.4% and a specificity of 85.7%. Fisher's exact test showed a p-value of 0.02. Conclusion: The PIS seems to be an effective tool for determining hip prosthetic impingement, as the main aid of the software is the exclusion of mechanical causes in the event of a dislocation.
Keywords: Dislocation | Impingement | Range of motion | Revision surgery | Software | THA | Total hip arthroplasty
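The reported statistics can be illustrated on a 2x2 contingency table (PIS outcome vs. actual dislocation). The counts below are invented so that sensitivity and specificity match the reported 71.4% and 85.7%; they do not reproduce the paper's cohort size or its p-value:

```python
# Hypothetical 2x2 table: sensitivity, specificity, and Fisher's exact test.
from scipy.stats import fisher_exact

tp, fn = 10, 4    # dislocating hips flagged / missed by the PIS test
fp, tn = 2, 12    # non-dislocating hips flagged / correctly cleared

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
odds_ratio, p_value = fisher_exact([[tp, fn], [fp, tn]])
print(f"Sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, p = {p_value:.3f}")
```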
Abstract: Computer graphics is, in many cases, about visualizing what you cannot see. However, virtual reality (VR), from its beginnings, has aimed at stimulating all human senses, not just the visual channel. Moreover, this set of multisensory stimuli allows users to feel present and able to interact with the virtual environment. In this way, VR aims to deliver experiences that are comparable to real-life ones in their level of detail and stimulation, intensity, and impact. Hence, VR is not only a means to see, but also to feel, differently. With the spread of VR technologies, there is a growing interest in using VR to evoke emotions, both positive and negative. This article discusses the current possibilities and the authors' experience collected in the field in trying to elicit emotions through VR. It explores how different design aspects and features can be used, describing their contributions and benefits in the development of affective VR experiences. This work aims at raising awareness of the necessity to consider and explore the full design space that VR technology provides in comparison to traditional media. Additionally, it outlines possible tracks for VR affective applications, illustrating how they could impact our emotions and improve our lives, and providing guidelines for their development.
Abstract: Artificial Intelligence (AI) algorithms, together with generally increased computational performance, nowadays allow exploring Facial Expression Recognition (FER) as a method of recognizing human emotion through the use of neural networks. Facial emotion and expression recognition in real-life situations is one of the current cutting-edge research challenges, and in this context the creation of an ecologically valid facial expression database is crucial. To this aim, a controlled experiment was designed in which thirty-five subjects aged 18–35 were asked to react spontaneously to a set of 48 validated images from two affective databases, IAPS and GAPED. Following the Self-Assessment Manikin, participants rated each image for valence and arousal on a 9-point visual scale and were also asked to select one of Ekman's six basic emotions. During the experiment, an RGB-D camera recorded the spontaneous facial expressions aroused in participants, storing both the color and the depth frames to feed a Convolutional Neural Network (CNN) performing FER. In every case, the prevalent emotion indicated in the questionnaires matched the expected emotion. The CNN obtained a recognition rate of 75.02%, computed by comparing the network results with the evaluations given by a human observer. These preliminary results confirm that this experimental setting is an effective starting point for building an ecologically valid database.
Keywords: 3D facial database | Affective database | Basic emotions | Ecologically-valid data | Facial expression recognition | Human-robot interaction
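A compact sketch of a CNN consuming fused RGB and depth input (four channels) and predicting the six basic emotions; the architecture and input size are illustrative, not the network used in the study:

```python
# Illustrative RGB-D CNN for six-class facial expression recognition.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(96, 96, 4)),          # RGB frame stacked with depth map
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(6, activation="softmax"),    # Ekman's six basic emotions
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```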
Abstract: Recently, a wide variety of applications integrating 3D functionalities has been developed. The advantages offered by depth information allow developers to design new algorithms and improve existing ones. In particular, as far as face morphology is concerned, 3D has made it possible to obtain face depth maps that are very close to reality, improving the starting point for further analyses such as face detection, face authentication, face identification, and facial expression recognition. The development of these applications would have been impossible without progress in the sensor technologies used to obtain 3D information. Several solutions have been adopted over time. In this paper, emphasis is placed on passive stereoscopy, structured light, time-of-flight (ToF), and active stereoscopy, namely the technologies most used in camera design according to the literature. The aim of this article is to investigate facial applications and examine 3D camera technologies in order to suggest guidelines for choosing the right 3D sensor for the application to be developed.
Keywords: 3D cameras | 3D face analysis | Sensors | Stereoscopy | Structured light | ToF
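Passive and active stereoscopy both recover depth by triangulation. A worked example of the textbook relation Z = f * B / d, with focal length f in pixels, baseline B in meters, and disparity d in pixels; the parameter values are illustrative:

```python
# Stereo triangulation: depth from focal length, baseline, and disparity.
def stereo_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    return f_px * baseline_m / disparity_px

# e.g., f = 700 px, B = 50 mm, d = 35 px  ->  Z = 1.0 m
print(stereo_depth(700.0, 0.050, 35.0))   # 1.0
```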
Abstract: Common sense usually considers the assessment of female human attractiveness to be subjective. Nevertheless, in the past decades, several studies and experiments have shown that an objective component in beauty assessment exists and is strictly related, even if not coincident, with the proportions of facial features. Proportions can be studied through face analysis, which relies on landmarks, i.e., specific points of the facial surface shared by everyone, and on measurements between them. In this work, several measures have been gathered from studies in the literature considering datasets of beautiful women, to build a set of measures that can be regarded as suggestive of female attractiveness. The resulting set consists of 29 measures applied to a public dataset, the Bosphorus database, whose faces have been both analyzed with the developed methodology, based on the expanded set of measures, and judged by human observers. Results show that the set of chosen measures is significant in terms of attractiveness evaluation, confirming the key role of proportions in beauty assessment; furthermore, the identified measures have been sorted to single out the most significant canons involved in the evaluation.
Keywords: 3D landmarks | Attractiveness | Face analysis | Face proportions | Features extraction
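A minimal sketch of the kind of measure involved: Euclidean distances between 3D soft-tissue landmarks and their ratios. The landmark names, coordinates, and the single ratio shown are illustrative, not the paper's 29-measure set:

```python
# Illustrative inter-landmark distances and a proportion-based measure.
import numpy as np

landmarks = {                                # (x, y, z) in mm, synthetic values
    "en_r": np.array([-17.0, 40.0, 85.0]),   # right inner eye corner
    "en_l": np.array([17.0, 40.0, 85.0]),    # left inner eye corner
    "al_r": np.array([-16.0, 5.0, 95.0]),    # right nose wing
    "al_l": np.array([16.0, 5.0, 95.0]),     # left nose wing
}

def dist(a, b):
    return np.linalg.norm(landmarks[a] - landmarks[b])

ratio = dist("en_r", "en_l") / dist("al_r", "al_l")
print(f"Intercanthal/nose-width ratio: {ratio:.3f}")
```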
Abstract: The introduction of collision models has aided surgeons in planning the resection of femoral and acetabular impinging deformities. Their application to the hip joint is particularly interesting in association with hip arthroscopy because, with this technique, vision is limited and identifying friction zones is not easy. Despite their wide diffusion in the market, these models still present several limits. First, 3D reconstruction has been used only in the preoperative phase; its use inside the operating room is still not available. Furthermore, current software lacks a fully automated procedure to create the 3D model of the hip. Second, today's collision-model software isolates and studies only the bony part of the hip, erasing soft tissues from the 3D reconstruction. Third, the programs currently developed use statistical data as a reference for the 3D virtual models and for the motion between the hip bones: they base the motion analysis on a comparison between data extrapolated from the healthy population and data from the patient's examination, so image processing may be inaccurate. Although, in our opinion, collision models are useful even with their current limits, in this chapter we describe the limits of the existing software and its potential development.
Abstract: This study proposes a novel quality function deployment (QFD) design methodology based on customers' emotions conveyed by facial expressions. Current advances in pattern recognition related to face recognition techniques have fostered cross-fertilization between this context and other fields, such as product design and human-computer interaction. In particular, current technologies for monitoring human emotions have supported the birth of advanced emotional design techniques, whose main focus is to convey users' emotional feedback into the design of novel products. As quality function deployment aims at transforming the voice of customers into engineering features of a product, it appears to be an appropriate and promising nest in which to embed users' emotional feedback through new emotional design methodologies, such as facial expression recognition. Accordingly, the present methodology consists in interviewing the user, acquiring his/her face with a depth camera (providing three-dimensional (3D) data), clustering the face information into different emotions with a support vector machine classifier, and assigning weights to customers' needs relying on the detected facial expressions. The proposed method has been applied to a case study in the context of agriculture and validated by a consortium. The approach appears sound and capable of collecting the unconscious feedback of the interviewee.
Keywords: 3D facial expression recognition | Emotional design | Quality function deployment (QFD) | User-centred design
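A hedged sketch of the pipeline's last two steps: a support vector machine classifies facial feature vectors into emotions, and the emotions detected while the customer discusses a need are mapped to that need's QFD weight. The feature encoding, emotion set, and weighting rule are illustrative assumptions:

```python
# Illustrative SVM emotion classification feeding a QFD need weight.
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["happiness", "surprise", "neutral", "disgust"]
VALENCE = {"happiness": 1.0, "surprise": 0.5, "neutral": 0.0, "disgust": -1.0}

# Train on labeled facial-geometry feature vectors (synthetic here)
X_train = np.random.rand(200, 16)
y_train = np.random.choice(len(EMOTIONS), 200)
clf = SVC(kernel="rbf").fit(X_train, y_train)

def need_weight(frames_features):
    """Map emotions detected during one interview topic to a 1-5 importance weight."""
    preds = clf.predict(frames_features)
    mean_valence = np.mean([VALENCE[EMOTIONS[p]] for p in preds])
    return int(round(3 + 2 * mean_valence))   # valence in [-1, 1] -> weight in [1, 5]

print(need_weight(np.random.rand(30, 16)))
```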
Abstract: Surgical interventions for jaw reconstruction require the design and production of surgical guides that allow the surgeon to operate quickly and accurately. In some cases, the reconstruction is performed by inserting a prosthesis, thus operating exclusively on the jaw, while in other cases it is performed by harvesting part of the fibula and inserting it in place of the original jaw bone. This project aims to develop a procedure for the 3D modeling of the surgical guides necessary for the intervention. The idea is to find a surgical guide archetype, a starting shape for the surgeon, so that the cutting planes can be oriented without the surgical guide having to be redesigned from scratch for every single patient. The first step of the procedure is segmentation, performed by applying a thresholding operation to the images provided by magnetic resonance (MR) imaging in order to identify the region of interest (ROI). The second step is the reconstruction of the 3D model, so that a mesh is obtained from the 2D images. Subsequently, the mesh is post-processed and the cutting planes along which the surgeon will intervene are defined.
Keywords: 3D modeling | 3D reconstruction | Maxillofacial surgery | Surgical guides
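A minimal sketch of the two steps named above on a synthetic volume: threshold-based segmentation to isolate the region of interest, then mesh reconstruction via marching cubes; the threshold value and the volume are illustrative:

```python
# Threshold segmentation of an image volume + marching-cubes mesh extraction.
import numpy as np
from skimage import measure

volume = np.random.rand(64, 64, 64)          # stand-in for the image stack
roi = (volume > 0.7).astype(np.float32)      # thresholding -> binary ROI

# Extract a triangular mesh (vertices, faces) from the binary volume
verts, faces, normals, values = measure.marching_cubes(roi, level=0.5)
print(f"Mesh: {len(verts)} vertices, {len(faces)} faces")
```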
Abstract: This work proposes a method for recognizing the 13 main Facial Action Units (AUs) and the 6 basic emotions. The methodology relies on Differential Geometry to extract relevant discriminant features from the query faces, and on linear quantities used as measures: Euclidean distances, geodesic distances, and angles between 17 automatically extracted soft-tissue landmarks. A thresholding system that evaluates local properties of connected regions, selected through tailored geometrical descriptors, supports the identification of the AUs. A technique based on crisp logic then allows the identification of the global expression. The three-dimensional context has been preferred due to its invariance to different lighting/make-up/camouflage conditions.
Keywords: Emotional design | Face expression recognition | Intelligent drive
Abstract: This study proposes a novel automatic method for facial landmark localization relying on the geometrical properties of the 3D facial surface, working both on complete faces displaying different emotions and in the presence of occlusions. In particular, 12 descriptors from Differential Geometry, including the coefficients of the fundamental forms, the Gaussian, mean, and principal curvatures, the shape index, and the curvedness, are extracted as facial features, and their local geometric properties are exploited to localize 13 soft-tissue landmarks in the eye and nose areas. The method is deterministic and is underpinned by a thresholding technique designed by studying the behaviour of each geometrical descriptor at the locus of each landmark. Occlusions are managed by a detection algorithm based on geometrical properties, which allows landmark localization to proceed while avoiding the covered areas. Experiments were carried out on 3132 faces from the Bosphorus database and from a 230-face internal database, including expressive and occluded faces (mouth, eye, and eyeglasses occlusions), obtaining a mean localization error of 4.75 mm.
Keywords: 3D face | Differential geometry | Face analysis | Feature extraction | Landmark localization
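Two of the descriptors listed above, the shape index and curvedness, follow directly from the principal curvatures k1 >= k2 (Koenderink's definitions; sign conventions vary). A sketch including an illustrative threshold test of the kind used to select landmark candidates; the threshold values are assumptions:

```python
# Shape index and curvedness from principal curvatures (Koenderink).
import numpy as np

def shape_index(k1, k2):
    """In [-1, 1]: +1 cap, 0 saddle, -1 cup, for k1 >= k2 (convention-dependent)."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def curvedness(k1, k2):
    return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)

def is_candidate(k1, k2, s_min=0.8, c_min=0.03):
    """Illustrative selection rule: cap-like shape with high curvedness."""
    return shape_index(k1, k2) >= s_min and curvedness(k1, k2) >= c_min

# A dome-like umbilic point (k1 = k2 = 0.05): shape index +1, curvedness 0.05
print(shape_index(0.05, 0.05), curvedness(0.05, 0.05), is_candidate(0.05, 0.05))
```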