Vezzetti Enrico
Full Professor
Politecnico di Torino
enrico.vezzetti@polito.it
Institutional website
SCOPUS ID: 23020576900
Orcid: 0000-0001-8910-7020
Scientific publications
Abstract: In the last couple of decades, research on 3D facial expression recognition has been fostered by the creation of tailored databases containing prototypical expressions of different individuals and by advances in cost-effective acquisition technologies. However, most of the currently available databases consist of exaggerated facial expressions, due to the imitation principle on which they rely. This makes these databases only partially employable for real-world applications such as human-computer interaction for smart products and environments, health, and Industry 4.0, as algorithms learn from these ‘inflated’ data, which do not meet ecological validity requirements. In this work, we present two novel 2D + 3D spontaneous facial expression databases of young adults of different geographical origins, in which emotions have been evoked by means of affective images from the acknowledged IAPS and GAPED databases and verified with participants’ self-reports. To the best of our knowledge, these are the first three-dimensional facial databases with emotions elicited by validated affective stimuli.
Keywords: 3D facial expression | Affective database | Ecological validity | Facial expression recognition | Human-computer interaction | Spontaneous expressions
Abstract: The use of CAD and 3D printing of surgical guides (SGs) for osteotomies is a widely adopted practice in orthopaedic surgery, and particularly in maxillo-facial interventions, but validation studies are rare in the literature. The present study defines a methodology to validate SGs dimensionally and mechanically through geometrical analysis, tensile testing, contact simulations, and abrasion testing. Distortions between the 3D-printed SGs and the CAD model are quantified, and an average deviation error is obtained for each step of the production process. The mechanical analysis identifies a way of applying the load to the SGs to measure their equivalent linear stiffness (N/mm), maximum displacement (mm) and corresponding tolerable load (N) by varying some dimensional parameters. The stress state was assessed by finite element method (FEM) analysis, and the numerical results were then compared with experimental ones obtained from tensile tests: stiffness, maximum displacement and the corresponding loads were evaluated. The distribution of contact pressure on soft tissues was obtained numerically by FEM analysis. Finally, an ad hoc machine was built to engrave discoidal specimens under typical operating room conditions. The methodology has been validated using 11 fibular and mandibular SG specimens, and the results obtained at each step of the procedure are reported.
Keywords: CAD | Cutting guides | FEM | Maxillo-facial surgery | Surgical guides
Abstract: In orthopedic and maxillofacial surgery, there is a growing use of augmented reality (AR) as a technology to increase the visual perception of the surgeon in the operating room. The objective of this review is to analyze the state of the art in the use of AR for osteotomies, highlighting the advantages and the best-known open issues to be addressed in future research. The Scopus, Web of Science, PubMed and IEEE Xplore databases were explored with a keyword search, setting the time limits from January 2017 to January 2023, inclusive. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were followed in this review, focusing on anatomical districts, real-virtual environment interaction, and the advantages and limitations of existing AR-based applications. Forty-nine articles met the inclusion criteria and were selected for the final analysis. For the sake of clarity, works have been grouped according to the anatomical district, but the real-virtual environment interaction methodology is also reported, as well as information regarding accuracy assessment. A Quality Function Deployment (QFD) was used to assess the AR-based solutions with respect to the more traditional freehand (FH) and Patient Specific Template (PST) approaches. Finally, some suggestions to foster the adoption of AR-based solutions in the treatment of osteotomies are drawn, considering the highlighted advantages and limitations of this technology. AR was found to meet the surgeons’ needs better than the other, more traditional approaches. Among the advantages that emerged, AR can lead to better surgical field accessibility, more flexible solutions, and lower management effort. Nonetheless, future research should address some well-known issues, among which the calibration time, the robustness of the tracking, and the discomfort of HMDs.
Keywords: Augmented reality | Cutting guides | Maxillofacial surgery | Mixed reality | Orthopedic surgery | Osteotomies | Patient specific templates | Surgical guides
Abstract: Protein–protein interaction occurs on surface patches with some degree of complementary geometric and chemical features. Building on this understanding, this study endeavors to characterize the spike protein of the SARS-CoV-2 virus at the morphological and geometrical levels in its Alpha, Delta, and Omicron variants. In particular, the affinity between the different SARS-CoV-2 spike proteins and the ACE2 receptor present on the membrane of human respiratory system cells is investigated. To achieve an adequate degree of geometrical accuracy, the 3D depth maps of the proteins under examination are filtered with an ad-hoc convolutional filter whose kernel is implemented as a sphere of varying radius, simulating a ball rolling on the surface (similar to the 'rolling ball' filter). This ball ideally models a hypothetical molecule that could interface with the protein and is inspired by the geometric approach to macromolecule–ligand interactions proposed by Kuntz et al. in 1982. The aim is to mitigate imperfections and to obtain a smoother surface that can be studied from a geometrical perspective for binding purposes. A set of geometric descriptors, borrowed from the 3D face analysis context, is then mapped point-by-point onto the protein depth maps. Following a feature extraction phase inspired by Histogram of Oriented Gradients and Local Binary Patterns, the final histogram features are used as input to a Support Vector Machine classifier to automatically classify the proteins according to their surface affinity; a similarity in shape is observed between ACE2 and the spike protein of the SARS-CoV-2 Omicron variant. Finally, Root Mean Square Error analysis is used to quantify the geometrical affinity between the ACE2 receptor and the respective Receptor Binding Domains of the three SARS-CoV-2 variants, culminating in a geometrical explanation for the higher contagiousness of Omicron relative to the other variants under study.
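As an illustration of the surface-smoothing step described above, the sketch below applies a grey-scale morphological opening with a ball-shaped footprint to a synthetic depth map; this approximates the described 'rolling ball' behaviour but is not the authors' exact convolutional filter, and the radius and map size are placeholder values.

```python
# Illustrative sketch: smoothing a protein depth map with a ball-shaped
# footprint, approximating the "rolling ball" behaviour described above.
import numpy as np
from scipy import ndimage


def ball_footprint(radius: int) -> np.ndarray:
    """Binary disk footprint of the given radius (pixels)."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius


def roll_ball_smooth(depth_map: np.ndarray, radius: int = 5) -> np.ndarray:
    """Grey-scale morphological opening with a ball-like footprint.

    The opening removes peaks narrower than the ball, mimicking a sphere
    rolling underneath the surface.
    """
    return ndimage.grey_opening(depth_map, footprint=ball_footprint(radius))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic noisy depth map standing in for a protein surface patch.
    depth = rng.normal(scale=0.1, size=(128, 128)) + np.fromfunction(
        lambda i, j: np.sin(i / 20.0) + np.cos(j / 25.0), (128, 128)
    )
    smoothed = roll_ball_smooth(depth, radius=4)
    print("residual RMSE:", np.sqrt(np.mean((depth - smoothed) ** 2)))
```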
Abstract: The metaverse, powered by XR technologies, enables human augmentation by enhancing physical, cognitive, and sensory capabilities. Cultural heritage sees the metaverse as a vehicle for expression and exploration, providing new methods for heritage fruition and preservation. This article proposes a metaverse application, inspired by the events of the Italian Resistance, that promotes interactions between multiple users in an immersive VR experience while safeguarding intangible cultural assets according to an edutainment approach. The virtual environment, based on Ivrea’s town hall square, provides in-depth information about the partisans’ lives and the historical value of their actions for the city. Furthermore, the application allows users to meet in the same virtual place and engage with one another in real time through the Spatial SDK. Before the public presentation, a heterogeneous group of thirty users underwent usability and engagement tests to assess the experience on both VR headsets and smartphones. The tests revealed statistically significant evidence of a genuine difference in users’ perceptions of usability and engagement across different devices and types of interaction. This study highlights the effectiveness of adopting XR as a supporting technology to complement the real experience of cultural heritage valorization.
Keywords: cultural heritage | digital heritage | edutainment | metaverse | virtual reality
Abstract: Developing great products is not simple; users want products that can constantly adjust to their needs. The product development process should consider not only the users’ requirements and wishes but also their perceptions and emotions during and after the human-product interaction. Traditional studies have used self-report methods to study users’ emotions; however, technological advances are making other methods capable of measuring respondents’ behavior. Electroencephalography (EEG), a technique for recording and interpreting the brain’s electrical activity, is becoming a valid tool to assess users’ emotional states. This study aims to explore EEG as a method to interpret emotions. To do this, we created three different VR scenarios characterized by different interior designs and automatic chromatic variations as stimuli; this research aims to analyze whether the changes in colors and scenarios are reflected in the participants’ emotional responses, specifically in Valence, Arousal, and Engagement. The findings show that EEG results are a valid aid to emotion interpretation, that color variation might influence users’ emotions, and that the emotional responses were more evident when switching between scenarios. We expect this study to provide more information regarding the potential of physiological methods for exploring users’ emotions during product design and development.
Keywords: EEG | Emotional design | Engagement | Product development
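For readers unfamiliar with EEG-based affective indices, the following sketch computes valence, arousal, and engagement indicators from band powers of two frontal channels; the index definitions follow common literature conventions (frontal alpha asymmetry, beta/alpha, beta/(alpha+theta)), and the sampling rate and channel choice are assumptions, not necessarily the study's exact pipeline.

```python
# Minimal sketch of common EEG affective indices, assuming frontal
# electrodes F3/F4 sampled at 256 Hz (placeholder values).
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}


def band_power(signal, band):
    """Average power of `signal` in the given frequency band (Welch PSD)."""
    freqs, psd = welch(signal, fs=FS, nperseg=FS * 2)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(np.trapz(psd[mask], freqs[mask]))


def affective_indices(f3, f4):
    alpha_f3 = band_power(f3, BANDS["alpha"])
    alpha_f4 = band_power(f4, BANDS["alpha"])
    alpha = alpha_f3 + alpha_f4
    beta = band_power(f3, BANDS["beta"]) + band_power(f4, BANDS["beta"])
    theta = band_power(f3, BANDS["theta"]) + band_power(f4, BANDS["theta"])
    return {
        "valence": np.log(alpha_f4) - np.log(alpha_f3),  # frontal asymmetry
        "arousal": beta / alpha,
        "engagement": beta / (alpha + theta),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f3, f4 = rng.standard_normal((2, FS * 60))  # one minute of fake data
    print(affective_indices(f3, f4))
```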
Abstract: The evaluation of hip implantation success remains one of the most relevant problems in orthopaedics. Several factors can cause failure, e.g., aseptic loosening and dislocation of the prosthetic joint due to implant impingement. Following a total hip arthroplasty, it is fundamental that the orthopaedist be able to evaluate the possible risk factors that could lead to dislocation or, in the worst cases, to implant failure. A procedure has been carried out with the aim of evaluating the Range of Movement (ROM) of the implanted prosthesis, to predict whether the inserted implant is correctly positioned or will be prone to dislocation or material wear due to the malposition of its components. Building on a previously patented methodology consisting of the 3D reconstruction and movement simulation of the hip joint, this work aims to provide a more effective visualization of the simulation results through Mixed Reality (MR). The use of MR for the representation of hip kinematics and implant position can provide the orthopaedic surgeon with a deeper understanding of the orientation and position of the implanted components, as well as the consequences of such placements, while looking directly at the patient. To this end, an anchoring system based on a body-tracking recognition library was developed, so that both completely automatic and human-assisted options are available without additional markers or sensors. An Augmented Reality (AR) prototype has been developed in Unity 3D and used on HoloLens 2, integrating the human-assisted anchoring option.
Keywords: Computer-aided surgery | HoloLens 2 | Mixed reality | THA assessment | Total hip arthroplasty
Abstract: In the context of human–computer interaction (HCI), understanding user engagement (UE) during interaction with a product or service can provide valuable information for enhancing the design process. UE has been a priority research theme within HCI, as it assesses the user experience by studying the individual’s behavioral response to some stimulus. Many studies looking to quantify UE are available; however, most use self-report methods that rely only on participants’ answers. This study aims to explore a non-traditional method, specifically electroencephalography, to analyze users’ engagement while interacting with an advergame, an interactive form of advertising in video games. We aim to understand whether a more interactive type of advertising enhances UE and whether, at the same time, it influences the user’s purchase intention (UPI). To do this, we computed and compared the UE during the interaction with an advergame and with a conventional TV commercial while measuring the participants’ brain activity. After the interaction with both types of advertising, the UPI was also evaluated. The findings demonstrate that a more interactive advertisement increased the participants’ UE and that, in most cases, a UE increment positively influenced the UPI. This study shows an example of the potential of physiological feedback applications for exploring users’ perceptions during and after the human–product interaction. The findings show how physiological methods can be used alongside traditional ones to enhance UE analysis, and they provide helpful information about the advantages of engagement measurement in HCI applications.
Keywords: advergames | EEG | purchase intention | user engagement
Abstract: The current study presents a multi-task end-to-end deep learning model for real-time blood accumulation detection and tool semantic segmentation from laparoscopic surgery videos. Intraoperative bleeding is one of the most problematic aspects of laparoscopic surgery. It is challenging to control and limits the visibility of the surgical site. Consequently, prompt treatment is required to avoid undesirable outcomes. This system exploits a shared backbone based on the encoder of the U-Net architecture and two separate branches to classify the blood accumulation event and to output the segmentation map, respectively. Our main contribution is an efficient multi-task approach that achieved satisfactory results in tests on surgical videos, although it was trained with RGB images only and no additional information. The proposed multi-task convolutional neural network did not employ any pre- or post-processing steps. It achieved a Dice Score of 81.89% for the semantic segmentation task and an accuracy of 90.63% for the event detection task. The results demonstrate that the concurrent tasks were properly combined, since the features extracted by the common backbone proved beneficial for both tool segmentation and event detection. Indeed, active bleeding usually happens when one of the instruments closes or interacts with anatomical tissues, and it decreases when the aspirator begins to remove the accumulated blood. Even if different aspects of the presented methodology could be improved, this work represents a preliminary attempt toward an end-to-end multi-task deep learning model for real-time video understanding.
Keywords: bleeding detection | CNN | laparoscopic surgery | multi-task convolutional neural network | semantic segmentation
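A schematic PyTorch sketch of the kind of shared-encoder multi-task network described in the abstract is given below: one branch outputs a per-pixel segmentation map and the other a binary blood-accumulation score. Channel sizes, depth, and input resolution are placeholders rather than the trained architecture.

```python
# Schematic shared-encoder multi-task network: segmentation + event detection.
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )


class MultiTaskNet(nn.Module):
    def __init__(self, n_seg_classes=2):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        # Segmentation branch (simplified U-Net-style decoder).
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.seg_head = nn.Conv2d(32, n_seg_classes, 1)
        # Event-detection branch (blood accumulation yes/no).
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1))

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))                       # shared features
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.seg_head(d1), self.cls_head(e3)          # per-pixel map, event logit


if __name__ == "__main__":
    seg, evt = MultiTaskNet()(torch.randn(1, 3, 256, 256))
    print(seg.shape, evt.shape)  # (1, 2, 256, 256) and (1, 1)
```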
Abstract: Objectives: The aim of this study was to analyse changes in facial soft tissue thickness (FSTT) after corrective surgeries for dental malocclusion. The correlation between the body mass index (BMI) and sex of patients and their FSTT before undergoing surgery was analysed. Materials and methods: Cone beam computed tomography scans of seventeen patients who underwent Le Fort I osteotomy in combination with bilateral sagittal split osteotomy were collected. Hard and soft tissue landmarks were selected based on the interventions. FSTT were computed, and measurements from pre- to post-operative were compared. The relationship between FSTT, sex, and BMI was investigated. Results: In the comparison between pre- and post-operative measurements, no significant difference emerged (p > .05). The Pearson’s correlation coefficient computed between BMI and pre-operative FSTT showed a correlation in normal-weight patients; the region-specific analysis highlighted a stronger correlation for specific landmarks. Higher median values emerged for women than for men; the subset-based analysis showed that women presented higher values in the malar region, while men presented higher values in the nasal region. Conclusions: The considered surgeries did not affect the FSTT of the patients; differences related to BMI and sex were found. A collection of FSTT mean values was provided for twenty landmarks, pre- and post-operative, for female and male subjects. Clinical relevance: This exploratory analysis gave insights into the behaviour of FSTT after maxillofacial surgeries, which can be applied to the development of predictive methodologies for soft tissue displacements and to the study of modifications in the facial appearance of the patients.
Keywords: Bilateral sagittal split osteotomy (BSSO) | Body mass index (BMI) | Facial landmarks | Le Fort I osteotomy (LFI) | Soft tissue thickness (STT)
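The core statistical comparisons mentioned in the abstract, a paired pre/post comparison of FSTT and the Pearson correlation between BMI and pre-operative FSTT, can be sketched as follows; the arrays are synthetic stand-ins for the seventeen patients' measurements.

```python
# Illustrative sketch of the paired pre/post comparison and the BMI-FSTT
# correlation; all values below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical FSTT (mm) at one landmark for 17 patients, pre and post surgery.
fstt_pre = rng.normal(loc=12.0, scale=2.0, size=17)
fstt_post = fstt_pre + rng.normal(loc=0.0, scale=0.5, size=17)
bmi = rng.normal(loc=23.0, scale=3.0, size=17)

# Paired comparison (non-parametric, given the small sample).
w_stat, p_paired = stats.wilcoxon(fstt_pre, fstt_post)
print(f"pre vs post: W = {w_stat:.2f}, p = {p_paired:.3f}")

# Correlation between BMI and pre-operative thickness.
r, p_corr = stats.pearsonr(bmi, fstt_pre)
print(f"BMI vs FSTT: r = {r:.2f}, p = {p_corr:.3f}")
```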
Abstract: Most cultural promotion and dissemination are nowadays performed through the digitization of heritage sites and museums, a necessary requirement to meet the new needs of the public. Augmented Reality (AR), Mixed Reality (MR), and Virtual Reality (VR) have the potential to improve the experience quality and educational effect of these sites by stimulating users’ senses in a more natural and vivid way. In this respect, head-mounted display (HMD) devices allow visitors to enhance the experience of cultural sites by digitizing information and integrating additional virtual cues about cultural artifacts, resulting in a more immersive experience that engages the visitor both physically and emotionally. This study contributes to the development and incorporation of AR, MR, and VR applications in the cultural heritage domain by providing an overview of relevant studies utilizing fully immersive systems, such as headsets and CAVE systems, emphasizing the advantages that they bring when compared to handheld devices. We propose a framework study to identify the key features of headset-based Extended Reality (XR) technologies used in the cultural heritage domain that boost immersion, sense of presence, and agency. Furthermore, we highlight core characteristics that favor the adoption of these systems over more traditional solutions (e.g., handheld devices), as well as unsolved issues that must be addressed to improve the guests’ experience and the appreciation of the cultural heritage. An extensive search of Google Scholar, Scopus, IEEE Xplore, ACM Digital Library, and Wiley Online Library databases was conducted, including papers published from January 2018 to September 2022. To improve review reporting, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used. Sixty-five papers met the inclusion criteria and were classified depending on the study's purpose: education, entertainment, edutainment, touristic guidance systems, accessibility, visitor profiling, and management. Immersive cultural heritage systems allow visitors to feel completely immersed and present in the virtual environment, providing a stimulating and educational cultural experience that can improve the quality and learning purposes of cultural visits. Nonetheless, the analyzed studies revealed some limitations that must be faced to give a further impulse to the adoption of these technologies in the cultural heritage domain.
Keywords: Augmented reality (AR) | Cultural heritage | Digital heritage | Head-mounted display (HMD) | Mixed reality (MR) | Virtual reality (VR)
Abstract: The research on technologies and methodologies for (accurate, real-time, spontaneous, three-dimensional…) facial expression recognition is ongoing and has been fostered in the past decades by advances in classification algorithms like deep learning, which places this research within the Artificial Intelligence literature. Still, despite its upcoming application to contexts such as human–computer interaction, product and service design, and marketing, only a few literature studies have investigated the willingness of end users to share their facial data for the purpose of detecting emotions. This study investigates the level of awareness and interest of 373 potential consumers towards this technology in the car insurance sector, particularly in the contract drafting phase, with a focus on differentiating respondents between Generation Y and Generation Z. The results show that younger people, individuals with higher levels of education, and social network users feel more confident about this innovative technology and are more likely to share their expressive facial data.
Keywords: Facial expression recognition | Insurance market | Survey | Technology acceptance | Technology propensity | Young consumers
Abstract: Background: Addressing intraoperative bleeding remains a significant challenge in the field of robotic surgery. This research endeavors to pioneer a groundbreaking solution utilizing convolutional neural networks (CNNs). The objective is to establish a system capable of forecasting instances of intraoperative bleeding during robot-assisted radical prostatectomy (RARP) and promptly notifying the surgeon about bleeding risks. Methods: To achieve this, a multi-task learning (MTL) CNN was introduced, leveraging a modified version of the U-Net architecture. The aim was to categorize video input as either “absence of blood accumulation” (0) or “presence of blood accumulation” (1). To facilitate seamless interaction with the neural networks, the Bleeding Artificial Intelligence-based Detector (BLAIR) software was created using the Python Keras API and built upon the PyQt framework. A subsequent clinical assessment of BLAIR’s efficacy was performed, comparing its bleeding identification performance against that of a urologist. Various perioperative variables were also gathered. For optimal MTL-CNN training parameterization, a multi-task loss function was adopted to enhance the accuracy of event detection by taking advantage of surgical tools’ semantic segmentation. Additionally, the Multiple Correspondence Analysis (MCA) approach was employed to assess software performance. Results: The MTL-CNN demonstrated a remarkable event recognition accuracy of 90.63%. When evaluating BLAIR’s predictive ability and its capacity to pre-warn surgeons of potential bleeding incidents, the density plot highlighted a striking similarity between BLAIR and human assessments. In fact, BLAIR exhibited a faster response. Notably, the MCA analysis revealed no discernible distinction between the software and human performance in accurately identifying instances of bleeding. Conclusion: The BLAIR software proved its competence by achieving over 90% accuracy in predicting bleeding events during RARP. This accomplishment underscores the potential of AI to assist surgeons during interventions. This study exemplifies the positive impact AI applications can have on surgical procedures.
Keywords: artificial intelligence | complications | prostate cancer | robotics
Abstract: In the last decade, museums and exhibitions have benefited from advances in Virtual Reality technologies to create virtual elements that complement the traditional visit. The aim is to make the collections more engaging, interactive, comprehensible and accessible. Moreover, studies on users’ and visitors’ engagement suggest that the real affective state cannot be fully assessed with self-assessment techniques and that physiological techniques, such as EEG, should also be adopted to gain a more unbiased and mature understanding of their feelings. With the aim of contributing to bridging this knowledge gap, this work adopts EEG-based indicators from the literature (valence, arousal, engagement) to analyze the affective state of 95 visitors interacting physically or virtually (in a VR environment) with five handicraft objects belonging to the permanent collection of the Museo dell’Artigianato Valdostano di Tradizione, a traditional craftsmanship museum in the Valle d’Aosta region. Extreme Gradient Boosting (XGBoost) was adopted to classify the obtained engagement measures, which were labeled according to questionnaire replies. EEG analysis played a fundamental role in understanding the cognitive and emotional processes underlying immersive experiences, highlighting the potential of VR technologies in enhancing participants’ cognitive engagement. The results indicate that EEG-based indicators show trends in common with self-assessment, suggesting that their use as ‘the ground truth of emotion’ is a viable option.
Keywords: craftsmanship | cultural heritage | EEG | user engagement | Virtual Reality | XGBoost
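The classification step can be illustrated with a minimal XGBoost sketch trained on EEG-derived features labeled from questionnaire replies; the data below are synthetic stand-ins, and the feature set and labels are assumptions for demonstration only.

```python
# Minimal XGBoost sketch: classify engagement from EEG-derived features.
# Synthetic data only; not the museum study's recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_trials = 95 * 5                       # visitors x objects (placeholder)
X = rng.standard_normal((n_trials, 3))  # [valence, arousal, engagement] indices
y = (X[:, 2] + 0.3 * rng.standard_normal(n_trials) > 0).astype(int)  # engaged / not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```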
Abstract: Facial appearance is a prominent feature in the analysis of several aspects, e.g., aesthetics and the expression of emotions, and face analysis is crucial in many fields. Face analysis requires measurements that can be performed by different technologies and typically relies on landmark identification. Recently, low-cost consumer-grade 3D cameras have been introduced in the market, enabling a wider range of applications at affordable cost with nominally adequate performance. These novel cameras need to be thoroughly characterized metrologically to guarantee such performance. Cameras are calibrated following a standard general-purpose procedure; however, the specificity of facial measurements requires a task-based metrological characterization that includes the typical influence factors. This work outlines a methodology for the task-based metrological characterization of low-cost 3D cameras for facial analysis, consisting of influence factor identification by ANOVA, assessment of the related uncertainty contributions, uncertainty propagation, and landmarking uncertainty estimation. The proposed methodology is then demonstrated on a consumer-grade, state-of-the-art 3D camera available on the market.
Keywords: Depth cameras | Face analysis | Human-machine interaction | Machine vision | Measurement uncertainty | Soft tissue landmarks
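Two building blocks of the proposed characterization, influence-factor identification by one-way ANOVA and root-sum-of-squares combination of uncertainty contributions, are sketched below with synthetic depth-error data; the factor levels and uncertainty values are placeholders.

```python
# Sketch of influence-factor testing (one-way ANOVA) and combination of
# standard uncertainty contributions; all numbers are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Depth errors (mm) measured at three camera-to-face distances.
errors_by_distance = [rng.normal(0.00, 0.30, 30),
                      rng.normal(0.15, 0.30, 30),
                      rng.normal(0.35, 0.30, 30)]
f_stat, p_value = stats.f_oneway(*errors_by_distance)
print(f"distance as influence factor: F = {f_stat:.2f}, p = {p_value:.4f}")

# Combine independent standard uncertainty contributions (mm).
u_contributions = np.array([0.30, 0.15, 0.10])   # e.g., noise, distance, pose
u_combined = np.sqrt(np.sum(u_contributions ** 2))
print(f"combined standard uncertainty: {u_combined:.2f} mm")
```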
Abstract: Introduction: In recent years, the scientific community has focused on developing Computer-Aided Diagnosis (CAD) tools that could improve clinicians’ diagnosis of bone fractures, primarily based on Convolutional Neural Networks (CNNs). However, the accuracy in discerning fracture subtypes was far from optimal. The aims of the study were 1) to evaluate a new CAD system based on Vision Transformers (ViT), a very recent and powerful deep learning technique, and 2) to assess whether clinicians’ diagnostic accuracy could be improved using this system. Materials and methods: 4207 manually annotated images were used and distributed into different fracture types following the AO/OTA classification. The ViT architecture was used and compared with a classic CNN and a multistage architecture composed of successive CNNs. To demonstrate the reliability of this approach, (1) attention maps were used to visualize the most relevant areas of the images, (2) the performance of a generic CNN and of the ViT was compared through unsupervised learning techniques, and (3) 11 clinicians were asked to evaluate and classify 150 images of proximal femur fractures with and without the help of the ViT; the results were then compared to assess the potential improvement. Results: The ViT was able to predict 83% of the test images correctly. Precision, recall and F1-score were 0.77 (CI 0.64–0.90), 0.76 (CI 0.62–0.91) and 0.77 (CI 0.64–0.89), respectively. The clinicians’ diagnostic improvement was 29% (accuracy 97%; p = 0.003) when supported by the ViT’s predictions, outperforming the algorithm alone. Conclusions: This paper showed the potential of Vision Transformers in bone fracture classification. For the first time, good results were obtained in sub-fracture classification, outperforming the state of the art. Accordingly, the assisted diagnosis yielded the best results, proving the effectiveness of collaborative work between neural networks and clinicians.
Keywords: CAD system | Deep learning | Femur fracture | Self-attention | Vision transformer
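A hedged sketch of fine-tuning a Vision Transformer for fracture-type classification with torchvision is shown below; the ViT-B/16 backbone, the 224x224 input size, and the assumed seven AO/OTA classes are illustrative choices, not the paper's exact setup.

```python
# Sketch: fine-tune a torchvision Vision Transformer for fracture classes.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

NUM_CLASSES = 7  # assumed number of proximal femur fracture (sub)types

model = vit_b_16(weights=None)  # or ImageNet-pretrained weights
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One dummy training step on random data shaped like 224x224 radiograph crops.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```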
Abstract: Introduction: The current study presents a deep learning framework to determine, in real time, the position and rotation of a target organ from an endoscopic video. These inferred data are used to overlay the 3D model of the patient's organ over its real counterpart. The resulting augmented video stream is sent back to the surgeon as a support during laparoscopic robot-assisted procedures. Methods: This framework exploits semantic segmentation and, thereafter, two techniques, based on Convolutional Neural Networks and motion analysis, to infer the rotation. Results: The segmentation achieves high accuracy, with a mean IoU score greater than 80% in all tests. Different performance levels are obtained for rotation, depending on the surgical procedure. Discussion: Even if the presented methodology has varying degrees of precision depending on the testing scenario, this work sets the first step for the adoption of deep learning and augmented reality to generalise the automatic registration process.
Keywords: abdominal | Kidney | prostate
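The reported segmentation quality is expressed as a mean Intersection over Union (IoU); a minimal NumPy implementation for binary masks is given below for reference.

```python
# Reference implementation of IoU (Jaccard index) for binary masks.
import numpy as np


def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / union) if union else 1.0


if __name__ == "__main__":
    a = np.zeros((64, 64), dtype=bool)
    b = np.zeros((64, 64), dtype=bool)
    a[16:48, 16:48] = True
    b[24:56, 24:56] = True
    print(f"IoU = {iou(a, b):.3f}")
```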
Abstract: Augmented reality robot-assisted partial nephrectomy (AR-RAPN) is limited by the need for constant manual overlapping of the hyper-accuracy 3D (HA3D) virtual models onto the real anatomy. We present our preliminary experience with automatic 3D virtual model overlapping during AR-RAPN. To reach a fully automated HA3D model overlapping, we pursued computer vision strategies based on the identification of landmarks to link the virtual model. Due to the limited field of view of RAPN, we used the whole kidney as a marker. Moreover, to overcome the limit posed by the similarity of colors between the kidney and its neighboring structures, we super-enhanced the organ using the NIRF Firefly fluorescence imaging technology. A specifically developed software named “IGNITE” (Indocyanine GreeN automatIc augmenTed rEality) allowed the automatic anchorage of the HA3D model to the real organ, leveraging the enhanced view offered by NIRF technology. Ten automatic AR-RAPN procedures were performed. For all patients a HA3D model was produced and visualized as an AR image inside the robotic console. During all the surgical procedures, the automatic ICG-guided AR technology successfully anchored the virtual model to the real organ without hand assistance (mean anchorage time: 7 seconds), even when moving the camera throughout the operative field, zooming and translating the organ. In 7 patients with totally endophytic or posterior lesions, the renal masses were correctly identified with the automatic AR technology, allowing a successful enucleoresection. No intraoperative or postoperative Clavien >2 complications or positive surgical margins were recorded. Our pilot study provides the first demonstration of the application of computer vision technology for AR procedures, with a software that automatically performs visual concordance during the overlap of 3D models and in vivo anatomy. Its current limitations, related to kidney deformations during surgery that alter the automatic anchorage, will be overcome by implementing organ recognition with deep learning algorithms.
Abstract: In the era of the ‘metaverse’, virtual environments are gaining popularity among new multimedia contents and are also recognized as a valuable means to deliver emotional content. This is favoured by the cost reduction, availability, and acceptance by end users of virtual reality technology. Creating effective virtual environments can be achieved by exploiting several opportunities: creating artificial worlds able to generate different stories, mixing sensory cues, and making the whole experience interactive. The design space for creating emotional virtual environments is ample, and no clear idea of how to integrate the various components exists. This paper discusses how to combine multiple design elements to elicit five distinct emotions. We developed and tested two scenarios per emotion. We present the methodology, the development of the case studies, and the results of the testing.
Keywords: Affective Virtual Reality | Design Methodology | Emotions | Metaverse | Virtual Reality
Abstract: Background: Augmented Reality (AR) represents an innovative technology to improve data visualization and strengthen human perception. Among Human–Machine Interaction (HMI) domains, medicine can benefit most from the adoption of these digital technologies. In this perspective, the literature on orthopedic surgery techniques based on AR was evaluated, focusing on identifying the limitations and challenges of AR-based healthcare applications, to support research and the development of further studies. Methods: Studies published from January 2018 to December 2021 were analyzed after a comprehensive search of the PubMed, Google Scholar, Scopus, IEEE Xplore, Science Direct, and Wiley Online Library databases. In order to improve the review reporting, the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used. Results: The authors selected sixty-two articles meeting the inclusion criteria, which were categorized according to the purpose of the study (intraoperative, training, rehabilitation) and according to the surgical procedure used. Conclusions: AR has the potential to improve orthopedic training and practice by providing an increasingly human-centered clinical approach. Further research can be guided by this review to cover problems related to hardware limitations, the lack of accurate registration and tracking systems, and the absence of security protocols.
Keywords: augmented reality | digital health | HoloLens | Human–Computer Interaction (HCI) | Human–Machine Interaction (HMI) | intraoperative | medical training | mixed reality | rehabilitation
Abstract: Purpose: To evaluate the role of 3D models on the positive surgical margin rate (PSMr) in patients who underwent robot-assisted radical prostatectomy (RARP) compared to a no-3D control group. Secondarily, we evaluated the postoperative functional and oncological outcomes. Methods: Prospective study enrolling patients with localized prostate cancer (PCa) undergoing RARP with mp-MRI-based 3D model reconstruction, displayed in a cognitive or augmented-reality fashion, at our Centre from 01/2016 to 01/2020. A control no-3D group was extracted from the last two years of our institutional RARP database. The PSMr between the two groups was evaluated, and multivariable linear regression (MLR) models were applied. Finally, the Kaplan–Meier estimator was used to calculate biochemical recurrence at 12 months after the intervention. Results: 160 patients were enrolled in the 3D Group, while 640 were selected for the Control Group. A more conservative nerve-sparing (NS) approach was registered in the 3D Group (full NS 20.6% vs 12.7%; intermediate NS 38.1% vs 38.0%; standard NS 41.2% vs 49.2%; p = 0.02). 3D Group patients had lower PSM rates (25% vs. 35.1%, p = 0.01). In the MLR models, the availability of 3D technology (p = 0.005) and the absence of extracapsular extension (ECE, p = 0.004) at mp-MRI were independent predictors of a lower PSMr. Moreover, the 3D model represented a significant protective factor for PSM in patients with ECE or pT3 disease. Conclusion: The availability of 3D models during the intervention allows the surgeon to modulate the NS approach, limiting the occurrence of PSM, especially in patients with ECE at mp-MRI or pT3 PCa.
Keywords: 3D modeling | Augmented reality | Prostate cancer | Robotic surgery | Surgical margins
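The survival-analysis step named in the abstract can be sketched with the lifelines library as follows; the follow-up times and recurrence events are synthetic placeholders, not the study cohort.

```python
# Sketch: Kaplan-Meier estimate of biochemical recurrence-free survival.
# Durations and event flags are synthetic placeholders.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(5)
months_followed = rng.uniform(3, 12, size=160)   # follow-up time (months)
recurrence = rng.random(160) < 0.1               # biochemical recurrence events

kmf = KaplanMeierFitter()
kmf.fit(months_followed, event_observed=recurrence, label="3D Group")
print(kmf.survival_function_.tail())
print("estimated recurrence-free proportion at 12 months:", float(kmf.predict(12.0)))
```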
Abstract: Following an oncological resection or trauma, it may be necessary to reconstruct the normal anatomical and functional mandibular structures to ensure the effective and complete social reintegration of patients. In most surgical procedures, reconstruction of the mandibular shape and its occlusal relationship is performed through the free fibula flap, using a surgical guide which allows the surgeon to easily identify the location and orientation of the cutting plane. In the present work, we present a Mixed Reality (MR)-based solution to support professionals in surgical guide positioning. The proposed solution, through the use of a Head-Mounted Display (HMD) such as the HoloLens 2, visualizes a 3D virtual model of the surgical guide, positioned over the patient’s real fibula in the correct position identified by the medical team before the procedure. The professional wearing the HMD is then assisted in positioning the real guide over the virtual one by our solution, which is capable of tracking the real guide during the whole process and computing its distance from the final position. The assessment results highlight that Mixed Reality is a suitable technology to support surgeons, combining the usability of the device with an improvement in accuracy in fibula flap removal surgery.
Keywords: HoloLens 2 | mandibular reconstruction | maxillofacial surgery | mixed reality | surgical guide
Abstract: The current study aimed to propose a Deep Learning (DL) based framework to retrieve, in real time, the position and rotation of an object in need of maintenance from live video frames only. To test the positioning performance, we focused on maintenance interventions on a generic Fused Deposition Modeling (FDM) 3D printer. Lastly, to demonstrate a possible Augmented Reality (AR) application that can be built on top of this, we discussed a specific case study using a Prusa i3 MKS FDM printer. This method was developed using a You Only Look Once (YOLOv3) network for object detection to locate the position of the FDM 3D printer and a subsequent Rotation Convolutional Neural Network (RotationCNN), trained on a dataset of artificial images, to predict the rotation parameters for attaching the 3D model. To train YOLOv3 we used an augmented dataset of 1653 real images, while to train the RotationCNN we utilized a dataset of 99,220 synthetic images showing the FDM 3D printer with different orientations, and fine-tuned it using 235 real images tagged manually. The YOLOv3 network obtained an AP (Average Precision) of 100% with an Intersection over Union threshold of 0.5, while the RotationCNN showed a mean Geodesic Distance of 0.250 (σ = 0.210) and a mean accuracy in detecting the correct rotation r of 0.619 (σ = 0.130), considering as acceptable the range [r − 10, r + 10]. We then evaluated the CAD-assistance system performance with 10 non-expert users: the average completion time improved from 9.61 (σ = 1.53) to 5.30 (σ = 1.30) and the average number of actions to complete the task from 12.60 (σ = 2.15) to 11.00 (σ = 0.89). This work is a further step towards the adoption of DL and AR in the assistance domain. In future works, we will overcome the limitations of this approach and develop a complete mobile CAD system that could be extended to any object that has a 3D counterpart model.
Keywords: Augmented reality | CAD assistance | Deep learning | Neural network
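The rotation error reported above is a geodesic distance between rotations; a small NumPy helper illustrating that metric for two rotation matrices is shown below, with arbitrary test angles.

```python
# Geodesic distance on SO(3): the angle of the relative rotation.
import numpy as np


def geodesic_distance(r1: np.ndarray, r2: np.ndarray) -> float:
    """Angle (radians) of the relative rotation between two 3x3 matrices."""
    r_rel = r1.T @ r2
    cos_angle = np.clip((np.trace(r_rel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.arccos(cos_angle))


def rot_z(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])


if __name__ == "__main__":
    predicted, actual = rot_z(np.deg2rad(30)), rot_z(np.deg2rad(42))
    print(f"geodesic distance: {geodesic_distance(predicted, actual):.3f} rad")
```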
Abstract: Building great products or services is not easy; users want products and services that exceed their expectations and evolve with their needs; it is not just about building the right features. Knowing the user engagement (UE) towards a physical or virtual product or service can give valuable information that can be used as feedback for the design, enhancing its chances of success. In the context of user-centered design, UE is the assessment of the user experience characterized by the study of the individual's cognitive, affective, and behavioral response to some stimulus, such as a product, a service, or a website. UE considers not only the users’ requirements and wishes but also their perceptions and reactions during and after an interaction with a product, system, or service. Many studies looking to quantify UE are available. Still, a framework that provides a generic view of the most commonly used methods and metrics to measure UE does not yet exist in the literature. Aiming to better understand UE, in this research we developed a conceptual framework summarizing the available metrics and techniques used across different contexts, including good practices for self-report methods and physiological approaches. We expect this study will allow future researchers, developers, and designers to consider UE as one of the most prominent product/service success indicators and to use this guideline to find the most appropriate method, technique, and metric for its measurement.
Keywords: Consumer engagement | Physiological | UCD | User engagement measurement
Abstract: The 6D pose estimation of an object from an image is a central problem in many domains of Computer Vision (CV), and researchers have struggled with this issue for several years. Traditional pose estimation methods (1) leveraged geometrical approaches, exploiting manually annotated local features, or (2) relied on 2D object representations from different points of view and their comparison with the original image. The two methods mentioned above are also known as Feature-based and Template-based, respectively. With the diffusion of Deep Learning (DL), new Learning-based strategies have been introduced to achieve 6D pose estimation, improving traditional methods by involving Convolutional Neural Networks (CNN). This review analyzed techniques belonging to different research fields and classified them into three main categories: Template-based methods, Feature-based methods, and Learning-based methods. In recent years, research has mainly focused on Learning-based methods, which allow the training of a neural network tailored to a specific task. For this reason, most of the analyzed methods belong to this category, and they have in turn been classified into three sub-categories: Bounding box prediction and Perspective-n-Point (PnP) algorithm-based methods, Classification-based methods, and Regression-based methods. This review aims to provide a general overview of the latest 6D pose recovery methods in order to underline the pros and cons and highlight the best-performing techniques for each group. The main goal is to supply readers with helpful guidelines for the implementation of high-performing applications even under challenging circumstances such as self-occlusions, symmetries, occlusions between multiple objects, and bad lighting conditions.
Keywords: 6D position estimation | Computer vision | Deep learning | RGB Input
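As an example of the 'bounding box prediction + PnP' family discussed in the review, the sketch below recovers a 6D pose with cv2.solvePnP from 2D detections of known 3D keypoints; the cube model, camera intrinsics, and image points are made-up values.

```python
# Sketch of PnP-based 6D pose recovery: given 2D detections of known 3D
# keypoints (corners of a 10 cm cube) and the camera intrinsics,
# cv2.solvePnP returns the object pose. All values are invented.
import cv2
import numpy as np

# 3D model keypoints in the object frame (metres).
object_points = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0],
                          [0, 0, 0.1], [0.1, 0, 0.1], [0.1, 0.1, 0.1], [0, 0.1, 0.1]],
                         dtype=np.float64)
# Corresponding 2D detections in the image (pixels), e.g. from a CNN.
image_points = np.array([[320, 240], [400, 242], [398, 320], [318, 318],
                         [330, 200], [410, 202], [408, 280], [328, 278]],
                        dtype=np.float64)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
rotation_matrix, _ = cv2.Rodrigues(rvec)
print("translation (m):", tvec.ravel())
print("rotation matrix:\n", rotation_matrix)
```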
Abstract: Despite the great potential of Virtual Reality (VR) to arouse emotions, no VR affective databases are available, as there are for pictures, videos, and sounds. In this paper, we describe the validation of ten affective interactive Virtual Environments (VEs) designed to be used in Virtual Reality. These environments are related to five emotions. The testing phase included two different experimental setups to deliver the overall experience. The setups did not include any immersive VR technology, because of the ongoing COVID-19 pandemic, but the VEs were designed to run on stereoscopic visual displays. We collected measures related to the participants’ emotional experience based on six discrete emotional categories plus neutrality, and we included an assessment of the sense of presence related to the different experiences. The results showed how the scenarios can be differentiated according to the emotion aroused. Finally, the comparison between the two experimental setups demonstrated high reliability of the experience and strong adaptability of the scenarios to different contexts of use.
Abstract: Driver inattention is the primary cause of vehicle accidents; hence, manufacturers have introduced systems to support the driver and improve safety. Nonetheless, advanced driver assistance systems (ADAS) must be properly designed so that the feedback they provide does not become a potential source of distraction for the driver. In the present study, an experiment on auditory and haptic ADAS was conducted with 11 participants, whose attention was monitored during their driving experience. An RGB-D camera was used to acquire the drivers’ face data. Subsequently, these images were analyzed using a deep learning-based approach, i.e., a convolutional neural network (CNN) specifically trained to perform facial expression recognition (FER). Analyses were carried out to assess possible relationships between these results and both ADAS activations and event occurrences, i.e., accidents. A correlation between attention and accidents emerged, whilst facial expressions and ADAS activations were found to be uncorrelated; thus, no evidence that the designed ADAS are a possible source of distraction was found. In addition to the experimental results, the proposed approach proved to be an effective tool to monitor the driver through the use of non-invasive techniques.
Keywords: ADAS | CNN | DADA | Deep learning | Driver’s attention | RGB-D camera
Abstract: In recent years, information and communication technologies (ICT) have played a significant role in all aspects of modern society and have impacted socioeconomic development in sectors such as education, administration, business, medical care and agriculture. The benefits of such technologies in agriculture can be appreciated only if farmers use them. In order to predict and evaluate the adoption of these new technological tools, the technology acceptance model (TAM) can be a valid aid. This paper identifies the most commonly used external variables in e-learning, agriculture and virtual reality applications for further validation in an e-learning tool designed for EU farmers and agricultural entrepreneurs. Starting from a literature review of the technology acceptance model, the analysis based on Quality Function Deployment (QFD) shows that computer self-efficacy, individual innovativeness, computer anxiety, perceived enjoyment, social norm, content and system quality, experience and facilitating conditions are the most common determinants of technology acceptance. Furthermore, the findings show that the external variables have a different impact on the two main beliefs of the TAM model, Perceived Usefulness (PU) and Perceived Ease of Use (PEOU). This study is expected to provide theoretical support for academics when determining the variables to be included in TAM extensions.
Keywords: Agriculture | E-learning | QFD | TAM | Technology acceptance | Virtual reality
Abstract: Background and aim of the work: Implant dislocation in total hip arthroplasty (THA) is a common concern among orthopedic surgeons and represents the most frequent complication after primary implantation. Several causes could be responsible for the dislocation, including the malpositioning of the components. Conventional imaging techniques frequently fail to detect the mechanical source of dislocation, mainly because they cannot reproduce a dynamic evaluation of the components. The purpose of this study was to elaborate a diagnostic tool capable of virtually assessing whether the range of movement (ROM) of a THA is free from anterior and/or superior mechanical impingement. The ultimate aim is to give the surgeon the possibility to weigh the mechanical contribution in a THA dislocation. Methods: A group of patients who underwent THA revision for acute dislocation was compared to a group of non-dislocating THA patients. CT scans and a virtual model of each patient were obtained. A software tool called “Prosthesis Impingement Simulator (PIS)” was developed to simulate the ROM of the prosthetic hip. The ROM free of mechanical impingement was compared between the two groups. Results: The PIS test could detect the dislocations with a sensitivity of 71.4% and a specificity of 85.7%. Fisher’s exact test showed a p-value of 0.02. Conclusion: The PIS seems to be an effective tool for the determination of hip prosthetic impingement, as the main aid of the software is the exclusion of mechanical causes in the event of a dislocation.
Keywords: Dislocation | Impingement | Range of motion | Revision surgery | Software | THA | Total hip arthroplasty
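The reported diagnostic figures can be reproduced from a 2x2 contingency table, as sketched below; the counts are illustrative choices that yield the same sensitivity and specificity, so the resulting p-value will not match the published one, since the true group sizes are not given here.

```python
# Sketch: sensitivity, specificity, and Fisher's exact test from a 2x2 table.
# Counts are illustrative (5/7 sensitivity, 6/7 specificity), not the study data.
import numpy as np
from scipy.stats import fisher_exact

#                 dislocated   not dislocated
table = np.array([[5,            1],    # PIS predicts impingement
                  [2,            6]])   # PIS predicts free ROM

tp, fp = table[0]
fn, tn = table[1]
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
_, p_value = fisher_exact(table)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}, "
      f"Fisher p = {p_value:.3f}")
```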
Abstract: Today, surgical operations are less invasive than they were a few decades ago and, in medicine, there is a growing trend towards precision surgery. Among many technological advancements, augmented reality (AR) can be a powerful tool for improving surgical practice through its ability to superimpose the 3D geometrical information of the pre-planned operation over the surgical field, as well as medical and instrumental information gathered from operating room equipment. AR is fundamental to reaching new standards in maxillofacial surgery. Surgeons will no longer need to shift their focus from the patient to the monitors. Osteotomies will not require physical tools to be fixed to the patient's bones as guides for resections. Handling grafts and 3D models directly in the operating room will permit fine-tuning of the procedure before harvesting the implant. This article aims to study the application of AR head-mounted displays (HMD) in three operative scenarios (oncological and reconstructive surgery, orthognathic surgery, and maxillofacial trauma surgery) by means of a quantitative approach based on the Quality Function Deployment (QFD) tool to determine their requirements. The article provides an evaluation of the readiness degree of the HMDs currently on the market and highlights the features they lack.
Keywords: Computer-assisted surgery | Head mounted display | Maxillofacial surgery | Precision medicine | Quality function deployment
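A toy sketch of the QFD logic used to rate the devices is given below: surgeon requirements are weighted, each HMD is scored against them through a relationship matrix, and devices are ranked by the weighted sum; the requirement names, weights, and scores are invented examples, not the study's actual QFD.

```python
# Toy QFD scoring: weighted requirements x relationship matrix -> device ranking.
import numpy as np

# Importance of each surgeon requirement on a 1-5 scale (invented):
# [field of view, tracking accuracy, comfort, sterility]
weights = np.array([5, 5, 3, 4])

# Rows: requirements; columns: candidate HMDs (device A, B, C); 1/3/9 scale.
relationship = np.array([
    [9, 3, 3],
    [3, 9, 3],
    [3, 3, 9],
    [1, 3, 3],
])

scores = weights @ relationship                # weighted score per device
for device, score in zip(["HMD A", "HMD B", "HMD C"], scores):
    print(f"{device}: {score}")
```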
Abstract: The development of new methods for the correct disposal of waste is unavoidable for any city that aims to become eco-friendly, and the management of infectious waste is no exception. In the modern era, the treatment and disposal of infectious waste should be seen as an opportunity to generate renewable energy, improve resource efficiency, and, above all, improve the population's quality of life. Northern Italy currently produces 66,600 tons/year of infectious waste, mostly treated in incineration plants. This research aims to explore a more ecological and sustainable solution, thereby contributing one more step toward achieving better cities for all. In particular, this paper presents a conceptual design of the main sterilization chamber for infectious waste. The methodology selected was Design Thinking (DT), since it has a user-centered approach which allows for co-design and the inclusion of the target population. This study demonstrates the possibility of obtaining feasible results based on the user's needs through the application of DT as a framework for engineering design.
Keywords: Design thinking | Eco-friendly city | Infectious waste | Resource efficiency | Sustainability | Waste management
Abstract: Student engagement allows educational institutions to make better decisions regarding teaching methodologies, methods for evaluating the quality of education, and ways to provide timely feedback. Due to the COVID-19 pandemic, identifying cognitive student engagement in distance learning has been a challenge for higher education institutions. In this study, we implemented a non-self-report method based on students’ heart rate data to identify cognitive engagement during active learning activities. Additionally, as a supplementary tool, we applied a previously validated self-report method. This study was performed during distance learning lessons with a group of university students in Bogota, Colombia. After data analysis, we validated five hypotheses and compared the results from both methods. The results confirmed that the heart rate during active learning activities differed from the baseline in a statistically significant way, and this variance could be positive or negative. In addition, the results show that if students are advised in advance that they will have to carry out a new task after a passive learning activity (such as a video projection), their heart rate tends to increase and, consequently, their cognitive engagement also increases. We expect this study to provide input for future research assessing student cognitive engagement using physiological parameters as a tool.
Keywords: Active learning | COVID-19 | Distance learning | Heart rate | Higher education | Non-self-report | Self-report | Student engagement
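The core comparison, each student's heart rate during an active learning activity against their own baseline, can be sketched with a paired test as follows; the values and group size are synthetic placeholders.

```python
# Sketch: paired comparison of heart rate during an activity vs. baseline.
# Synthetic values; the assumed group size is arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_students = 24                                            # assumed group size
baseline_hr = rng.normal(72, 6, n_students)                # beats per minute at rest
activity_hr = baseline_hr + rng.normal(4, 5, n_students)   # during the task

t_stat, p_value = stats.ttest_rel(activity_hr, baseline_hr)
mean_delta = np.mean(activity_hr - baseline_hr)
print(f"mean change = {mean_delta:+.1f} bpm, t = {t_stat:.2f}, p = {p_value:.4f}")
```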
Abstract: Purpose: The current study aimed to propose a Deep Learning (DL) and Augmented Reality (AR) based solution for in-vivo robot-assisted radical prostatectomy (RARP), to improve the precision of a previously published work from our group. We implemented a two-step automatic system to align a 3D virtual ad-hoc model of a patient’s organ with its 2D endoscopic image, to assist surgeons during the procedure. Methods: This approach was carried out using a Convolutional Neural Network (CNN) based structure for semantic segmentation and a subsequent elaboration of the obtained output, which produced the parameters needed for attaching the 3D model. We used a dataset obtained from 5 endoscopic videos (A, B, C, D, E), selected and tagged by our team’s specialists. We then evaluated the best-performing couple of segmentation architecture and neural network and tested the overlay performance. Results: U-Net stood out as the most effective architecture for segmentation. ResNet and MobileNet obtained similar Intersection over Union (IoU) results, but MobileNet was able to perform almost twice as many operations per second. This segmentation technique outperformed the results of the former work, obtaining an average IoU for the catheter of 0.894 (σ = 0.076) compared to 0.339 (σ = 0.195). These modifications led to an improvement also in the 3D overlay performance, in particular in the Euclidean Distance between the predicted and actual model’s anchor point, from 12.569 (σ = 4.456) to 4.160 (σ = 1.448), and in the Geodesic Distance between the predicted and actual model’s rotations, from 0.266 (σ = 0.131) to 0.169 (σ = 0.073). Conclusion: This work is a further step towards the adoption of DL and AR in the surgical domain. In future works, we will overcome the limits of this approach and further improve every step of the surgical procedure.
Keywords: Deep learning | Intra-operative | Neural network | Semantic segmentation
Abstract: Pattern recognition system developers have looked in multiple directions over the years and designed a broad spectrum of methodologies for face identification and verification, both in 2D and 3D. These techniques rely on sound methods and experimentation and currently give high to excellent recognition rates in terms of performance. Nonetheless, it seems that the best-performing face recognition system, especially when familiar faces are involved, is still the human being, able to detect known faces in the wild, in the presence of occlusions or extreme light contrast, caricatures, sketches, partial views, and blurred images. This is one of the manifold reasons why the human visual system, at eye and brain level, and face perception techniques are currently being studied by neuroscientists and psychologists, with the aim of uncovering the processes underlying human vision. The purpose of this work is to review the current literature on perception foundations and related biologically-inspired methodologies for face recognition.
Keywords: Biologically-inspired computing | Face expression recognition | Face perception | Face recognition | Human vision | Visual system
Abstract: Supporting education and training initiatives has been identified as an effective way to address Sustainable Development Challenges. In this sense, e-learning stands out as one of the most viable alternatives considering its advantages in terms of resources, time management, and geographical location. Understanding the reasons that move users to adopt these technologies is critical for achieving the desired social objectives. The Technology Acceptance Model (TAM) provides valuable guidelines to identify the variables shaping users’ acceptance of innovations. The present study aims to validate a TAM extension designed for FARMER 4.0, an e-learning application in the agricultural sector. Findings suggest that content quality (CQ) is the primary determinant of farmers’ and agricultural entrepreneurs’ perception of the tool’s usefulness (PU). Furthermore, experience (EXP) and self-efficacy (SE) shape potential users’ perceptions about ease of use (PEOU). This study offers helpful insight into the design and development of e-learning applications in the farming sector and provides empirical evidence of TAM’s validity to assess technology acceptance.
Keywords: Agriculture | E-learning applications | TAM | Technology acceptance
Abstract: Patients with severe facial deformities present serious dysfunctions along with an unsatisfactory aesthetic facial appearance. Several methods have been proposed to plan the interventions specifically on the patient’s needs, but none of these seems to achieve a sufficient level of accuracy in predicting the resulting facial appearance. In this context, a deep knowledge of what occurs in the face after bony movements in specific surgeries would give the possibility to develop more reliable systems. This study aims to propose a novel 3D approach for the evaluation of soft tissue zygomatic modifications after zygomatic osteotomy; geometrical descriptors usually involved in face analysis tasks, i.e., face recognition and facial expression recognition, are here applied to the soft tissue malar region to detect changes in surface shape. As ground truth for zygomatic changes, a zygomatic openness angular measure is adopted. The results show a high sensitivity of the geometrical descriptors in detecting shape modifications of the facial surface, outperforming the results obtained from the angular evaluation.
Keywords: 3D face analysis | Geometrical descriptors | Malar augmentation | Orthognathic surgery | Soft tissue prediction | Zygomatic osteotomy
Abstract: Background: The aim of this prospective study is to objectively assess 3D soft tissue and bone changes of the malar region by using the malar valgization osteotomy in concomitant association with orthognathic surgery. Materials and methods: From January 2015 to January 2018, 10 patients who underwent single-stage bilateral malar valgization osteotomy in conjunction with maxillo-mandibular orthognathic procedures for aesthetic and functional correction were evaluated. Clinical and surgical reports were collected and patient satisfaction was evaluated with a VAS score. For each patient, maxillofacial CT scans were collected 1 month preoperatively (T0) and 6 months after the operation (T1). DICOM data were imported and elaborated in the software MatLab, which creates a 3D soft tissue model of the face. 3D bone changes were assessed by importing DICOM data into the iPlan (BrainLAB 3.0) software, and the superimposition process was achieved using autofusion. Descriptive statistical analyses were obtained for soft tissue and bone changes. Results: Considering the bone assessment, the comparison by superimposition between T0 and T1 showed an increase of the distance between the bilateral malar prominences (Pr – Pl) and a slight forward movement (87.65 ± 1.55 to 97.60 ± 5.91); p-value 0.007. All of the patients had improvement of the α angle, ranging from 36.30 ± 1.70 to 38.45 ± 0.55, p-value 0.04 (αr), and from 36.75 ± 1.58 to 38.45 ± 0.35, p-value 0.04 (αl). The distance S increased from 78.05 ± 2.48 to 84.2 ± 1.20, p-value 0.04 (Sr), and from 78.65 ± 2.16 to 82.60 ± 0.90 (Sl), p-value 0.03. Considering the soft tissue, the comparison by superimposition between T0 and T1 showed an antero-lateral movement (p-value 0.008 NVL; p-value 0.001 NVR) of the malar bone projection together with an increase in width measurements (p-value 0.05 VL; p-value 0.01 VR). Angular measurements confirmed the pattern of the bony changes (p-value 0.034 αL; p-value 0.05 αR). Conclusion: The malar valgization osteotomy in conjunction with orthognathic surgery is effective in improving zygomatic projection, contributing to a balanced facial correction in midface hypoplasia. 3D geometry-based volume and surface analysis demonstrated an increase in the transversal and forward directions. The osteotomy can be safely performed in conjunction with orthognathic procedures.
Keywords: Malar augmentation | Orthognathic surgery | Zygomatic osteotomy
Abstract: Computer graphics is, in many cases, about visualizing what you cannot see. However, virtual reality (VR), from its beginnings, aimed at stimulating all human senses: not just the visual channel. Moreover, this set of multisensory stimuli allows users to feel present and able to interact with the virtual environment. In this way, VR aims to deliver experiences that are comparable to real-life ones in their level of detail and stimulation, intensity, and impact. Hence, VR is not only a means to see, but also to feel differently. With the spreading of VR technologies, there is a growing interest in using VR to evoke emotions, including positive and negative ones. This article discusses the current possibilities and the authors' experience collected in the field in trying to elicit emotions through VR. It explores how different design aspects and features can be used, describing their contributions and benefits in the development of affective VR experiences. This work aims at raising awareness of the necessity to consider and explore the full design space that VR technology provides in comparison to traditional media. Additionally, it provides possible tracks of VR affective applications, illustrating how they could impact our emotions and improve our life, and providing guidelines for their development.
Abstract: Artificial Intelligence (AI) algorithms, together with a general increase in computational performance, nowadays allow exploring the use of Facial Expression Recognition (FER) as a method of recognizing human emotion through the use of neural networks. The interest in facial emotion and expression recognition in real-life situations is one of the current cutting-edge research challenges. In this context, the creation of an ecologically valid facial expression database is crucial. To this aim, a controlled experiment has been designed, in which thirty-five subjects aged 18–35 were asked to react spontaneously to a set of 48 validated images from two affective databases, IAPS and GAPED. According to the Self-Assessment Manikin, participants were asked to rate images on a 9-point visual scale for valence and arousal. Furthermore, they were asked to select one of the six Ekman's basic emotions. During the experiment, an RGB-D camera was also used to record the spontaneous facial expressions aroused in participants, storing both the color and the depth frames to feed a Convolutional Neural Network (CNN) performing FER. In every case, the prevalent emotion pointed out in the questionnaires matched the expected emotion. The CNN obtained a recognition rate of 75.02%, computed by comparing the neural network results with the evaluations given by a human observer. These preliminary results have confirmed that this experimental setting is an effective starting point for building an ecologically valid database.
Keywords: 3D facial database | Affective database | Basic emotions | Ecologically-valid data | Facial expression recognition | Human-robot interaction
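The abstract above describes feeding both color and depth frames to a CNN for facial expression recognition. As an illustrative sketch only, and not the authors' actual architecture, a minimal PyTorch model accepting a 4-channel RGB-D face crop and predicting the six basic emotions plus neutral could look as follows; layer sizes, input resolution, and the class count are assumptions.

import torch
import torch.nn as nn

class RGBDExpressionNet(nn.Module):
    """Minimal CNN taking a 4-channel RGB-D face crop (assumed 64x64)."""
    def __init__(self, n_classes=7):  # 6 basic emotions + neutral (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                       # x: (batch, 4, 64, 64)
        return self.classifier(self.features(x).flatten(1))

# Example with one dummy RGB-D frame
logits = RGBDExpressionNet()(torch.randn(1, 4, 64, 64))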
Abstract: This article reports on the results of research aimed at translating biometric 3D face recognition concepts and algorithms into the field of protein biophysics in order to precisely and rapidly classify morphological features of protein surfaces. Both human faces and protein surfaces are free-forms, and some descriptors used in differential geometry can be used to describe them, applying the principles of feature extraction developed for computer vision and pattern recognition. The first part of this study focused on building the protein dataset using a simulation tool and performing feature extraction using novel geometrical descriptors. The second part tested the method on two examples: the first involved a classification of tubulin isotypes and the second compared tubulin with the FtsZ protein, which is its bacterial analog. An additional test involved several unrelated proteins. Different classification methodologies have been used: a classic approach with a support vector machine (SVM) classifier and unsupervised learning with a k-means approach. The best result was obtained with SVM and the radial basis function kernel. The results are significant and competitive with state-of-the-art protein classification methods. This leads to a new methodological direction in protein structure analysis.
Keywords: 3D face analysis | differential geometry | geometrical descriptors | machine learning | protein classification | support vector machine | tubulin
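The classification stage described above (geometrical descriptors extracted from protein surfaces, then a supervised SVM with RBF kernel compared against unsupervised k-means) can be sketched with scikit-learn as below; the feature matrix X and labels y are random placeholders standing in for the descriptor vectors and isotype classes, which are not provided here.

import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))      # placeholder descriptor vectors (one row per surface)
y = rng.integers(0, 2, size=120)    # placeholder class labels (e.g. two isotypes)

svm = SVC(kernel="rbf", C=1.0, gamma="scale")           # supervised route
print("SVM accuracy:", cross_val_score(svm, X, y, cv=5).mean())

km = KMeans(n_clusters=2, n_init=10).fit(X)             # unsupervised route
print("k-means cluster sizes:", np.bincount(km.labels_))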
Abstract: Facial Emotion Recognition (FER) is the automatic processing of human emotions by means of facial expression analysis[1]. The most common approach exploits 3D Face Descriptors (3D-FD)[2], which are derived from depth maps[3] by using mathematical operators. In recent years, Convolutional Neural Networks (CNNs) have been successfully employed in a wide range of tasks, including large-scale image classification systems, and have been used to overcome the hurdles in facial expression classification. Based on previous studies, the purpose of the present work is to analyze and compare the abstraction level of 3D face descriptors with the abstraction in deep CNNs. Experimental results suggest that 3D face descriptors have an abstraction level comparable with the features extracted in the fourth layer of the CNN, the layer of the network having the highest correlations with emotions.
Keywords: Abstraction | CNN | Deep learning | Explainable AI | Facial Emotion Recognition | FER
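One simple way to illustrate the kind of comparison described above, under assumptions and without reproducing the authors' analysis, is to correlate a hand-crafted 3D descriptor value with the mean activation of each convolutional layer across a set of faces; the layer with the strongest correlation indicates where the two representations align. The arrays below are hypothetical.

import numpy as np

# Hypothetical data: one descriptor value per face, and the mean activation
# of 5 CNN layers for the same faces (shape: n_faces x n_layers).
rng = np.random.default_rng(1)
descriptor = rng.normal(size=200)          # e.g. mean shape index per face (assumed)
layer_acts = rng.normal(size=(200, 5))     # mean activation per layer (assumed)

# Pearson correlation between the descriptor and each layer's activations.
for layer in range(layer_acts.shape[1]):
    r = np.corrcoef(descriptor, layer_acts[:, layer])[0, 1]
    print(f"layer {layer + 1}: r = {r:+.2f}")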
Abstract: Purpose: Suspected fractures are among the most common reasons for patients to visit emergency departments and can often be difficult to detect and analyze on film scans. Therefore, we aimed to design a Deep Learning-based tool able to help doctors in the diagnosis of bone fractures, following the hierarchical classification proposed by the Arbeitsgemeinschaft für Osteosynthesefragen (AO) Foundation and the Orthopaedic Trauma Association (OTA). Methods: 2453 manually annotated images of the proximal femur were used for the classification into different fracture types (1133 unbroken femur, 570 type A, 750 type B). Secondly, the type A fractures were further classified into types A1, A2, A3. Two approaches were implemented: the first is a fine-tuned InceptionV3 convolutional neural network (CNN), used as a baseline for our own proposed approach; the second is a multistage architecture composed of successive CNNs in cascade, perfectly suited to the hierarchical structure of the AO/OTA classification. Gradient Class Activation Maps (Grad-CAM) were used to visualize the most relevant areas of the images for classification. The averaged ability of the CNN was measured with accuracy, area under the receiver operating characteristics curve (AUC), recall, precision and F1-score. The averaged ability of the orthopedists with and without the help of the CNN was measured with accuracy and Cohen's Kappa coefficient. Results: We obtained an averaged accuracy of 0.86 (CI 0.84−0.88) for the three-class classification and 0.81 (CI 0.79−0.82) for the five-class classification. The average accuracy improvement of specialists using the CAD (Computer Assisted Diagnosis) system, compared to working without it, was 14%. Conclusion: We showed the potential of using a CAD system based on a CNN for improving diagnosis accuracy and for helping students with a lower level of expertise. We started our work with proximal femur fractures and we aim to extend it to all bone segments in the future, in order to implement a tool that could be used in everyday hospital routine.
Keywords: Bone fracture | Convolutional neural network | Deep Learning | Orthopaedics | X-Ray
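The multistage architecture mirrors the hierarchical AO/OTA scheme: a first network separates unbroken / type A / type B, and a second network refines type A into A1/A2/A3. A minimal sketch of the cascade control flow is shown below; the model objects, their predict method, and the label strings are assumptions for illustration, not the authors' code.

def classify_femur(image, stage1_model, stage2_model):
    """Hierarchical prediction following the AO/OTA structure (sketch)."""
    coarse = stage1_model.predict(image)     # 'unbroken', 'A' or 'B' (assumed labels)
    if coarse != "A":
        return coarse                        # no further refinement needed
    return stage2_model.predict(image)       # 'A1', 'A2' or 'A3' (assumed labels)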
Abstract: Recently, a wide variety of applications integrating 3D functionalities has been developed. The advantages given by the possibility of relying on depth information allow developers to design new algorithms and to improve existing ones. In particular, for what concerns face morphology, 3D has made it possible to obtain face depth maps that are very close to reality and, consequently, to improve the starting point for further analyses such as Face Detection, Face Authentication, Face Identification and Face Expression Recognition. The development of the aforementioned applications would have been impossible without the progress of sensor technologies for obtaining 3D information. Several solutions have been adopted over time. In this paper, emphasis is put on passive stereoscopy, structured light, time-of-flight (ToF) and active stereoscopy, namely the technologies most used for 3D camera design according to the literature. The aim of this article is to investigate facial applications and to examine 3D camera technologies in order to suggest guidelines for choosing the correct 3D sensor according to the application to be developed.
Keywords: 3D cameras | 3D face analysis | Sensors | Stereoscopy | Structured light | ToF
Abstract: Purpose: The current study aimed to systematically review the literature addressing the use of deep learning (DL) methods in intraoperative surgery applications, focusing on data collection, the objectives of these tools and, more technically, the DL-based paradigms utilized. Methods: A literature search of classic databases was performed: we identified, with the use of specific keywords, a total of 996 papers. Among them, we selected 52 for effective analysis, focusing on articles published after January 2015. Results: The preliminary results of the implementation of DL in the clinical setting are encouraging. Almost all the surgery sub-fields have seen the advent of artificial intelligence (AI) applications, and the results outperformed the previous techniques in the majority of cases. From these results, a conceptualization of an intelligent operating room (IOR) is also presented. Conclusion: This evaluation outlined how AI and, in particular, DL are revolutionizing the surgery field, with numerous applications such as context detection and room management. This process is evolving year by year into the realization of an IOR, equipped with technologies perfectly suited to drastically improve the surgical workflow.
Keywords: deep learning | intraoperative | neural network | surgical workflow
Abstract: Background and Objective. We present an original approach to the development of augmented reality (AR) real-time solutions for robotic surgery navigation. The surgeon operating the robotic system through a console and a visor experiences reduced awareness of the operatory scene. In order to improve the surgeon's spatial perception during robot-assisted minimally invasive procedures, we provide him/her with a solid automatic software system to position, rotate and scale in real time the 3D virtual model of a patient's organ aligned over its image captured by the endoscope. Methods. We observed that the surgeon may benefit differently from the 3D augmentation during each stage of the surgical procedure; moreover, each stage may present different visual elements that provide specific challenges and opportunities to exploit for the implementation of organ detection strategies. Hence, we integrate different solutions, each dedicated to a specific stage of the surgical procedure, into a single software system. Results. We present a formal model that generalizes our approach, describing a system composed of integrated solutions for AR in robot-assisted surgery. Following the proposed framework, an application has been developed which is currently used during in vivo surgery, for extensive testing, by the Urology unit of the San Luigi Hospital in Orbassano (TO), Italy. Conclusions. The main contribution of this paper is in presenting a modular approach to the tracking problem during in vivo robotic surgery, whose efficacy from a medical point of view has been assessed in cited works. The segmentation of the whole procedure into a set of stages allows associating the best tracking strategy to each of them, as well as re-utilizing implemented software mechanisms in stages with similar features.
Keywords: Augmented reality | Robot-Assisted surgery | Surgical navigation
Abstract: This paper presents a learning outcomes-based methodology to produce a summative assessment that can be used in any course at any educational level. It takes into consideration the European Qualifications Framework (EQF) to define the learning outcomes, Bloom's taxonomy to define the assessment questions, and QR codes to help manage large class sizes. The methodology has been applied in a case study regarding the technical drawing course of a BSc Engineering programme at a Faculty of Engineering in Italy. In general, the implementation of the new assessment, during the two academic years of analysis, has shown good results in terms of reduction of failures (7% in the first year and 3.9% in the second year), improvement of the weighted average mark (4.6%), reduction of the minimum mark obtained (8.1%), increase of the maximum mark obtained (3.1%) and a time reduction of 48% for uploading marks to the electronic register.
Keywords: assessment questions | Bloom’s taxonomy | engineering education | European Qualifications Framework (EQF) | technical drawing
Abstract: In recent years, bone fracture detection and classification have been widely discussed topics and many researchers have proposed different methods to tackle this problem. Despite this, a universal approach able to classify all the fractures in the human body has not yet been defined. We aim to analyze and evaluate a selection of papers, chosen according to their representative approach, in which the authors applied different deep learning techniques to classify bone fractures, in order to select the strengths of each of them and try to delineate a generalized strategy. Each study is summarized and evaluated using a radar graph with six values: area under the curve (AUC), test accuracy, sensitivity, specificity, dataset size and labelling reliability. In addition, we defined the key points which should be taken into account when trying to accomplish this purpose and we compared each study with our baseline. In recent years, deep learning and, in particular, the convolutional neural network (CNN), has achieved results comparable to those of humans in bone fracture classification. Adopting a correct generalization, we are reasonably sure that a computer-aided diagnosis (CAD) system, correctly designed to assist doctors, would save a considerable amount of time and would limit the number of wrong diagnoses.
Keywords: Bone fracture | CAD system | Deep learning | Neural network | Orthopedics | X-ray
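A radar graph with the six axes named above (AUC, test accuracy, sensitivity, specificity, dataset size, labelling reliability) can be drawn with matplotlib as in the sketch below; the values are invented for illustration and each axis is assumed to be normalised to [0, 1], which is not necessarily the scaling used in the study.

import numpy as np
import matplotlib.pyplot as plt

labels = ["AUC", "Accuracy", "Sensitivity", "Specificity", "Dataset size", "Label reliability"]
values = [0.90, 0.85, 0.80, 0.88, 0.60, 0.70]    # invented example scores in [0, 1]

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
values_closed = values + values[:1]              # close the polygon
angles_closed = angles + angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles_closed, values_closed)
ax.fill(angles_closed, values_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(labels)
plt.show()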
Abstract: Introduction: As we enter the era of "big data," an increasing amount of complex health-care data will become available. These data are often redundant, "noisy," and characterized by wide variability. In order to offer a precise and transversal view of a clinical scenario, artificial intelligence (AI) with machine learning (ML) algorithms and artificial neural networks (ANNs) has been adopted, with a promising wide diffusion in the near future. The present work aims to provide a comprehensive and critical overview of the current and potential applications of AI and ANNs in urology. Evidence Acquisition: A non-systematic review of the literature was performed by screening Medline, PubMed, the Cochrane Database, and Embase to detect pertinent studies regarding the application of AI and ANNs in urology. Evidence Synthesis: The main application of AI in urology is the field of genitourinary cancers. Focusing on prostate cancer, AI was applied for the prediction of prostate biopsy results. For bladder cancer, the prediction of recurrence-free probability and diagnostic evaluation were analysed with ML algorithms. For kidney and testis cancer, anecdotal experiences were reported for staging and prediction of disease recurrence. More recently, AI has been applied to non-oncological diseases such as stones and functional urology. Conclusions: AI technologies are playing a growing role in health care, but, up to now, their "real-life" implementation remains limited. However, in the near future, the potential of the AI-driven era could change clinical practice in urology, improving overall patient outcomes.
Keywords: Artificial intelligence | Big data | Urologic neoplasms | Urology
Abstract: In today’s modern competitive market, the purpose of every business is to adopt strategies and technologies to manage their interactions with their current and potential customers. This is possible only when an organization listens to the customer’s voice, pitching customer requirements with the best of their inputs. Kano’s model offers an effective method for both industries and academic research in classifying different customer requirements into different categories based on their impact on Customer Satisfaction, obtaining competitive and accurate information about customer requirements. The aim of this chapter is to try to facilitate practitioners in future research about the further development of Kano’s model and organizations in the future use of the Kano approach to correctly identify customer requirements and channelize their resources in the right direction.
Abstract: Robot-assisted radical prostatectomy (RARP) has become a standardized practice in contemporary prostate cancer surgical procedures. Nowadays, the use of tailored surgical procedures in urologic surgery aims to maximize treatment efficacy while minimizing its impact on patient functions and health-related quality-of-life parameters. Augmented reality (AR) technology appears as a significant development in Image Guided Surgery (IGS), integrating surgical navigation with 3D virtual organ models registered onto the real patient's anatomy. In particular, AR provides surgeons the ability to see through solid objects; as such, it has been exploited in different surgical specialties. In this paper, we present the development of a software system augmenting the spatial navigation of the surgical environment allowed by surgical robots. The application is able to visualize the 3D virtual model of the organ (prostate and kidneys) targeted by the surgical procedure, overlay it over its real counterpart as captured by the endoscope camera, using registration and tracking techniques in real time, and stream the augmentation to the surgeon.
Keywords: Augmented reality (AR) | Image Guided Surgery (IGS) | Minimally invasive surgery (MIS) | Robotic surgery
Abstract: This work proposes an innovative method for evaluating users' engagement, combining the User Engagement Scale (UES) questionnaire and a facial expression recognition (FER) system, active research topics of increasing interest in the human-computer interaction (HCI) domain. The subject of the study is a 3D simulator that reproduces a virtual FabLab in which users can approach and learn 3D modeling software and 3D printing. During the interaction with the virtual environment, a structured-light camera acquires the face of the participant in real time, to catch their spontaneous reactions and compare them with the answers to the UES closed-ended questions. FER methods allow overcoming some intrinsic limits in the adoption of questioning methods, such as the non-sincerity of the interviewees and the lack of correspondence with facial expressions and body language. A convolutional neural network (CNN) has been trained on the Bosphorus database (DB) to perform expression recognition and the classification of the video frames into three classes of engagement (deactivation, average activation, and activation) according to the model of emotion developed by Russell. The results show that the two methodologies can be integrated to evaluate user engagement, to combine weighted answers and spontaneous reactions, and to increase knowledge for the design of the new product or service.
Keywords: 3D simulator | CNN | Deep learning | Facial expression recognition | Human-computer interaction | User engagement scale | User-centered design
Abstract: Common sense usually considers the assessment of female human attractiveness to be subjective. Nevertheless, in the past decades, several studies and experiments have shown that an objective component in beauty assessment exists and can be strictly related to, even if it does not coincide with, the proportions of facial features. Proportions can be studied through analysis of the face, which relies on landmarks, i.e., specific points on the facial surface which are shared by everyone, and on measurements between them. In this work, several measures have been gathered from studies in the literature considering datasets of beautiful women, to build a set of measures that can be defined as suggestive of female attractiveness. The resulting set consists of 29 measures applied to a public dataset, the Bosphorus database, whose faces have been both analyzed by the developed methodology based on the expanded set of measures and judged by human observers. Results show that the set of chosen measures is significant in terms of attractiveness evaluation, confirming the key role of proportions in beauty assessment; furthermore, the identified measures have been sorted to identify the most significant canons involved in the evaluation.
Keywords: 3D landmarks | Attractiveness | Face analysis | Face proportions | Features extraction
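Proportion-based measures of the kind described above reduce to Euclidean distances between 3D landmarks and ratios between those distances. A minimal sketch follows, with hypothetical landmark coordinates and example ratios that are not the 29 measures of the study.

import numpy as np

def dist(a, b):
    """Euclidean distance between two 3D landmarks."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

# Hypothetical landmark coordinates (x, y, z) in millimetres
landmarks = {
    "exocanthion_r": (-45.0, 30.0, 10.0), "exocanthion_l": (45.0, 30.0, 10.0),
    "cheilion_r": (-25.0, -35.0, 12.0), "cheilion_l": (25.0, -35.0, 12.0),
    "nasion": (0.0, 35.0, 18.0), "gnathion": (0.0, -70.0, 8.0),
}

eye_width = dist(landmarks["exocanthion_r"], landmarks["exocanthion_l"])
mouth_width = dist(landmarks["cheilion_r"], landmarks["cheilion_l"])
face_height = dist(landmarks["nasion"], landmarks["gnathion"])

print("mouth/eye width ratio:", mouth_width / eye_width)
print("eye width/face height ratio:", eye_width / face_height)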
Abstract: Due to its versatility, virtual technology is being widely employed in different domains, from industry to amusement. The possibility to adopt this technology in early product/service design is going to bring positive effects such as the reduction of costs associated with the production of physical prototypes and the generation of a more effective knowledge of users' feedback. This study proposes a preliminary methodology to evaluate users' engagement in interacting with a virtual environment that consists of the integration between a self-report method (the user engagement scale questionnaire) and a method based on facial expression recognition. Results reported in this paper show that the two methodologies generate different types of knowledge which can be used to fit users' needs and expectations. Even if focused on a specific case study, i.e., the evaluation of the engagement in a virtual learning environment, this paper aims to propose a methodology that can be applied to all kinds of virtual products.
Keywords: FER (facial expression recognition) | Product life cycle | Self-reports | Student engagement | Virtual learning environment
Abstract: Augmented Reality began to be used in the last decade to guide and assist the surgeon during minimally invasive surgery. In many AR-based surgical navigation systems, a patient-specific 3D model of the surgical procedure target organ is generated from preoperative images and overlaid on the real views of the surgical field. We are currently developing an AR-based navigation system to support robot-assisted radical prostatectomy (AR-RARP) and in this paper we address the registration and localization challenge of the 3D prostate model during the procedure, evaluating the performances of a Successive Quadratic Programming (SQP) non-linear optimization technique used to align the coordinates of a deformable 3D model to those of the surgical environment. We compared SQP results in solving the 3D pose problem with those provided by the Matlab Computer Vision Toolkit perspective-three-point algorithm, highlighting the differences between the two approaches.
Keywords: Augmented Reality | Computed-assisted surgery | Performance evaluation | Prostatectomy | Robotic surgical procedures | Successive Quadratic Programming
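The SQP-based alignment mentioned above can be sketched with scipy.optimize.minimize(method="SLSQP"): the pose parameters (rotation and translation) are optimised so that the projected 3D model points match the 2D points detected in the endoscope image. The pinhole projection, focal length, and point correspondences below are assumptions for illustration, not the authors' formulation.

import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

f = 1000.0                                   # assumed focal length (pixels)
model_pts = np.random.rand(6, 3)             # placeholder 3D model points
image_pts = np.random.rand(6, 2) * 500       # placeholder 2D detections

def reprojection_error(pose):
    """pose = [rx, ry, rz, tx, ty, tz]; simple pinhole projection, no distortion."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = model_pts @ R.T + pose[3:]          # model points in camera frame
    proj = f * cam[:, :2] / cam[:, 2:3]       # perspective projection
    return np.sum((proj - image_pts) ** 2)

res = minimize(reprojection_error, x0=np.array([0, 0, 0, 0, 0, 5.0]), method="SLSQP")
print("estimated pose parameters:", res.x)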
Abstract: This study proposes a novel occlusion detection and restoration strategy. The aim is to succeed in 3D face recognition even when faces are partially occluded by external objects. The method, which relies on geometrical facial properties, is designed for managing two types of facial occlusion (eye and mouth occlusions due to hands). First, occlusions are detected and (if present) classified by considering their effects on the 3D point cloud. Then, the occluded regions are progressively removed and, finally, the non-occluded symmetrical regions are used to restore the missing information. After the restoration process, face recognition is performed relying on the restored facial information and on the localized landmarks. The landmarking methodology relies on derivatives and on 12 differential geometry descriptors. The discriminating features adopted for facial comparison include shape index histograms, Euclidean and geodesic distances between landmarks, facial curves, and nose volume. The obtained recognition rates, evaluated on the whole Bosphorus database and on our private dataset, range from 92.55% to 97.20% depending on the completeness of the data.
Keywords: 3D face | Differential geometry | Face analysis | Face recognition | Feature extraction
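The restoration step exploits facial symmetry: the intact half of the point cloud is mirrored across the sagittal plane to fill in the removed, occluded region. A minimal sketch follows, assuming the face has already been aligned so that the sagittal plane is x = 0 and that occluded points have been deleted; this is an illustration of the idea, not the paper's implementation.

import numpy as np

def restore_by_symmetry(points, occluded_side="left"):
    """Mirror the intact half of an aligned 3D face across the x = 0 plane.
    points: (N, 3) array with the occluded points already removed.
    Convention (assumed): the left half of the face has x < 0."""
    intact = points[points[:, 0] > 0] if occluded_side == "left" else points[points[:, 0] < 0]
    mirrored = intact.copy()
    mirrored[:, 0] *= -1.0                   # reflect across the sagittal plane
    return np.vstack([points, mirrored])     # original + restored points

# Example with a toy point cloud whose left side is mostly missing
cloud = np.random.randn(1000, 3)
restored = restore_by_symmetry(cloud[cloud[:, 0] > -0.1], occluded_side="left")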
Abstract: Three-dimensional technologies have had a wide diffusion in several fields of application throughout the last decades; medicine is no exception, and the interest in their introduction in clinical applications has grown with the refinement of such technologies. We focus on the application of 3D methodologies in maxillofacial surgery, where they can give concrete support in surgical planning and in the prediction of involuntary facial soft-tissue changes after planned bony repositioning. The purpose of this literature review is to offer a panorama of the existing prediction methods and software with a comparison of their reliability, and to propose a series of still pending issues. Various software packages are available for surgical planning and for the prediction of tissue displacements, but their reliability is still an unknown variable with respect to the accuracy needed by surgeons. Maxilim, Dolphin and other common planning software provide a realistic result, but with some inaccuracies in specific areas of the face; it is also not totally clear how the software obtains the prediction and what theoretical model it is based on.
Keywords: 3D face analysis | Orthognathic surgery | Prediction methods | Soft tissue prediction | Surgical planning
Abstract: In recent years, facial expression analysis and recognition (FER) has emerged as an active research topic with applications in several different areas, including the human-computer interaction domain. Solutions based on 2D models are not entirely satisfactory for real-world applications, as they present some problems of pose variation and illumination related to the nature of the data. Thanks to technological development, 3D facial data, both still images and video sequences, have become increasingly used to improve the accuracy of FER systems. Despite the advances in 3D algorithms, these solutions still have some drawbacks that make pure three-dimensional techniques convenient only for a set of specific applications; a viable solution to overcome such limitations is adopting a multimodal 2D+3D analysis. In this paper, we analyze the limits and strengths of traditional and deep-learning FER techniques, intending to provide the research community with an overview of the results obtained looking to the near future. Furthermore, we describe in detail the most used databases addressing the problem of facial expressions and emotions, highlighting the results obtained by the various authors. The different techniques used are compared, and some conclusions are drawn concerning the best recognition rates achieved.
Keywords: 2D/3D comparison | 3D face analysis | Action units | Deep learning-based FER | Facial action coding system | Facial expression recognition
Abstract: Nowadays, facial mimicry studies have acquired great importance in the clinical domain, and 3D motion capture systems are becoming valid tools for analysing facial muscle movements, thanks to the remarkable developments achieved in the 1990s. However, the face analysis domain suffers from the lack of a valid motion capture protocol, due to the complexity of the human face. Indeed, a framework for defining the optimal marker set layout does not exist yet and, to date, researchers still use their traditional facial point sets with manually allocated markers. Therefore, the study proposes an automatic approach to compute a minimum optimized marker layout to be exploited in facial motion capture, able to simplify the marker allocation without decreasing the significance level. Specifically, the algorithm identifies the optimal facial marker layouts by selecting the subsets of linear distances among markers that allow the acted facial movements to be automatically recognized with the highest performance, through a k-nearest neighbours classification technique. The marker layouts are then extracted from these subsets. Various validation and testing phases have demonstrated the accuracy, robustness and usefulness of the custom approach.
Keywords: 3D face | Face analysis | Feature extraction | Marker optimization | Motion capture
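The optimisation described above searches for subsets of inter-marker distances that let a k-nearest-neighbour classifier recognise the acted facial movements; the markers appearing in the best subset then define the reduced layout. A minimal greedy forward-selection sketch is shown below, with a placeholder distance matrix, placeholder movement labels, and an assumed budget of five distances; the actual search strategy of the study is not reproduced.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 30))       # placeholder: 30 inter-marker distances per trial
y = rng.integers(0, 5, size=150)     # placeholder: 5 acted facial movements

selected, remaining = [], list(range(X.shape[1]))
knn = KNeighborsClassifier(n_neighbors=3)
for _ in range(5):                   # greedily add 5 distances (assumed budget)
    scores = [(cross_val_score(knn, X[:, selected + [j]], y, cv=5).mean(), j)
              for j in remaining]
    best_score, best_j = max(scores)
    selected.append(best_j)
    remaining.remove(best_j)
print("selected distance indices:", selected, "accuracy:", best_score)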
Abstract: This study proposes a novel quality function deployment (QFD) design methodology based on customers' emotions conveyed by facial expressions. The current advances in pattern recognition related to face recognition techniques have fostered cross-fertilization and pollination between this context and other fields, such as product design and human-computer interaction. In particular, current technologies for monitoring human emotions have supported the birth of advanced emotional design techniques, whose main focus is to convey users' emotional feedback into the design of novel products. As quality function deployment aims at transforming the voice of customers into engineering features of a product, it appears to be an appropriate and promising nest in which to embed users' emotional feedback with new emotional design methodologies, such as facial expression recognition. Accordingly, the present methodology consists of interviewing the user and acquiring his/her face with a depth camera (allowing three-dimensional (3D) data), clustering the face information into different emotions with a support vector machine classifier, and assigning weights to customers' needs relying on the detected facial expressions. The proposed method has been applied to a case study in the context of agriculture and validated by a consortium. The approach appears sound and capable of collecting the unconscious feedback of the interviewee.
Keywords: 3D facial expression recognition | Emotional design | Quality function deployment (QFD) | User-centred design
Abstract: Interactive learning experiences are becoming the standard for today's 'tech-savvy' generation of students and an important issue for research in instructional technology. The design and implementation of higher education incorporating interactive technologies can be difficult and often requires high levels of design knowledge. Our intent is to assist researchers, instructors and designers in identifying an effective methodology to design interactive learning contents that use recent interactive technologies, in particular 360° video, and encourage greater student engagement. In this study, 360° videos have been designed and implemented in an engineering program, but the design methodology we suggest can be applied in any industrial or educational context. The 360° videos have been evaluated by the students as highly immersive and engaging environments that surround them and offer an increased sense of presence, giving them a 360-degree view of the environment. In this type of video, viewers no longer only look at a single screen: they can point the camera lens wherever they want, allowing them to watch the video from multiple perspectives (active), rather than only from the director's point of view (passive).
Keywords: 360° video | Engineering education | Student engagement | Virtual reality
Abstract: The paper examines the effect of recent VR technologies on consumer behaviour, providing guidelines to design a rich and immersive environment that is able to deliver high-impact and memorable experiences and engage audiences and potential consumers anytime and anywhere. In order to study the impact, the paper first identifies and describes the technological characteristics of VR, in terms of the ability of VR devices to surround the user and the nature of the simulation, and consumer engagement, expressed through varying levels of cognitive, affective and behavioural manifestations. Then, a VR environment with the technological characteristics of interactivity, hypertextuality, virtuality (presence of virtual elements), modality, location specificity, mobility and connectivity has been designed. The VR environment shows a virtual supermarket in the form of a 360-degree video able to create highly immersive sensory experiences that promote the subjective presence of consumers and impress their senses, touch their hearts, and stimulate their minds. The results obtained show that applying virtual technology to marketing activities allows marketers to identify and respond to opportunities through new technologies which are faster, more effective, and lower cost, and to become more responsive to consumers' needs by providing virtual experiences where, how and when consumers want them.
Keywords: Computer-mediated environments | Consumer behaviour | Human–computer interaction | Virtual markets
Abstract: As the potential of technology grows, the embedding of IT advances in different fields and applications increases. A recent example is virtual reality and, in particular, the virtual product. The possibility of having a product in a virtual form allows creators and designers to efficiently manage the cycle of product generation and evolution. The key advantage of the "virtual" is to have the product in advance, even in the conceptualization phase, with clear benefits in terms of consumption of resources and, hence, sustainability. A potential customer could thus interact with a product-to-be and provide feedback about its look and feel, its usability, and, most of all, give an emotional response. In this context, the interaction between the virtual product and the future customer becomes a core point for the new approaches related to user-centred and user experience design, giving birth to a design methodology called "emotional design". In particular, the study of facial expressions seems to be its most reliable and attractive aspect.
Keywords: 3D | Concept design | Emotional design | Facial expression recognition | PLM | Virtual reality
Abstract: Surgical interventions for jaw reconstruction require the design and production of surgical guides that allow the surgeon to operate quickly and accurately. In some cases, the reconstruction is performed by inserting a prosthesis, thus operating exclusively on the jaw, while in other cases the reconstruction is performed by harvesting part of the fibula and inserting it in place of the original jaw bone. This project aims to develop a procedure that allows 3D modeling of the surgical guides necessary for the surgical intervention. The idea is to find a surgical guide archetype, a starting shape for the surgeon, so that the cutting planes can be oriented without the surgical guide having to be redesigned from scratch for every single patient. The first step of the procedure is segmentation, performed by applying a thresholding operation on the images provided by magnetic resonance (MR) imaging in order to identify the region of interest (ROI). The second step is the reconstruction of the 3D model, so that a mesh is obtained from the 2D images. Subsequently, the mesh is post-processed and the cutting planes along which the surgeon will intervene are defined.
Keywords: 3D modeling | 3D reconstruction | Maxillofacial surgery | Surgical guides
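The first two steps of the procedure above (thresholding segmentation of the image stack and mesh reconstruction) can be sketched with scikit-image as below; the input file, the threshold value and the voxel spacing are assumptions, and the volume stands in for the MR stack used in the project.

import numpy as np
from skimage import measure

volume = np.load("mr_stack.npy")               # hypothetical 3D image stack (z, y, x)
threshold = 300                                # assumed intensity threshold for the ROI
mask = volume > threshold                      # step 1: thresholding segmentation

# Step 2: surface (mesh) reconstruction from the binary mask
verts, faces, normals, values = measure.marching_cubes(
    mask.astype(np.float32), level=0.5, spacing=(1.0, 0.5, 0.5))  # assumed voxel size (mm)
print(f"mesh with {len(verts)} vertices and {len(faces)} faces")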
Abstract: This paper deals with parenthood perception (maternal and paternal) after visualization of, and interaction (touch) with, a 3D printed fetal facial model. The model is created using Additive Manufacturing techniques, starting from the image elaboration of routine ultrasound data. In this study, the method used for the elaboration and construction of 3D printable models of fetal faces starting from routine ultrasound images is briefly described. In addition, we present the results of a new survey conducted with future parents at the Altamedica clinic (Rome, Italy) to verify whether there are any benefits derived from the use of 3D printed models with future parents, both regarding the improvement of the parenthood experience and the improvement of the understanding and collaboration with the physicians in case of fetal malformations, using 3D models coupled with the data of routine ultrasound examinations.
Keywords: 3D ultrasound | Additive manufacturing | Fetal face | Image processing | Parenthood perception | Survey
Abstract: A peculiar characteristic of fashion companies is their natural predisposition to transformation. In fact, they design new collections at least twice per year. Introducing new collections means simultaneously developing hundreds of new products that have to match customers' tastes and trends, which evolve very quickly. By acquiring and monitoring customers' information from social and digital channels, fashion industries can capture a picture of customers' tastes and trends. This requires analyzing a huge amount of heterogeneous data, such as feelings, positions, etc. In this scenario, the use of big data analytics can provide new insights into customers' tastes and trends. Hence, the objective of this research is to examine how some of the most important and sophisticated applications of Big Data Analytics could increase customers' satisfaction and bring advantages to the New Product Development process itself.
Abstract: This work proposes a method for recognizing the 13 main Facial Action Units and the 6 basic emotions. The methodology relies on Differential Geometry to extract relevant discriminant features from the query faces, and on some linear quantities used as measures: Euclidean and geodesic distances, and angles, between 17 automatically extracted soft-tissue landmarks. A thresholding system which evaluates local properties of connected regions, selected through tailored geometrical descriptors, supports the identification of the AUs. Then, a technique based on crisp logic allows the identification of the global expression. The three-dimensional context has been preferred due to its invariance to different lighting/make-up/camouflage conditions.
Keywords: Emotional design | Face expression recognition | Intelligent drive
Abstract: This paper proposes a methodology to assess the discriminative capabilities of geometrical descriptors, referring to the public Bosphorus 3D facial database as the testing dataset. The investigated descriptors include histogram versions of Shape Index and Curvedness, and Euclidean and geodesic distances between facial soft-tissue landmarks. The discriminability of these features is evaluated through the analysis of single blocks of features and their meanings with different techniques. A multilayer perceptron neural network methodology is adopted to evaluate the relevance of the features, examined in different test combinations. Principal component analysis (PCA) is applied for dimensionality reduction.
Keywords: 3D face recognition | Dimensionality reduction | Geometrical descriptors | Neural network | Principal component analysis
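The evaluation pipeline described above (PCA for dimensionality reduction followed by a multilayer perceptron scoring a block of features) can be sketched with scikit-learn; the feature matrix is a placeholder for a block of geometrical descriptors and the labels stand in for face identities, with component counts and layer sizes chosen arbitrarily for illustration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 120))     # placeholder: one block of geometrical descriptors
y = rng.integers(0, 10, size=300)   # placeholder: 10 face identities

pipe = make_pipeline(PCA(n_components=20),
                     MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000))
print("cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean())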
Abstract: Methodologies for 3D face recognition which work in the presence of occlusions are core to the current needs in the field of suspect identification, as criminals try to take advantage of the weaknesses of implemented security systems by camouflaging themselves and occluding their face with eyeglasses, hair, or hands, or by covering their face with scarves and hats. Recent occlusion detection and restoration strategies for the recognition of 3D faces partially occluded by unforeseen objects are presented here in a literature review. The research community has worked on face recognition systems under controlled environments, but uncontrolled conditions have been investigated to a lesser extent. The paper details the experiments and databases used to handle the problem of occlusion and the results obtained by different authors. Lastly, a comparison of the various techniques is presented and some conclusions are drawn referring to the best outcomes.
Keywords: 3D face analysis | Face detection | Face recognition | Facial occlusions | Facial restoration
Abstract: This study proposes a novel automatic method for facial landmark localization relying on the geometrical properties of the 3D facial surface, working both on complete faces displaying different emotions and in the presence of occlusions. In particular, 12 descriptors coming from Differential Geometry, including the coefficients of the fundamental forms, Gaussian, mean and principal curvatures, shape index and curvedness, are extracted as facial features and their local geometric properties are exploited to localize 13 soft-tissue landmarks in the eye and nose areas. The method is deterministic and is supported by a thresholding technique designed by studying the behaviour of each geometrical descriptor in correspondence to the locus of each landmark. Occlusions are managed by a detection algorithm based on geometrical properties, which allows the landmark localization to proceed while avoiding the covered areas. Experimentations were carried out on 3132 faces of the Bosphorus database and of a 230-face internal database, including expressive and occluded ones (mouth, eye, and eyeglasses occlusions), obtaining a 4.75 mm mean localization error.
Keywords: 3D face | Differential geometry | Face analysis | Feature extraction | Landmark localization
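The descriptors named above come from the differential geometry of a depth map z = f(x, y): from the first and second derivatives one obtains Gaussian curvature K, mean curvature H, the principal curvatures, and from them shape index and curvedness, which are then thresholded near each landmark locus. A minimal sketch of the descriptor computation follows; the per-landmark thresholds of the paper are not reproduced, and the shape index sign convention below is one common choice.

import numpy as np

def surface_descriptors(depth):
    """Gaussian/mean curvature, shape index and curvedness of a depth map z = f(x, y)."""
    fy, fx = np.gradient(depth)                 # first derivatives (rows = y, cols = x)
    fxy, fxx = np.gradient(fx)                  # second derivatives
    fyy, _ = np.gradient(fy)
    denom = 1.0 + fx**2 + fy**2
    K = (fxx * fyy - fxy**2) / denom**2                                      # Gaussian curvature
    H = ((1 + fy**2) * fxx - 2 * fx * fy * fxy + (1 + fx**2) * fyy) / (2 * denom**1.5)  # mean curvature
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))
    k1, k2 = H + disc, H - disc                                              # principal curvatures
    shape_index = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)               # in [-1, 1]
    curvedness = np.sqrt((k1**2 + k2**2) / 2.0)
    return K, H, shape_index, curvedness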
Abstract: Ultrasound scans, Computed Axial Tomography and Magnetic Resonance Imaging are only a few examples of the medical imaging tools boosting physicians in diagnosing a wide range of pathologies. However, no standard methodology has been defined yet to extensively exploit them, and current diagnostic procedures are still carried out mainly relying on the physician's experience. Although the human contribution is always fundamental, it is self-evident that an automatic procedure for image analysis would allow a more rapid and effective identification of dysmorphisms. Moving toward this purpose, in this work we address the problem of feature extraction devoted to the detection of specific diseases involving facial dysmorphisms. In particular, a bounded Depth Minimum Steiner Tree (D-MST) clustering algorithm is presented for discriminating groups of individuals relying on the manifestation/absence of the labio-schisis pathology, commonly called cleft lip. The analysis of three-dimensional facial surfaces via Differential Geometry is adopted to extract landmarks. The extracted geometrical information is further elaborated to feed the unsupervised clustering algorithm and produce the classification. The clustering returns the probability of being affected by the pathology, allowing physicians to focus their attention on risky individuals for further analysis.
Keywords: Artificial intelligence | Clustering | D-MST | Decision support | Diagnosis | Facial dysmorphism | Feature extraction | Labio-schisis | Landmarking
Abstract: In the twenty-first century, meeting our technological challenges demands educational excellence, a skilled populace that is ready for the critical challenges society faces. There is widespread consensus, however, that education systems are failing to adequately prepare all students with the essential twenty-first century knowledge and skills necessary to succeed in life, career, and citizenship. The purpose of this paper is to understand how twenty-first century knowledge and skills can be appropriately embedded in engineering education aimed at innovative product development using additive manufacturing (AM). The study designs a learning model by which to achieve effective AM education, to address the requirements of the twenty-first century and to offer students the occasion to experiment with STEM (science, technology, engineering, and mathematics) concepts. The study is conducted using the quality function deployment (QFD) methodology.
Keywords: additive manufacturing | engineering education | learning/training model | Quality function deployment | science, technology, engineering, and mathematics (STEM) | twenty-first century skills
Abstract: Open Innovation (OI), a paradigm coined by Chesbrough in 2003, is not only ‘openness towards innovation’, it also represents a firm-view-point-based innovation strategy, which suggests the involvement of external actors in the innovation process. The present paper is part of an on-going discussion on open issues related to OI and the advantages, strengths, challenges and limits of its implementation, dealing with the frequently debated topic of the future of this paradigm. What has emerged from this study is that a clear one-way outline of the OI implementation process does not exist, either in the literature or in practical cases, but common catalyst points, such as a general ‘open’ attitude, the existence of a technology transfer approach, an Intellectual Property protection strategy, and the frequent use of search routines, are identifiable. However, the future of this paradigm is not clear, as the concept may fade away because the process could be fully integrated in the innovation process.
Keywords: creative economy | creative thinking | innovation intermediaries | intellectual property | Open Innovation | SMEs
Abstract: The 3D face has recently been investigated for various applications, including biometrics and diagnosis. Describing the facial surface, i.e. how it bends and which kinds of patches it is composed of, is the aim of studies in Face Analysis, whose ultimate goal is to identify which features could be extracted from three-dimensional faces depending on the application. In this study, we propose 105 novel geometrical descriptors for Face Analysis. They are generated by composing primary geometrical descriptors such as the mean, Gaussian and principal curvatures, shape index, curvedness, and the coefficients of the fundamental forms, and by applying standard functions such as sine, cosine, and logarithm to them. The new descriptors were mapped on 217 facial depth maps and analysed in terms of descriptiveness of the facial shape and exploitability for localizing landmark points. Automatic landmark extraction stands as the final aim of this analysis. Results showed that some newly generated descriptors were sounder than the primary ones, meaning that their local behaviour in correspondence to a landmark position is highly specific and can be registered with high similarity on every face of our dataset.
Keywords: 3D face | Face analysis | Face expression recognition | Face recognition | Geometry | Landmarks
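Composite descriptors of the kind described above are generated by applying elementary functions (sine, cosine, logarithm) to primary descriptors mapped on the depth map. A minimal sketch of such a generation step follows; the specific 105 combinations of the study are not reproduced, and the guard on the logarithm is an illustrative choice.

import numpy as np

def compose_descriptors(primary):
    """primary: dict of name -> 2D array (a descriptor mapped on a facial depth map).
    Returns new descriptors obtained by applying elementary functions (sketch)."""
    composed = {}
    for name, values in primary.items():
        composed[f"sin({name})"] = np.sin(values)
        composed[f"cos({name})"] = np.cos(values)
        # guard the logarithm against non-positive values with abs and a small epsilon
        composed[f"log({name})"] = np.log(np.abs(values) + 1e-9)
    return composed

# Example with two invented primary descriptors on a 100x100 depth map
primary = {"shape_index": np.random.uniform(-1, 1, (100, 100)),
           "curvedness": np.random.uniform(0, 2, (100, 100))}
new_desc = compose_descriptors(primary)
print(sorted(new_desc))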
Abstract: The ‘family business’ phenomenon is considered a dominant source of growth, development, and social and economic stability within the luxury industry, representing the most widespread business model in the world. Moreover, in the current scenario, characterized by new and ever-changing dynamics of the global market, firms operating in the luxury industry need to anticipate competitors' next moves quickly and in a timely manner, monitor market tastes, adapt their products and properly communicate their positioning. To reach this goal, ‘family business’ companies need to improve their new product development processes by adopting innovative strategies and tools for creating an “environment for innovation” able to pursue “the right product at the right time”. In light of these considerations, the aim of this paper is to provide a methodology able to identify how and where a Collaborative Product Development/Definition and Management solution can provide company improvements.
Keywords: capability maturity model (CMM) | Collaborative product development/definition and management (CPDM) | new product development (NPD) | product lifecycle management (PLM)
Abstract: Requirement management (RM) represents one of the key processes in the complex product life cycle because it is involved not only at the beginning, but also in the subsequent phases, where the definition of the technical specifications sometimes implies requirement trade-offs due to conflicts. For this reason, the role of RM tools and methodologies, which normally represent stand-alone solutions, has to change and become more integrated into the Product Lifecycle Management platform. At present, a truly shared integrated RM solution does not exist, and for this reason it is necessary to provide a framework supporting the customization of the available RM solutions to capture the real and specific company needs in this new collaborative scenario. This paper therefore presents a methodical approach that incorporates user-centered design principles into the customization process of the tool. It can be adopted in any company scenario thanks to its ability to capture the company's specific needs and to identify the right features for the company. The proposed methodology puts the user, rather than the system, at the center of the process, because the RM solution can be considered effective only if it is able to save users time and money in data management. Moreover, this tool assessment method can help organizations efficiently determine candidate tools, understand what is important in the organization and make a tool selection customized to their needs. A case study on Requirement Management tools as part of a Product Lifecycle Management (PLM) solution is presented.
Keywords: Kano model | Product Lifecycle Management (PLM) | Quality function deployment (QFD) | Requirement Management (RM) | User-centered design
Abstract: The Kano model distinguishes the different relationships between Customer Requirements fulfillment and Customer Satisfaction, especially the nonlinear relationships. However, the model only emphasizes the classification method and the qualitative descriptions of the various relationship curves. Based on the understanding of the traditional Kano model, different approaches have been proposed to extend it from qualitative descriptions to quantitative analysis, in order to understand customer needs in a more accurate manner. However, the results of these quantitative methods are mixed and no consensus has emerged on the most appropriate and general-purpose approach. The present study describes the strengths and weaknesses of qualitative and quantitative Kano approaches (such as the Fuzzy, Continuous Fuzzy and Analytical Kano models) and proposes an assessment framework that identifies the relationships between approaches and classification requirements, in order to help select the most suitable methodology for analysing the most successful product and service quality attributes affecting customer satisfaction.
Keywords: Customer satisfaction | Quantitative Kano methods | The Kano model
Abstract: Medical devices play a crucial role in the diagnosis, care and treatment of patients, and the number and variety of medical devices is huge and increasing annually. Healthcare professionals handle a wide range of devices every day. In this paper, 3D Web-based interactive learning applications, which provide helpful background, reminders and practical training on medical devices, have been developed. This article presents the production cycle of these interactive learning objects and the study conducted to measure the students' perception of the objects' effectiveness for learning. Three 3D Web-based interactive medical devices (a sphygmomanometer, an electrocardiogram and a defibrillator) are described. Each of these applications introduces elements of interactivity with the learning content and provides practical training in the use of the medical devices. This e-learning package is aimed at all medical, nursing and allied healthcare professionals working in all healthcare settings and environments. It is also for all pharmacists in any environment and all community care workers in nursing and private healthcare settings. This training will also be useful to all staff working in a patient or patient-supporting area, whether someone new to the role or a long-standing professional, as a reminder of best practice.
Keywords: 3D Web-based interactive medical devices | Interactive learning environments | Learning and training applications | Medical education | Virtual reality
Abstract: This work proposes a methodology to automatically diagnose and formalize prenatal cleft lip with representative key points and identify the type of defect (unilateral, bilateral, right, or left) in three-dimensional ultrasonography (3D US). Differential Geometry has been used as a framework for describing facial shapes and curvatures. Then, descriptors coming from this field are employed for identifying the typical key points of the defect and its dimensions. The descriptive accurateness of these descriptors has allowed us to automatically extract reference points, quantitative distances, labial profiles, and to provide information about facial asymmetry. Seventeen foetal faces, nine of healthy foetuses and eight with different types of cleft lips, have been obtained through a Voluson system and used for testing the algorithm. In case no defect is present, the algorithm detects thirteen standard facial soft-tissue landmarks. This would help ultrasonographists and future mothers in identifying the most salient points of the forthcoming baby. This algorithm has been designed to support practitioners in identifying and classifying cleft lips. The gained results have shown that differential geometry may be a valuable tool for describing faces and for diagnosis.
Keywords: 3D ultrasound | Cleft lip | Dysmorphisms | Landmarking | Syndrome diagnosis
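One ingredient mentioned in the abstract above is the extraction of facial asymmetry information. The following sketch shows a simple, hypothetical way to quantify asymmetry by mirroring the facial point cloud about an already-estimated mid-sagittal plane; it is not the algorithm of the paper, and the plane position (x = 0) is an assumption made only for the example.

# Python sketch: mirror-based facial asymmetry measure.
import numpy as np
from scipy.spatial import cKDTree

def asymmetry_score(points):
    # points: (N, 3) facial surface points, aligned so the mid-sagittal plane is x = 0.
    mirrored = points * np.array([-1.0, 1.0, 1.0])   # reflect about x = 0
    dists, _ = cKDTree(points).query(mirrored)       # distance to nearest original point
    return dists.mean(), dists.max()                 # mean and peak asymmetry (input units)

pts = np.random.rand(1000, 3) - [0.5, 0.0, 0.0]      # toy, roughly symmetric cloud
print(asymmetry_score(pts))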
Abstract: The introduction of collision models has aided surgeons in planning their resection of the femoral and acetabular impinging deformities. Their application to the hip joint is particularly interesting in association with hip arthroscopy because, with this technique, vision is limited and identifying friction zones is not easy. Despite their large diffusion in the market, they still present several limits. First, the 3D reconstruction has been used only in the preoperative phase; its use inside the operating room is still not available. Furthermore, the current software lacks a fully automated procedure to create the 3D model of the hip. Second, current collision-model software separates and studies only the bony part of the hip, removing soft tissues from the 3D reconstruction. Third, the programs currently developed use statistical data as reference for the 3D virtual models and the motion between the hip bones. They base the motion analysis on the comparison between data extrapolated from the healthy population and data from the patient's exam; consequently, image processing may be inaccurate. Although, in our opinion, collision models are useful even with their current limits, in this chapter we describe the limits of the current software and their potential development.
Abstract: HOVER platforms are interactive digital working spaces composed of HOrizontal and VERtical interconnected tactile surfaces. Since ICED 2011, we have defended the hypothesis that interactive surfaces can greatly increase the effectiveness of the early preliminary design phase by increasing collaboration. It is therefore pivotal to understand the impact of such interactive surfaces on the collaborative behaviors of design teams. We present the TATIN-PIC HOVER platform, an example of a HOVER platform, that we built as a test tool. We describe our experimental protocol to assess whether HOVER platforms change the collaborative behaviors of design teams throughout the idea generation phase, compared to traditional pen-and-paper mediating tools. To achieve this goal, we compare pen-and-paper against HOVER platforms. We measure the equity of contributions in the common objectification process (i.e. writing concepts on post-its) with a sample of 40 young practitioners. Our preliminary results show that HOVER platforms tend to change collaborative behaviors from an exclusive collaboration mode to a mutual one. This opens interesting research paths that are presented in the conclusion.
Keywords: Collaborative design | Computer aided design (CAD) | Conceptual design | Human behaviour in design | Workspaces for design
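The study above measures the equity of contributions during idea generation. One plausible way to turn per-participant post-it counts into a single equity number is a Gini coefficient, shown in the sketch below; the metric actually used in the paper may differ, so treat this purely as an illustration.

# Python sketch: Gini coefficient over per-participant contribution counts
# (0 = perfectly equitable, values approaching 1 = one person dominates).
def gini(counts):
    counts = sorted(counts)
    n = len(counts)
    total = sum(counts)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(counts))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

print(gini([5, 6, 4, 5]))   # balanced team -> ~0.08
print(gini([18, 1, 0, 1]))  # one dominant contributor -> ~0.68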
Abstract: The 3D face has recently been investigated for various applications, including biometrics and diagnosis. Describing the facial surface, i.e. how it bends and which kinds of patches it is composed of, is the aim of studies in Face Analysis, whose ultimate goal is to identify which features can be extracted from three-dimensional faces depending on the application. In this study, we propose 54 novel geometrical descriptors for Face Analysis. They are generated by composing primary geometrical descriptors such as the mean, Gaussian, and principal curvatures, the shape index, the curvedness, and the coefficients of the fundamental forms. The new descriptors were mapped onto 217 facial depth maps and analysed in terms of descriptiveness of facial shape and exploitability for localizing landmark points. Automatic landmark extraction stands as the final aim of this analysis. Results showed that the newly generated descriptors are suitable for 3D face description and for supporting landmark localization procedures.
Keywords: 3D Face | Face Analysis | Face Recognition | Geometry | Landmarks
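Two of the primary descriptors listed in the abstract above are the shape index and the curvedness of Koenderink and van Doorn, which the 54 composed descriptors build upon. The sketch below computes them from the principal curvatures of a depth map; it uses the standard textbook formulas (the sign convention depends on the chosen surface normal orientation) and is not the authors' code.

# Python sketch: shape index and curvedness from principal curvatures k1 >= k2.
import numpy as np

def shape_index_curvedness(k1, k2):
    curvedness = np.sqrt((k1**2 + k2**2) / 2.0)
    with np.errstate(divide='ignore', invalid='ignore'):
        shape_index = (2.0 / np.pi) * np.arctan((k2 + k1) / (k2 - k1))
    # Umbilic/flat points (k1 == k2) have an undefined shape index.
    shape_index = np.where(np.isclose(k1, k2), np.nan, shape_index)
    return shape_index, curvedness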
Abstract: Ultrasound is by far the most adopted method for safe screening and diagnosis in the prenatal phase, thanks to its non-harmful nature compared with radiation-based imaging techniques. The main drawback of ultrasound imaging is its sensitivity to scattering noise, which makes automatic tissue segmentation a tricky task, limiting the possible range of applications. An algorithm for automatically extracting the facial surface is presented here. The method provides a comprehensive segmentation process and does not require any human intervention or training procedures, leading from the output of the scanner directly to the 3D mesh describing the face. The proposed segmentation technique is based on a two-step statistical process that relies on both volumetric histogram processing and 2D segmentation. The completely unattended nature of such a procedure makes it possible to rapidly populate a large database of 3D point clouds describing healthy and unhealthy faces, enhancing the diagnosis of rare syndromes through statistical analyses.
Keywords: 3D histogram processing | 3D ultrasound | DICOM | Image segmentation
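The first of the two statistical steps described above is a volumetric histogram processing stage. As a purely illustrative stand-in (not the authors' exact pipeline), the sketch below picks a global threshold from the intensity histogram of the ultrasound volume with Otsu's method, which is the kind of histogram-based split that can precede a slice-wise 2D refinement.

# Python sketch: Otsu threshold on a 3D ultrasound volume.
import numpy as np

def otsu_threshold(volume, bins=256):
    hist, edges = np.histogram(volume.ravel(), bins=bins)
    prob = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(prob)                                  # class probabilities
    w1 = 1.0 - w0
    cum_mean = np.cumsum(prob * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
    return centers[np.argmax(between)]

vol = np.random.rand(64, 64, 64)                          # toy volume
mask = vol > otsu_threshold(vol)                          # coarse tissue/background split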
Abstract: Purpose – Quality function deployment (QFD) is a worldwide-known design-for-quality approach that gathers several design quality methods. Among them, the House of Quality (HoQ) correlates the Voice of Customer and the Voice of Company thanks to L-shaped (2D) Matrix Diagrams (MDs). This paper theorizes, as logically possible, the extension from a bi-dimensional representation (a customer and a provider) to a higher N-dimensional representational freedom without altering the typical QFD's customer-provider posture. The purpose of this paper is to present QFD 3D: the extension of the Relationship MD in the HoQ toward a third dimension (two customers and a provider or two providers and a customer). Design/methodology/approach – The proposed method is based on an existing well-known quality management tool such as MDs. The authors extend its representational capability by substituting the current L-shaped MDs (2D) with C-shaped ones (3D). The mathematical validity of this extension is described to demonstrate the correctness of the approach. Findings – The paper presents the logical validity and a case study concerning a three-dimensional extension of the Relationship MD in the HoQ, typical of the QFD approach. Research limitations/implications – The results are limited to a three-dimensional extension. Situations where more than three actors are simultaneously involved are theoretically possible, but they are out of the scope of the current research. The difficulty in manipulating 3D representations on traditional supporting tools will be completely reversed on new computer-supported tools. The proposed method is meant to be a useful and efficient instrument for correlating needs and services in multi-actor scenarios, using a QFD design quality approach fueled by IT support tools. Practical implications – This paper can be used as a guideline for further research on N-dimensional extensions of the HoQ. The proposed method can be used in a scenario based on the Triple Helix of Innovation. It gives a clear correlation between different needs and services, facilitating the decision-making process and the constitution of a more comprehensive view of the scenario under a quality management approach. Originality/value – Thousands of articles propose different QFD case studies, all based on a bi-dimensional correlation between a customer and a provider. This paper proposes a method to extend the pertinence of QFD to scenarios where more than two actors are simultaneously correlated. Considering in particular three actors, the resulting three-dimensional Relationship MD in the HoQ is a totally new design quality tool for correlating customers' needs and providers' services. This can result in a significant gain of time and representational ability for quality managers who have to correlate multiple customers with a service provider and, conversely, a customer with multiple service providers. This approach opens the door to new QFD tools fueled by IT.
Keywords: House of Quality | Matrix diagrams | QFD 3D | Quality function deployment | Quality management | Quality management tool
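To make the C-shaped (3D) Relationship Matrix idea above concrete, the sketch below stores the relationships between the needs of two customers and the services of one provider in a three-dimensional array and aggregates a technical importance per service. Names, weights and the 0-1-3-9 scale are illustrative assumptions, not data from the paper.

# Python sketch: a 3D relationship matrix correlating two customers and one provider.
import numpy as np

needs_customer_1 = ['easy to use', 'low cost']
needs_customer_2 = ['fast support', 'data security']
services_provider = ['help desk', 'cloud hosting', 'training']

# rel[i, j, k]: strength with which service k jointly addresses need i of
# customer 1 and need j of customer 2 (classic 0/1/3/9 QFD scale).
rel = np.zeros((len(needs_customer_1), len(needs_customer_2), len(services_provider)))
rel[0, 0, 2] = 9            # training addresses 'easy to use' and 'fast support'
rel[1, 1, 1] = 3            # cloud hosting relates 'low cost' and 'data security'

w1 = np.array([0.6, 0.4])   # importance weights, customer 1
w2 = np.array([0.7, 0.3])   # importance weights, customer 2

importance = np.einsum('i,j,ijk->k', w1, w2, rel)   # aggregated service importance
print(dict(zip(services_provider, importance.round(2))))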
Abstract: A 3D automatic facial expression recognition procedure is presented in this work. The method is based on the point-by-point mapping of seventeen Differential Geometry descriptors onto the probe facial depth map, which is then partitioned into seventy-nine regions. Then, features such as mean, median, mode, volumes, and histograms are computed for each region and for each descriptor, to obtain a large and varied set of parameters representing the query face. Each set of parameters, given by a geometrical descriptor, a region, and a feature, forms a trio, whose numerical values are compared with appropriate thresholds, set via experimentation in a previous phase by processing a limited portion of the public facial Bosphorus database. This allows the identification of the emotion-based expression of the query 3D face among the six basic ones (anger, disgust, fear, joy, sadness, surprise). The algorithm was tested on the Bosphorus database and is suitable for applications in security, marketing, and medical fields. The three-dimensional context has been preferred due to its invariance to different lighting/make-up/camouflage conditions.
Keywords: 3D face | Differential geometry | Emotions | Face expression recognition (FER) | Facial expression recognition | Shape index
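The classification step described above compares each (descriptor, region, feature) trio against experimentally set thresholds. The sketch below shows the general shape of such a rule-based classifier with made-up trios and intervals; the real thresholds were tuned on the Bosphorus database and are not reproduced here.

# Python sketch: threshold-voting over (descriptor, region, feature) trios.
EXPRESSIONS = ['anger', 'disgust', 'fear', 'joy', 'sadness', 'surprise']

# trio -> {expression: (low, high) accepted interval}; values are illustrative.
THRESHOLDS = {
    ('shape_index', 'mouth_corner_left', 'mean'): {'joy': (0.4, 0.9), 'surprise': (-0.2, 0.2)},
    ('mean_curvature', 'inner_brow', 'median'): {'anger': (0.05, 0.30), 'fear': (-0.30, -0.05)},
}

def classify(trio_values):
    # trio_values: {trio: measured value} computed on the probe depth map.
    votes = {e: 0 for e in EXPRESSIONS}
    for trio, value in trio_values.items():
        for expression, (lo, hi) in THRESHOLDS.get(trio, {}).items():
            if lo <= value <= hi:
                votes[expression] += 1
    return max(votes, key=votes.get), votes

print(classify({('shape_index', 'mouth_corner_left', 'mean'): 0.55}))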
Abstract: This study proposes a novel approach to automatically localise 11 landmarks from facial RGB images. The novelty of this method relies on the application, i.e., point-by-point mapping, of 11 differential geometry descriptors such as curvatures to the three individual RGB image components. Thus, three-dimensional features are applied to bidimensional facial image representations and used, via thresholding techniques, to extract the landmark positions. The method was tested on the Bosphorus database and showed global average errors lower than five millimetres. The idea behind this approach is to embed this methodology in state-of-the-art 3D landmark detection methods to accomplish fully automatic landmarking by exploiting the advantages of both 2D and 3D data. Some landmarks, such as the pupils, are arduous to extract automatically via three-dimensional techniques alone. Thus, this method is intended as a bridging-the-gap preliminary technique that takes advantage of 2D imaging to complement advanced landmark localisation methodologies.
Keywords: Differential geometry | Face analysis | Facial landmarks | Landmark localisation | RGB images
Abstract: Recent Face Analysis advances have focused the attention on studying and formalizing 3D facial shape. Landmarks, i.e. typical points of the face, are perfectly suited to the purpose, as their position on the facial shape allows a map of each human being's appearance to be built up. This turns out to be extremely useful for a large variety of fields and related applications. In particular, the forensic context is taken into consideration in this study. This work is intended as a survey of current research advances in forensic science involving 3D facial landmarks. In particular, by selecting recent scientific contributions in this field, a literature review is proposed to analyze in depth which landmarks are adopted, and how, in this discipline. The main outcome concerns the identification of a leading research branch, which is landmark-based facial reconstruction from the skull. The choice of selecting 3D contributions is driven by the idea that the most innovative Face Analysis research trends work on three-dimensional data, such as depth maps and meshes, with three-dimensional software and tools. The third dimension improves accuracy and is robust to colour and lighting variations.
Keywords: 3D face | Fiducial point | Forensic | Landmarks | Reconstruction
Abstract: The aim of this work is to automatically diagnose and formalize prenatal cleft lip with representative key points and identify the type of defect (unilateral, bilateral, right, or left) in three-dimensional ultrasonography (3D US). Geometry has been used as a framework for describing facial shapes and curvatures. Then, descriptors coming from this field are employed for identifying the typical key points of the defect and its dimensions. The descriptive accuracy of these descriptors has allowed us to automatically extract reference points, quantitative distances, labial profiles, and to provide information about facial asymmetry. Eighteen foetal faces, ten of healthy foetuses and eight with different types of cleft lips, have been obtained through a Voluson system and used for testing the algorithm. Cleft lip has been diagnosed and correctly characterized in all cases. The transverse and cranio-caudal lengths of the cleft have been computed and the upper lip profile has been automatically extracted to give a visual quantification of the overall labial defect. The asymmetry information obtained is consistent with the defect. This algorithm has been designed to support practitioners in identifying and classifying cleft lips. The results obtained have shown that geometry might be a proper tool for describing faces and for diagnosis.
Keywords: 3D ultrasound | Cleft lip | Dysmorphisms | Landmarking | Syndrome diagnosis
Abstract: Traditionally, the development of complex mechatronic products, such as products in the aerospace or automotive domain, has employed a "document-based" Systems Engineering (SE) approach to perform the systems engineering activities. This approach is characterized by the generation of textual specifications and design documents that are used and exchanged between all project users. Today, innovative interdisciplinary product development requires a rethinking of current methods and IT solutions, employing an efficient Systems Engineering strategy. The goal is to move from a "document-based" approach to a "model-based" approach that addresses all engineering disciplines. The Model-Based Systems Engineering (MBSE) methodology is an approach that involves modeling to support system requirements definition and management, design, analysis, verification and validation activities. This approach provides a set of data and models that allows design teams to analyze the performances of the different product configurations at an early stage and to ensure product data traceability along the entire product lifecycle, maintaining a structured relation between customer requirements and all the product solutions analyzed. At present, a shared operative approach aimed at integrating MBSE in a Product Lifecycle Management scenario does not exist. For that reason, the paper outlines the key activities needed to successfully deploy an MBSE methodology, based on the Systems Modeling Language (SysML), within a PLM platform through the use of the Product Functional View.
Keywords: Model-Based Systems Engineering | Product Lifecycle Management | SysML | Systems engineering
Abstract: Product Lifecycle Management (PLM) is, in some segments such as the aerospace and defence (A&D) and automotive fields, a well-known business approach to support product development. However, while these markets recognise the power of PLM, others, such as textile, which searches for better-performing solutions to meet the competitiveness of the new globalised market, still need to take the first steps towards a better understanding of how its processes/features could support the company's businesses. For this reason, this paper aims to provide a roadmap about where and how PLM could support the development of textile products, in particular those that should be quickly innovated in order to answer the volatile customers' demand (fast fashion scenario), adopting a user-centred approach. Starting from a rigorous formalisation of the textile New Product Introduction (NPI) process, weaknesses and strengths have then been formalised via the involvement of NPI stakeholders, catching and analysing their needs as users/actors. This set of requirements has then been correlated with the PLM features, through the use of Quality Function Deployment (QFD), in order to define a set of guidelines describing, for each NPI process, how PLM processes/features could be linked and which benefits could be obtained from this link. These guidelines could help textile companies to have a clearer idea of the advantages that could be obtained through the adoption of the PLM approach. Hence, the proposed methodology is able to provide the company with a quantification of the impact of the PLM features on its processes. Moreover, it could support vendors in providing their solutions with more tailored features.
Keywords: Fast fashion | Product lifecycle management | Quality function deployment | Textile
Abstract: Concept maps are not a new phenomenon in engineering education and are nowadays used to enhance "meaningful learning." In the literature, many works on the use of concept maps in education exist, but not many within the "interactive" context. This study contributes to expanding the framework of research on the development of web-based 3D interactive concept maps. They incorporate web-based 3D interactive images that support the learning of abstract and difficult topics in an Engineering Drawing course, motivating the students and increasing their attention. The effects of different learning strategies (2D concept mapping vs. web-based 3D interactive concept mapping) on the learning outcomes and on spatial ability are investigated. The results of this study show that web-based 3D interactive concept maps compensate for spatial ability deficits, that is, they help students who have low spatial ability to build an effective mental representation of the learning content.
Keywords: concept map | engineering education | interactive learning | spatial ability | web3D
Abstract: The TATIN and TATIN-PIC projects lie at the crossroads of preliminary design and Computer Supported Cooperative Work in Design (CSCWD) tools. Those projects studied the impact of multi-touch, multi-user tabletop groupware for co-located teamwork. The projects' aim was to observe an improvement in the effectiveness of the preliminary design process when mediated by a CSCWD tool. Over 4 years, several design observations have been conducted on specific design methods; each result has been presented independently during past Design Society conferences. This paper regroups and synthesizes all those results to draw a holistic conclusion. Digital intermediary objects represent the (currently) unique advantage of such CSCWD tools, as well as their greatest potential. Digitalization boosts their traditional role of inter-mediator, improving teamwork's perceived productivity and confidence in the results, and opens a wide range of possibilities such as the interaction with AI systems.
Keywords: Collaborative design | Computer aided design (CAD) | Early design phases | Research methodologies and methods | Teamwork
Abstract: In a web-based learning environment, interactivity has been referred to as the most important element for successful e-learning. This article presents the production cycle of an interactive medical device, namely a virtual 3D electroencephalogram, and the study conducted to measure the students' satisfaction with the learning application, based on Kano's quality model. The web-based interactive learning application introduces unique elements of interactivity with the learning content, specifically designed to train biomedical engineering students in the use of the medical device. The results of a post-deployment student survey suggest that the visual and interactive features embedded in the application have the potential to induce positive satisfaction in users. The proposed approach may serve as a reference for the correct design of similar learning applications not only in the biomedical engineering context but also outside the realms of engineering sciences.
Keywords: E-learning | Electroencephalogram | Kano's model | Learner satisfaction | Web-based interactive application
Abstract: This work proposes a methodology that can be used to define a FEM simulation of the body welding process with the aim of evaluating compliant assembly deformations and spring-back, considering the effect of material plasticity, in order to improve the results of variational analysis methods, which so far have been based on a linear elastic material model. With reference to the automotive field, the simulation considers the effects of fixturing and resistance spot welding applied to sheet metal parts subjected to dimensional and geometrical tolerances.
Keywords: Compliant assembly | FEM | Plasticity | Resistance spot welding | Variational analysis
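To give a feel for why the material model matters in the abstract above, the following one-dimensional arithmetic sketch (not the paper's FEM model) closes a tolerance-induced gap on a short flange with an elastic-perfectly-plastic law: only the elastic share of the imposed strain is recovered on release, so spring-back is much smaller than a purely elastic model would predict. All numbers are illustrative assumptions.

# Python sketch: 1D elastic vs elastic-plastic spring-back of a clamped flange.
E = 210e3          # Young's modulus [MPa]
sigma_y = 250.0    # yield stress [MPa]
L = 50.0           # flange length [mm]
gap = 0.8          # tolerance-induced mismatch closed by the fixture [mm]

strain_imposed = gap / L
stress = min(E * strain_imposed, sigma_y)       # elastic-perfectly-plastic response
elastic_recovery = stress / E * L               # spring-back on release [mm]
permanent_set = gap - elastic_recovery          # residual deformation [mm]

print(f"purely elastic 1D model: spring-back = {gap:.3f} mm (full recovery)")
print(f"elastic-plastic 1D model: spring-back = {elastic_recovery:.3f} mm, permanent set = {permanent_set:.3f} mm")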
Abstract: Modern corporations, institutions, universities and schools consider e-education as a way of educating larger groups of students in less time and of reducing the use of various resources. Since setting up a web-system for e-education requires a significant amount of time, as well as the necessity of having a computer and other resources, the assessment of the goals and desired effects of the online system should be made in the preparation phase. It is important to highlight that only putting course contents on the web, without using appropriate pedagogical models and principles, without appropriate means of communication between participants and instructors and without the use of modern information technologies to present the learning content is not enough to fulfil educational goals. It should be noted that, after the substantial initial investment and excitement about the possibilities of e-education, many sources of difficulty and dissatisfaction regarding online learning have emerged. Most e-learning studies propose possible improvements in course material, with little attempt to explore the learners' dimension, context, pedagogical considerations or technology aspects. In this article, we have analysed the aspects that need to be considered when designing an e-learning platform for engineering education and we have proposed a methodology, based on the Quality Function Deployment approach, useful for the preparation phase of distance learning systems. Finally, we have explained the impact of new information technologies (Web3D technologies) on online engineering education and highlighted how these tools have been implemented in the e-learning platforms for mechanical engineering that we have developed.
Keywords: distance education | engineering education | interactive learning environments | lifelong learning | pedagogical issues | technology integration/innovation
Abstract: The next generation of tyre sensors will be bonded directly onto the inner liner (IL) in order to measure important parameters such as strain, vehicle load, contact pressure, the tyre-road friction coefficient or wear. Sensor packages (SP) have a sensor node, which is bonded and kept in position by a specifically designed rubber housing (RH). Since the measurements they provide to the car control unit are used to improve the active or passive safety of vehicles, these packages can be considered critical safety components that should be dimensioned carefully. A tyre analysis, whether static or dynamic, in which the complete structure is considered, under any load, inflating pressure or temperature working condition, is mainly oriented towards defining the tyre product. The insertion of an SP inside such a complex tyre model, with the purpose of only analysing its behaviour, would be too time-consuming, considering the strong nonlinear behaviour of the tyre model. Therefore, this work presents a method that can be used to define a computationally lightweight finite element method (FEM) simulation, which is able to recreate the working conditions to which an SP is subjected. The basic idea behind this method is to separate the analysis of the SP from the structural tyre analyses; the latter is only run once, independently. The first task is to impose the deformed shape on a simplified model of the tyre with a bonded SP. All the deformation states that occur during rolling are computed in a static FEM simulation. The second task is to apply the inertial forces that act on the SP, whether computed or measured directly on the tyre, as external loads. These tasks are implemented in user-defined routines that are executed by the FEM solver. The method permits the stress concentration inside the RH material volume to be identified, at any angular position of the wheel. This information is then used, during the design process, to identify the most suitable geometry to level out the stress distribution. The resulting shape can be tested under different boundary conditions, by substituting the corresponding data arrays, but using the same FEM model. Since the deformed shapes and inertial forces are stored as simple text matrices (which are also used to form a test library), they can be easily interchanged in a flexible way. This more extended design process can reduce the costs of prototyping moulds. The proposed methodology has been developed and tested for the Pirelli Cyber™ Tyre project.
Keywords: FEM | Rubber house | Tyre sensor
Abstract: The application of three-dimensional (3D) facial analysis and landmarking algorithms in the field of maxillofacial surgery and other medical applications, such as diagnosis of diseases by facial anomalies and dysmorphism, has gained a lot of attention. In a previous work, we used a geometric approach to automatically extract some 3D facial key points, called landmarks, working in the differential geometry domain, through the coefficients of fundamental forms, principal curvatures, mean and Gaussian curvatures, derivatives, shape and curvedness indexes, and tangent map. In this article we describe the extension of our previous landmarking algorithm, which is now able to extract eyebrows and mouth landmarks using both old and new meshes. The algorithm has been tested on our face database and on the public Bosphorus 3D database. We chose to work on the mouth and eyebrows as a separate study because of the role that these parts play in facial expressions. In fact, since the mouth is the part of the face that moves the most and affects mainly facial expressions, extracting mouth landmarks from various facial poses means that the newly developed algorithm is pose-independent.
Keywords: 3D face | 3D scanner | Differential geometry | Face morphology | Medical diagnosis | Soft-tissue landmarks
Abstract: The 3D scanner domain normally relies on the presence of many tools and technologies. They are mainly divided between contact and non-contact ones, but at present a new trend is emerging, starting from the articulated-arm family, where the traditional mechanical joints, which allow flexible measurements on complex objects, have been replaced by handy scanners with laser trackers or with markers. These new systems have the advantage of capturing the entire object shape without changing the position of the object itself, but by moving the scanning device, without the involvement of long software alignments and post-treatment operations. However, the use of laser trackers obliges the operator to keep the scanner head always visible, while the use of markers is quite critical because the measurement precision depends strongly on the marker position. Starting from these considerations, the research project presented in this paper has been focused on the design and development of a flexible 3D scanner, without trackers and markers, that merges low-cost optical technologies with low-cost inertial sensors. Thanks to this synergy, the developed 3D scanner is able to measure the three-dimensional shape of the object by recording its movement and transferring it to a control unit for obtaining a point cloud. The paper presents the main concepts of the 3D scanner design and is concluded by an experimental phase made with the use of a benchmark.
Keywords: 3D scanners | Inertial sensors | Reverse engineering
Abstract: Product lifecycle management (PLM) is increasingly important for organizations acting in dynamic and competitive markets, since PLM can improve decision-making and reduce costs by taking advantage of the efficiencies and effectiveness coming from improved market intelligence and collaboration between partners. In recent years, academics as well as practitioners and consultants have developed a wide range of maturity models for the purpose of measuring and describing certain aspects of PLM "maturity". PLM maturity models make it possible to assess the relative position of companies on their road to complete PLM implementation. However, it is a tough job for a company to select the most appropriate PLM maturity model, as each maturity model has different attributes. Since, to our knowledge, no benchmarking for PLM maturity models exists to date, the rising number of maturity models implies problems with respect to retrievability and reusability. With the aim of easing the time-consuming and exhausting search for and selection of appropriate assessment models, a benchmarking framework for PLM maturity models is presented in this paper.
Keywords: Capability maturity model (CMM) | Maturity models | Product lifecycle management (PLM)
Abstract: The face is perhaps the most important human anatomical part, and its study is very important in many fields, such as the medical and identification ones. The technical literature presents many works on this topic involving bi-dimensional solutions. Even if these solutions are able to provide interesting results, they are strongly affected by image distortion. Thanks to the significant improvements obtained in the 3D scanner domain (photogrammetry, for instance), today it is possible to replace 2D images with more precise and complete 3D models (triangulated point clouds). Working on three-dimensional data, in fact, it is possible to obtain a more complete set of information about the face morphology. At present, even if it is possible to find interesting papers in this field, there is a lack of a complete protocol for converting the large amount of data coming from three-dimensional point clouds into a reliable set of facial data, which could be employed for recognition and medical tasks. Starting from some anatomical human face concepts, it has been possible to understand that some soft-tissue landmarks could be the right data set for supporting many processes working on three-dimensional models. So, working in the Differential Geometry domain, through the Coefficients of the Fundamental Forms, the Principal Curvatures, Mean and Gaussian Curvatures and also the derivatives and the Shape and Curvedness Indexes, the study has proposed a structured methodology for soft-tissue landmark formalization in order to provide a methodology for their automatic identification. The proposed methodology and its sensitivity have been tested with the involvement of a series of subjects acquired in different scenarios.
Keywords: 3D models | 3D scanners | Face morphology | Soft tissue landmarks
Abstract: In the last decade, three-dimensional landmarking has gained attention for different applications, such as face recognition for both identification of suspects and authentication, facial expression recognition, corrective and aesthetic surgery, and syndrome study and diagnosis. This work focuses on the last one by proposing a geometrically-based landmark extraction algorithm aimed at diagnosing syndromes on babies before their birth. A pivotal role in this activity is played by the support provided by physicians and 3D ultrasound tools for working on real faces. In particular, the landmarking algorithm proposed here relies only on descriptors coming from Differential Geometry (Gaussian, mean, and principal curvatures, derivatives, coefficients of the first and second fundamental forms, Shape and Curvedness indexes) and is tested on nine facial point clouds referring to nine babies acquired by a three-dimensional ultrasound tool at different weeks' gestation. The results obtained, validated with the support of four practitioners, show that the localization is quite accurate. All errors lie in the range between 0 and 3.5 mm and the mean distance for each shell is in the range between 0.6 and 1.6 mm. The landmarks showing the highest errors are the ones belonging to the mouth region. Instead, the most precise landmark is the pronasal, on the nose tip, with a mean distance of 0.55 mm. Based on the current literature, this study fills a gap in the state of the art of the field, as existing facial studies on 3D ultrasound do not yet address automatic landmarking.
Keywords: 3D echography | 3D face | 3D ultrasound | Dysmorphisms | Landmarking | Syndrome diagnosis
Abstract: In the last decades, several three-dimensional face recognition algorithms have been conceived, designed, and assessed. What they have in common can hardly be said, as they differ in theoretical background, tools, and method. Here we propose a new 3D face recognition algorithm, entirely developed in Matlab®, whose framework comes entirely from differential geometry. First, 17 soft-tissue landmarks are automatically extracted relying on geometrical properties of the facial shape. We made use of derivatives, coefficients of the fundamental forms, principal, mean, and Gaussian curvatures, and shape and curvedness indexes. Then, a set of geodesic and Euclidean distances, together with the nose volume and the ratios between geodesic and Euclidean distances, has been computed and summed in a final score, used to compare faces. The main contribution of this work, we believe, is that its theoretical substratum is differential geometry with its various descriptors, which is something totally new in the field.
Keywords: 3D face | Face recognition | Geodesic distance | Geometry | Landmark | Shape index
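The comparison score described above sums geodesic distances, Euclidean distances and their ratios between landmark pairs. A hedged sketch of how such features can be approximated (Dijkstra on the mesh edge graph as a geodesic surrogate; not the authors' Matlab implementation) is given below.

# Python sketch: approximate geodesic and Euclidean landmark distances on a mesh.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def edge_graph(vertices, faces):
    # vertices: (N, 3); faces: (M, 3) triangle vertex indices.
    i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
    j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])
    w = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    g = coo_matrix((w, (i, j)), shape=(len(vertices), len(vertices))).tocsr()
    return g.maximum(g.T)                         # undirected weighted edge graph

def landmark_features(vertices, faces, lm_a, lm_b):
    geo = dijkstra(edge_graph(vertices, faces), indices=lm_a)[lm_b]   # graph geodesic
    euc = np.linalg.norm(vertices[lm_a] - vertices[lm_b])             # straight-line distance
    return geo, euc, geo / euc                                        # the ratio is scale-insensitive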
Abstract: In the current "mass customization" scenario, product complexity is increasing significantly due to the necessity to answer as quickly and effectively as possible to many different costumer needs but maintaining costs under control. In this scenario, requirements management becomes a fundamental features for the entire product lifecycle, as enterprises need to have a complete and clear idea of the market for succeeding in developing and supporting the right and innovative product. Moreover, considering that product lifecycle is characterized by many "trade-off", so that product features are often negotiated in order to fulfil to conflicting requirements, it is important to support the "traceability" of the entire lifecycle "negotiation" process. For this reason, PLM platform has to provide suitable methodologies and tools able to efficiently support the design and management of large set of complex requirements. Requirements Management Tools (RMt) embedded in PLM solutions help keeping specifications consistent, up-to-date, and accessible. At present, there are different possible solutions, but a shared PLM integrated seems not to be available. In order to fill this gap, this paper has developed an user-based strategy, based on Kano methodology, so on "user satisfaction", in order to define a structured set of guidelines to support the design of the features of an integrated PLM requirement management tool. © 2014 Elsevier B.V.
Keywords: Customer requirements | Kano | Product Lifecycle Management | Requirements management | Satisfaction
Abstract: This paper proposes a design strategy and a Computer Supported Cooperative Work in Design (CSCWD) system, which have co-evolved together to meet the goal of improving the Preliminary Design Process (PDP). Because there is no consensus for a definition of PDP, here we define it as an evolutionary, iterative and heuristic process. Methodological Circulation is a design strategy where multidisciplinary design teams explore the solution space of a problem, while a project manager facilitates this heuristic and effectual exploration by determining the way forward through appropriate decision-making. Creating a computer support system for this design strategy requires considering 4 factors at the base of this co-evolutionary approach: the problem-solving cognitivist posture, the dynamic of the preliminary design activities, the nature of collaboration, and the composition of the design group and management. We present how these factors have mutually influenced our CSCWD system called TATIN-PIC and the MC. We present the preliminary results of ethnographic observations of design teams performing project planning within the TATIN-PIC environment and a traditional designer's environment.
Keywords: Collaborative design | Computer supported cooperative work in design | Design methodology | Early design phases | Methodological circulation
Abstract: Working in the low-cost 3D scanner design domain, it would be very interesting to employ inertial technologies, because they could provide the spatial data of an object's surface by recording the scanner's movements, at a very low cost in terms of sensor investment. Unfortunately, these technologies are characterized by distortion problems that normally do not allow measurements satisfactory enough to be employed in 3D scanning applications. This happens when working with Magnetic Angular Rate Gravity (MARG) sensors, on which many reports have been written to describe the methods used to suitably manage the data provided by the sensors in order to obtain an accurate orientation estimation; but only a few address the problem of calibration and distortion compensation. Furthermore, the proposed approaches usually involve both complex sensor models and accurate calibration facilities, expensive from the workload, computational and economic points of view, which compromises their possible employment in low-cost 3D scanning applications. In this paper, a novel approach for MARG sensor heading alignment and distortion compensation is proposed, in order to increase the reliability of the information provided by the sensors and improve the process of attitude estimation, so as to reach a measurement quality level sufficient for 3D scanning applications. Both the effectiveness and the reliability of the proposed approach are validated by experimental results, and the performances are evaluated by comparing the quality of the outcome provided by the same attitude estimation algorithm when processing raw data and compensated data.
Keywords: 3D scanners | Attitude estimation | Calibration procedure | Magnetic angular rate gravity
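As a concrete, simplified example of the distortion compensation theme above, the sketch below applies a common min/max hard-iron offset and per-axis rescaling to raw magnetometer data before it feeds an attitude-estimation filter. This is a generic textbook-style correction shown only for illustration; the paper proposes its own calibration and compensation procedure.

# Python sketch: simple magnetometer hard-iron / per-axis scale compensation.
import numpy as np

def calibrate(raw):
    # raw: (N, 3) magnetometer samples collected while slowly rotating the
    # MARG sensor through many orientations.
    mx, mn = raw.max(axis=0), raw.min(axis=0)
    offset = (mx + mn) / 2.0                        # hard-iron bias per axis
    radii = (mx - mn) / 2.0                         # per-axis field radius
    scale = radii.mean() / np.maximum(radii, 1e-9)  # crude soft-iron rescaling
    return offset, scale

def compensate(sample, offset, scale):
    return (np.asarray(sample) - offset) * scale    # corrected reading for the attitude filter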
Abstract: The variational analysis of a compliant assembly is influenced to a great extent by the plastic deformations of the parts, which are caused by the fixturing and the application of resistance spot welding to the flanges, which mismatch because of tolerance effects. The spring-back of an assembly turns out to be very different when evaluated with an elastic material model and with a plastic model. The aim of this paper is to define the finite element analysis (FEA) methods that are necessary to transfer the complex interaction of the complete resistance spot welding process, which is best described by a coupled thermoelectrical-mechanical simulation with 3D solid elements, to a shell model. The entire welding process is simulated by considering the following steps: fixture closure on the parts, weldgun closure on the flanges, heating and cooling of the weld spots, release and measuring of the resulting assembly. The constraints corresponding to the datum point scheme are defined on the shell partitions, and a mesh offset is applied to the welding flange in order to simulate the geometry mismatch caused by the effects of the dimensional and geometrical tolerances. The methods developed to recreate the welding process conditions on a shell model are implemented in FEA runtime routines. The temperature distribution, previously exported from 3D thermal simulations, is loaded and imposed on the shell nodes of the welding partition during the heating and cooling phases, in order to make the material plastic and reduce the elastic energy available for spring-back. The weld caps that act against the parts' flanges are defined with analytical rigid surfaces in order to avoid the necessity of explicitly modelling them in the FEA software. The contact between the welding flanges is redefined to lock the nodes that surpass the melting temperature. The methods were tested on the shell model of a butt joint. The deformations obtained in this way were in good agreement with the results of the complete 3D thermoelectrical-mechanical simulation. This result makes it possible to calculate deformations with a plastic model in a reasonable time and use them as input data for an improved variational analysis.
Keywords: Compliant assembly | FEM | Plasticity | Quality | Resistance spot welding | Shell model | Variational analysis
Abstract: Face study and description through landmarks is a key activity in many fields, in particular the medical one for both corrective and esthetic surgery purposes. In a previous work, we used a geometric approach to automatically extract landmarks working in the Differential Geometry domain, through the coefficients of the Fundamental Forms, the Principal Curvatures, Mean and Gaussian Curvatures, derivatives, and Shape and Curvedness Indexes. In this paper we describe the improvements made to our previously developed algorithm by using a new parameterization of the mesh, new geometrical descriptors, and new conditions.
Keywords: 3D face | 3D scanner | Differential Geometry | Face morphology | Soft-tissue landmark extraction
Abstract: Recently, 3D landmark extraction has been widely researched and experimented with in the medical field, for both corrective and aesthetic purposes. The automation of these procedures on three-dimensional face renderings is desirable for the specialists who work in this field. In this work we propose a new method for accurate landmark localization on facial scans. The method relies on geometrical descriptors, such as curvatures and the Shape Index, for computing candidate and initial points, and on a statistical model based on Procrustes Analysis and Principal Component Analysis, which is fitted to the candidate points, for extracting the final landmarks. The elaborated method is independent of face pose.
Keywords: 3D face | Differential Geometry | Landmark extraction | PCA | Procrustes Analysis
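The statistical model mentioned above combines Procrustes Analysis and Principal Component Analysis. The condensed sketch below builds such a point distribution model from training landmark sets and uses it to regularise a candidate configuration; training data, mode count and normalisation details are illustrative assumptions rather than the paper's implementation.

# Python sketch: Procrustes-aligned PCA model of facial landmarks.
import numpy as np
from scipy.spatial import procrustes

def build_model(shapes):
    # shapes: list of (L, 3) landmark arrays from training faces.
    ref = shapes[0]
    aligned = []
    for s in shapes:
        _, s_aligned, _ = procrustes(ref, s)        # remove translation, rotation, scale
        aligned.append(s_aligned.ravel())
    X = np.array(aligned)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)   # PCA by SVD
    return mean, vt

def project(candidate, mean, vt, n_modes=3):
    # Keep only the first modes: candidate landmarks snap to plausible face shapes.
    b = vt[:n_modes] @ (candidate.ravel() - mean)
    return (mean + vt[:n_modes].T @ b).reshape(-1, 3)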
Abstract: The variational analysis of compliant assemblies is mainly based on linear elastic models. Some guidelines have been defined to integrate material plasticity into a tolerance analysis model in order to improve its results when considering the resistance spot welding (RSW) process. A finite element model that simulates the body-in-white and RSW processes has been applied to butt and slip joints, with parts subjected to dimensional and geometrical tolerances that cause gap mismatching conditions and loading interference on fixtures. The dimensional quality of assemblies is affected by plasticization near the welding spot, at the base of the welded flanges and near the locators. The spring-back evidenced a relative rotation of the parts.
Keywords: Compliant assembly | FEM | Quality | Resistance spot welding | Tolerance analysis
Abstract: Considering that the market is always asking for more and more complex and customised products, but that at the same time it is necessary to keep cost and time under control, enterprises have to find appropriate methodologies and tools able to support their knowledge sharing, in order to exploit as much as possible all the experiences and to focus all the efforts on the development of the innovative aspects of the product. To reach this aim, it is necessary to introduce into product lifecycle management approaches appropriate methodologies for supporting the knowledge formalised during the product development process, in order to store the company expertise in a comprehensible, accessible and simple way and to support its retrieval by anyone else who could be interested in it. This paper proposes a methodology for knowledge reuse during the product 'concept' phase, working on customer requirements, product specifications and semantic strategies. The proposed methodology has been experimented with inside a students' race team, supporting the development of the new version of their vehicle.
Keywords: Knowledge management | Knowledge reuse | Product concept | Product lifecycle management
Abstract: Distance measures and geometrical features are widely used to describe faces. Generally, they are extracted punctually from landmarks, namely anthropometric reference points. The aims are various, such as face recognition, facial expression recognition, face detection, and the study of changes in facial morphology due to growth or dysmorphologies. Most of the time, landmarks are extracted with the help of an algorithm or manually located on the faces. Then, measures are computed or geometrical features are extracted to fulfil the scope of the study. This paper is intended as a survey collecting and explaining all these features, in order to provide a structured user database of the potential parameters and their characteristics. Firstly, facial soft-tissue landmarks are defined and contextualized; then the various measures are introduced and some results are given; lastly, the most important measures are compared to identify the best one for face recognition applications.
Keywords: 3D models | 3D scanners | Face morphology | Soft tissue landmarks
Abstract: The face is one of the most important parts of the human anatomy, and its study is very important, especially for developing automatic public security recognition strategies. In order to support this field, it is necessary to find a formal way of converting what human eyes normally do in recognizing one person from another by implicitly extracting some morphological features. Since human recognition happens through an automatic "authentication" of facial shape and features, this study should be undertaken in the geometrical domain. The technical literature shows many parameters that could be adopted for finding a solution to this problem, but at present there is no evidence of a reliable solution. For this reason, this work, analysing strengths and constraints of what is available in the geometrical domain, provides a first guideline for supporting the development of an automatic face recognition approach. Starting from differential geometry, such as the coefficients of the fundamental forms, the principal curvatures, mean and Gaussian curvatures, the derivatives and the shape and curvedness indices introduced by Koenderink and van Doorn, this paper, working on a consistent set of case studies, analyses the geometrical descriptors' performances in three-dimensional facial study through a set of indicators (similarity between different faces, sensitivity to noise, etc.). This is a preliminary study for analysing the behaviours of these descriptors on faces. It may be used as a guideline or a theoretical framework for researchers studying face shape or for face recognition applications.
Keywords: 3D scanners | Differential geometry | Face morphometry | Geometrical descriptors | Geometrical human modelling
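Since the descriptors compared above all derive from the first and second fundamental forms, the sketch below spells out the standard Monge-patch formulas for a facial depth map z = f(x, y): fundamental form coefficients, Gaussian and mean curvature, and the principal curvatures obtained from them. These are textbook differential-geometry relations, not code from the paper.

# Python sketch: curvature descriptors of a depth map z(x, y).
import numpy as np

def curvatures(z, spacing=1.0):
    fy, fx = np.gradient(z, spacing)                     # first derivatives
    fxy, fxx = np.gradient(fx, spacing)                  # second derivatives
    fyy, _ = np.gradient(fy, spacing)
    E, F, G = 1 + fx**2, fx * fy, 1 + fy**2              # first fundamental form
    denom = np.sqrt(1 + fx**2 + fy**2)
    L, M, N = fxx / denom, fxy / denom, fyy / denom      # second fundamental form
    K = (L * N - M**2) / (E * G - F**2)                  # Gaussian curvature
    H = (E * N - 2 * F * M + G * L) / (2 * (E * G - F**2))   # mean curvature
    root = np.sqrt(np.maximum(H**2 - K, 0.0))
    return K, H, H + root, H - root                      # K, H, k1, k2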
Abstract: The present globalized market is forcing many companies to invest in new strategies and tools for supporting knowledge management. This aspect is becoming a key factor in industrial competitiveness due to the presence of extended enterprises that normally deal with huge data exchange and sharing processes. This scenario is due to the presence of partners geographically distributed over the entire globe, who participate in different steps of the product lifecycle (product development, maintenance and recycling). At present, Product Lifecycle Management (PLM) seems to be the appropriate solution to support enterprises in this complex scenario, even though a real standardized approach for the implementation of knowledge sharing and management tools does not exist today. For this reason, the aim of this paper is to develop a knowledge management operative methodology able to support the formalization and the reuse of the enterprise expertise acquired while working on previous products. By focusing on consumer packaged goods enterprises and on the concept development phase (which is one of the most knowledge-intensive phases of the whole product lifecycle), this research work has developed a new systematic methodology to support knowledge codification and knowledge management operations. The new methodology integrates Quality Function Deployment (QFD) and the Teoriya Resheniya Izobreatatelskikh Zadatch (TRIZ). A case study on the problem of waste disposal has also been conducted to validate the proposed methodology.
Keywords: Knowledge sharing | PLM | QFD | TRIZ | Waste disposal
Abstract: When talking about reverse engineering, it is necessary to focus on the management of point clouds. Generally speaking, every 3D scanner device codifies simple and complex geometries providing different point cloud densities as an output. Point cloud density is usually more correlated with the technical specifications of the device employed than with the morphology of the object acquired. This situation is due to the frequent use of structured grids by a large number of devices. In order to solve this problem, we therefore need to integrate the classical structured grid acquisition with a smart selective one, able to identify different point cloud densities in accordance with the morphological complexity of the acquired object regions. Currently, this goal can be reached in many different ways, each providing different performances depending on the object morphology and on the performances of the 3D scanner devices. Unfortunately, there does not yet exist a universal approach able to be employed in all cases. For this reason, the present paper proposes a first analysis of the available methodologies and parameters, in order to provide final users with some guidelines for supporting their decisions according to the specific application they are facing. Moreover, the developed guidelines have been illustrated and validated by a series of case studies of the proposed method.
Keywords: 3D scanner | Morphological analysis | Point cloud management | Reverse engineering
Abstract: The assessment of facial soft tissues can be implemented using only anatomical landmarks. These points are significant in the medical context because they are able to provide meaningful information about the morphology and dimensions of the human face. At present, their detection and location are performed by expert physicians using palpation. Even if this procedure normally provides reliable information, the reliability of the results is proportional to the expertise of the physician. Considering that many physicians are now beginning to use 3D scanners that provide three-dimensional data of the human face, it is possible to implement a robust and repeatable methodology that supports the physician's diagnosis. To reach this goal it is necessary to implement a methodology based on a geometrical codification of landmarks, which mathematically formalizes the physician's visual and palpation analyses of the real patient.
Keywords: 3D scanners | Automatic detection | Human face | Soft tissues landmarks
Abstract: Model-based definition (MBD) is a new strategy of product lifecycle management (PLM) based on the transition of computer-aided design (CAD) models from simple gatherers of geometrical data to comprehensive sources of information for the overall product lifecycle. With MBD, most of the data related to a product are structured inside native CAD models, instead of being scattered in different forms through the PLM database. The aims of MBD are the suppression of redundant documents and drawings, better data consistency, better product/process virtualization, and better support for all computer-aided technology tasks within the engineering and manufacturing disciplines. Developing MBD today, for a medium- to large-sized company in the automotive and aerospace sectors, deals primarily with its fundamentals: data structures. Companies need a common approach to structure data in reusable, unified forms inside native three-dimensional CAD models. For this reason, this research work focuses on a method for supporting MBD implementation through the use of the quality function deployment approach. In order to analyze the efficacy of the proposed approach, it has been validated in the aerospace and defense domain, where companies deal with complex products, characterized by a large amount of data exchange, and where collaborative design is a fundamental practice.
Keywords: Computer-aided design | Model-based definition | Paperless engineering | Product lifecycle management | Product virtualization
Abstract: To obtain the best surgical results in orthognathic surgery, treatment planning and evaluation of results should be performed. In these operations it is necessary to provide the physician with powerful tools that can underline the behavior of soft tissue. For this reason, considering the improvements provided by the use of 3D scanners in medical diagnosis, we propose a methodology for analyzing facial morphology working with geometrical features. The methodology has been tested on patients with malocclusion in order to analyze the reliability and efficiency of the provided diagnostic results.
Keywords: 3D scanner | Facial morphology | Shape analysis | Soft tissue shifts
Abstract: When working with 3D scanner devices, one of the most critical problems is usually the low quality of the point cloud provided by the scanning device. This problem mainly involves the following two aspects. The first one is surely the choice of the strategy used to acquire the object shape. Most of the time, the selected strategy is based on selective sampling. This choice has proved to be valid, especially when working with free-form surfaces: by using a selective sampling strategy it is in fact possible to limit the point density increase to those regions showing high morphological complexity. The second aspect is the difficulty of identifying which 3D scanner device best fulfils the specific application needs, which vary depending on the specific scenario in which the customer/user works (resolution, accuracy, ...). As far as this last issue is concerned, the presence of many different acquisition technologies and devices on the market is a source of confusion for users, who sometimes choose the wrong solution instead of finding the most efficient one. Hence, in order to support potential users in their selection, this paper proposes a solution able to integrate the morphological analysis of the acquired object with the customer needs (resolution, accuracy, ...) and with the 3D scanner performances, in order to help users identify the optimal solution.
Keywords: Free-Form | Reverse engineering | Sampling strategy | Scanner uncertainty
Abstract: During the digitization process of a physical object, the operator has to choose an acquisition pitch. Currently, 3D scanners employ constant pitches. For this reason, the choice of the grid dimension normally represents a compromise between the scanner performances and the resolution and accuracy required by the specific application. This is a critical problem because, normally, the object shape is a combination of different geometries with different morphological complexities. As a consequence, while some basic geometries (i.e., planes, cylinders, and cones) require only a few points to describe their behavior, others need much more information. Normally, this problem is solved with a significant operator involvement. Starting from the object morphology and from the 3D scanner performances, the operator finds the optimal acquisition strategy through an iterative and refining process made of many attempts. This approach does not guarantee an efficient acquisition of the object, because it depends strongly on the subjective ability of the operator involved in the acquisition. Many approaches propose point cloud management methodologies that introduce or erase punctual information, working with statistical hypotheses after the acquisition phase. This research work proposes an operative strategy which starts from a raw point acquisition and then partitions the object surface, identifying the boundaries of different morphological zones (shape changes). As a consequence, some of the identified regions are re-digitized with deeper scans in order to reach more precise morphological information. The proposed partitioning methodology has been developed to interact directly with the 3D scanner. It integrates the use of a global morphological descriptor (Gaussian curvature), managed so as to be applicable in a discrete context (point cloud), with the concept of the 3D scanner measuring uncertainty. This integration has been proposed in order to provide an automatic procedure and a "curvature variation threshold" able to identify real significant shape changes. The proposed methodology neglects those regions where the shape changes are only correlated with the uncontrolled noise introduced by the specific 3D scanner performances.
Keywords: Geometric morphology | Reverse engineering | Scanning strategy
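A minimal sketch of the underlying idea, assuming a structured height-field scan and synthetic data (none of this code comes from the paper): estimate the discrete Gaussian curvature, derive a noise floor from the declared measuring uncertainty via Monte Carlo on a flat patch, and flag only the cells whose curvature rises above it:

import numpy as np

def gaussian_curvature(z, pitch):
    """Discrete Gaussian curvature of a height field sampled on a square grid."""
    zy, zx = np.gradient(z, pitch)
    zxy, zxx = np.gradient(zx, pitch)
    zyy, _ = np.gradient(zy, pitch)
    return (zxx * zyy - zxy**2) / (1.0 + zx**2 + zy**2) ** 2

def noise_floor(pitch, sigma, shape=(64, 64), trials=20, q=99.0):
    """Curvature level produced by measurement noise alone on a flat patch:
    anything below this level cannot be distinguished from scanner noise."""
    rng = np.random.default_rng(0)
    levels = []
    for _ in range(trials):
        flat = rng.normal(0.0, sigma, shape)        # noisy plane
        levels.append(np.percentile(np.abs(gaussian_curvature(flat, pitch)), q))
    return float(np.mean(levels))

# Synthetic raw acquisition: a tilted plane plus one high-curvature bump.
pitch, sigma = 0.5, 0.02                            # mm; sigma = assumed scanner uncertainty
x, y = np.meshgrid(np.arange(0, 32, pitch), np.arange(0, 32, pitch))
z = 0.05 * x + 2.0 * np.exp(-((x - 16)**2 + (y - 16)**2) / 4.0)
z += np.random.default_rng(1).normal(0.0, sigma, z.shape)

k = np.abs(gaussian_curvature(z, pitch))
rescan_mask = k > noise_floor(pitch, sigma)         # zones worth a denser re-scan
print(f"cells flagged for re-scanning: {rescan_mask.mean():.1%}")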
Abstract: This article compares most of the three-dimensional (3D) morphometric methods currently proposed in the technical literature, evaluating their morphological informative value on a case study of five patients affected by malocclusion. The compared methods are: conventional cephalometric analysis (CCA), generalised Procrustes superimposition (GPS) with principal-components analysis (PCA), thin-plate spline analysis (TPS), multisectional spline (MS) and clearance vector mapping (CVM). The results show that MS provides the most reliable and useful diagnostic information. © 2008 British Association of Plastic, Reconstructive and Aesthetic Surgeons.
Keywords: 3D Scanner | Facial Morphology | Shape analysis
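A compact, hedged sketch of the GPS + PCA step only (the landmarks are random stand-ins; the study's actual landmark sets and implementation are not reproduced here):

import numpy as np

def procrustes_align(X, ref):
    """Translate, scale and rotate landmark set X (n x 3) onto ref."""
    X = X - X.mean(0);  ref = ref - ref.mean(0)
    X = X / np.linalg.norm(X);  ref = ref / np.linalg.norm(ref)
    U, _, Vt = np.linalg.svd(X.T @ ref)
    return X @ (U @ Vt)          # reflections are not handled in this sketch

def gpa(shapes, iters=5):
    """Generalised Procrustes superimposition of a list of (n x 3) landmark sets."""
    ref = shapes[0] - shapes[0].mean(0)
    ref = ref / np.linalg.norm(ref)
    for _ in range(iters):
        aligned = [procrustes_align(s, ref) for s in shapes]
        ref = np.mean(aligned, axis=0)
        ref = ref / np.linalg.norm(ref)
    return aligned

rng = np.random.default_rng(0)
base = rng.normal(size=(10, 3))                                  # 10 facial landmarks
shapes = [base + rng.normal(scale=0.05, size=base.shape) for _ in range(5)]  # 5 patients

aligned = gpa(shapes)
flat = np.array([a.ravel() for a in aligned])
flat -= flat.mean(0)
_, sv, _ = np.linalg.svd(flat, full_matrices=False)
explained = sv**2 / np.sum(sv**2)
print("variance explained by first two PCs:", np.round(explained[:2], 3))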
Abstract: To obtain the best surgical results in orthognathic surgery, treatment planning and the evaluation of results should be performed. In these operations it is necessary to provide physicians with powerful tools able to highlight the behaviour of soft tissue. For this reason, and considering the improvements that 3D scanning technologies such as photogrammetry have brought to medical diagnosis, this paper proposes a methodology for analysing facial morphology by working with geometrical features. The methodology has been tested on patients affected by malocclusion, in order to analyse the reliability and efficiency of the provided diagnostic results.
Keywords: 3D scanner | Facial morphology | Shape analysis | Soft tissue shifts
Abstract: The reverse engineering process is one of the best known methodologies for creating three-dimensional (3D) virtual models starting from physical ones. Even if in the last few years its usage has significantly increased, the remarkable involvement of the operator has so far represented a significant constraint on its growth. Given that this process, and in particular its first step (the acquisition phase), strongly depends on the operator's ability and expertise, this paper proposes a strategy for automatically supporting an "optimal" acquisition phase. Moreover, the acquisition phase is the only moment in which there is direct contact between the virtual model and the physical model. For this reason, designing an "optimal" acquisition phase provides as output an efficient set of morphological data, which turns out to be extremely useful for the following reverse engineering steps (pre-processing, segmentation, fitting, ...). This scenario drives the researcher to use a selective sampling plan, whose grid dimensions are correlated with the complexity of the local surface region analysed, instead of a constant one. As a consequence, this work proposes a complete operative strategy which, starting from a first raw preliminary acquisition, provides a new selective sampling plan for the acquisition phase, allowing a denser and more efficient second scan. The proposed solution does not require the creation of any intermediate model and relies exclusively on the analysis of the metrological performances of the 3D scanner device and of the morphological behaviour of the acquired surface. © 2009 Elsevier B.V. All rights reserved.
Keywords: Free-form | Reverse engineering | Sampling strategy | Scanner uncertainty
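A hedged sketch of what a selective sampling plan could look like in practice, assuming the local curvature from the raw pass and the pitch limits from the scanner datasheet are available (the mapping and all figures are illustrative, not the paper's):

import numpy as np

def sampling_plan(curvature, pitch_fine, pitch_coarse, k_low, k_high):
    """Map local |curvature| to an acquisition pitch per grid region:
    flat regions get the coarse pitch, highly curved ones the finest pitch."""
    k = np.clip(np.abs(curvature), k_low, k_high)
    t = (k - k_low) / (k_high - k_low)          # 0 = flat, 1 = highly curved
    return pitch_coarse - t * (pitch_coarse - pitch_fine)

# Example: curvature estimated from a first raw pass (values invented),
# pitch limits taken from the scanner datasheet (also invented).
curv = np.array([[0.00, 0.02, 0.40],
                 [0.00, 0.10, 0.90],
                 [0.00, 0.01, 0.30]])
plan = sampling_plan(curv, pitch_fine=0.1, pitch_coarse=1.0, k_low=0.05, k_high=0.5)
print(np.round(plan, 2))   # pitch in mm to use when re-scanning each region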
Abstract: Reverse engineering is a technology which generates a virtual representation of an existing part based on point data acquired through measuring techniques. Different technologies can be employed to obtain a virtual representation of a physical model, but the use of one solution (3D scanner) rather than another provides significantly different results, since the available 3D scanners are characterised by different performances (resolution, accuracy, ...). However, even if great attention were focussed on the selection of the most appropriate 3D scanner device, this would not be enough to assure a consistent virtual representation of the physical model. The selection of the most suitable 3D scanner can make the point acquisition more accurate, but it cannot assure an efficient point distribution in terms of number and location. These two parameters are part of the acquisition strategy, which can be implemented only after deciding which 3D scanner is to be used. In order to support the next steps of the reverse engineering cycle (segmentation, fitting, ...), the acquisition phase should provide an organised point cloud, which can be obtained through a consistent sampling plan. For this reason, this paper proposes a methodology for defining a selective sampling plan, whose grid dimensions are related to the complexity of the analysed local surface regions. © 2008 Springer-Verlag London Limited.
Keywords: 3D scanner performances | Free-form | Reverse engineering | Sampling strategy
Abstract: Many applications of online product inspection have gained a significant advantage from the use of 3D scanners, especially when working with complex (free-form) surfaces, where traditional inspection tools have proved to have significant limitations. Unfortunately, there are not only success stories, but also several situations in which the approach towards 3D scanner technologies has been unsuccessful. This is mainly due to the fact that it is hard to understand which 3D scanner solution is the best to adopt and which working protocol should be followed in order to obtain the best results for a specific application. These problems are often caused by the lack of long-standing expertise with 3D scanners and by inadequate technical sheets. The latter are, in fact, quite fragmented and inhomogeneous and provide only little information about the device behavior in different working scenarios, since they tend to be oriented towards theoretical metrological performances. Most of the time, this information is not useful for users, who need a single map showing both the 3D scanner technical performances and their correlation with the different working scenarios, in order to compare the available systems and to better understand their usage. To address this problem, this paper proposes a customer benchmarking methodology that combines benchmark geometry designs and sets of experiments. The benchmarking methodology is focused on the simulation of a computer-aided inspection working scenario and is carried out using the quality function deployment method, in order to be oriented towards customer needs. © 2008 Springer-Verlag London Limited.
Keywords: 3D scanner | Laser scanner | Quality function deployment | Reverse engineering
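An illustrative QFD-style scoring of the kind described above; the device names, need weights and ratings are invented for the example, not results from the paper:

# Customer needs and their importance weights (1-5), assumed for the example.
needs = {"accuracy": 5, "resolution": 4, "speed": 3, "ease of use": 2}

# Hypothetical ratings (1-5) of each candidate scanner against each need,
# as they might emerge from the benchmark geometries and experiments.
ratings = {
    "scanner A": {"accuracy": 5, "resolution": 4, "speed": 2, "ease of use": 3},
    "scanner B": {"accuracy": 3, "resolution": 3, "speed": 5, "ease of use": 5},
    "scanner C": {"accuracy": 4, "resolution": 5, "speed": 3, "ease of use": 2},
}

# Weighted sum ranks the devices with respect to the stated needs.
scores = {dev: sum(needs[n] * r[n] for n in needs) for dev, r in ratings.items()}
for dev, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(dev, s)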
Abstract: Both product design and manufacturing are intrinsically collaborative processes. From conception and design to project completion and ongoing maintenance, all points in the lifecycle of any product involve the work of fluctuating teams of designers, suppliers and customers. This is why companies are engaged in the creation of a distributed design and manufacturing environment which can provide an effective way to communicate and share information throughout the entire enterprise and the supply chain. At present, the technologies that support such a strategy are based on World Wide Web platforms and follow two different paths. The first focuses on improving 2D documentation and introduces 3D interactive information in order to add knowledge to drawings. The second works directly on 3D models and tries to extend the life of 3D data by moving this design information downstream through the entire product lifecycle. Unfortunately, the current lack of a single 3D Web-based standard has fostered the proliferation of many different proprietary and open-source formats and, as a consequence, incompatible information exchange over the Web. This paper proposes a structured analysis of Web-based solutions, trying to identify the most critical aspects in promoting a unique 3D digital standard model capable of sharing product and manufacturing data more effectively, regardless of geographic boundaries, data structures, processes or computing environments. © 2008 Springer-Verlag London Limited.
Keywords: Collaborative design | Product lifecycle management | Virtual enterprise | Web3D
Abstract: Reverse engineering is a technology that generates a virtual representation of an existing part based on point data acquired with measuring techniques. Different technologies can be employed to obtain a virtual representation of a physical model, but the use of one solution (3D scanner) rather than another can provide significantly different results, because the available 3D scanners are characterised by different performances (resolution, accuracy, ...). The selection of the right 3D scanner can improve the precision of point acquisition, but it cannot assure an efficient point distribution in terms of number and location. These two parameters are part of the acquisition strategy, which should provide a consistent sampling plan. For this reason, this paper proposes a methodology to define a selective sampling plan, with grid dimensions related to the complexity of the local surface region analysed. © 2008 IEEE.
Keywords: 3D Scanner | Free-Form | Reverse Engineering
Abstract: From a theoretical point of view, it is commonly assumed that the adoption of innovative business solutions for optimizing the product lifecycle should always guarantee a sufficient return on investment (ROI), even if the concrete benefits that the investment brings to the company are sometimes difficult to evaluate. In the technical literature it is possible to find many ROI estimation metrics for the evaluation of business strategies, although, beyond the theoretical concepts, few works deal with a real implementation of the proposed methodology. This paper proposes a solution, based on the key performance indicator (KPI) method, for evaluating the benefits introduced by the adoption of a Product Lifecycle Management (PLM) tool in a one-of-a-kind company. The method has been validated through its implementation in an Aerospace and Defence company (Alcatel Alenia Space), highlighting the improvements created by the PLM solution in everyday activities and showing the system's contribution to some key processes such as configuration, change and documentation management. © 2008 Elsevier B.V. All rights reserved.
Keywords: Collaborative management | Extended enterprise | One-of-a-kind company | Product development management | Product lifecycle management
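A minimal, purely illustrative KPI comparison in the spirit of the proposed method; indicator names and all figures are hypothetical, not data from the Alcatel Alenia Space case study:

# Hypothetical KPI values measured before and after PLM adoption.
baseline = {"change cycle time (days)": 20.0,
            "document retrieval (min)": 30.0,
            "configuration errors / month": 12.0}
with_plm = {"change cycle time (days)": 12.0,
            "document retrieval (min)": 8.0,
            "configuration errors / month": 5.0}

for kpi, before in baseline.items():
    after = with_plm[kpi]
    print(f"{kpi}: {before} -> {after} ({(before - after) / before:.0%} improvement)")

# A crude ROI figure: yearly savings attributed to the KPIs vs. the PLM cost
# (both amounts are assumed, in EUR).
yearly_savings, plm_cost = 180_000.0, 120_000.0
print(f"ROI = {(yearly_savings - plm_cost) / plm_cost:.0%}")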
Abstract: The present attention to production cost and time reduction has encouraged many factories to employ non-conventional technologies instead of conventional ones for mould production. These non-conventional manufacturing technologies, aimed at achieving a significant time reduction in tool manufacturing, are commonly called rapid tooling techniques. Among these rapid tooling methodologies it is possible to find one of the most direct and flexible techniques, which enables, by using uncured silicone, the manual shaping of a physical object, directly obtaining the mould after some additional handmade refinement. In order to understand how the process parameters influence the final shape of the cast object, this work develops an experimental analysis for the estimation of a preliminary model for the process characterization. © 2007 Elsevier B.V. All rights reserved.
Keywords: Rapid prototyping | Rapid tooling | Spin casting
Abstract: The "segmentation", i.e., the three-dimensional point cloud partition in different morphological zones, is a necessary operation while approaching the reverse engineering cycle, because it helps the operator in generating the surface model. This operation is usually developed after the acquisition and the pre-processing phases, and it tries to define a boundary grid which the following surface fitting operation will employ for the surface model definition. Many approaches apply the segmentation methods far from the 3D scanner device. On the contrary, this research proposes an iterative strategy which starts from a first raw point acquisition and then partitions the object surface and identifies the boundary of those zones showing significant morphological features (shape-changes). As a consequence, they will be re-digitised with deeper scansions, in order to reach more precise morphological information. This partitioning operation is driven by a morphology descriptor, the gaussian curvature, giving an estimation of the local surface morphological complexity. Moreover the proposed algorithm employs the 3D scanner measuring uncertainty to define a "curvature variation threshold", in order to identify those zones showing significant morphological shape-changes. © 2006 Springer-Verlag London Limited.
Keywords: Geometric morphology | Reverse engineering | Scanning strategy
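A small sketch of the partitioning step under simplifying assumptions (synthetic curvature field, scipy's connected-component labelling standing in for the boundary identification, and an arbitrary threshold value):

import numpy as np
from scipy import ndimage

# Synthetic |curvature| map from a raw scan, with one patch of genuine shape change.
curvature = np.abs(np.random.default_rng(2).normal(size=(40, 40)))
curvature[10:18, 10:18] += 3.0
threshold = 2.5                          # e.g. derived from the scanner uncertainty

# Cells above the threshold are grouped into connected "shape-change" zones,
# whose extents then drive the denser re-scan.
zones, n_zones = ndimage.label(curvature > threshold)
print(f"{n_zones} morphological zone(s) flagged for re-digitisation")
for zone_id in range(1, n_zones + 1):
    rows, cols = np.where(zones == zone_id)
    print(f"zone {zone_id}: rows {rows.min()}-{rows.max()}, cols {cols.min()}-{cols.max()}")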
Abstract: The creation of a CAD model from a physical model is necessary for design modification, part replication, rapid prototyping and surface inspection; this is reverse engineering. The scientific literature presents many different approaches; however, virtually all systems, both mechanical (contact devices) and optical (non-contact devices), work with constant acquisition pitches. This becomes a serious issue when the same surface combines regions of different morphological complexity. Working with a constant pitch on a free-form surface, the point cloud may in fact be too sparse, when the acquisition pitch is a compromise between the complex and the elementary features describing the object, or too dense, when the acquisition pitch corresponds to the highest scanner resolution. In this situation, this paper proposes, starting from a first raw acquisition, an automatic methodology, implemented directly on the acquisition device, for the selective identification of surface zones showing significant curvature. In this approach the curvature of the measured surfaces is analysed by defining a threshold over which it is necessary to perform a denser scan of the surface. A methodology for the definition of the threshold value, based on the measurement system uncertainty, is described. Here the method is applied to an algorithm for curvature analysis, but it could be extended to other approaches. Furthermore, it is shown that this new methodology is simple to apply and can be easily automated directly in the scanner control software. At the end of the paper, a practical example is described in order to give an experimental validation of the method. © Springer-Verlag London Limited 2007.
Keywords: Design | Geometric morphology | Reverse engineering | Scanner 3D | Scanning strategy
Abstract: Reverse engineering is a technique used during the design phase of a new product, which makes it possible to recover, in terms of mathematical expressions, the geometrical features of a given physical model. The scientific literature presents many different approaches to reverse engineering. Most of them are based on the analysis of point clouds acquired through coordinate measuring devices such as Coordinate Measuring Machines (CMMs), optical scanners or interferometric systems. With this kind of approach, a common problem is to identify the surface zones that present significant variations of curvature. Many algorithms, commonly implemented in commercial software through semi-automatic procedures, are already based on this method: the local curvature variation of the measured surfaces is analysed by defining a threshold over which it is necessary to perform a denser scan of the corresponding surface zones. However, most of the problems are related to the definition of an appropriate value for this threshold. In the present paper, a methodology for the definition of the threshold value based on the measurement system uncertainty is described. The method is here applied to an elementary algorithm for curvature estimation, but it could be extended to any other, more complex approach. Furthermore, it is shown that this new methodology is simple to apply and can be easily automated in commercial software for selective point sampling in industrial reverse engineering applications. Finally, a practical example is described in order to give an experimental validation of the method. © 2005 Elsevier Ltd. All rights reserved.
Keywords: Geometric morphology | Reverse engineering | Scanning strategy
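One plausible way, not necessarily the paper's, to tie the threshold to the declared measuring uncertainty is to propagate the point uncertainty through a central-difference second-derivative estimate and keep only variations beyond a chosen coverage factor:

from math import sqrt

def curvature_threshold(sigma_z, h, coverage=2.0):
    """Smallest curvature variation distinguishable from scanner noise.
    For f'' ~ (z[i-1] - 2*z[i] + z[i+1]) / h**2 with independent point noise
    sigma_z, the standard uncertainty is sigma_z * sqrt(1 + 4 + 1) / h**2."""
    return coverage * sqrt(6.0) * sigma_z / h**2

# Example figures (assumed): 0.02 mm point uncertainty, 0.5 mm acquisition pitch.
print(curvature_threshold(sigma_z=0.02, h=0.5))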
Abstract: Objectives: The aim of this study was to evaluate the effects of scanning parameters on the precision of the data acquired using a facial laser scanner and to assess the reliability of automatic model recording in humans. Methods: Data were acquired using a laser scanner (Cyberware 3030RGB); analysis and measurements were performed with Rapid Form 2004 software. A mannequin and six volunteers were scanned to investigate the effects of environmental conditions, positioning, head orientation, and software procedures. Precision and accuracy of the data were evaluated by comparing six linear measures calculated on the scanned data with those obtained directly. Two sessions with different head inclination were performed. The reliability of repeated scans was also assessed by measuring the distance between the surfaces reconstructed from two separate scans of the same subject, at 12 anatomical points, in 5 subjects, during two sessions with different head inclination. Differences were analysed using paired t-tests or analysis of variance (ANOVA). Results: The accuracy of scanning was ±0.65 mm. The development of a specific protocol resulted in a mean scanning error of 1-1.2 mm and a recording error of 0.3-0.4 mm on repeated scans of human subjects. Conclusions: This study indicates that scanning of the human face may be hampered by errors and artefacts, mainly due to movements. While the effect of trembling and involuntary movements during the exam may be minimized using faster scanning devices, comparative observation over time may be affected by spurious differences due to the variability of facial expression. The overall error is, however, in the range useful for most clinical studies. © 2006 The British Institute of Radiology.
Keywords: Anthropometry | Face | Three-dimensional imaging | Validation studies
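An illustrative reconstruction of the statistical comparison only (the distances below are made up; the study's real measurements are not reproduced): surface-to-surface distances at twelve landmarks in two sessions, compared with a paired t-test:

import numpy as np
from scipy import stats

# Hypothetical surface-to-surface distances (mm) at 12 anatomical landmarks,
# measured in two sessions with different head inclination.
session_a = np.array([0.31, 0.42, 0.28, 0.35, 0.40, 0.33, 0.29, 0.38, 0.36, 0.30, 0.41, 0.34])
session_b = np.array([0.36, 0.44, 0.30, 0.39, 0.37, 0.35, 0.33, 0.40, 0.34, 0.32, 0.45, 0.38])

t_stat, p_value = stats.ttest_rel(session_a, session_b)
print(f"mean recording error: {session_a.mean():.2f} vs {session_b.mean():.2f} mm")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")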
Abstract: Three-dimensional printing (3DP™) is a rapid prototyping process in which powdered material is deposited in layers and selectively joined with binder. The most common deposition method used in this rapid prototyping approach is drop-on-demand. However, continuous jet deposition results in an order-of-magnitude increase in printing speed and is gaining popularity. A key component of a continuous deposition printhead is the catcher, which collects the droplets that are not meant to hit the powder bed. Current catching systems face problems, such as trapped air and crystallisation, that result in unwanted droplets hitting the powder bed. This work redesigns the catcher and addresses the new control algorithms required for proper binder deposition. A mathematical model for the binder flight trajectory is developed and validated experimentally. © 2004 Elsevier B.V. All rights reserved.
Keywords: Process simulation | Rapid prototyping | Three-dimensional printing
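A deliberately simplified flight model, for illustration only (spherical droplet, constant drag coefficient, still air, explicit Euler integration); the validated model in the paper may differ:

import numpy as np

def droplet_trajectory(v0, angle_deg, diameter, rho_drop=1000.0, rho_air=1.2,
                       cd=0.4, dt=1e-5, t_max=0.02):
    """Integrate position/velocity of a spherical droplet under gravity and drag."""
    g = np.array([0.0, -9.81])
    m = rho_drop * np.pi * diameter**3 / 6.0            # droplet mass
    area = np.pi * diameter**2 / 4.0                    # frontal area
    pos = np.array([0.0, 0.0])
    vel = v0 * np.array([np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))])
    traj = [pos.copy()]
    for _ in range(int(t_max / dt)):
        drag_acc = -0.5 * rho_air * cd * area * np.linalg.norm(vel) * vel / m
        vel = vel + (g + drag_acc) * dt                 # explicit Euler step
        pos = pos + vel * dt
        traj.append(pos.copy())
    return np.array(traj)

# Example (assumed figures): a 50 um droplet ejected at 10 m/s, 10 degrees below horizontal.
path = droplet_trajectory(v0=10.0, angle_deg=-10.0, diameter=50e-6)
print("horizontal travel after 20 ms:", path[-1, 0], "m")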
Abstract: Recent innovations in laser scanner technology provide a potentially useful technique for accurate three-dimensional (3D) documentation of the face. The aim of this study was to evaluate and optimise facial surface acquisition parameters and measurements obtained by a Cyberware colour laser scanner and to compare those measures with the common anthropometric ones made by individual examiners. This preliminary study indicates that 3D scanning is a sufficiently precise and repeatable 3D surface acquisition method.
Keywords: Anthropometry | Laser scanning | Three-dimensional face
Abstract: Focusing on the first step of the reverse engineering cycle, the digitisation, it can be noted that most systems operate on point-wise information measured with different technical approaches: light, laser beam or mechanical probe. Even if the measuring technology differs, these points, sampled over a working plane, always come from a discretisation process. Working in a Cartesian space and considering the plane (XY) orthogonal to the measuring direction (Z), the procedure divides the entire object surface into a grid by choosing two step sizes and, consequently, the number of points reachable during the acquisition. Since most systems employ a single step size for the entire acquisition phase, its choice becomes critical. On the one hand, if the value is too small, much redundant information is produced, unjustifiably increasing the computational cost of the entire reverse engineering cycle. On the other hand, if the value is too large, too little information about the real shape may be captured, leading to an inefficient reconstruction of the final mathematical model of the object. To resolve this trade-off, the idea of the paper, working with a piezoelectric reverse engineering system, is to develop an operative methodology able to selectively subdivide the entire working plane into different zones according to the step size that the specific object regions require. The procedure has been tested on an industrial application in order to obtain an operative validation.
Keywords: Geometric morphology | Reverse engineering | Scanning strategy