Bianconi Francesco
Associate Professor
Università degli Studi di Perugia
francesco.bianconi@unipg.it
Institutional website
SCOPUS ID: 6505937024
ORCID: 0000-0003-3371-1928
Scientific publications
Abstract: Objectives: A wide variety of software packages for 3D reconstruction from CT scans is now available. They differ in cost, capabilities and required a priori knowledge, and it is not trivial to identify the most suitable one for a specific purpose. This article aims to inform that choice by defining a set of metrics for evaluating the performance of different packages. Methods: The metrics cover software usability, segmentation quality, geometric accuracy, mesh properties and the Dice Similarity Coefficient (DSC). Five packages (Mimics, D2P, Blue Sky Plan, Relu, and 3D Slicer) were considered and tested on four cases; the mandibular bone was used as the benchmark. Results: Relu, being based on AI, was able to resolve some very intricate geometry and proved to have very good usability. On the other hand, the time it required for segmentation was significantly higher than for the other packages (more than twice the time required by Mimics). Geometric distances between node positions calculated by different packages generally remained below 2.5 mm, reaching 3.1 mm in some very critical areas; the 75th percentile (q75) was generally less than 0.5 mm, with a maximum of 1.11 mm. As to consistency among packages, the maximum DSC value (0.96) was observed between Mimics and Slicer, D2P and Mimics, and D2P and Slicer. Significance: This work demonstrated that mandible segmentation performance was generally very good across all the software considered. Nonetheless, differences in geometric accuracy, usability, cost and time required can be significant, so the information provided here can support an informed choice.
Keywords: 3D reconstruction | CT scans | DSC | Geometric accuracy | Mandible | Segmentation | Usability
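The Dice Similarity Coefficient used above as a consistency metric has a simple closed form, DSC = 2|A ∩ B| / (|A| + |B|). The following is a minimal numpy sketch of how it can be computed between two binary segmentation masks; the function name and the toy masks are illustrative, not taken from the study's pipeline.

```python
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary segmentation masks:
    DSC = 2|A ∩ B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping 3D masks
a = np.zeros((10, 10, 10), dtype=bool); a[2:7, 2:7, 2:7] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:8, 3:8, 3:8] = True
print(f"DSC = {dice_similarity(a, b):.3f}")
```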
Abstract: The extraction of quantitative parameters (features) from medical images to build prediction models for computer-assisted clinical decision making (radiomics) has attracted a lot of research attention in recent years. Radiomics features may be influenced by the size of the region of interest (ROI) and it is recommended to employ volume-independent features to reduce the risk of biased prediction models. Despite being a crucial factor, the influence of ROI volume on radiomics features has been largely overlooked in the literature, with only a few studies addressing this aspect. In this work we investigated the sensitivity to ROI volume of 103 conventional radiomics features from seven classes under absolute and relative signal resampling. The analysis was carried out on a set of 90 lesion-agnostic ROIs from a CT phantom. We identified a total of 45 and 54 features significantly correlated with volume under absolute and relative signal resampling, respectively. We also determined that signal resampling had an influence on the direction of the relationship, with relative resampling leading to a significant rise in the number of features negatively correlated with volume.
Keywords: Computed Tomography | Precision medicine | Radiomics | Region of interest | Shape | Texture
Abstract: Computational image aesthetics aims at determining what makes an image look pleasing. Assessing image aesthetics usually relies on the extraction of suitable image features related, for instance, to image composition (e.g. rule of thirds, depth of field), texture, shape and colour. It is widely accepted that colour, in particular, plays a major role in this context. The objective of this study was to investigate potential relationships between the most significant colours in an image (the colour theme, or palette) and the aesthetic rating. To this end we defined a procedure for colour palette extraction and its characterisation by a set of 21 hand-crafted features. Rank-based correlations between the features and manually-assigned aesthetic ratings were assessed by Spearman's correlation coefficient. Experimenting on a total of 4,647 images from the public dataset EVA we found that 12 features were significantly associated with image aesthetics, although the overall correlation strength was at best weak. In particular, perceived aesthetic rating correlated positively with saturation (indicating a slight preference for colourfulness) and negatively with colour temperature (suggesting a slight preference for warm colours). A significant positive correlation (but again weak) also emerged between perceived aesthetics and harmonic colour schemes.
Keywords: Aesthetic rating | Colour | Image aesthetics | Visual perception
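For readers curious about what such a pipeline might look like, below is a minimal sketch of palette extraction via k-means and a rank-based correlation test. The palette size, the single mean-saturation feature, and the synthetic images and ratings are all illustrative assumptions, not the 21 features or the EVA data used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import spearmanr
from colorsys import rgb_to_hsv

def palette_feature(image_rgb: np.ndarray, n_colours: int = 5) -> float:
    """Extract an n-colour palette by k-means in RGB space and return one
    illustrative hand-crafted feature: the mean saturation of the palette."""
    pixels = image_rgb.reshape(-1, 3).astype(float) / 255.0
    centres = KMeans(n_clusters=n_colours, n_init=10,
                     random_state=0).fit(pixels).cluster_centers_
    return float(np.mean([rgb_to_hsv(*c)[1] for c in centres]))

# Hypothetical experiment: correlate the feature with aesthetic ratings
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8) for _ in range(20)]
ratings = rng.uniform(1, 10, size=20)          # stand-in aesthetic scores
feats = [palette_feature(im) for im in images]
rho, p = spearmanr(feats, ratings)
print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")
```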
Abstract: This work aims to set up a methodology for foot shape prediction at different flexion angles, overcoming limitations encountered when different poses are required but a limited set of acquisitions can be performed. The basic idea was to identify a fitting law able to interpolate positions of foot anatomical landmarks, and then use this information to guide the deformation of an average foot shape. First of all, mesh correspondence between foot geometries was accomplished by an established procedure based on mesh morphing. Then Procrustes analysis was applied to the dataset to remove rigid motions and estimate the average shape. Two interpolation laws (linear and quadratic) were investigated and the best one in terms of prediction of 3D landmarks' coordinates was identified. Finally, shape geometries at any flexion angle were predicted by performing a second mesh morphing guided by interpolated landmarks' displacements from the average shape. These analyses proved that a limited number of interpolation angles provides a prediction accuracy comparable to that obtained using all the angles available in the dataset. Moreover, predicted shapes have been compared to the actual scans in terms of root mean square error between corresponding nodes, obtaining a mean value of 4.03 ± 1.39 mm, in accordance with data reported in the literature.
Keywords: Accurate geometric reconstruction | Foot model | Real-time acquisition | Statistical deformation model | Statistical shape model
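The interpolation sub-step can be illustrated on its own: polynomial laws of degree 1 (linear) or 2 (quadratic) are fitted to each landmark coordinate as a function of flexion angle and then evaluated at an unseen angle. The array shapes and synthetic trajectories below are assumptions for illustration; the mesh-morphing and Procrustes stages of the actual method are not reproduced here.

```python
import numpy as np

# Hypothetical landmark trajectories: L anatomical landmarks acquired at a
# few flexion angles (degrees); array shape (n_angles, L, 3).
angles = np.array([0.0, 15.0, 30.0, 45.0])
rng = np.random.default_rng(1)
landmarks = rng.normal(size=(1, 5, 3)) + 0.02 * angles[:, None, None]

def fit_and_predict(angles, landmarks, query_angle, degree):
    """Fit a polynomial law of the given degree (1 = linear, 2 = quadratic)
    to every landmark coordinate, then predict positions at query_angle."""
    n_angles, n_lm, _ = landmarks.shape
    flat = landmarks.reshape(n_angles, -1)           # one column per coordinate
    coeffs = np.polyfit(angles, flat, deg=degree)    # (degree + 1, 3 * n_lm)
    powers = query_angle ** np.arange(degree, -1, -1)  # [x^deg, ..., x, 1]
    return (powers @ coeffs).reshape(n_lm, 3)

pred_linear = fit_and_predict(angles, landmarks, 22.5, degree=1)
pred_quadratic = fit_and_predict(angles, landmarks, 22.5, degree=2)
```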
Abstract: Anomaly detection is the identification of any event that falls outside what is considered ‘acceptable behaviour’. This work investigates anomaly detection for automated visual inspection in the context of industry automation (‘Industry 4.0’). For this task we propose a machine vision procedure based on visual feature extraction and one-class k nearest neighbours classification. The method requires only samples of normal (non-defective) instances for the training step. We benchmarked our approach using seven traditional (‘hand-designed’) colour texture descriptors and five pre-trained convolutional neural networks (CNN) ‘off-the-shelf’. Experimenting on nine image datasets from seven classes of materials (carpet, concrete, fabric, layered fused filament, leather, paper and wood), each containing normal and abnormal samples, we found overall accuracy in the range 82.0%–90.2%. Convolutional networks off-the-shelf performed generally better than the traditional methods, although – interestingly – this was not true for all the datasets considered. No visual descriptor clearly emerged as the all-purpose best option.
Keywords: Anomaly detection | Colour | Convolutional neural networks | Texture | Visual descriptors
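A minimal sketch of the one-class k-nearest-neighbours idea follows: the model is fitted on normal samples only, and a test sample is flagged as anomalous when the distance to its k-th nearest normal neighbour exceeds a threshold. The feature vectors, the value of k and the percentile-based threshold are illustrative assumptions, not the settings used in the benchmark.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_one_class_knn(train_feats: np.ndarray, k: int = 5):
    """Fit on normal (non-defective) samples only."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    # Threshold from the training data itself: distance of each normal sample
    # to its k-th neighbour (excluding itself), taken at the 95th percentile.
    d, _ = nn.kneighbors(train_feats, n_neighbors=k + 1)
    return nn, np.percentile(d[:, -1], 95)

def is_anomalous(nn, threshold, feats: np.ndarray, k: int = 5):
    d, _ = nn.kneighbors(feats, n_neighbors=k)
    return d[:, -1] > threshold    # anomaly if the k-th neighbour is too far

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 16))   # stand-in visual features
test = np.vstack([rng.normal(0, 1, (5, 16)), rng.normal(6, 1, (5, 16))])
nn, thr = fit_one_class_knn(normal)
print(is_anomalous(nn, thr, test))              # expect ~[False]*5 + [True]*5
```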
Abstract: Snap-fit joints represent a simple, economical and straightforward way of joining two components. The design of a snap-fit joint is usually performed by evaluating the peak stresses that must be tolerated by the material without incurring failure or plastic deformation; in addition, the force needed to join and disassemble the parts is estimated in relation to ergonomic issues. Finally, the retention force, that is, the force required to start disjoining the parts, needs to be estimated. The evaluation of peak stresses or insertion/retention/removal forces is commonly performed through the finite element method, once the respective deformed configuration has been identified. A different approach was followed here, considering that it is not trivial to identify the most critical condition in a full joining/disjoining cycle when complex geometries are involved. In detail, the snap joint was modelled as a multibody model including a flexible body, which replicates the part that undergoes the major deflections during the process. The model was validated against experimental force-time curves recorded for an existing joint, and it was used to optimize a parametrised snap-fit design. As a result, the joining force was reduced by up to 84%, the disassembly force was reduced by up to 86%, and the retention force was increased by up to 7%. On the whole, a numerical framework to study these joints has been established, keeping the computational time reasonably low (about 40 min for the entire insertion and removal simulation).
Keywords: Geometrical modelling | Multibody model | Plastic components design | Snap-fit joint | Tolerance analysis
Abstract: Introduction: In the last decade, two new radionuclide-based therapies, 223Ra-dichloride and radioligand therapy (RLT) targeting prostate-specific membrane antigen (PSMA), have been approved by the regulatory authorities for the management of castrate-resistant prostate cancer (PCa). Areas covered: The basic principles of PCa molecular imaging are illustrated, with particular attention to the combined use of diagnosis and therapy in a single approach (so-called theranostics) for response prediction and assessment in patients submitted to 223Ra-dichloride or PSMA-targeted therapies. In this perspective, the potential of radiomics, an emerging discipline based on the extraction of quantitative features from medical images, is covered. Expert opinion: Theranostic metabolic and molecular probes have been successfully applied to predict and monitor response to radionuclide-based therapies. In particular, both 99mTc-MDP and 18F-NaF proved useful tools for personalized dosimetry and prognostic stratification before 223Ra-therapy, while PSMA-ligands, alone or in combination with 18F-FDG, provided valuable information to select patients who are more likely to benefit from RLT and to obtain information on the grade of differentiation and aggressiveness of PCa. In spite of its high potential, PET-radiomics for PCa is still at an embryonic phase and needs further validation.
Keywords: artificial intelligence | positron emission computed tomography | precision medicine | Prostate cancer | radiomics | single photon emission tomography | targeted therapy | theranostics
Abstract: Indeterminate lung nodules detected on CT are common findings in the clinical practice, and the correct assessment of their size is critical for patient evaluation and management. We compared the stability of three definitions of nodule diameter (Feret's mean diameter, Martin's mean diameter and area-equivalent diameter) to inter-observer variability on a population of 336 solid nodules from 207 subjects. We found that inter-observer agreement was highest with Martin's mean diameter (intra-class correlation coefficient = 0.977, 95% Confidence interval = 0.977–0.978), followed by area-equivalent diameter (0.972, 0.971–0.973) and Feret's mean diameter (0.965, 0.964–0.966). The differences were statistically significant. In conclusion, although all the three diameter definitions achieved very good inter-observer agreement (ICC > 0.96), Martin's mean diameter was significantly better than the others. Future guidelines may consider adopting Martin's mean diameter as an alternative to the currently used Feret's (caliper) diameter for assessing the size of lung nodules on CT.
Keywords: Computed tomography | Inter-observer variability | Lung nodules | Size assessment
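Two of the three diameter definitions compared above are easy to sketch on a 2-D binary mask: the area-equivalent diameter (the diameter of the circle with the same area) and an approximate mean Feret (caliper) diameter, obtained by averaging the extent of the region over a set of directions. Martin's diameter (the chord, in a given direction, that bisects the region's area) is omitted for brevity, and pixel-level boundary effects are ignored, so this is a rough illustration only, not the measurement code used in the study.

```python
import numpy as np

def area_equivalent_diameter(mask: np.ndarray, pixel_size: float = 1.0) -> float:
    """Diameter of the circle with the same area as the 2-D nodule mask."""
    area = mask.sum() * pixel_size ** 2
    return 2.0 * np.sqrt(area / np.pi)

def feret_mean_diameter(mask: np.ndarray, pixel_size: float = 1.0,
                        n_angles: int = 180) -> float:
    """Approximate mean Feret diameter: extent of the mask's pixel centres
    projected onto n_angles directions, averaged over direction."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float) * pixel_size
    diameters = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        d = np.array([np.cos(theta), np.sin(theta)])
        proj = pts @ d
        diameters.append(proj.max() - proj.min())
    return float(np.mean(diameters))

# Toy example: a disc of radius 8 pixels, so both diameters should be near 16
disc = np.add.outer((np.arange(21) - 10) ** 2, (np.arange(21) - 10) ** 2) <= 64
print(area_equivalent_diameter(disc), feret_mean_diameter(disc))
```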
Abstract: Estimating the state of charge (SOC) of batteries is fundamental for the proper management and safe operation of numerous systems, including electric vehicles, smart energy grids, and portable electronics. While there is no practical method for direct measurement of SOC, several estimation approaches have been developed, including a growing number of machine-learning-based techniques. Machine learning methods are intrinsically data-driven but can also benefit from a priori knowledge embedded in a model. In this work, we first demonstrate, through exploratory data analysis, that it is possible to discriminate between different SOCs from electrochemical impedance spectroscopy (EIS) measurements. Then we propose a SOC estimation approach based on EIS and an equivalent circuit model to provide a compact way to describe the frequency-domain and time-domain behavior of the impedance of a battery. We experimentally validated this approach by applying it to a dataset consisting of EIS measurements performed on four lithium-ion cylindrical cells at different SOC values. The proposed approach allows for very efficient model training and produces a low-dimensional SOC classification model that achieves above 93% accuracy. The resulting low-dimensional classification model is suitable for embedding into battery-powered systems and for online SOC estimation.
Keywords: Battery | EIS | Electrochemical impedance spectroscopy | SOC | State of charge
Abstract: The aim of this review is to provide a comprehensive overview of the existing literature on the applications of positron emission tomography (PET) radiomics in patients with lung cancer who are candidates for, or undergoing, immunotherapy. Materials and Methods: A systematic review was conducted on databases and web sources. English-language original articles were considered. Titles and abstracts were independently reviewed to evaluate study inclusion. Duplicates, off-topic papers, reviews, editorials and letters to the editor were excluded. For each study, the radiomics analysis was assessed based on the radiomics quality score (RQS 2.0). The review was registered on the PROSPERO database with the number CRD42023402302. Results: Fifteen papers were included: thirteen used conventional radiomics approaches and two used deep learning radiomics. The content of each study was different; seven papers investigated the potential ability of radiomics to predict PD-L1 expression and the tumor microenvironment (TME) before starting immunotherapy, two evaluated the prediction of response, four investigated the utility of radiomics to predict the response to immunotherapy, and two investigated the prediction of adverse events due to immunotherapy. Conclusions: Radiomics is promising for the evaluation of the TME and for the prediction of response to immunotherapy, but some limitations still need to be overcome.
Keywords: immunotherapy | lung cancer | PD-L1 | PET | response to therapy
Abstract: Background. Head and neck cancer (HNC) is the seventh most common neoplastic disorder at the global level. Contouring HNC lesions on [18F]Fluorodeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) scans plays a fundamental role in diagnosis, risk assessment, radiotherapy planning and post-treatment evaluation. However, manual contouring is a lengthy and tedious procedure which requires significant effort from the clinician. Methods. We evaluated the performance of six hand-crafted, training-free methods (four threshold-based, two algorithm-based) for the semi-automated delineation of HNC lesions on FDG PET/CT. This study was carried out on a single-centre patient population, and the standard of reference was manual segmentation generated by nuclear medicine specialists. Figures of merit were the Sørensen–Dice coefficient (DSC) and the relative volume difference (RVD). Results. Median DSC ranged between 0.595 and 0.792, median RVD between −22.0% and 87.4%. Click-and-draw and Nestle's methods achieved the best segmentation accuracy (median DSC, respectively, 0.792 ± 0.178 and 0.762 ± 0.107; median RVD, respectively, −21.6% ± 1270.8% and −32.7% ± 40.0%) and outperformed the other methods by a significant margin. Nestle's method also resulted in a lower dispersion of the data, hence showing stronger inter-patient stability. The accuracy of the two best methods was in agreement with the most recent state-of-the-art results. Conclusions. Semi-automated PET delineation methods show potential to assist clinicians in the segmentation of HNC lesions on FDG PET/CT images, although manual refinement may sometimes be needed to obtain clinically acceptable ROIs.
Keywords: head and neck cancer | positron emission tomography | radiomics | region of interest | segmentation
Abstract: Purpose: To provide a comprehensive overview of the current literature on the applications of PET-based radiomics in patients affected by multiple myeloma (MM) and FDG-avid lymphomas. Methods: Relevant studies on the topic were selected by searching Pubmed/Medline. Retrospective or prospective cohort studies focusing on the clinical applications of PET-radiomics in lymphomas and MM were retrieved, analyzed, and discussed. Results: A total of 17 papers were ultimately selected, with 9 focusing on non-Hodgkin lymphomas, 6 on Hodgkin lymphomas, and 5 dealing with MM. Machine learning-derived models incorporating first-, second-, and third-order radiomic features extracted from baseline PET/CT scans demonstrated promising results in predicting outcomes, specifically the 2-year event-free survival (EFS) in lymphomas. Furthermore, models based on PET-radiomic features were effective in distinguishing between MM and bone metastases, as well as in assessing minimal residual disease, outperforming visual analysis. Conclusion: Preliminary results suggest that PET-radiomic features, which reflect the biological heterogeneity and spatial distribution of lesions, may play a prognostic role in both lymphomas and MM. Nevertheless, before implementing these findings in clinical practice, it is imperative to standardize the methodological approaches and validate them in large prospective trials.
Keywords: Artificial intelligence | FDG | Lymphomas | Machine learning | Multiple myeloma | PET/CT | Radiomics
Abstract: Indeterminate lung nodules detected on CT scans are common findings in clinical practice. Their correct assessment is critical, as early diagnosis of malignancy is crucial to maximise the treatment outcome. In this work, we evaluated the role of form factors as imaging biomarkers to differentiate benign vs. malignant lung lesions on CT scans. We tested a total of three conventional imaging features, six form factors, and two shape features for significant differences between benign and malignant lung lesions on CT scans. The study population consisted of 192 lung nodules from two independent datasets, containing 109 (38 benign, 71 malignant) and 83 (42 benign, 41 malignant) lung lesions, respectively. The standard of reference was either histological evaluation or stability on radiological follow-up. The statistical significance was determined via the Mann–Whitney U nonparametric test, and the ability of the form factors to discriminate a benign vs. a malignant lesion was assessed through multivariate prediction models based on Support Vector Machines. The univariate analysis returned four form factors (Angelidakis compactness and flatness, Kong flatness, and maximum projection sphericity) that were significantly different between the benign and malignant group in both datasets. In particular, we found that the benign lesions were on average flatter than the malignant ones; conversely, the malignant ones were on average more compact (isotropic) than the benign ones. The multivariate prediction models showed that adding form factors to conventional imaging features improved the prediction accuracy by up to 14.5 percentage points. We conclude that form factors evaluated on lung nodules on CT scans can improve the differential diagnosis between benign and malignant lesions.
Keywords: computed tomography | form factors | lung cancer | radiomics
Abstract: Novel therapeutic approaches, consisting of immune checkpoint inhibitors (ICIs) and molecularly targeted therapy, have thoroughly changed the clinical management of malignant melanoma (MM), the most frequent and deadly skin cancer. Since only 30–40% of MM patients respond to ICIs, imaging biomarkers suitable for pre-therapeutic stratification and response assessment are warmly welcome. In this scenario, positron emission tomography/computed tomography (PET/CT) with 18F-fluorodeoxyglucose (18F-FDG) has been successfully utilized for advanced MM staging and therapy response evaluation. Furthermore, several PET-derived parameters (SUVmax, MTV, TLG) proved particularly impactful for the prognostic evaluation of patients submitted to targeted therapy and immunotherapy. In this review, we performed a web-based and desktop search on the clinical applications of 18F-FDG PET/CT in MM, with a particular emphasis on the various metabolic criteria developed for interpreting PET/CT scans in patients undergoing immunotherapy, targeted therapy, or a combination of both. Furthermore, the emerging role of radiomics, a quantitative approach to medical imaging applying analysis methodologies derived from the field of artificial intelligence, was examined in this specific context, with particular emphasis on the potential of this discipline to support clinicians in the delicate process of building patient-tailored pathways of care.
Keywords: 18F-FDG | artificial intelligence | BRAF mutation | malignant melanoma | PET/CT | precision medicine | radiomics
Abstract: Necklaces are the equivalence classes of words under the action of the cyclic group. Let a transition in a word be any change between two cyclically adjacent letters (adjacency taken modulo the word's length). We present a closed-form solution for the enumeration of necklaces with n beads, k colours and t transitions. We show that our result provides a more general solution to the problem of counting alternating (proper) colourings of the vertices of a regular n-gon.
Keywords: Colouring | Composition | Necklace | Transition
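Since the closed form itself is the subject of the paper and is not reproduced in the abstract, the brute-force counter below can serve as an illustrative cross-check of the quantity being enumerated: necklaces with n beads, k colours and exactly t transitions. It is exponential in n, so it is only practical for tiny cases.

```python
from itertools import product

def count_necklaces_with_transitions(n: int, k: int, t: int) -> int:
    """Brute-force count of necklaces (words up to rotation) with n beads,
    k colours and exactly t transitions, a transition being a change between
    cyclically adjacent letters. For sanity checks only."""
    seen = set()
    count = 0
    for word in product(range(k), repeat=n):
        canon = min(word[i:] + word[:i] for i in range(n))  # rotation-class rep
        if canon in seen:
            continue
        seen.add(canon)
        transitions = sum(word[i] != word[(i + 1) % n] for i in range(n))
        if transitions == t:
            count += 1
    return count

# n = 3 beads, k = 2 colours: t = 0 gives 2 (aaa, bbb); t = 2 gives 2 (aab, abb)
print(count_necklaces_with_transitions(3, 2, 0),
      count_necklaces_with_transitions(3, 2, 2))
```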
Abstract: Radiological examination of pulmonary nodules on CT involves the assessment of the nodules’ size and morphology, a procedure usually performed manually. In recent years computer-assisted analysis of indeterminate lung nodules has been receiving increasing research attention as a potential means to improve the diagnosis, treatment and follow-up of patients with lung cancer. Computerised analysis relies on the extraction of objective, reproducible and standardised imaging features. In this context the aim of this work was to evaluate the correlation between nine IBSI-compliant morphological features and three manually-assigned radiological attributes – lobulation, sphericity and spiculation. Experimenting on 300 lung nodules from the open-access LIDC-IDRI dataset we found that the correlation between the computer-calculated features and the manually-assigned visual scores was at best moderate (Pearson’s r between -0.61 and 0.59; Spearman’s ρ between -0.59 and 0.56). We conclude that the morphological features investigated here have moderate ability to match/explain manually-annotated lobulation, sphericity and spiculation.
Keywords: Lung cancer | Morphological features | Pulmonary nodules | Radiomics
Abstract: This dataset consists of electrochemical impedance spectroscopy measurements on commonly-used batteries, namely Samsung ICR18650-26J cylindrical Lithium-Ion cells. The complex impedance of the batteries was measured at a set of fourteen different frequencies from 0.05 Hz to 1000 Hz, using a random-phase multi-sine excitation signal. For each excited frequency, the current amplitude was 50 mA, resulting in a measurement uncertainty of approximately 0.1 mΩ. Six measurement repetitions are provided at ten different states-of-charge of four different brand-new batteries. Repeated EIS measurement results were obtained, for each individual battery cell, from six separate discharge cycles. All measurements were performed with the battery placed in a temperature-controlled chamber at 25 ± 1 °C. Batteries were allowed to thermalize before each measurement.
Keywords: Battery | Electrochemical impedance spectroscopy | Broadband electrochemical impedance spectroscopy | Multisine excitation | State of charge estimation
Abstract: Purpose: We evaluated the ability of artificial intelligence, in the form of automatic classification methods applied to semi-quantitative data from brain 18F-FDG PET/CT, to improve the differential diagnosis between Alzheimer Disease (AD) and Mild Cognitive Impairment (MCI). Procedures: We retrospectively analyzed a total of 150 consecutive patients who underwent diagnostic evaluation for suspected AD (n = 67) or MCI (n = 83). All patients received brain 18F-FDG PET/CT according to the international guidelines, and images were analyzed both qualitatively (QL) and quantitatively (QN), the latter by a fully automated post-processing software package that produced a z-score metabolic map of 25 anatomically different cortical regions. A subset of n = 122 cases with a diagnosis of AD (n = 53) or MCI (n = 69) confirmed by 18–24-month clinical follow-up was finally included in the study. Univariate analysis and three automated classification models (classification tree, ClT; ridge classifier, RC; linear Support Vector Machine, lSVM) were considered to estimate the ability of the z-scores to discriminate between AD and MCI cases. Results: The univariate analysis returned 14 areas where the z-scores were significantly different between the AD and MCI groups, and the classification accuracy ranged between 74.59% and 76.23%, with ClT and RC providing the best results. The best classification strategy consisted of a single split with a cut-off value of ≈ −2.0 on the z-score from the left lateral temporal area: cases below this threshold were classified as AD and those above it as MCI. Conclusions: Our findings confirm the usefulness of QL and QN analyses of brain 18F-FDG PET/CT in differentiating AD from MCI. Moreover, the combined use of automated classification models can improve the diagnostic process, since it allows the identification of a specific hypometabolic area involved in AD cases with respect to MCI. This improves traditional 18F-FDG PET/CT image interpretation and the diagnostic assessment of cognitive disorders.
Keywords: Alzheimer disease | artificial intelligence | automatic classification | brain 18F-FDG PET/CT | mild cognitive impairment
Abstract: Electrochemical impedance spectroscopy (EIS) is a fundamental tool used in numerous research fields and applications. In particular, EIS is commonly employed for studying and monitoring lithium-ion batteries, to ensure their safe and efficient operation. The LiBEIS software tool computes EIS data by processing the voltage and current time series acquired from a battery under test, which is excited with a broadband current signal. Furthermore, LiBEIS performs fitting of the EIS data to an equivalent circuit model, which is often employed in practice to analyse the behaviour of the battery. Finally, LiBEIS implements exploratory data analysis tools and machine-learning methods aimed at estimating the state-of-charge (SOC) from EIS data.
Keywords: Electrochemical impedance spectroscopy | Equivalent circuit models | Exploratory data analysis | Lithium-ion batteries | Machine learning | State of charge
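The equivalent-circuit fitting step can be illustrated independently of LiBEIS itself (whose API is not shown here). The sketch below fits a toy series-resistance plus parallel RC model to synthetic impedance data with scipy; the circuit topology, parameter values and noise level are assumptions chosen for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

def model_impedance(params, freq):
    """Toy equivalent circuit: series resistance R0 plus a parallel R1-C1
    pair, Z(f) = R0 + R1 / (1 + j*2*pi*f*R1*C1)."""
    r0, r1, c1 = params
    w = 2.0 * np.pi * freq
    return r0 + r1 / (1.0 + 1j * w * r1 * c1)

def residuals(params, freq, z_meas):
    diff = model_impedance(params, freq) - z_meas
    return np.concatenate([diff.real, diff.imag])  # least_squares needs reals

rng = np.random.default_rng(0)
freq = np.logspace(np.log10(0.05), 3, 14)          # 0.05 Hz .. 1 kHz, 14 points
true_params = (0.020, 0.015, 1.5)                  # hypothetical R0, R1 (ohm), C1 (F)
z_meas = model_impedance(true_params, freq) \
         + 1e-4 * (rng.normal(size=freq.size) + 1j * rng.normal(size=freq.size))

fit = least_squares(residuals, x0=(0.01, 0.01, 1.0),
                    args=(freq, z_meas), bounds=([0, 0, 0], [np.inf] * 3))
print(fit.x)   # fitted (R0, R1, C1): a compact feature vector for SOC estimation
```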
Abstract: Colour and texture are two perceptual stimuli that determine, to a great extent, the appearance of objects, materials and scenes. The ability to process texture and colour is a fundamental skill in humans as well as in animals; therefore, reproducing such capacity in artificial ('intelligent') systems has attracted considerable research attention since the early 70s. Whereas the main approach to the problem was essentially theory-driven ('hand-crafted') until not long ago, in recent years the focus has moved towards data-driven solutions (deep learning). In this overview we retrace the key ideas and methods that have accompanied the evolution of colour and texture analysis over the last five decades, from the 'early years' to convolutional networks. Specifically, we review geometric, differential, statistical and rank-based approaches. Advantages and disadvantages of traditional methods vs. deep learning are also critically discussed, including a perspective on which traditional methods have already been subsumed by deep learning or would be feasible to integrate in a data-driven approach.
Keywords: Colour | Deep learning | Texture | Visual recognition
Abstract: We present our latest research on skin characterization using Contact Capacitive Imaging and High-Resolution Ultrasound Imaging combined with machine learning algorithms. Contact Capacitive Imaging is a novel imaging technology based on the dielectric constant measurement principle, with which we have studied the skin water content of different skin sites and performed image classification using pre-trained Deep Learning Neural Networks through Transfer Learning. The results show lips and nose have the lowest water content, whilst cheek, eye corner and under-eye have the highest water content. The classification yields up to 83.8% accuracy. High-Resolution Ultrasound Imaging is a state-of-the-art ultrasound technology that can produce high-resolution images of the skin and superficial soft tissue to a vertical resolution of about 40 microns, with which we have studied the thickness of different skin layers, such as stratum corneum, epidermis and dermis, around different locations on the face and around different body parts. The results show the chin has the highest stratum corneum thickness, and the arm has the lowest stratum corneum thickness. We have also developed two feature-based image classification methods which yield promising results. The outcomes of this study could provide valuable guidelines for cosmetic/medical research, and the methods developed in this study can also be extended for studying damaged skin or skin diseases. The combination of Contact Capacitive Imaging and High-Resolution Ultrasound Imaging could be a powerful tool for skin studies.
Keywords: Capacitive imaging | High-resolution ultrasound | Machine learning | Skin image analysis | Skin texture | Skin thickness | Skin water content
Abstract: Computer-assisted analysis of three-dimensional imaging data (radiomics) has received a lot of research attention as a possible means to improve the management of patients with lung cancer. Building robust predictive models for clinical decision making requires the imaging features to be stable enough to changes in the acquisition and extraction settings. Experimenting on 517 lung lesions from a cohort of 207 patients, we assessed the stability of 88 texture features from the following classes: first-order (13 features), Grey-level Co-Occurrence Matrix (24), Grey-level Difference Matrix (14), Grey-level Run-length Matrix (16), Grey-level Size Zone Matrix (16) and Neighbouring Grey-tone Difference Matrix (five). The analysis was based on a public dataset of lung nodules and open-access routines for feature extraction, which makes the study fully reproducible. Our results identified 30 features that had good or excellent stability relative to lesion delineation, 28 to intensity quantisation and 18 to both. We conclude that selecting the right set of imaging features is critical for building clinical predictive models, particularly when changes in lesion delineation and/or intensity quantisation are involved.
Keywords: Computed tomography | Intensity quantisation | Lesion delineation | Lung nodules | Radiomics | Stability | Texture features
Abstract: Background: Accurate segmentation of pulmonary nodules on computed tomography (CT) scans plays a crucial role in the evaluation and management of patients with suspicion of lung cancer (LC). When performed manually, the process not only requires highly skilled operators, but is also tiresome and time-consuming. To assist the physician in this task, several automated and semi-automated methods have been proposed in the literature. In recent years, in particular, the advent of deep learning has brought about major advances in the field. Methods: Twenty-four (12 conventional and 12 based on deep learning) semi-automated, 'one-click' methods for segmenting pulmonary nodules on CT were evaluated in this study. The experiments were carried out on two datasets: a proprietary one (383 images from a cohort of 111 patients) and a public one (259 images from a cohort of 100 patients). All the patients had a positive report for suspected pulmonary nodules. Results: The methods based on deep learning clearly outperformed the conventional ones. The best performance [Sørensen-Dice coefficient (DSC)] in the two datasets was, respectively, 0.853 and 0.763 for the deep learning methods, and 0.761 and 0.704 for the traditional ones. Conclusions: Deep learning is a viable approach for semi-automated segmentation of pulmonary nodules on CT scans.
Keywords: Computed tomography (CT) | Deep learning | Lung cancer (LC) | Pulmonary nodules | Segmentation
Abstract: Purpose: The aim of this review is to discuss the most significant contributions about the role of Artificial Intelligence (AI) techniques to support the diagnosis of movement disorders through nuclear medicine modalities. Methods: The work is based on a selection of papers available on PubMed, Scopus and Web of Science. Articles not written in English were not considered in this study. Results: Many papers are available concerning the increasing contribution of machine learning techniques to classify Parkinson's disease (PD), Parkinsonian syndromes and Essential Tremor (ET) using data derived from brain SPECT with dopamine transporter radiopharmaceuticals. Other papers apply AI techniques to data obtained from 123I-MIBG myocardial scintigraphy to differentially diagnose PD and other Parkinsonian syndromes. Conclusion: The recent literature provides strong evidence that AI techniques can play a fundamental role in the diagnosis of movement disorders by means of nuclear medicine modalities, therefore paving the way towards personalized medicine.
Keywords: 123I-FP-CIT SPECT | Artificial intelligence | Artificial neural network | Machine learning | Movement disorders | Nuclear medicine techniques | Parkinson’s disease
Abstract: Automatic painting classification by author, artistic genre and/or other attributes has generated considerable research interest in recent years. Being one of the visual features that mark the difference between artists and artistic genres, colour plays a fundamental role in this process. Colour is the result of the interaction among the intrinsic properties of the material, the illumination conditions and the response of the imaging device. Consequently, the same painting/artwork will look significantly different when imaged under varied conditions, which can be a potential source of bias for automated recognition procedures. One can compensate for such variations either via colour calibration or colour pre-processing. In this work we investigate the latter, and, in particular, evaluate the effectiveness of colour constancy and colour augmentation when coupled with hand-crafted and deep learning features for painting classification by artistic genre. In our experiments neither approach showed a clear advantage compared with no pre-processing at all. Colour constancy brought some improvement in certain cases, whereas colour augmentation virtually provided no benefit despite its adding a significant computational overload to the procedure.
Abstract: Histological evaluation plays a major role in cancer diagnosis and treatment. The appearance of H&E-stained images can vary significantly as a consequence of differences in several factors, such as reagents, staining conditions, preparation procedure and image acquisition system. Such potential sources of noise can all have negative effects on computer-assisted classification. To minimize such artefacts and their potentially negative effects several color pre-processing methods have been proposed in the literature—for instance, color augmentation, color constancy, color deconvolution and color transfer. Still, little work has been done to investigate the efficacy of these methods on a quantitative basis. In this paper, we evaluated the effects of color constancy, deconvolution and transfer on automated classification of H&E-stained images representing different types of cancers—specifically breast, prostate, colorectal cancer and malignant lymphoma. Our results indicate that in most cases color pre-processing does not improve the classification accuracy, especially when coupled with color-based image descriptors. Some pre-processing methods, however, can be beneficial when used with some texture-based methods like Gabor filters and Local Binary Patterns.
Keywords: Color | H&E staining | Histology images | Texture
Abstract: In this paper, we investigate the role of shape and texture features from 18F-FDG PET/CT to discriminate between benign and malignant solitary pulmonary nodules. To this end, we retrospectively evaluated cross-sectional data from 111 patients (64 males, 47 females, age = 67.5 ± 11.0) all with histologically confirmed benign (n = 39) or malignant (n = 72) solitary pulmonary nodules. Eighteen three-dimensional imaging features, including conventional, texture, and shape features from PET and CT were tested for significant differences (Wilcoxon-Mann-Whitney) between the benign and malignant groups. Prediction models based on different feature sets and three classification strategies (Classification Tree, k-Nearest Neighbours, and Naïve Bayes) were also evaluated to assess the potential benefit of shape and texture features compared with conventional imaging features alone. Eight features from CT and 15 from PET were significantly different between the benign and malignant groups. Adding shape and texture features increased the performance of both the CT-based and PET-based prediction models with overall accuracy gain being 3.4–11.2 pp and 2.2–10.2 pp, respectively. In conclusion, we found that shape and texture features from 18F-FDG PET/CT can lead to a better discrimination between benign and malignant lung nodules by increasing the accuracy of the prediction models by an appreciable margin.
Keywords: Radiomics | Shape | Solitary pulmonary nodule | Texture
Abstract: Mapping facial skin in terms of its biophysical properties plays a fundamental role in many practical applications, including, among others, forensics, medical and beauty treatments, and cosmetic and restorative surgery. In this paper we present an in vivo evaluation of the water content, trans-epidermal water loss and skin thickness in six areas of the human face: cheeks, chin, forehead, lips, neck and nose. The experiments were performed on a population of healthy subjects through innovative sensing devices which enable fast yet accurate evaluations of the above parameters. A statistical analysis was carried out to determine significant differences between the facial areas investigated and clusters of statistically-indistinguishable areas. We found that water content was higher in the cheeks and neck and lower in the lips, whereas trans-epidermal water loss had higher values for the lips and lower ones for the neck. In terms of thickness the dermis exhibited three clusters, which, from thickest to thinnest, were: chin and nose, cheek and forehead, and lips and neck. The epidermis showed the same three clusters too, but with a different ordering in terms of thickness. Finally, the stratum corneum presented two clusters: the thickest, formed by lips and neck, and the thinnest, formed by all the remaining areas. The results of this investigation can provide valuable guidelines for the evaluation of skin moisturisers and other cosmetic products, and can help guide choices in reconstructive/cosmetic surgery.
Keywords: Face | Skin | Skin layers | Thickness | Trans-epidermal water loss | Water content
Abstract: Transparent elements of buildings account for most of the variation in the spectrum of transmitted daylight. This behavior has a considerable impact on the luminous quality of the indoor environment, because the light spectral composition affects the perceived color and brightness of the illuminated objects. As a consequence, the effect of light colour and intensity on the behavior and health of the occupants is very important. Among the innovative transparent solutions for buildings, monolithic aerogel is one of the most promising, both in terms of lighting and thermal performance. In this context, the present study investigates the colour rendering properties of monolithic aerogel. For the analysis we employed a machine vision system based on a dome-shaped illuminator, a digital camera, a chroma meter, and a colour checker. We evaluated the effect of the aerogel on colour rendering by measuring the RGB and CIE Lab coordinates of the reference colours of the colour checker with and without the aerogel. The results show that, in general, the presence of aerogel tends to shift all the colours towards a blue hue, the maximum variations occurring with dark tones of red, yellow, and green. Finally, the Colour Rendering Index Ra was calculated for the examined monolithic aerogel by means of a methodology developed in accordance with the EN ISO 410 standard and CIE standard procedures. The value obtained (Ra = 85) indicates a rather good visual quality of the monolithic aerogel.
Keywords: CIE Lab coordinates | Colour rendering | Colour shift | Monolithic aerogel | RGB coordinates
Abstract: Background/Aim: Proliferation biomarkers such as MIB-1 are strong predictors of clinical outcome and response to therapy in patients with non-small-cell lung cancer, but they require histological examination. In this work, we present a classification model to predict MIB-1 expression based on clinical parameters from positron emission tomography. Patients and Methods: We retrospectively evaluated 78 patients with histology-proven non-small-cell lung cancer (NSCLC) who underwent 18F-FDG-PET/CT for clinical examination. We stratified the population into a low and high proliferation group using MIB-1=25% as cut-off value. We built a predictive model based on binary classification trees to estimate the group label from the maximum standardized uptake value (SUVmax) and lesion diameter. Results: The proposed model showed ability to predict the correct proliferation group with overall accuracy >82% (78% and 86% for the low- and high-proliferation group, respectively). Conclusion: Our results indicate that radiotracer activity evaluated via SUVmax and lesion diameter are correlated with tumour proliferation index MIB-1.
Keywords: 18 F-FDG PET/CT | Artificial intelligence | MIB-1 | Non-small-cell lung cancer
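A minimal sketch of a classification tree over the two predictors (SUVmax and lesion diameter) is shown below. The data are synthetic stand-ins and the depth limit is an arbitrary choice; the actual cut-offs learned in the study are not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 78
suv_max = rng.uniform(1.0, 25.0, n)      # stand-in SUVmax values
diameter = rng.uniform(8.0, 60.0, n)     # stand-in lesion diameters (mm)
# Hypothetical labels: high proliferation (MIB-1 > 25%) loosely tied to both
y = ((0.5 * suv_max + 0.2 * diameter + rng.normal(0, 2, n)) > 12).astype(int)

X = np.column_stack([suv_max, diameter])
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[14.0, 32.0]]))       # predicted proliferation group
```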
Abstract: Purpose: To provide reliable and reproducible heart/mediastinum (H/M) ratio cut-off values for parkinsonian disorders using two machine learning techniques, Support Vector Machines (SVM) and the Random Forest (RF) classifier, applied to [123I]MIBG cardiac scintigraphy. Procedures: We studied 85 subjects: 50 with idiopathic Parkinson's disease (PD), 26 with atypical Parkinsonian syndromes (P), and 9 with essential tremor (ET). All patients underwent planar early and delayed cardiac scintigraphy after [123I]MIBG (111 MBq) intravenous injection. Images were evaluated both qualitatively and quantitatively; the latter by the early and delayed H/M ratios obtained from regions of interest (ROIt1 and ROIt2) drawn on planar images. SVM and RF classifiers were finally used to obtain the correct cut-off value. Results: SVM and RF produced excellent classification performances: the SVM classifier achieved perfect classification and RF also attained very good accuracy. The best cut-off for the H/M value was 1.55, since it remained the same for both ROIt1 and ROIt2. This value allowed PD to be correctly distinguished from P and ET: patients with an H/M ratio below 1.55 were classified as PD, while those with values above 1.55 were considered as affected by parkinsonism and/or ET. No difference was found when the early and late H/M ratios were considered separately, suggesting that a single early evaluation could be sufficient to reach the final diagnosis. Conclusions: Our results showed that the use of SVM and RF permitted the definition of the best cut-off value for H/M ratios in both the early and the delayed phase, underlining the role of [123I]MIBG cardiac scintigraphy and the effectiveness of the H/M ratio in differentiating PD from other parkinsonisms and ET. Moreover, early scans alone could be used for a reliable diagnosis, since no difference was found between early and late ones. A larger series of cases is, however, needed to confirm these data.
Keywords: Automated classification techniques | Heart/mediastinum ratio cut-off value | Parkinson disease | Parkinsonism | Random Forest classifier | Support vector machines | [123I]MIBG cardiac scintigraphy
Abstract: Quantitative extraction of imaging features from medical scans ('radiomics') has attracted a lot of research attention in the last few years. The literature has consistently emphasized the potential use of radiomics for computer-assisted diagnosis, as well as for predicting survival and response to treatment. Radiomics is appealing in that it enables full-field analysis of the lesion, provides nearly real-time results, and is non-invasive. Still, a lot of studies suffer from a series of drawbacks such as lack of standardization and repeatability. Such limitations, along with the unmet demand for large enough image datasets for training the algorithms, are major hurdles that still limit the application of radiomics on a large scale. In this paper, we review the current developments, potential applications, limitations, and perspectives of PET/CT radiomics with specific focus on the management of patients with lung cancer.
Keywords: CT | Lung cancer | PET | Radiomics
Abstract: The identification of tissue regions within histopathological images represents a fundamental step for diagnosis, patient stratification and follow-up. However, the huge amount of image data made available by the ever improving whole-slide imaging devices gives rise to a bottleneck in manual, microscopy-based evaluation. Furthermore, manual procedures generally show a significant intra- and/or inter-observer variability. In this scenario the objective of this chapter is to investigate the effectiveness of image features from last-generation, pre-trained convolutional networks against variants of Local Binary Patterns for classifying tissue sub-regions into meaningful classes such as epithelium, stroma, lymphocytes and necrosis. Experimenting with seven datasets of histopathological images we show that both classes of methods can be quite effective for the task, but with a noticeable superiority of descriptors based on convolutional neural networks. In particular, we show that these can be seamlessly integrated with standard classifiers (e.g. Support Vector Machines) to attain overall discrimination accuracy between 95 and 99%.
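The general recipe described here, features from a pre-trained convolutional network fed to a standard classifier, can be sketched as follows. ResNet-18 is used merely as an example backbone, and the patches and labels are synthetic stand-ins; the networks and datasets actually benchmarked in the chapter are not reproduced.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

torch.manual_seed(0)

# Pre-trained backbone used as a fixed, 'off-the-shelf' feature extractor:
# the classification head is dropped so the network outputs pooled features.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> torch.Tensor:
    """batch: float tensor of shape (N, 3, H, W) with values in [0, 1]."""
    return backbone(preprocess(batch))

# Hypothetical tissue patches and class labels (epithelium, stroma, ...):
patches = torch.rand(16, 3, 256, 256)
labels = torch.randint(0, 4, (16,)).numpy()
feats = extract_features(patches).numpy()
clf = SVC(kernel="linear").fit(feats, labels)   # standard classifier on top
```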
Abstract: There is increasing evidence that shape and texture descriptors from imaging data could be used as image biomarkers for computer-assisted diagnosis and prognostication in a number of clinical conditions. It is believed that such quantitative features may help uncover patterns that would otherwise go unnoticed to the human eye, this way offering significant advantages against traditional visual interpretation. The objective of this paper is to provide an overview of the steps involved in the process – from image acquisition to feature extraction and classification. A significant part of the work deals with the description of the most common texture and shape features used in the literature; overall issues, perspectives and directions for future research are also discussed.
Keywords: Computer-assisted medicine | Radiomics | Shape | Texture
Abstract: Partial orders are the natural mathematical structure for comparing multivariate data that, like colours, lack a natural order. We introduce a novel, general approach to defining rank features in colour spaces based on partial orders, and show that it is possible to generalise existing rank based descriptors by replacing the order relation over intensity values by suitable partial orders in colour space. In particular, we extend a classical descriptor (the Texture Spectrum) to work with partial orders. The effectiveness of the generalised descriptor is demonstrated through a set of image classification experiments on 10 datasets of colour texture images. The results show that the partial-order version in colour space outperforms the grey-scale classic descriptor while maintaining the same number of features.
Keywords: Hand-designed image descriptors | Mathematics of colour and texture | Partial orders | Rank features
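A minimal sketch of the idea follows: neighbours are compared with the centre pixel under the componentwise (product) partial order on RGB, yielding a three-symbol code per neighbour in the spirit of the Texture Spectrum. The tie-breaking rule and the base-3 encoding below are illustrative assumptions; the paper's exact generalisation may differ.

```python
import numpy as np

def product_order_code(centre: np.ndarray, neighbour: np.ndarray) -> int:
    """Compare two RGB triplets under the componentwise partial order:
    0 if neighbour <= centre in every channel, 1 if neighbour >= centre in
    every channel (ties resolved as <=), 2 if the colours are incomparable."""
    if np.all(neighbour <= centre):
        return 0
    if np.all(neighbour >= centre):
        return 1
    return 2

def texture_spectrum_po(img: np.ndarray) -> np.ndarray:
    """Histogram of the base-3 codes of the 8-neighbourhood of every pixel."""
    h, w, _ = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    hist = np.zeros(3 ** 8, dtype=np.int64)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            code = 0
            for dy, dx in offsets:
                code = 3 * code + product_order_code(img[y, x], img[y + dy, x + dx])
            hist[code] += 1
    return hist

img = np.random.default_rng(0).integers(0, 256, size=(32, 32, 3))
features = texture_spectrum_po(img)   # a 6561-bin colour texture descriptor
```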
Abstract: Purpose: The study aims to investigate the correlations between positron emission tomography (PET) texture features, X-ray computed tomography (CT) texture features, and histological subtypes in non-small-cell lung cancer evaluated with 2-deoxy-2-[18F]fluoro-D-glucose PET/CT. Procedures: We retrospectively evaluated the baseline PET/CT scans of 81 patients with histologically proven non-small-cell lung cancer. Feature extraction and statistical analysis were carried out on the Matlab platform (MathWorks, Natick, USA). Results: Intra-CT correlation analysis revealed a strong positive correlation between volume of the lesion (CTvol) and maximum density (CTmax), and between kurtosis (CTkrt) and maximum density (CTmax). A moderate positive correlation was found between volume (CTvol) and average density (CTmean), and between kurtosis (CTkrt) and average density (CTmean). Intra-PET analysis identified a strong positive correlation between the radiotracer uptake (SUVmax, SUVmean) and its degree of variability/disorder throughout the lesion (SUVstd, SUVent). Conversely, there was a strong negative correlation between the uptake (SUVmax, SUVmean) and its degree of uniformity (SUVuni). There was a positive moderate correlation between the metabolic tumor volume (MTV) and radiotracer uptake (SUVmax, SUVmean). Inter (PET-CT) correlation analysis identified a very strong positive correlation between the volume of the lesion at CT (CTvol) and the metabolic volume (MTV), a moderate positive correlation between average tissue density (CTmean) and radiotracer uptake (SUVmax, SUVmean), and between kurtosis at CT (CTkrt) and metabolic tumor volume (MTV). Squamous cell carcinomas had larger volume, higher uptake, stronger PET variability and lower uniformity than the other subtypes. By contrast, adenocarcinomas exhibited significantly lower uptake, lower variability and higher uniformity than the other subtypes. Conclusions: Significant associations emerged between PET features, CT features, and histological type in NSCLC. Texture analysis on PET/CT shows potential to differentiate between histological types in patients with non-small-cell lung cancer.
Keywords: Non-small-cell lung cancer | Radiomics | Texture analysis | [18F]FDG PET/CT
Abstract: Convolutional Neural Networks (CNN) have brought spectacular improvements in several fields of machine vision including object, scene and face recognition. Nonetheless, the impact of this new paradigm on the classification of fine-grained images, such as colour textures, is still controversial. In this work, we evaluate the effectiveness of traditional, hand-crafted descriptors against off-the-shelf CNN-based features for the classification of different types of colour textures under a range of imaging conditions. The study covers 68 image descriptors (35 hand-crafted and 33 CNN-based) and 46 compilations of 23 colour texture datasets divided into 10 experimental conditions. On average, the results indicate a marked superiority of deep networks, particularly with non-stationary textures and in the presence of multiple changes in the acquisition conditions. By contrast, hand-crafted descriptors were better at discriminating stationary textures under steady imaging conditions and proved more robust than CNN-based features to image rotation.
Keywords: Colour texture | Convolutional neural networks | Feature extraction | Hand-crafted image descriptors | Image classification
Abstract: Filtering has been one of the main approaches to texture analysis since early on. Traditionally, the process involved designing the filters essentially by hand based on some prior knowledge (e.g. perceptual models, optimal mathematical properties, etc.). In this work we propose the use of convolutional networks for refactoring traditional, hand-designed filters. Our method consists of initialising the first convolutional layer of the network with some classic banks of filters, training the network on texture images and retrieving the modified filters. Experimenting with five classes of filters and eight datasets of texture images we show that the refactored filters can be conveniently used 'off-the-shelf' to achieve better performance than obtained with the original filters, but at the same computational cost.
Keywords: Convolutional neural networks | Image filters | Texture analysis
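The refactoring recipe can be sketched in a few lines of PyTorch: seed the first convolutional layer with a hand-designed bank, train, and read the weights back. The tiny Sobel/Laplacian bank and the toy network below are illustrative assumptions; the five filter classes used in the paper are not reproduced.

```python
import numpy as np
import torch
import torch.nn as nn

# A tiny hand-designed bank: two Sobel kernels and a Laplacian kernel.
bank = np.stack([
    [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],    # vertical edges
    [[-1, -2, -1], [0, 0, 0], [1, 2, 1]],    # horizontal edges
    [[0, 1, 0], [1, -4, 1], [0, 1, 0]],      # Laplacian
]).astype(np.float32)

conv = nn.Conv2d(in_channels=1, out_channels=len(bank), kernel_size=3,
                 padding=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(torch.from_numpy(bank).unsqueeze(1))  # (3, 1, 3, 3)

# Minimal network: the seeded filter bank, pooling, and a linear classifier.
net = nn.Sequential(conv, nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                    nn.Flatten(), nn.Linear(len(bank), 8))

x = torch.rand(4, 1, 64, 64)      # stand-in texture patches
y = torch.randint(0, 8, (4,))
loss = nn.CrossEntropyLoss()(net(x), y)
loss.backward()                   # a real run would loop with an optimizer
refactored_bank = conv.weight.detach().clone()  # retrieve the modified filters
```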
Abstract: This paper compares the effects of colour pre-processing on the classification performance of H&E-stained images. Variations in the tissue preparation procedures, acquisition systems, stain conditions and reagents are all sources of artifacts that can negatively affect computer-based classification. Pre-processing methods such as colour constancy, transfer and deconvolution have been proposed to compensate for these artifacts. In this paper we compare quantitatively the combined effect of six colour pre-processing procedures and 12 colour texture descriptors on patch-based classification of H&E-stained images. We found that colour pre-processing had negative effects on accuracy in most cases, particularly when used with colour descriptors. However, some pre-processing procedures proved beneficial when employed in conjunction with classic texture descriptors such as co-occurrence matrices, Gabor filters and Local Binary Patterns.
Keywords: Colour | Eosin | Hematoxylin | Histology | Texture
Abstract: In this paper we investigate extensions of Local Binary Patterns (LBP), Improved Local Binary Patterns (ILBP) and Extended Local Binary Patterns (ELBP) to colour textures via two different strategies: intra-/inter-channel features and colour orderings. We experimentally evaluate the proposed methods over 15 datasets of general and biomedical colour textures. Intra- and inter-channel features from the RGB space emerged as the best descriptors and we found that the best accuracy was achieved by combining multi-resolution intra-channel features with single-resolution inter-channel features.
Keywords: Colour | Local binary patterns | Texture
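A minimal sketch of the intra-/inter-channel strategy follows: intra-channel LBP histograms are computed per channel with scikit-image, while the inter-channel variant thresholds the neighbours of one channel at the centre pixel of another. The plain (unmapped) inter-channel codes and the channel pairings are illustrative assumptions; the exact variants evaluated in the paper may differ.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(channel, P=8, R=1):
    """Intra-channel LBP histogram ('uniform' mapping, P neighbours, radius R)."""
    codes = local_binary_pattern(channel, P, R, method="uniform")
    return np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)[0]

def inter_channel_lbp_hist(ch_centre, ch_neigh):
    """Inter-channel variant: the 8 neighbours come from one channel and are
    thresholded at the centre pixel of the other (plain, unmapped codes)."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    c = ch_centre[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offs):
        n = ch_neigh[1 + dy : ch_neigh.shape[0] - 1 + dy,
                     1 + dx : ch_neigh.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit
    return np.histogram(code, bins=256, range=(0, 256), density=True)[0]

rgb = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
intra = np.concatenate([lbp_hist(rgb[:, :, k]) for k in range(3)])
inter = np.concatenate([inter_channel_lbp_hist(rgb[:, :, a], rgb[:, :, b])
                        for a, b in [(0, 1), (0, 2), (1, 2)]])
descriptor = np.concatenate([intra, inter])   # final colour texture descriptor
```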
Abstract: Despite the fact that image descriptors based on the statistical distribution of local patterns are very common tools, most of their mathematical underpinnings have been largely overlooked. Among them, the rigorous determination of the number of possible patterns that can arise from a given neighbourhood and kernel function – particularly when invariance under group actions (e.g., rotations and/or reflections) is taken into account – has received little or no attention in the literature. In this note we address the problem of counting local patterns in a rigorous way. We provide exact formulas for the number of the possible directional, rotation- and reflection-invariant patterns generated by neighbourhoods of n points over alphabets of k symbols. Variations on this scheme such as rank and uniform patterns are considered, and direct applications to a number of common descriptors (e.g., Local Binary Patterns, Texture Spectrum and Full Ranking) are also presented.
Keywords: Bracelets | Local image descriptors | Necklaces | Patterns
Abstract: Visual localization across seasons and under varying weather and lighting conditions is a challenging task in robotics. In this paper, we present a new sequence-based approach to visual localization using the Partial Order Kernel (POKer), a convolution kernel for string comparison, that is able to handle appearance changes and is robust to speed variations. We use multiple sequence alignment to construct directed acyclic graph representations of the database image sequences, where sequences of images of the same place acquired at different times are represented as alternative paths in a graph. We then use the POKer to compute the pairwise similarities between these graphs and the query image sequences obtained in a subsequent traversal of the environment, and match the corresponding locations. We evaluated our approach on a dataset which features extreme appearance variations due to seasonal changes. The results demonstrate the effectiveness of our approach, where it achieves higher precision and recall than two state-of-the-art baseline methods.
Keywords: Kernel methods | Partial order graphs | Visual localization
Abstract: The ever-increasing availability of digital data from the Arts and cultural heritage calls for efficient methods to organise, categorise, and retrieve such information in an effective and reliable way. In this context, painting classification has attracted significant research interest in recent years. In this work we address the problem of style classification, which involves determining the school, period and art movement to which a painting belongs. Notably, this task is markedly different from other machine vision applications - such as material, object and scene recognition - since the concept of 'similarity' is much more difficult to define in this case. For this specific task we evaluate the effectiveness of an array of hand-designed visual descriptors against a set of feature extractors based on last-generation convolutional neural networks. We also investigate the effect of pre-processing methods such as image split and pyramidal decomposition. The experiments are based on the open-access Pandora dataset. The results show that pre-trained models can significantly outperform hand-designed descriptors with overall accuracy surpassing 67%. This represents an improvement on the state-of-the-art by ≈12 percentage points.
Abstract: Background/Aim. We retrospectively investigated the prognostic potential (correlation with overall survival) of 9 shape and 21 textural features from non-contrast-enhanced computed tomography (CT) in patients with non-small-cell lung cancer. Materials and Methods. We considered a public dataset of 203 individuals with inoperable, histologically- or cytologically-confirmed NSCLC. Three-dimensional shape and textural features from CT were computed using proprietary code and their prognostic potential evaluated through four different statistical protocols. Results. Volume and grey-level run length matrix (GLRLM) run length non-uniformity were the only two features to pass all four protocols. Both features correlated negatively with overall survival. The results also showed a strong dependence on the evaluation protocol used. Conclusion. Tumour volume and GLRLM run-length non-uniformity from CT were the best predictors of survival in patients with non-small-cell lung cancer. We did not find enough evidence to claim a relationship with survival for the other features.
Keywords: Computed tomography | Non-small-cell lung cancer | Radiomics | Shape | Texture
Abstract: Texture classification plays a major role in many computer vision applications. Local binary patterns (LBP) encoding schemes have largely been proven to be very effective for this task. Improved LBP (ILBP) are conceptually simple, easy to implement, and highly effective LBP variants based on a point-to-average thresholding scheme instead of a point-to-point one. We propose the use of this encoding scheme for extracting intra- and interchannel features for color texture classification. We experimentally evaluated the resulting improved opponent color LBP alone and in concatenation with the ILBP of the local color contrast map on a set of image classification tasks over 9 datasets of generic color textures and 11 datasets of biomedical textures. The proposed approach outperformed other grayscale and color LBP variants in nearly all the datasets considered and proved competitive even against image features from last generation convolutional neural networks, particularly for the classification of biomedical images.
Keywords: Color texture | Convolutional neural networks | Image classification | Local binary patterns
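The point-to-average scheme at the heart of ILBP is compact enough to sketch directly: every pixel of a 3x3 patch, centre included, is thresholded against the patch mean, giving a 9-bit code (the all-zero code cannot occur, since at least the maximum reaches the mean, hence 2^9 - 1 bins). The single-channel toy version below illustrates the encoding; the paper builds intra- and inter-channel colour variants on top of it.

```python
import numpy as np

def ilbp_histogram(img: np.ndarray) -> np.ndarray:
    """Histogram of 9-bit point-to-average codes over all 3x3 patches."""
    h, w = img.shape
    hist = np.zeros(2 ** 9, dtype=np.int64)
    weights = 2 ** np.arange(9)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2].astype(float)
            bits = (patch >= patch.mean()).astype(int).ravel()
            hist[int(bits @ weights)] += 1
    return hist[1:]    # drop the impossible all-zero code -> 511 bins

rng = np.random.default_rng(0)
demo = rng.integers(0, 256, size=(64, 64))
print(ilbp_histogram(demo).sum())   # 62 * 62 interior pixels
```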
Abstract: Convolutional Neural Networks have proved extremely successful in object classification applications; however, their suitability for texture analysis largely remains to be established. We investigate the use of pre-trained CNNs as texture descriptors by tapping the output of the last fully connected layer, an approach that has proved its effectiveness in other domains. Comparison with classical descriptors based on signal processing or statistics over a range of standard databases suggests that CNNs may be more effective where the intra-class variability is large. Conversely, classical approaches may be preferable where classes are well defined and homogeneous.
Keywords: Convolutional Neural Networks | Image classification | Local Binary Patterns | Texture
Abstract: Features from pre-trained Convolutional Neural Networks (CNN) have proved to be effective for many tasks such as object, scene and face recognition. Compared with traditional, hand-designed image descriptors, CNN-based features produce higher-dimensional feature vectors. In specific applications where the number of samples may be limited – as in the case of histopathological images – high dimensionality could potentially cause overfitting and redundancy in the information to be processed and stored. To overcome these potential problems, feature reduction methods can be applied, at the cost of a moderate reduction in the discrimination accuracy. In this paper we investigate dimensionality reduction schemes for CNN-based features applied to computer-assisted classification of histopathological images. The purpose of this study is to find the best trade-off between accuracy and dimensionality. Specifically, we test two well-known techniques (Principal Component Analysis and Gaussian Random Projection) and propose a novel reduction strategy based on the cross-correlation between the components of the feature vector. The results show that it is possible to reduce CNN-based features by a high ratio with a moderate decrease in accuracy with respect to the original values.
Keywords: Convolutional Neural Networks | Feature reduction | Histopathological images | Image classification
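The two established techniques named in the abstract are one-liners in scikit-learn; the sketch below applies them to placeholder vectors of CNN-like dimensionality (the paper's third, cross-correlation-based strategy is not reproduced here).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4096))   # e.g. 200 patches x 4096-dim features

X_pca = PCA(n_components=128).fit_transform(X)
X_grp = GaussianRandomProjection(n_components=128,
                                 random_state=0).fit_transform(X)
print(X_pca.shape, X_grp.shape)    # both reduced to (200, 128)
```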
Abstract: OBJECTIVE: The differential diagnosis of Parkinson's disease (PD) and other conditions, such as essential tremor, drug-induced parkinsonian syndrome or normal aging brain, represents a diagnostic challenge. 123I-FP-CIT brain SPET is able to contribute to the differential diagnosis. Semiquantitative analysis of radiopharmaceutical uptake in the basal ganglia (caudate nuclei and putamina) is very useful to support the diagnostic process. We applied an automatic classifier, a classification tree (CIT), to 123I-FP-CIT brain SPET data. A CIT is composed of a set of logical rules, organized as a decision tree, that produces an optimised threshold-based classification of the data and provides discriminative cut-off values. We applied a CIT to 123I-FP-CIT brain SPET semiquantitative data to obtain cut-off values of radiopharmaceutical uptake ratios in the caudate nuclei and putamina, with the aim of diagnosing PD versus other conditions. SUBJECTS AND METHODS: We retrospectively investigated 187 patients undergoing 123I-FP-CIT brain SPET (Millenium VG, G.E.M.S.) with semiquantitative analysis performed with the Basal Ganglia (BasGan) V2 software according to EANM guidelines; among them, 113 were affected by PD (PD group) and 74 (N group) by other, non-parkinsonian conditions, such as essential tremor and drug-induced parkinsonism. The PD group included 113 subjects (60 M and 53 F, age 60-81 years) with Hoehn and Yahr (HY) score 0.5-1.5 and Unified Parkinson Disease Rating Scale (UPDRS) score 6-38; the N group included 74 subjects (36 M and 38 F, age range 60-80 years). All subjects were clinically followed for at least 6-18 months to confirm the diagnosis. In each of the 1,000 experiments carried out, 10% of the patients were randomly selected as the CIT training set, while the remaining 90% were used to validate the trained CIT, and the percentage of the validation data correctly classified into the two groups of patients was computed. The expected performance of an "average performance CIT" was evaluated. RESULTS: The probability of correct classification was 84.19±11.67% (mean±SD) in patients with PD and 93.48±6.95% in N patients. The first decision rule of the CIT provided a value for the right putamen of 2.32±0.16: patients with right putamen values <2.32 were classified as having PD, while patients with putamen values ≥2.32 underwent further analysis. The latter were classified as N if the right putamen uptake value was ≥3.02, or if it was <3.02 and the age was ≥67.5 years; otherwise the patients were classified as having PD. Other similar rules on the values of both caudate nuclei and the left putamen could be used to refine the classification, but in our study the analysis of these data did not significantly contribute to the differential diagnosis. This could be due to an increased number of more severe patients with an initial prevalence of left clinical symptoms, and hence a worsening of the right putamen uptake distribution. CONCLUSION: These results show that the CIT was able to accurately classify PD and non-PD patients by means of 123I-FP-CIT brain SPET data, and also provided cut-off values able to differentially diagnose these groups of patients. Right putamen uptake values emerged as the most discriminant feature for correctly classifying our patients, probably due to a certain number of subjects with an initial prevalence of left clinical symptoms.
Finally, the selective evaluation of the group of subjects with putamen values ≥2.32 disclosed that, for certain right putamen values, age was a further important feature for classifying patients.
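How a classification tree yields explicit cut-offs like those above can be illustrated in a few lines. The data below are synthetic stand-ins for the semiquantitative uptake ratios (not the study data), and the learned thresholds are read directly from the fitted tree.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200
right_putamen = np.r_[rng.normal(2.0, 0.3, n),    # PD-like uptake ratios
                      rng.normal(3.0, 0.4, n)]    # N-like uptake ratios
age = rng.uniform(60, 81, 2 * n)
X = np.c_[right_putamen, age]
y = np.r_[np.ones(n), np.zeros(n)]                # 1 = PD, 0 = N

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# Internal nodes expose the learned cut-off values directly:
for feat, thr in zip(tree.tree_.feature, tree.tree_.threshold):
    if feat >= 0:                                 # negative marks a leaf
        print(f"split on {['right_putamen', 'age'][feat]} at {thr:.2f}")
```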
Abstract: Background/purpose: We investigate the use of skin texture features from the inner forearm as a means for personal identification. The forearm offers a number of potential advantages in that it is a fairly accessible area, and, compared with other zones such as fingertips, is less exposed to the elements and more shielded from wear. Methods: We extract and combine skin textural features from two imaging devices (optical and capacitive) with the aim of discriminating between different individuals. Skin texture images from 43 subjects were acquired from three different body parts (back of the hand, forearm and palm); testing used the two sensors either separately or in combination. Results: Skin texture features from the forearm proved effective for discriminating between different individuals with overall recognition accuracy approaching 96%. Conclusions: We found that skin texture features from the forearm are highly individual-specific and therefore suitable for personal identification. Interestingly, forearm skin texture features yielded significantly better accuracy compared to the skin of the back of the hand and of the palm of the same subjects.
Keywords: image processing | personal identification | skin texture | texture analysis
Abstract: Color information is generally considered useful for texture analysis. However, an important category of highly effective texture descriptors - namely rank features - has no obvious extension to color spaces, on which no canonical order is defined. In this work, we explore the use of partial orders in conjunction with rank features. We introduce the rank transform based on product ordering, that generalizes the classic rank transform to RGB space by a combined tally of dominated and non-comparable pixels. Experimental results on nine heterogeneous standard databases confirm that our approach outperforms the standard rank transform and its extension to lexicographic and bit mixing total orders, as well as to the preorders based on the Euclidean distance to a reference color. The low computational complexity and compact codebook size of the transform make it suitable for multi-scale approaches.
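A minimal version of the product-order idea is sketched below: a neighbour is dominated by the centre if it is less than or equal in all three channels, and non-comparable if neither pixel dominates the other. How the two tallies are combined (here, dominated plus half the non-comparable count) is our assumption for illustration, not necessarily the paper's exact rule.

```python
import numpy as np

def product_order_rank(img: np.ndarray) -> np.ndarray:
    """Rank map of an RGB image under the channel-wise product order."""
    h, w, _ = img.shape
    out = np.zeros((h, w))
    p = img.astype(int)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            centre = p[r, c]
            nbrs = p[r - 1:r + 2, c - 1:c + 2].reshape(9, 3)
            nbrs = np.delete(nbrs, 4, axis=0)     # drop the centre itself
            le = (nbrs <= centre).all(axis=1)     # centre dominates q
            ge = (nbrs >= centre).all(axis=1)     # q dominates centre
            dominated = (le & ~ge).sum()
            noncomp = (~le & ~ge).sum()
            out[r, c] = dominated + 0.5 * noncomp  # assumed combination
    return out

rng = np.random.default_rng(0)
print(product_order_rank(rng.integers(0, 256, (16, 16, 3))).max())
```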
Abstract: Skin appearance is almost universally the object of gender-related expectations and stereotypes. Notwithstanding this, remarkably little work has been done on establishing quantitatively whether skin texture can be used for gender discrimination. We present a detailed analysis of the skin texture of 43 subjects based on two complementary imaging modalities afforded by a visible-light dermoscope and the recently developed Epsilon sensor for capacitive imaging. We consider an array of established texture features in combination with two supervised classification techniques (1-NN and SVM) and a state-of-the-art unsupervised approach (t-SNE). A statistical analysis of the results suggests that skin microtexture carries very little information on gender.
Keywords: Gender recognition | Skin | SVM | Texture
Abstract: In this paper we introduce Improved Opponent Colour Local Binary Patterns (IOCLBP), a conceptually simple yet effective descriptor for colour texture classification. The method was experimentally validated over eight datasets of colour texture images. The results show that IOCLBP outperformed other LBP variants and was at least as effective as last generation features from Convolutional Neural Networks.
Keywords: Colour texture | Convolutional Neural Networks | Image classification | Local Binary Patterns
Abstract: Solid anaerobic digestion batch (SADB) with liquid digestate recirculation and wet anaerobic digestion of organic waste were experimentally investigated. SADB was operated at an organic loading rate (OLR) of 4.55 kgVS/m3 day, generating about 252 NL CH4/kgVS, whereas the wet digester was operated at an OLR of 0.9 kgVS/m3 day, generating about 320 NL CH4/kgVS. The initial total volatile fatty acids concentrations for SADB and wet digestion were about 12,500 mg/L and 4500 mg/L, respectively. There were higher concentrations of ammonium and COD for the SADB compared to the wet one. The genomic analysis performed by high throughput sequencing returned a number of sequences for each sample ranging from 110,619 to 373,307. More than 93% were assigned to the Bacteria domain. Seven and nine major phyla were sequenced for the SADB and wet digestion, respectively, with Bacteroidetes, Firmicutes and Proteobacteria being the dominant phyla in both digesters. Taxonomic profiles suggested a methanogenic pathway characterized by a relevant syntrophic acetate-oxidizing metabolism mainly in the liquid digestate of the SADB. This result also confirms the benefits of liquid digestate recirculation for improving the efficiency of AD performed with high solids (>30%w/w) content.
Keywords: High throughput sequencing | Liquid digestate recirculation | Methane | Microbial population | Organic waste | Solid anaerobic digestion batch | Wet anaerobic digestion
Abstract: This paper presents a comparison of color spaces for material classification. The study includes three device-independent (CIELAB, CIELUV, and CIE XYZ) and seven device-dependent spaces (RGB, HSV, YIQ, YUV, YCbCr, Ohta's I1I2I3, and RG-YeB-WhBl). The pros and cons of the different spaces and the procedures for converting color data among them are discussed in detail. An experiment based on 12 different image data sets was carried out to comparatively evaluate the performance of each space for material classification purposes. The results showed that CIELAB markedly outperformed the other spaces followed by HSV and CIELUV. Conversely, CIE XYZ came out as the worst performing space. Interestingly, no significant difference emerged among the performance of the other device-dependent spaces.
Keywords: classification | color spaces | materials | visual appearance
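All the spaces compared in the study are reachable from RGB with standard conversions, most of which ship with scikit-image (sRGB input assumed). Per-channel features are then computed in each space and fed to the same classifier for a fair comparison.

```python
import numpy as np
from skimage.color import rgb2lab, rgb2luv, rgb2xyz, rgb2hsv, rgb2ycbcr

rgb = np.random.default_rng(0).random((32, 32, 3))  # float RGB in [0, 1]
lab, luv, xyz = rgb2lab(rgb), rgb2luv(rgb), rgb2xyz(rgb)
hsv, ycbcr = rgb2hsv(rgb), rgb2ycbcr(rgb)
# Each converted image keeps the (rows, cols, 3) layout of the original.
print(lab.shape, luv.shape, xyz.shape, hsv.shape, ycbcr.shape)
```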
Abstract: The subject of this study is the use of local multi-dimensional patterns for image classification. The contribution is both theoretical and experimental: on the one hand the paper introduces a complete and general mathematical model for encoding multi-resolution, rotation-invariant local patterns; on the other it experimentally evaluates the use of multi-resolution patterns for image classification from both an information- and a performance-based standpoint. The results indicate that the joint multi-resolution model proposed in the paper can actually convey additional information with respect to the marginal model, but also that the marginal model (i.e. the concatenation of features computed at different resolutions) can be a good enough approximation for practical applications.
Keywords: Image classification | Local binary patterns | Rotation-invariant multi-resolution patterns | Texture spectrum
Abstract: Automatic recognition of different tissue types in histological images is an essential part in the digital pathology toolbox. Texture analysis is commonly used to address this problem; mainly in the context of estimating the tumour/stroma ratio on histological samples. However, although histological images typically contain more than two tissue types, only few studies have addressed the multi-class problem. For colorectal cancer, one of the most prevalent tumour types, there are in fact no published results on multiclass texture separation. In this paper we present a new dataset of 5,000 histological images of human colorectal cancer including eight different types of tissue. We used this set to assess the classification performance of a wide range of texture descriptors and classifiers. As a result, we found an optimal classification strategy that markedly outperformed traditional methods, improving the state of the art for tumour-stroma separation from 96.9% to 98.6% accuracy and setting a new standard for multiclass tissue separation (87.4% accuracy for eight classes). We make our dataset of histological images publicly available under a Creative Commons license and encourage other researchers to use it as a benchmark for their studies.
Abstract: A family of 26 non-parametric texture descriptors based on Histograms of Equivalent Patterns (HEP) has been tested, many of them for the first time in remote sensing applications, to improve urban classification through object-based image analysis of GeoEye-1 imagery. These HEP descriptors have been compared with the widely known texture measures derived from the gray-level co-occurrence matrix (GLCM). All five of the finally selected HEP descriptors (Local Binary Patterns, Improved Local Binary Patterns, Binary Gradient Contours and two different combinations of Completed Local Binary Patterns) were faster in terms of execution time and yielded significantly better accuracy figures than the GLCM features. Moreover, the HEP texture descriptors provided additional information to the basic spectral features from GeoEye-1's bands (R, G, B, NIR, PAN), significantly improving overall accuracy values by around 3%. Conversely, in statistical terms, strategies involving GLCM texture derivatives did not improve the classification accuracy achieved from the spectral information alone. Lastly, both approaches (HEP and GLCM) showed similar behavior with regard to the training set size applied.
Keywords: GeoEye-1 | Histograms of equivalent patterns | OBIA | Texture
Abstract: The size distribution of aggregates has direct and important effects on fundamental properties of construction materials such as workability, strength and durability. The size distribution of aggregates from construction and demolition waste (C&D) is one of the parameters which determine the degree of recyclability and therefore the quality of such materials. Unfortunately, standard methods like sieving or laser diffraction can be either very time consuming (sieving) or possible only in laboratory conditions (laser diffraction). As an alternative we propose and evaluate the use of image analysis to estimate the size distribution of aggregates from C&D in a fast yet accurate manner. The effectiveness of the procedure was tested on aggregates generated by an existing C&D mechanical treatment plant. Experimental comparison with manual sieving showed agreement in the range 81-85%. The proposed technique demonstrated potential for being used on on-line systems within mechanical treatment plants of C&D.
Keywords: Construction and demolition waste | Image analysis | Particle size distribution | Texture
Abstract: This paper presents a new methodology for computing grey-scale granulometries and estimating the mean size of fine and coarse aggregates. The proposed approach employs area morphology and combines the information derived from both openings and closings to determine the size distribution. The method, which we refer to as bipolar area morphology (BAM), is general and can operate on particles of different size and shape. The effectiveness of the procedure was validated on a set of 13 classes of aggregates of size ranging from 0.125 to 16 mm and compared with standard, fixed-shape granulometry. In the experiments our model consistently outperformed the standard approach and predicted the correct size class with overall accuracy above 92%. Tests on three classes from real samples also confirmed the potential of the method for application in real scenarios.
Keywords: Aggregates | Area morphology | Granulometry | Image analysis
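The flavour of an area-morphology granulometry can be sketched with scikit-image's area openings and closings: the signal removed between successive area thresholds estimates how much 'grain mass' lives at each size, for bright and dark particles respectively. The plain sum used to combine the two spectra below is chosen for illustration only; the actual BAM combination and calibration are as described in the paper.

```python
import numpy as np
from skimage.morphology import area_opening, area_closing

def bipolar_spectrum(img, areas):
    opened = [area_opening(img, a).sum() for a in areas]  # bright grains go
    closed = [area_closing(img, a).sum() for a in areas]  # dark grains fill
    bright = -np.diff(opened)   # brightness removed per size band
    dark = np.diff(closed)      # darkness filled per size band
    return bright + dark        # illustrative bipolar combination

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (128, 128)).astype(np.uint8)
print(bipolar_spectrum(img, areas=[1, 4, 16, 64, 256]))
```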
Abstract: This paper investigates the problem of learning sets of discriminative patterns from local binary patterns (LBP). Such patterns are usually referred to as 'dominant local binary patterns' (DLBP). The strategies to obtain the dominant patterns may either retain or discard knowledge of the patterns' labels; it is the aim of this work to determine which is the best option. To this end the paper studies the effectiveness of different strategies in terms of accuracy, data compression ratio and time complexity. The results show that DLBP provides a significant compression rate with only a slight accuracy decrease with respect to LBP, and that retaining information about the patterns' labels improves the discrimination capability of DLBP. Theoretical analysis of time complexity revealed that the gain/loss provided by DLBP vs. LBP depends on the classification strategy: we show that, asymptotically, there is in principle no advantage when classification is based on computationally-cheap methods (such as nearest neighbour and nearest mean classifiers), because in this case determining the dominant patterns is computationally more expensive than classifying using the whole feature vector; by contrast, pattern selection can be beneficial with more complex classifiers such as support vector machines.
Keywords: Dominant local binary patterns | Feature selection | Texture classification
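The usual recipe for dominant patterns is simple to state: keep the smallest set of most frequent codes whose cumulative frequency reaches a target proportion of all pattern occurrences. The sketch below is the label-agnostic variant; the label-aware option the paper examines would perform this selection per class.

```python
import numpy as np

def dominant_patterns(hist: np.ndarray, coverage: float = 0.8):
    """Smallest set of LBP codes covering `coverage` of all occurrences."""
    order = np.argsort(hist)[::-1]                 # most frequent first
    cum = np.cumsum(hist[order]) / hist.sum()
    n_keep = int(np.searchsorted(cum, coverage)) + 1
    return order[:n_keep]                          # selected code labels

rng = np.random.default_rng(0)
fake_lbp_hist = rng.zipf(1.5, 256)                 # skewed, LBP-like counts
print(len(dominant_patterns(fake_lbp_hist)))       # far fewer than 256
```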
Abstract: The injection of urea-water sprays within selective catalytic reduction systems is currently the leading technique for reducing the emission of nitrogen oxides from Diesel engines. For the process to work properly, it is crucial to guarantee the adequate size, velocity and distribution of the spray droplets upstream of the catalyst. It is therefore extremely important to understand the process of spray formation and evolution as well as possible. In this paper we describe a new methodology for inspecting the behaviour of urea-water sprays in realistic conditions. Our approach is based on a hot-air flow tunnel enabling optical inspection of the spray through phase-Doppler anemometry and back-light imaging. The procedure was employed to investigate the global and local characteristics of urea-water sprays under different flow conditions. The results proved the significant influence exerted by the flow conditions on the spray behaviour, and confirmed that the proposed system can provide considerable insight about the evolution of urea-water sprays. Cross-comparison of the droplet size estimated through phase-Doppler anemometry and back-light imaging showed substantial agreement between the two methods. This result suggests that back-light imaging is a viable alternative in those cases where complex exhaust geometry impedes the use of phase-Doppler anemometry.
Keywords: Image processing | Nitrogen oxides | Phase-Doppler anemometry | Selective catalytic reduction | Urea-water sprays
Abstract: In this work we propose the use of image features based on visual perception for discriminating epithelium and stroma in histological images. In particular, we assess the capability of the following five visual features to correctly discriminate epithelium from stroma in digitised tissue micro-arrays of colorectal cancer: coarseness, contrast, directionality, line-likeness and roughness. The use of features directly related to human perception makes it possible to evaluate the tissue's appearance on the basis of a set of meaningful parameters; moreover, the number of features used to discriminate epithelium from stroma is very small. In the experiments we used histologically-verified, well-defined images of epithelium and stroma to train three classifiers based on Support Vector Machines (SVM), the Nearest Neighbour rule (1-NN) and the Naïve Bayes rule (NB). We optimised SVM's parameters on a validation set, and estimated the accuracy of the three classifiers on an independent test set. The experiments demonstrate that the proposed features can correctly discriminate epithelium from stroma with state-of-the-art accuracy.
Keywords: Colorectal cancer | Epithelium | Image analysis | Perceptual features | Stroma
Abstract: This paper investigates the use of digital polygons as a replacement for circular interpolated neighbourhoods for extracting texture features through Local Binary Patterns. The use of digital polygons has two main advantages: it reduces the computational cost and avoids the high-frequency loss resulting from pixel interpolation. The solution proposed in this work employs a sub-sampling scheme over Andres’ digital circles. The effectiveness of the method was evaluated in a supervised texture classification experiment over eight different datasets. The results showed that digital polygons outperformed interpolated circular neighbourhoods in most cases.
Keywords: Digital circles | Digital polygons | Local Binary Patterns | Rotation invariance | Texture classification
Abstract: In this paper we investigate the problem of choosing the adequate colour representation for automated surface grading. Specifically, we discuss the pros and cons of different colour spaces, point out some common misconceptions about their use, and propose a number of ‘best practices’ for colour conversion. To put the discussion into practice we generated a new dataset of 25 classes of natural stone products which we used to systematically compare and evaluate the performance of seven device-dependent and three device-independent colour spaces through two classification strategies. With the nearest neighbour classifiers no significant difference emerged among the colour spaces considered, whereas with the linear classifier it was found that device-independent Lab and Luv spaces performed significantly better than the others.
Keywords: Colour spaces | Natural stones | Soft colour descriptors | Visual appearance
Abstract: The use of co-occurrences of patterns in image analysis has been recently suggested as one of the possible strategies to improve on the bag-of-features model. The intrinsically high number of features of the method, however, is a potential limit to its widespread application. Its extension into rotation invariant versions also requires careful consideration. In this paper we present a general, rotation invariant framework for co-occurrences of patterns and investigate possible solutions to the dimensionality problem. Using local binary patterns as bag-of-features model, we experimentally evaluate the potential advantages that co-occurrences can provide in comparison with bag-of-features. The results show that co-occurrences remarkably improve classification accuracy in some datasets, but in others the gain is negligible, or even negative. We found that this surprising outcome has an interesting explanation in terms of the degree of association between pairs of patterns in an image, and, in particular, that the higher the degree of association, the lower the gain provided by co-occurrences in comparison with bag-of-features.
Abstract: Grey-level co-occurrence matrices (GLCM) have been on the scene for almost forty years and continue to be widely used today. In this paper we present a method to improve the accuracy and rotation robustness of GLCM features for image classification. In our approach co-occurrences are computed through digital circles as an alternative to the standard four directions. We use discrete Fourier transform normalisation to convert rotation-dependent features into rotation-invariant ones. We tested our method on four different datasets of natural and synthetic images. Experimental results show that our approach is more accurate and robust against rotation than the standard GLCM features.
Keywords: Co-occurrence matrices | Digital circles | Discrete Fourier transform | Rotation invariance | Texture classification
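The DFT normalisation trick is easy to demonstrate with scikit-image's GLCM tools: compute a feature at several orientations, then keep the magnitudes of its discrete Fourier transform, which are invariant to cyclic shifts of the orientation axis. The standard four directions are used below for brevity, whereas the paper samples offsets along digital circles.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
img = rng.integers(0, 64, (64, 64)).astype(np.uint8)   # 64 grey levels

angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
glcm = graycomatrix(img, distances=[1], angles=angles, levels=64)
contrast = graycoprops(glcm, "contrast")[0]   # one value per orientation
invariant = np.abs(np.fft.fft(contrast))      # shift- (rotation-) invariant
print(invariant)
```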
Abstract: Texture analysis is an area of intense research activity. Like in other fields, the availability of public data for benchmarking is vital to the development of the discipline. In "Texture databases - A comprehensive survey", Hossain and Serikawa recently provided a valuable review of a good number of texture datasets, bringing order to this scattered field. The aim of this appendix is to complement the cited work by providing references to additional image databases of bio-medical textures, textures of materials and natural textures that have recently been employed in texture analysis experiments. There is in fact a good number of little-known texture databases with very interesting features, which for this reason are likely to receive increasing attention in the near future. We are convinced that this extension, along with the original article, will be useful to many researchers and practitioners working in the field of texture analysis.
Keywords: Bio-medical images | Database | Materials | Texture
Abstract: We present a sequential, two-step procedure based on machine vision for detecting and characterizing impurities in paper. The method is based on a preliminary classification step to differentiate defective paper patches (i.e., with impurities) from non-defective ones (i.e., with no impurities), followed by a thresholding step to separate the impurities from the background. This approach avoids the artifacts which occur when thresholding is applied to paper samples that contain no impurities. We discuss and compare different solutions and methods to implement the procedure and experimentally validate it on a dataset of 11 paper classes. The results show that a marked increase in detection accuracy can be obtained with the two-step procedure in comparison with thresholding alone.
Keywords: Image processing | Machine vision | Paper
Abstract: Remote sensing technologies have been commonly used to perform greenhouse detection and mapping. In this research, stereo pairs acquired by the very high-resolution optical satellites GeoEye-1 (GE1) and WorldView-2 (WV2) have been utilized to carry out the land cover classification of an agricultural area through an object-based image analysis approach, paying special attention to greenhouse extraction. The main novelty of this work lies in the joint use of single-source stereo-photogrammetrically derived heights and multispectral information from both panchromatic and pan-sharpened orthoimages. The main features tested in this research can be grouped into different categories, such as basic spectral information, elevation data (normalized digital surface model; nDSM), band indexes and ratios, texture and shape geometry. Furthermore, spectral information was based on both single orthoimages and multiangle orthoimages. The overall accuracies attained by applying nearest neighbor and support vector machine classifiers to the four multispectral bands of GE1 were very similar to those computed from WV2, for either four or eight multispectral bands. Height data, in the form of nDSM, were the most important feature for greenhouse classification. The best overall accuracy values were close to 90%, and they were not improved by using multiangle orthoimages.
Keywords: GeoEye-1 | Greenhouses | Multiangle image | Normalized digital surface model | Object-based classification | WorldView-2
Abstract: In this chapter we describe a unifying framework for local binary patterns and variants which we refer to as histograms of equivalent patterns (HEP). In presenting this concept we discuss some basic issues in texture analysis: the problem of defining what texture is; the problem of classifying the many existing texture descriptors; the concept of bag-of-features and the design choices that one has to deal with when designing a texture descriptor. We show how this relates to local binary patterns and related methods, and propose a unifying mathematical formalism to express them within the HEP. Finally, we give a geometrical interpretation of these methods as partitioning operators in a high-dimensional space, showing how this representation can suggest possible directions for future research.
Abstract: We present an overview of methods and applications of automatic characterization of the appearance of materials through colour and texture analysis. We propose a taxonomy based on three classes of methods (spectral, spatial, and hybrid) and discuss their general advantages and disadvantages. For each class we present a set of methods that are computationally cheap and easy to implement and that have proved reliable in many applications. We put these methods in the context of typical industrial environments and provide examples of their application in the following tasks: surface grading, surface inspection, and content-based image retrieval. We emphasize the potential benefits that would come from a wide implementation of these methods, such as better product quality, new services, and higher customer satisfaction.
Abstract: Automatic detection and assessment of dirt particles in pulp and paper plays a pivotal role in the papermaking industry. Traditional visual inspection by human operators is giving way to machine vision, which provides many potential advantages in terms of speed, accuracy and repeatability. Such systems make use of image processing algorithms which aim at separating paper and pulp impurities from the background. The most common approach is based on image thresholding, which consists of determining a set of intensity values that split an image into one or more classes, each representing either the background (i.e. an area with no defects) or an area with some type of contraries. In this paper we present a quantitative experimental evaluation of four image thresholding methods (Otsu's, Kapur's, Kittler's and Yen's) for dirt analysis in paper. The results show that Kittler's method is the most stable and reliable for this task.
Keywords: Image thresholding | Machine vision | Paper | Quality assessment
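Two of the four criteria compared above have off-the-shelf implementations, which makes the core of the pipeline a few lines. The 'paper' image below is synthetic; in practice thresholds would be computed per acquisition set-up.

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_yen

rng = np.random.default_rng(0)
paper = np.clip(rng.normal(200, 10, (128, 128)), 0, 255)  # bright sheet
paper[40:44, 60:66] = 50                                  # dark impurity
t_otsu, t_yen = threshold_otsu(paper), threshold_yen(paper)
dirt_mask = paper < t_otsu         # pixels classified as impurities
print(t_otsu, t_yen, dirt_mask.sum())
```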
Abstract: In this paper we consider the problem of colour-based sorting of hardwood parquet slabs into lots of similar visual appearance. As a basis for the development of an expert system to perform this task, we experimentally investigate and compare the performance of various colour descriptors (soft descriptors, percentiles, marginal histograms and 3D histograms) and colour spaces (RGB, HSV and CIE Lab). The results show that simple and compact colour descriptors, such as the mean of each colour channel, are as accurate as more complicated features. Likewise, we found no statistically significant difference in the accuracy attainable through the colour spaces considered in the paper. Our experiments also show that most methods are fast enough for real-time processing. The results suggest the use of simple statistical descriptors along with RGB data as the best practice to approach the problem.
Keywords: Colour descriptors | Grading | Parquet | Sorting | Wood
Abstract: The aim of this paper is to describe a general framework for texture analysis which we refer to as the HEP (histograms of equivalent patterns). The HEP, of which we give a clear and unambiguous mathematical definition, is based on partitioning the feature space associated to image patches of predefined shape and size. This task is approached by defining, a priori, suitable local or global functions of the pixels' intensities. In a comprehensive survey we show that diverse texture descriptors, such as co-occurrence matrices, gray-level differences and local binary patterns, can all be seen to be examples of the HEP. In the experimental part we comparatively evaluate a comprehensive set of these descriptors on an extensive texture classification task. Within the class of HEP schemes, improved local ternary patterns (ILTP) and completed local binary patterns (CLBP) emerge as the best parametric and non-parametric methods, respectively. The results also show the following patterns: (1) higher effectiveness of multi-level discretization in comparison with binarization; (2) higher accuracy of parametric methods when compared to non-parametric ones; (3) a general trend of increasing performance with increasing dimensionality; and (4) better performance of point-to-average thresholding against point-to-point thresholding.
Keywords: BGC | Image classification | LBP | LTP | Texture features
Abstract: This paper is about the development of an expert system for automatic classification of granite tiles through computer vision. We discuss issues and possible solutions related to image acquisition, robustness against noise factors, extraction of visual features and classification, with particular focus on the last two. In the experiments we compare the performance of different visual features and classifiers over a set of 12 granite classes. The results show that classification based on colour and texture is highly effective and outperforms previous methods based on textural features alone. As for the classifiers, Support Vector Machines proved superior to the others, provided that the governing parameters are tuned properly.
Keywords: Classification | Colour | Grading | Granite | Texture
Abstract: Offshore wind farms are undergoing unprecedented development as EU member states focus on complying with 2020 renewable energy mandates. However, wind farm site placement requires great care, to avoid compromising protected habitats, such as Sabellaria spinulosa reefs. This paper presents an investigation into the potential of different feature generation methods for identifying sidescan sonar image textures characteristic of Sabellaria spinulosa colonies. We propose an extensible test methodology and carry out a detailed comparison of several textural features. Our results show that Gabor filter bank features yield good (up to 89.4% overall) classification accuracies and often outperform other methods in identifying the Sabellaria spinulosa textural class. A Dual-Tree Complex Wavelet Transform, Ring filters and some statistical methods also produce encouraging results.
Keywords: Sabellaria spinulosa | Sonar | Texture analysis
Abstract: The aim of the paper is to conduct a performance evaluation in which several texture descriptors, such as Local Binary Patterns (LBP), the Coordinated Clusters Representation (CCR) and Improved Local Binary Patterns (ILBP), are applied to granite texture classification. In our work we were particularly interested in assessing the robustness of the analysed texture descriptors to image rotation when implemented in both the standard and rotation-invariant forms. To attain this goal, we generated a database of granite textures rotated using hardware and software procedures. The experimental data indicate that the ILBP features return improved performance compared with those achieved by the LBP and CCR descriptors. Another important finding resulting from this investigation is that the classification results obtained when the texture analysis techniques were applied to granite image data rotated by software procedures are inconsistent with those achieved when the hardware-rotated data are used for classification purposes. This discovery is surprising and suggests that results obtained when texture analysis techniques are evaluated on synthetically rotated data need to be interpreted with care, as the principal characteristics of the texture are altered by the data interpolation applied during the image rotation process.
Keywords: CCR | Granite grading | ILBP | LBP | Rotation invariance | Texture classification
Abstract: Color texture classification has been an area of intensive research activity. From the very outset, approaches to combining color and texture have been the subject of much discussion, in particular whether they should be considered jointly or separately. We present a comprehensive comparison of the most prominent approaches both from a theoretical and experimental standpoint. The main contributions of our work are: (i) the establishment of a generic and extensible framework to classify methods for color texture classification on a mathematical basis, and (ii) a theoretical and experimental comparison of the most salient existing methods. Starting from an extensive set of experiments based on the Outex dataset, we highlight those texture descriptors that provide good accuracy along with low dimensionality. The results suggest that separate color and texture processing is the best practice when one seeks an optimal compromise between accuracy and a limited number of features. We believe that our work may serve as a guide for those who need to choose the appropriate method for a specific application, as well as a basis for the development of new methods.
Abstract: It is well-known that local binary pattern (LBP) histograms of real textures exhibit a markedly uneven distribution, which is dominated by the so-called uniform patterns. The widely accepted interpretation of this phenomenon is that uniform patterns correspond to texture microfeatures, such as edges, corners, and spots. In this paper we present a theoretical study about the relative occurrence of LBPs based on the consideration that the LBP operator partitions the set of grayscale patterns into an ensemble of disjoint multidimensional polytopes. We derive exact prior probabilities of LBPs by calculating the volume of such polytopes. Our study puts in evidence that both the uneven distribution of the LBP histogram and the high occurrence of uniform patterns are direct consequences of the mathematical structure of the method rather than an intrinsic property of real textures.
Keywords: Local binary patterns | Polytopes | Texture
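The paper's central point is easy to verify empirically: even on patches of i.i.d. uniform noise, with no texture structure at all, the LBP histogram is far from flat and the 58 'uniform' codes are heavily over-represented, because the polytopes the operator induces have unequal volumes. A quick Monte-Carlo check:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
centre = rng.random(n_samples)
neighbours = rng.random((n_samples, 8))          # i.i.d. uniform patches
bits = (neighbours >= centre[:, None]).astype(int)
codes = bits @ (2 ** np.arange(8))               # plain 8-bit LBP codes
hist = np.bincount(codes, minlength=256) / n_samples

def transitions(c):      # number of 0/1 transitions around the ring
    b = [(c >> i) & 1 for i in range(8)]
    return sum(b[i] != b[(i + 1) % 8] for i in range(8))

uniform = [c for c in range(256) if transitions(c) <= 2]   # 58 codes
print(hist[uniform].sum())   # well above the 'flat' share 58/256 ~ 0.23
```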
Abstract: In this work we present a new family of computationally simple texture descriptors, referred to as binary gradient contours (BGC). The BGC methodology relies on computing a set of eight binary gradients between pairs of pixels all along a closed path around the central pixel of a 3×3 grayscale image patch. We developed three different versions of BGC features, namely single-loop, double-loop and triple-loop. To quantitatively assess the effectiveness of the proposed approach we performed an ensemble of texture classification experiments over 10 different datasets. The obtained results make it apparent that the single-loop version is the best performer of the BGC family. Experiments also show that the single-loop BGC texture operator outperforms the well-known LBP. Statistical significance of the achieved accuracy improvement has been demonstrated through the Wilcoxon signed-rank test.
Keywords: BGC | LBP | Texture features
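The single-loop member of the family is compact enough to sketch: eight binary gradients between consecutive pixels along the closed path around the centre of each 3x3 patch. Going full circle, 'all decreasing' is impossible, so 2^8 - 1 codes remain (LBP, by contrast, compares each periphery pixel with the centre).

```python
import numpy as np

# Periphery of the 3x3 patch, in closed-loop order.
LOOP = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def bgc1(img: np.ndarray) -> np.ndarray:
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for r in range(h - 2):
        for c in range(w - 2):
            p = [int(img[r + dr, c + dc]) for dr, dc in LOOP]
            bits = [p[(i + 1) % 8] >= p[i] for i in range(8)]
            codes[r, c] = sum(b << i for i, b in enumerate(bits))
    return codes - 1        # code 0 cannot occur: map 1..255 to 0..254

rng = np.random.default_rng(0)
codes = bgc1(rng.integers(0, 256, (64, 64)))
print(np.unique(codes).size, codes.min() >= 0)  # at most 255 distinct codes
```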
Abstract: In this paper we study the feasibility of developing a search engine capable of retrieving images from a granite image database based on a query image that is similar to the intended targets. The main focus was on the determination of the set of colour and/or texture features which yields highest retrieval accuracy. To assess the performance of the considered image descriptors we created a granite image database, formed by images recorded at our laboratory as well as taken from the Internet. Experimental results show that colour and texture features can be successfully employed to retrieve granite images from a database. We also found that improved accuracy is achieved by combining different colour and texture feature sets through classifier fusion schemes.
Keywords: CBIR | Colour | Granite | Image retrieval systems | Texture | Visual appearance
Abstract: We present a set of multiscale, multidirectional, rotation-invariant features for color texture characterization. The proposed model is based on the ranklet transform, a technique relying on the calculation of the relative rank of the intensity level of neighboring pixels. Color and texture are merged into a compact descriptor by computing the ranklet transform of each color channel separately and of couples of color channels jointly. Robustness against rotation is based on the use of circularly symmetric neighborhoods together with the discrete Fourier transform. Experimental results demonstrate that the approach shows good robustness and accuracy.
Abstract: The Coordinated Clusters Representation (CCR) is a texture descriptor based on the probability of occurrence of elementary binary patterns (texels) defined over a square window. The CCR was originally proposed for binary textures, and it was later extended to grayscale texture images through global image thresholding. The required global binarization is a critical point of the method, since this preprocessing stage can wipe out textural information. Another important drawback of the original CCR model is its sensitivity to rotation. In this paper we present a rotation-invariant CCR-based model for colour textures which yields a twofold improvement over the grayscale CCR: first, the use of rotation-invariant texels makes the model insensitive to rotation; secondly, the new texture model benefits from colour information and does not need global thresholding. The basic idea of the method is to describe the textural and colour content of an image by splitting the original colour image into a stack of binary images, each one representing a colour of a predefined palette. The binary layers are characterized by the probability of occurrence of rotation-invariant texels, and the overall feature vector is obtained by concatenating the histograms computed for each layer. In order to quantitatively assess our approach, we performed experiments over two datasets of colour texture images using five different colour spaces. The obtained results show robust invariance against rotation and a marked increase in classification accuracy with respect to grayscale versions of CCR and LBP.
Keywords: CCR | Colour texture classification | Rotation invariance
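The layer-splitting idea translates directly into code: quantise the image to a small palette, split it into one binary layer per palette colour, and describe each layer by a histogram of binary texels. Plain 2x2 texels are used below to keep the sketch short; the paper uses rotation-invariant texels over a square window.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))      # stand-in for a colour texture
n_colours = 8

kmeans = KMeans(n_clusters=n_colours, n_init=10, random_state=0)
labels = kmeans.fit_predict(img.reshape(-1, 3)).reshape(64, 64)

features = []
for colour in range(n_colours):
    layer = (labels == colour).astype(int)       # one binary layer
    # 2x2 binary texels -> 16-bin histogram for this layer
    texels = (layer[:-1, :-1] + 2 * layer[:-1, 1:]
              + 4 * layer[1:, :-1] + 8 * layer[1:, 1:])
    features.append(np.bincount(texels.ravel(), minlength=16))
feature_vector = np.concatenate(features)        # 8 x 16 = 128 bins
print(feature_vector.shape)
```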
Abstract: Reverse Engineering aims at constructing CAD models of existing objects starting from measurement data. In the classical approach a reverse engineering procedure completes in three main steps: data acquisition, data processing and modeling. Herein we describe a novel semi-automatic approach for integrating CAD and reverse engineering. The basic idea consists in constructing a parametric, feature-based approximate model and matching it with measurement data to determine the unknown parameters. The approach is semi-automatic since part of the model is constructed manually by the user, while the exact values of the parameters are computed through an optimization procedure based on genetic algorithms. Two case studies are presented and critically discussed.
Keywords: genetic algorithms | geometric modeling | Reverse engineering
Abstract: This paper describes an approach for the comparative evaluation of manufacturing cost in the early stages of the design process of products and assemblies. The methodological approach used here refers to the Design For Manufacturing and Assembly (DFMA) technique and is addressed towards the development of optimized design solutions.
Abstract: This paper describes an approach for the comparative evaluation of the response of a system in different domains in the early stages of the design process. The solution proposed here is based on the following ideas: feature-based product representation, which serves as a 'shared product description'; estimation of the response of the system in a set of points (training points) through automatic simulation; categorization of the system response; generalization to other design regions through classification trees; evaluation of reliability. The result is an approximate 'model' of the system under investigation. The level of detail of the model can be modified by changing the number of classes used to categorize the response.
Keywords: Classification trees | Cost estimation | Design-by-feature | Product design
Abstract: The sequential approach to colour texture classification relies on colour histogram clustering before extracting texture features from indexed images. The basic idea of such methods is to replace the colour triplet (RGB, HSV, Lab, etc.) associated with a pixel by a scalar value representing an index into a colour palette. In this paper we studied different implementations of this approach. An experimental campaign was carried out over a database of 100 textures. The results show that the choice of a particular colour representation can improve classification performance with respect to grayscale conversion. We also found strong interaction effects between colour representation and feature space. In order to improve the accuracy and robustness of classification, we tested three well-known expert fusion schemes: weighted vote and a posteriori probability fusion with the sum and product rules. The results demonstrate that combining different sequential approaches through classifier fusion is an effective strategy for colour texture classification.
Keywords: Classifier fusion | Colour texture classification
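The two a posteriori probability fusion rules mentioned above reduce to a few lines once each classifier outputs class posteriors (rows are samples, columns are classes); weighted vote would replace the posteriors with weighted one-hot votes.

```python
import numpy as np

def sum_rule(posteriors):       # list of (n_samples, n_classes) arrays
    return np.argmax(np.sum(posteriors, axis=0), axis=1)

def product_rule(posteriors):
    return np.argmax(np.prod(posteriors, axis=0), axis=1)

p1 = np.array([[0.7, 0.3], [0.4, 0.6]])   # toy posteriors, classifier 1
p2 = np.array([[0.6, 0.4], [0.2, 0.8]])   # toy posteriors, classifier 2
print(sum_rule([p1, p2]), product_rule([p1, p2]))   # [0 1] [0 1]
```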
Abstract: Gabor filtering is a widely adopted technique for texture analysis. The design of a Gabor filter bank is a complex task. In texture classification, in particular, Gabor filters show a strong dependence on a certain number of parameters, the values of which may significantly affect the outcome of the classification procedures. Many different approaches to Gabor filter design, based on mathematical and physiological considerations, are documented in the literature. However, the effect of each parameter, as well as the effects of their interactions, remains unclear. The overall aim of this work is to investigate the effects of Gabor filter parameters on texture classification. An extensive experimental campaign has been conducted. The outcomes of the experimental activity show a significant dependence of the percentage of correct classification on the smoothing parameter of the Gabor filters. By contrast, the correlation between the number of frequencies and orientations used to define a filter bank and the percentage of correct classification appeared to be poor.
Keywords: Design of experiments | Gabor filters | Texture classification
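A typical bank of the kind used in such experiments is shown below, built with scikit-image's gabor filter: the grid spans frequencies and orientations, while the bandwidth argument controls the smoothing of the Gaussian envelope, the parameter the study found most influential. Parameter values here are arbitrary placeholders.

```python
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(0)
img = rng.random((64, 64))                      # stand-in texture patch

features = []
for frequency in (0.1, 0.2, 0.4):               # cycles per pixel
    for theta in np.arange(4) * np.pi / 4:      # 4 orientations
        real, imag = gabor(img, frequency=frequency, theta=theta,
                           bandwidth=1.0)       # controls the smoothing
        magnitude = np.hypot(real, imag)
        features += [magnitude.mean(), magnitude.std()]
print(len(features))                            # 3 x 4 x 2 = 24 features
```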
Abstract: Computing how a camera samples a scene is a common task in engineering and computer graphics. Though this concept is rather intuitive (sampling density is proportional to the number of pixels which sample a point in the scene), it is necessary to define it in a formal and rigorous way. This paper describes a method to evaluate the sampling density distribution of a camera. A definition of camera sampling density is given, and a simple computation method is presented. The proposed approach is based on the evaluation of the area reduction coefficient associated with the function which maps a generic point of the scene to a point in the image plane. The approach can be applied either to the ideal (pinhole) camera model or to real camera models with distortion. Experimental tests on a specimen are also presented and discussed.
Keywords: Camera sampling density | Image acquisition | Reverse engineering
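A worked pinhole instance of the area-reduction idea may help fix the definition (our example; the paper also treats real camera models with distortion). A scene point (X, Y, Z) maps to the image point (fX/Z, fY/Z), so on a fronto-parallel plane Z = Z0 the sampling density, in pixels per unit scene area, is the determinant of the Jacobian of this map:

```latex
(x, y) = \left( \frac{fX}{Z},\; \frac{fY}{Z} \right), \qquad
J\big|_{Z = Z_0} = \begin{pmatrix} f/Z_0 & 0 \\ 0 & f/Z_0 \end{pmatrix}, \qquad
\left| \det J \right| = \frac{f^2}{Z_0^2}.
```

Halving the distance to the camera thus quadruples the sampling density, as intuition suggests.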
Abstract: While relational simulation games have been widely used for learning activities in areas such as business, management and communication, their development in the engineering field is still very limited. In this paper a web-based simulation game was designed to fill this gap in the development of relational simulation games related to engineering aspects, and in particular to the design process of complex systems. An evaluation of a first web trial competition, carried out through questionnaires, shows that the objectives of the proposed learning tool were satisfactorily achieved.
Keywords: complex systems | engineering-design education | learning tools | simulation | web-based games
Abstract: Interoperability among CAD/CAM/CAE systems is a well-known problem in product design and development. At present, geometrical data exchange among different software packages is usually carried out through neutral file formats (IGES or STEP) or through proprietary formats. Data exchange processes are usually afflicted by several problems, such as information loss, redundancy, one-way data exchange and static data exchange. These drawbacks do not permit a truly geometry-centric design, and even if the model is transmitted without loss of information, the exchanged data do not incorporate details such as sketches, constraints and features, which represent the designer's intent. As a result, the model can hardly be modified, and the original intent of the designer may be misunderstood. During the last five years various solutions have been proposed to solve the above-mentioned problems. The aim of this work is to investigate and discuss recent research trends in this topic.
Abstract: The overall aim of this paper is to present a web-based system to simulate the process of aircraft design in a competitive environment. The main purpose is to provide a tool to introduce design and simulation problems in engineering-design education. The structure of the system is a game in which a group of teams carries out the design of an aircraft, trying to optimize product performance while minimizing product and development costs. The web-based framework is a three-tier system developed on a pure Java platform, using the JavaServer Faces technology for the presentation layer and Hibernate/HSQL for the persistence layer.
Keywords: Competitive environments | Engineering education | Lean simulation | Product design | Web-based simulation
Abstract: In this work a web-based tool to simulate a competitive design environment is described. The specific application has been implemented to simulate a competitive tender for the design of a military aircraft. The teams which participate in the game are asked to design an aircraft which has to satisfy a given specification. The tender is won by the team which gains the highest score, as detailed by the game rules. The proposed application can be used as an active teaching tool through which students can be stimulated to develop their relational and practical capabilities. In particular, students can be encouraged to adopt design techniques that can be applied to the management of complex models that cannot be represented through an explicit mathematical description.
Abstract: This article describes a procedure based on reverse engineering and CFD analysis to numerically evaluate the aerodynamic performance of physical components. The case study is represented by the investigation of the aerodynamic behaviour of a racing car air-box. An existing air-box has been digitised using a proprietary system which enables the acquisition of a point cloud of the surface of the object. A procedure has been developed to obtain a CAD model suitable for CFD analysis from the raw 3D data. A computational fluid-dynamics analysis has been performed using the digital model. The procedure described in this paper can be easily extended to the reconstruction and analysis of similar classes of shapes such as wings and profiles.
Keywords: Aerodynamic analysis | Geometric modelling | Reverse engineering
Abstract: This paper describes a method for the recognition of the semantics of parts (features) of a component from a pure geometric representation. It is suitable for verifying product life-cycle requirements from the early stages of the design process. The proposed method is appropriate for analysing B-rep geometric models, and it is not limited to models described by planar and cylindrical surfaces, but can handle several types of face shapes. In this work the concept of semanteme is introduced. A semanteme represents the minimal element of engineering meaning that can be recognised in a geometric model. The semantemes recognised in a part of the model, which are potentially of engineering significance, are used to associate an engineering meaning to the part. This approach gives wide flexibility to the proposed system, which is suitable for use in different application contexts, since it is possible to describe the reference context through the semantemes that the system can manage. In the paper the implemented prototype system is briefly described. The prototype system takes advantage of neutral interfaces that allow geometrical and topological information to be retrieved from a commercial CAD system.
Keywords: Feature recognition | Intermediate geometric model | Semantics recognition
Abstract: The presence of fillets, rounds, chamfers and grooves is typical of many real-world mechanical components. Such features introduce various topological and geometrical modifications into the model which may hamper the correct understanding of its main features. Another drawback which hinders the correct engineering evaluation of a part is that its geometrical and topological representation is not unique, since it depends both on the different procedures used to generate CAD models and on the different internal representations of the geometric kernels used by CAD packages. The overall aim of this work is to define an intermediate representation scheme which incorporates information about minor features such as chamfers, fillets, rounds and grooves into a graph-based representation and overcomes the problem of non-uniqueness in the geometric representation. The representation scheme discussed here takes as input a solid model in B-rep form and provides a description of a part at a higher level of abstraction in comparison with the raw B-rep description. A procedure for secondary feature recognition is also described in this paper. The proposed algorithm consists of two steps in which both topological/geometrical properties and dimensional attributes are investigated in order to achieve a correct recognition. The recognized secondary features are then inserted into the model as labels which qualify the relationships between the faces of the graph representation. The resulting modelling scheme can be conveniently used for primary feature recognition as well as for other engineering analysis/simulation purposes.
Abstract: Collaborative solid modelling among distributed designers is becoming an effective option for product design. Nevertheless, the available solutions usually rely on proprietary systems and formats and do not enable heterogeneous collaboration. The overall aim of the work presented in this paper is to introduce a framework for heterogeneous collaboration. The proposed solution is based on the definition of an intermediate neutral modelling language capable of capturing the modelling activities carried out by distributed users. The described system is based on the XML Schema technology.