8 results for Classification image technique
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of galaxies (spectral features, colours, morphological indices), and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its high reliability in the determination of redshifts and spectral properties, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), exploiting the bimodal properties of galaxies (spectral, photometric and morphological) separately and then combining the three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which defines the galaxy populations by exploiting their natural global bimodality, considering simultaneously up to 8 different properties. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows the classification of galaxies to be defined with smaller uncertainties and adds the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification, such as the classification cube presented in the first part of this work. The PCA+UFP method can easily be applied to different datasets: it does not rely on the nature of the data, and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two classification cluster definitions is very high. ``Early'' and ``late'' type galaxies are well defined by the spectral, photometric and morphological properties, both when these are considered separately and then combined (classification cube) and when they are treated as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of the transition mass $M_{\mathrm{cross}}$ is in good agreement with other values in the literature.
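As an illustration of the statistical machinery involved, the following Python sketch combines a PCA projection with a simple fuzzy c-means clustering as a stand-in for the Unsupervised Fuzzy Partition step; the synthetic data, the two-component projection and the fuzziness exponent m = 2 are assumptions for the example, not the thesis' actual configuration.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means: returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))      # soft memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / dist ** (2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Synthetic "galaxies": two overlapping clouds in an 8-dimensional property space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 0.7, (500, 8)),              # blue-cloud-like objects
               rng.normal(+1.0, 0.7, (500, 8))])             # red-clump-like objects
X = StandardScaler().fit_transform(X)
pcs = PCA(n_components=2).fit_transform(X)                   # work in the principal-component plane
centres, U = fuzzy_cmeans(pcs, n_clusters=2)
labels = U.argmax(axis=1)        # hard labels if needed; U itself keeps the fuzzy memberships

Unlike a hard classification, the membership matrix U retains how confidently each object belongs to the ``early'' or ``late'' cluster, which is the property the abstract credits with averaging out single measurement errors.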
Abstract:
In recent years a great deal of effort has been put into the development of new techniques for automatic object classification, also because of their impact on applications such as medical imaging and driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition, the Tensor-Train (TT) decomposition. The use of tensor approaches preserves the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, the Tensor-Train decomposition, unlike other tensor decompositions, does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition, to find basis vectors used to classify a new object. The second model is a tensor dictionary learning model, based on the TT decomposition, in which the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
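To make the decomposition concrete, here is a minimal Python sketch of the standard TT-SVD construction: a dense tensor is split into a train of three-way cores by repeated truncated SVDs, and the tolerance-based truncation is what provides the compression mentioned above. The random test tensor and the relative tolerance are illustrative; this is a generic TT-SVD, not the classification models proposed in the thesis.

import numpy as np

def tt_svd(tensor, rel_tol=1e-8):
    """Decompose `tensor` into TT cores G_k of shape (r_{k-1}, n_k, r_k) via sequential SVDs."""
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(S > rel_tol * S[0])))          # truncation: this is where compression happens
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train of cores back into a full tensor (for checking the error)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# A nearly low-rank 4-way test tensor, e.g. a small stack of image patches.
rng = np.random.default_rng(0)
A = np.einsum("i,j,k,l->ijkl", *[rng.standard_normal(8) for _ in range(4)])
A = A + 1e-3 * rng.standard_normal(A.shape)
cores = tt_svd(A, rel_tol=1e-2)
print([c.shape for c in cores])                              # the TT ranks reveal the compression
print(np.linalg.norm(tt_reconstruct(cores) - A) / np.linalg.norm(A))

Because the data are stored as small cores rather than as the full tensor, memory grows with the TT ranks instead of exponentially with the number of modes, which is the point made in the abstract about the curse of dimensionality.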
Abstract:
The abundance of visual data and the push for robust AI are driving the need for automated visual sensemaking. Computer Vision (CV) faces growing demand for models that can discern not only what images "represent," but also what they "evoke." This is a demand for tools that mimic human perception at a high semantic level, categorizing images based on concepts like freedom, danger, or safety. However, automating this process is challenging due to entropy, scarcity, subjectivity, and ethical considerations. These challenges not only impact performance but also underscore the critical need for interpretability. This dissertation focuses on abstract concept-based (AC) image classification, guided by three technical principles: situated grounding, performance enhancement, and interpretability. We introduce ART-stract, a novel dataset of cultural images annotated with ACs, serving as the foundation for a series of experiments across four key domains: assessing the effectiveness of the end-to-end DL paradigm, exploring cognitive-inspired semantic intermediaries, incorporating cultural and commonsense aspects, and pursuing neuro-symbolic integration of sensory-perceptual data with cognitive-based knowledge. Our results demonstrate that integrating CV approaches with semantic technologies yields methods that surpass the current state of the art in AC image classification, outperforming the end-to-end deep vision paradigm. The results emphasize the role semantic technologies can play in developing systems that are both effective and interpretable, through capturing, situating, and reasoning over knowledge related to visual data. Furthermore, this dissertation explores the complex interplay between technical and socio-technical factors. By merging technical expertise with an understanding of human and societal aspects, we advocate for responsible labeling and training practices in visual media. These insights and techniques not only advance efforts in CV and explainable artificial intelligence but also propel us toward an era of AI development that harmonizes technical prowess with a deep awareness of its human and societal implications.
Abstract:
The subject of this doctoral dissertation is the definition of a new methodology for the morphological and morphometric study of fossilized human teeth, which aims to contribute to the reconstruction of human evolutionary history and is intended to be extended to the different species of fossil hominids. Standardized investigative methodologies are lacking, both for the orientation of the teeth under study and for the analyses that can be carried out once the teeth are oriented. The opportunity to standardize a primary analysis methodology is furnished by the study of certain early Neanderthal and pre-Neanderthal molars recovered in two caves in southern Italy [Grotta Taddeo (Taddeo Cave) and Grotta del Poggio (Poggio Cave), near Marina di Camerata, Campania]. To these we add other molars of Neanderthals and of Upper Paleolithic modern man, specifically scanned in the paleoanthropology laboratory of the University of Arkansas (Fayetteville, Arkansas, USA), in order to increase the paleoanthropological sample and thereby make the final results of the analyses more significant. The new analysis methodology is structured as follows. 1. Standardization of an orientation system for first molars (upper and lower), starting from a scan of a sample of 30 molars belonging to modern man (15 lower M1 and 15 upper M1), the definition of landmarks, the comparison of various systems and the choice of an orientation system for each of the two dental typologies. 2. The definition of an analysis procedure that considers only the first 4 millimeters of the dental crown starting from the collar: 5 sections parallel to the plane according to which the tooth has been oriented are taken, spaced 1 millimeter apart. The intention is to devise a method that allows the differentiation of fossil species even in the presence of worn teeth. 3. Results and conclusions. The new approach to the study of teeth provides a considerable quantity of information that can be better evaluated by enlarging the fossil sample. It has proved to be a valid tool for evolutionary classification, allowing us to differentiate the Neanderthal sample from that of modern man. In particular, the molars of Grotta Taddeo, whose species of origin it had not previously been possible to determine with certainty, are classified by the present research as Neanderthal.
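A rough computational sketch of the sectioning step (point 2) might look as follows in Python: once a scanned molar is oriented with the cervical (collar) plane at z = 0, five virtual sections at z = 0-4 mm are extracted and each outline is measured. The point-cloud input, the slice thickness and the convex-hull area/perimeter are illustrative simplifications, not the actual morphometric measurements used in the dissertation.

import numpy as np
from scipy.spatial import ConvexHull

def crown_sections(points_mm, n_sections=5, spacing=1.0, thickness=0.05):
    """points_mm: (N, 3) oriented tooth-surface points, z measured in mm from the collar plane."""
    results = []
    for k in range(n_sections):
        z = k * spacing
        band = points_mm[np.abs(points_mm[:, 2] - z) < thickness][:, :2]
        if len(band) < 3:
            results.append({"z_mm": z, "area_mm2": float("nan"), "perimeter_mm": float("nan")})
            continue
        hull = ConvexHull(band)          # for 2D input, .volume is the area and .area is the perimeter
        results.append({"z_mm": z, "area_mm2": hull.volume, "perimeter_mm": hull.area})
    return results

# Hypothetical oriented "molar": a crude conical point cloud, coordinates in mm.
rng = np.random.default_rng(0)
z = rng.uniform(0.0, 6.0, 20000)
radius = 0.5 * (8.0 - z) * np.sqrt(rng.uniform(0.0, 1.0, z.size))
theta = rng.uniform(0.0, 2.0 * np.pi, z.size)
cloud = np.column_stack([radius * np.cos(theta), radius * np.sin(theta), z])
for section in crown_sections(cloud):
    print(section)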
Abstract:
Satellite remote sensing has proved to be an effective support for the timely detection and monitoring of marine oil pollution, which is mainly due to illegal ship discharges. In this context, we have developed a new methodology and technique for optical oil spill detection that makes use of MODIS L2 and MERIS L1B satellite top-of-atmosphere (TOA) reflectance imagery, for the first time in a highly automated way. The main idea is to combine the wide swaths and short revisit times of optical sensors with the SAR observations generally used in oil spill monitoring, in order to overcome the reduced coverage and long revisit time of SAR over the monitored area. This is now possible thanks to the higher spatial resolution of MODIS and MERIS with respect to older sensors (250-300 m vs. 1 km), which allows the identification of the smaller spills deriving from illicit discharges at sea. The procedure to obtain identifiable spills in optical reflectance images involves the removal of natural oceanic and atmospheric variability, in order to enhance the oil-water contrast; image clustering, whose purpose is to segment any oil spill present in the image; and, finally, the application of a set of criteria to eliminate features that merely resemble spills (look-alikes). The final result is a classification of oil spill candidate regions by means of a score based on the above criteria.
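The detection chain can be illustrated with a toy Python sketch: large-scale natural variability is suppressed with a smooth background estimate, the residual reflectance is clustered, and the darkest connected regions are scored with simple shape criteria. The synthetic scene, the thresholds and the scoring weights are assumptions for the example, not the calibrated criteria developed in the thesis.

import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def spill_candidates(toa_reflectance, background_sigma=15, n_clusters=3, min_area_px=20):
    """toa_reflectance: 2D TOA reflectance band (e.g. a 250 m MODIS channel over sea)."""
    # 1. remove the large-scale oceanic/atmospheric background to enhance the oil-water contrast
    background = ndimage.gaussian_filter(toa_reflectance, background_sigma)
    anomaly = toa_reflectance - background

    # 2. cluster the residual reflectance and keep the darkest cluster as potential oil
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(anomaly.reshape(-1, 1))
    dark_mask = (km.labels_ == np.argmin(km.cluster_centers_.ravel())).reshape(anomaly.shape)

    # 3. label connected dark regions and score them with simple look-alike elimination criteria
    regions, n_regions = ndimage.label(dark_mask)
    candidates = []
    for rid in range(1, n_regions + 1):
        mask = regions == rid
        area = int(mask.sum())
        if area < min_area_px:                                # too small to be a detectable spill
            continue
        contrast = float(anomaly[mask].mean())                # negative = darker than the background
        ys, xs = np.nonzero(mask)
        aspect = (ys.max() - ys.min() + 1) / (xs.max() - xs.min() + 1)
        score = -contrast + 0.1 * max(aspect, 1.0 / aspect)   # darker and more elongated -> higher score
        candidates.append({"region": rid, "area_px": area, "score": score})
    return sorted(candidates, key=lambda c: c["score"], reverse=True)

# Synthetic scene: smooth background variability plus a small dark slick-like streak.
rng = np.random.default_rng(3)
scene = 0.05 + 0.01 * rng.standard_normal((200, 200))
scene += 0.02 * np.sin(np.linspace(0.0, 3.0, 200))[None, :]
scene[90:96, 40:140] -= 0.03
print(spill_candidates(scene)[:3])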
Abstract:
This thesis investigates two distinct research topics. The main topic (Part I) is the computational modelling of cardiomyocytes derived from human stem cells, both embryonic (hESC-CM) and induced-pluripotent (hiPSC-CM). The aim of this research line is to develop models of the electrophysiology of hESC-CMs and hiPSC-CMs in order to integrate the available experimental data and to obtain in-silico models that can be used for studying, formulating new hypotheses about, and planning experiments on aspects not yet fully understood, such as the maturation process, the functionality of Ca2+ handling, or why the hESC-CM/hiPSC-CM action potentials (APs) show some differences with respect to APs from adult cardiomyocytes. Chapter I.1 introduces the main concepts about hESC-CMs/hiPSC-CMs, the cardiac AP, and computational modelling. Chapter I.2 presents the hESC-CM AP model, able to simulate the maturation process through two developmental stages, Early and Late, based on experimental and literature data. Chapter I.3 describes the hiPSC-CM AP model, able to simulate the ventricular-like and atrial-like phenotypes. This model was used to assess which currents are responsible for the differences between the ventricular-like AP and the adult ventricular AP. The secondary topic (Part II) is the study of texture descriptors for biological image processing. Chapter II.1 provides an overview of important texture descriptors such as the Local Binary Pattern and the Local Phase Quantization; moreover, the non-binary coding and the multi-threshold approach are introduced here. Chapter II.2 shows that the non-binary coding and the multi-threshold approach improve the classification performance on images of cellular/sub-cellular parts taken from six datasets. Chapter II.3 describes the case study of the classification of indirect immunofluorescence images of HEp-2 cells, used for the antinuclear antibody clinical test. Finally, the general conclusions are reported.
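For concreteness, a minimal Python sketch of the classic Local Binary Pattern descriptor discussed in Part II is given below: each pixel is encoded by comparing its eight neighbours with the centre, and the histogram of codes is the texture feature. The single-threshold comparison shown here is the standard LBP; the non-binary coding and multi-threshold variants studied in the thesis generalise exactly this comparison step.

import numpy as np

def lbp_codes(img):
    """img: 2D grayscale array; returns the 8-bit LBP code of every interior pixel."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    # 8 neighbours, clockwise from the top-left corner; each contributes one bit of the code
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalized histogram of LBP codes: the texture descriptor fed to a classifier."""
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# Toy usage on a random image; real inputs would be the grayscale microscopy images.
rng = np.random.default_rng(7)
img = rng.integers(0, 256, (64, 64)).astype(np.uint8)
descriptor = lbp_histogram(img)
print(descriptor.shape, descriptor.sum())                     # (256,) 1.0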
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, accurate quantification of the perfusion parameters is still hampered by several sources of variability, related for example to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the causes of variability of this technique. First, analyses were carried out to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the maximum slope method and on the dual-input one-compartment model. Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; however, the slope method is always applicable in a clinical context. The variability related to TAC processing in the application of the slope method was then analysed: comparison of the results with manual selection made it possible to identify the best automatic algorithm for computing BFa. The consistency of a Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion maps was analysed. The ROI approach and the map approach provide consistent BFa values, which means that the pixel-by-pixel algorithm gives reliable quantitative results; also in the pixel-by-pixel approach, the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
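As an illustration of the maximum slope method mentioned above, the following Python sketch estimates BFa as the peak gradient of the tissue TAC divided by the peak of the arterial input TAC. The smoothing window, the unit conversion factor and the synthetic curves are assumptions for the example; the thesis compares several automatic ways of selecting the slope, which this sketch does not reproduce.

import numpy as np

def bfa_max_slope(t_s, tissue_hu, aorta_hu, smooth=3):
    """t_s: acquisition times [s]; tissue_hu / aorta_hu: baseline-subtracted TACs in HU."""
    kernel = np.ones(smooth) / smooth
    tissue = np.convolve(tissue_hu, kernel, mode="same")      # mild smoothing before differentiation
    slope = np.gradient(tissue, t_s)                          # HU / s
    bfa_per_s = slope.max() / aorta_hu.max()                  # maximum slope / arterial peak -> 1 / s
    return bfa_per_s * 6000.0                                 # assumed conversion to mL / min / 100 mL

# Hypothetical TACs sampled every 2 s: a sharp arterial peak followed by a slower tissue enhancement.
t = np.arange(0.0, 60.0, 2.0)
aorta = 300.0 * np.exp(-((t - 14.0) / 6.0) ** 2)
tissue = 60.0 * np.exp(-((t - 24.0) / 10.0) ** 2)
print(round(bfa_max_slope(t, tissue, aorta), 1))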
Abstract:
In Antiquity, research on technique led to the first ingenious devices, such as apparatuses that perform a series of actions by means of external stimuli and hidden mechanisms. Political and religious bodies quickly grasped the communicative power of these machines, becoming the privileged promoters and patrons of their production. The Sasanian Empire (224-650) is no exception: the Persian sovereigns devoted, at least in the late period, great attention to the design and deployment of such learned devices. Likewise, a century later, within the Islamic caliphate, the Abbasids (750-1258) appear to have surrounded themselves with devices of this kind. The continuity between the two empires in several domains, from political theory to administration, is well known. However, the question of the reuse of the ancient, and in particular Sasanian, technical and scientific heritage by the Abbasid court remains largely unexplored. The study of a corpus of sources as vast as it is heterogeneous, bringing together historiographical, geographical, poetic and adab works, as well as scientific and technical treatises in several languages, makes it possible to analyse different aspects of the production and political use of machines. At the Sasanian court, as at the Abbasid court, the machine proves to be a preferred vehicle for the representation and diffusion of political ideology. Through its public staging, it contributes substantially to the definition of the space of power, taking part in the creation of an image of the court as a microcosm at the heart of which the King of Kings, and later the caliph, occupied the cardinal role of undisputed master of the world. The continuity between the Sasanian and Abbasid empires in the technical domain is therefore not limited to a recovery of knowledge, but also takes the form of a genuine reactivation of a symbolic heritage.