6 results for Deep-focus earthquake
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In this thesis we developed solutions to common issues affecting widefield microscopes, addressing the problem of intensity inhomogeneity in an image and tackling two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or deep 3D objects. First, we cope with the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular, we proposed, for both light and fluorescence microscopy, non-parametric multi-image methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working online. Starting from a set of overlapping images acquired manually, we validated a fast registration approach to accurately stitch the images together. Finally, we worked to virtually extend the field of view of the camera in the third dimension, with the purpose of reconstructing a single image that is completely in focus from objects having a relevant depth or lying in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. In order to compare the outcome of existing methods, different standard metrics are commonly used in the literature. However, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we proved that the approach we developed performs better in both synthetic and real cases.
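The multi-image flat-field idea described above can be sketched in a few lines. This is a minimal illustration, not the thesis's actual method: it assumes a stack of grayscale frames of the same sample taken at different positions (here, synthetic), estimates the vignetting function as the per-pixel median across the stack, and divides it out.

```python
import numpy as np

def estimate_vignetting(frames):
    """Non-parametric vignetting estimate: per-pixel median across many
    frames acquired at different sample positions. Moving content averages
    out, leaving the fixed shading pattern of the optics."""
    v = np.median(frames, axis=0)
    return v / v.max()  # normalize so the brightest pixel equals 1

def flat_field_correct(frame, vignetting, eps=1e-6):
    """Divide out the estimated shading to obtain a flat-field image."""
    return frame / (vignetting + eps)

# Toy data: a uniform scene of intensity 100 attenuated toward the borders.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
shading = 1.0 - 0.5 * (((y - h / 2) ** 2 + (x - w / 2) ** 2)
                       / ((h / 2) ** 2 + (w / 2) ** 2))
frames = np.stack([100.0 * shading for _ in range(10)])

v = estimate_vignetting(frames)
corrected = flat_field_correct(frames[0], v)
```

After correction the toy frame is essentially uniform again, since the estimated shading matches the one applied; on real data the median over many displaced acquisitions plays the same role.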
Abstract:
The interaction between disciplines in the study of human population history is of primary importance, profiting from the biological and cultural characteristics of humankind. In fact, data from genetics, linguistics, archaeology and cultural anthropology can be combined to allow for a broader research perspective. This multidisciplinary approach is here applied to the study of the prehistory of sub-Saharan African populations: in this continent, where Homo sapiens originally began its evolution and diversification, understanding the patterns of human variation has a crucial relevance. In this dissertation, molecular data are interpreted and complemented with a major contribution from linguistics: linguistic data are compared to the genetic data, and the research questions are contextualized within a linguistic perspective. In the four articles presented, we analyze Y chromosome SNP and STR profiles and full mtDNA genomes on a representative number of samples to investigate key questions of African human variability. These questions address i) the amount of genetic variation on a continental scale and the effects of the widespread migration of Bantu speakers, ii) the extent of ancient population structure that has been lost in present-day populations, iii) the colonization of the southern edge of the continent together with the degree of population contact/replacement, and iv) the prehistory of the diverse Khoisan ethnolinguistic groups, traditionally understudied despite representing one of the most ancient divergences of the modern human phylogeny. Our results uncover a deep level of genetic structure within the continent and a multilayered pattern of contact between populations. These case studies represent a valuable contribution to the debate on our prehistory and open up further research threads.
Abstract:
Despite the many issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computation capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and it mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture.
By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
Abstract:
Although the debate over what data science is has a long history and has not yet reached a complete consensus, data science can be summarized as the process of learning from data. Guided by this vision, this thesis presents two independent data science projects developed in the scope of multidisciplinary applied research. The first part analyzes fluorescence microscopy images typically produced in life science experiments, where the objective is to count how many marked neuronal cells are present in each image. Aiming to automate the task to support research in the area, we propose a neural network architecture tuned specifically for this use case, cell ResUnet (c-ResUnet), and discuss the impact of alternative training strategies in overcoming particular challenges of our data. The approach provides good results in terms of both detection and counting, with performance comparable to the interpretation of human operators. As a meaningful addition, we release the pre-trained model and the Fluorescent Neuronal Cells dataset, which collects pixel-level annotations of where neuronal cells are located. In this way, we hope to help future research in the area and foster innovative methodologies for tackling similar problems. The second part deals with the problem of distributed data management in the context of LHC experiments, with a focus on supporting ATLAS operations concerning data transfer failures. In particular, we analyze error messages produced by failed transfers and propose a Machine Learning pipeline that leverages the word2vec language model and K-means clustering. This provides groups of similar errors that are presented to human operators as suggestions of potential issues to investigate. The approach is demonstrated on one full day of data, showing a promising ability to understand the message content and provide meaningful groupings, in line with incidents previously reported by human operators.
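The clustering step of such a pipeline can be sketched as follows. This is a self-contained toy illustration, not the thesis's pipeline: the error messages are invented, simple bag-of-words vectors stand in for the word2vec embeddings, and a small hand-rolled Lloyd's k-means (with deterministic initialization from the first k points) replaces a library implementation.

```python
import numpy as np

def vectorize(messages):
    """Bag-of-words vectors: one dimension per distinct token."""
    vocab = sorted({tok for msg in messages for tok in msg.lower().split()})
    index = {tok: i for i, tok in enumerate(vocab)}
    vecs = np.zeros((len(messages), len(vocab)))
    for r, msg in enumerate(messages):
        for tok in msg.lower().split():
            vecs[r, index[tok]] += 1
    return vecs

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm: assign each point to the nearest
    centroid, then recompute each centroid as its cluster mean."""
    centroids = X[:k].copy()  # deterministic init for this sketch
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical transfer-failure messages from two obvious error families.
messages = [
    "checksum mismatch at destination",
    "connection timeout to remote host",
    "checksum mismatch at destination site",
    "connection timeout to remote server",
]
labels = kmeans(vectorize(messages), k=2)
```

Messages sharing most of their tokens end up in the same cluster, which is the grouping an operator would then inspect; the embedding step matters in practice because word2vec also groups messages that use different but related wording.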
Abstract:
The term Artificial Intelligence has acquired a lot of baggage since its introduction, and in its current incarnation it is synonymous with Deep Learning. The sudden availability of data and computing resources has opened the gates to myriad applications. Not all are created equal though, and problems might arise especially in fields not closely related to the tasks that pertain to the tech companies that spearheaded DL. The perspective of practitioners seems to be changing, however. Human-Centric AI has emerged in the last few years as a new way of thinking about DL and AI applications from the ground up, with special attention to their relationship with humans. The goal is designing a system that can gracefully integrate into already established workflows, as in many real-world scenarios AI may not be good enough to completely replace humans. Often this replacement may even be unneeded or undesirable. Another important perspective comes from Andrew Ng, a DL pioneer, who recently started shifting the focus of development from “better models” towards better, and smaller, data. He calls this approach Data-Centric AI. Without downplaying the importance of pushing the state of the art in DL, we must recognize that if the goal is creating a tool for humans to use, more raw performance may not translate into more utility for the final user. A Human-Centric approach is compatible with a Data-Centric one, and we find that the two overlap nicely when human expertise is used as the driving force behind data quality. This thesis documents a series of case studies where these approaches were employed, to different extents, to guide the design and implementation of intelligent systems. We found that human expertise proved crucial in improving datasets and models. The last chapter includes a slight deviation, with studies on the pandemic, still preserving the human- and data-centric perspective.
Abstract:
The discovery of scaling relations between the mass of the SMBH and some key physical properties of the host galaxy suggests that the growth of the SMBH and that of the galaxy are coupled, with the AGN activity and the star-formation (SF) processes influencing each other. Although the mechanisms of this co-evolution are still a matter of debate, all scenarios agree that a key phase of the co-evolution is represented by the obscured accretion phase. This phase is the least studied, mostly due to the challenge of detecting and recognizing such obscured AGN. My thesis aims at investigating the AGN-galaxy co-evolution paradigm by identifying and studying AGN in the obscured accretion phase. The study of obscured AGN is key to our understanding of the feedback processes and of the mutual influence of the SF and the AGN activity. Moreover, these obscured and elusive AGN are needed to explain the X-ray background spectrum and to reconcile the measurements and the theoretical predictions of the BH accretion rate density. In this thesis, we first investigate the synergies between IR and X-ray missions in detecting and characterizing AGN, with a particular focus on the most obscured ones. We exploited UV/optical emission lines to select high-redshift obscured AGN at cosmic noon, where the highest SFR density and BH accretion rate density are expected. We provide X-ray spectral analysis and UV-to-far-IR SED fitting. We show that our samples host a significant fraction of very obscured sources; many of these are highly accreting. Finally, we performed a thorough investigation of a galaxy at z~5 with unusual and peculiar features, which led us to identify a second, extremely young population of stars and hidden AGN activity.