5 results for Deep integration
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at estimating planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft, and coherently retransmitted back to the ground. Once solar corona and interplanetary plasma noise has been removed from the Doppler data, the Earth troposphere remains one of the main error sources in the tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based on a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. To support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the goal of an end-to-end Allan deviation of the radio link on the order of 3×10⁻¹⁵ at 1000 s integration time. ESA's future BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute 8×10⁻¹⁵ to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical models that reconstruct the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay. To satisfy the stringent MORE requirements, the short time-scale variations of the Earth troposphere water vapor content must be calibrated at ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel with these high-performance instruments, ESA ground stations should be upgraded with media calibration systems capable of calibrating both troposphere path delay components (dry and wet) at the sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide continuous troposphere calibration by processing GNSS data acquired at each complex by the dual-frequency receivers already installed for station location purposes.
The work presented here outlines the troposphere calibration technique supporting both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables, and error sources, Chapter 2 investigates the troposphere path delay in detail, reporting the estimation techniques and the state of the art of the ESA and NASA troposphere calibrations. Chapter 3 analyses the status and performance of NASA's Advanced Media Calibration (AMC) system with reference to the Cassini data analysis. Chapter 4 describes the current release of a GNSS software (S/W) package developed to estimate troposphere calibrations for ESA S/C navigation purposes. During the development phase of the S/W, a test campaign was undertaken to evaluate its performance. A description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of microwave radiometers to be used to support radio science experiments; the analysis was carried out on radiometric measurements from the ESA/ESTEC instruments installed in Cabauw (NL), compared against the MORE requirements. Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account during the development phase of future instrumentation.
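A minimal Python sketch of the figure of merit quoted above: the overlapping Allan deviation of two-way Doppler fractional-frequency residuals, evaluated at 1000 s integration time. The sample interval, noise level, and variable names are illustrative assumptions, not values from the thesis.

import numpy as np

def overlapping_adev(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y,
    at averaging factor m (averaging time tau = m * sample interval)."""
    c = np.cumsum(np.concatenate(([0.0], np.asarray(y, dtype=float))))
    avg = (c[m:] - c[:-m]) / m        # running m-sample means
    d = avg[m:] - avg[:-m]            # differences of means one tau apart
    return np.sqrt(0.5 * np.mean(d ** 2))

# Two-way Doppler gives y = delta_f / f0 = -2 * range_rate / c_light;
# here we simply synthesise white frequency noise at an assumed level.
rng = np.random.default_rng(0)
y = 1e-13 * rng.standard_normal(200_000)   # 1 s samples (assumed)
print(f"ADEV at 1000 s: {overlapping_adev(y, 1000):.1e} "
      f"(MORE troposphere budget: 8e-15)")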
Abstract:
Psychological characterisation of the somatosensory system often focusses on minimal units of perception, such as detection, localisation, and magnitude estimation of single events. Research on how multiple simultaneous stimuli are aggregated to create integrated, synthetic experiences is rarer. This thesis aims to shed light on the mechanisms underlying the integration of multiple simultaneous stimuli, within and between different sub-modalities of the somatosensory system. First, we investigated the ability of healthy individuals to perceive the total intensity of composite somatosensory patterns. We found that the overall intensity of tactile, cold, or warm patterns was systematically overestimated when the multiple simultaneous stimuli had different intensities. Perception of somatosensory totals was biased towards the most salient element in the pattern. Furthermore, we demonstrated that peak-biased aggregation is a genuine perceptual phenomenon which does not rely on the discrimination of the parts, but is rather based on the salience of each stimulus. Next, we studied a classical thermal illusion to assess participants' ability to localise thermal stimuli delivered on the fingers either in isolation, or in uniform and non-uniform patterns. We found that despite a surprisingly high accuracy in reporting the location of a single stimulus, when participants were presented with non-uniform patterns, their ability to identify the thermal state of a specific finger was completely abolished. Lastly, we investigated the perceptual and neural correlates of thermo-nociceptive interaction during the presentation of multiple thermal stimuli. We found that inhibition of pain by warmth was independent of both the position and the number of thermal stimuli administered. Our results suggest that nonlinear integration of multiple stimuli, within and between somatosensory sub-modalities, may be an efficient way by which the somatosensory system synthesises the complexity of reality, providing an extended and coherent perception of the world, in spite of its deep bandwidth limitations.
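As a toy illustration of peak-biased aggregation, one can model the perceived total as a mixture of the true sum and a peak-dominated estimate. The functional form and the weight below are hypothetical, chosen only to reproduce the qualitative finding (overestimation for non-uniform patterns, veridical totals for uniform ones); this is a sketch, not the model used in the thesis.

def perceived_total(intensities, w_peak=0.3):
    """Mix the true sum with an estimate dominated by the most
    salient (peak) element; w_peak is an illustrative assumption."""
    true_sum = sum(intensities)
    peak_estimate = max(intensities) * len(intensities)
    return (1 - w_peak) * true_sum + w_peak * peak_estimate

# Uniform pattern: the peak equals the mean, so the total is veridical.
print(perceived_total([2.0, 2.0, 2.0]))   # 6.0
# Non-uniform pattern: overestimated, pulled toward the peak.
print(perceived_total([1.0, 2.0, 3.0]))   # 6.9 > 6.0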
Abstract:
In the last decades, Artificial Intelligence has witnessed multiple breakthroughs in deep learning. In particular, purely data-driven approaches have enabled a wide variety of successful applications due to the large availability of data. Nonetheless, the integration of prior knowledge is still required to compensate for specific issues like lack of generalization from limited data, fairness, robustness, and biases. In this thesis, we analyze the methodology of integrating knowledge into deep learning models in the field of Natural Language Processing (NLP). We start by remarking on the importance of knowledge integration. We highlight the possible shortcomings of these approaches and investigate the implications of integrating unstructured textual knowledge. We introduce Unstructured Knowledge Integration (UKI) as the process of integrating unstructured knowledge into machine learning models. We discuss UKI in the field of NLP, where knowledge is represented in a natural language format. We identify UKI as a complex process comprising multiple sub-processes, different knowledge types, and knowledge integration properties to guarantee. We remark on the challenges of integrating unstructured textual knowledge and draw connections to well-known research areas in NLP. We provide a unified vision of structured knowledge extraction (KE) and UKI by identifying KE as a sub-process of UKI. We investigate some challenging scenarios where structured knowledge is not a feasible prior assumption and formulate each task from the point of view of UKI. We adopt simple yet effective neural architectures and discuss the challenges of such an approach. Finally, we identify KE as a form of symbolic representation. From this perspective, we remark on the need to define sophisticated UKI processes to verify the validity of knowledge integration. To this end, we foresee frameworks capable of combining symbolic and sub-symbolic representations for learning as a solution.
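A toy Python sketch of one UKI sub-process: retrieving a relevant piece of unstructured textual knowledge and exposing it to the model as extra input context, with no structured extraction step. The overlap-based scoring, the prompt format, and the example knowledge base are illustrative assumptions, not the architectures used in the thesis.

def token_overlap(a: str, b: str) -> int:
    """Crude relevance score: number of shared lowercase tokens."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def integrate_knowledge(query: str, knowledge_base: list[str]) -> str:
    # Select the knowledge snippet most related to the query...
    best = max(knowledge_base, key=lambda k: token_overlap(query, k))
    # ...and concatenate it with the input, leaving the downstream
    # model to learn how to exploit the unstructured text.
    return f"knowledge: {best} input: {query}"

kb = [
    "Penguins are flightless birds of the Southern Hemisphere.",
    "Transformers process tokens in parallel using self-attention.",
]
print(integrate_knowledge("Can penguins fly?", kb))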
Abstract:
The aim of this dissertation is to describe the methodologies required to design, operate, and validate the performance of ground stations dedicated to near and deep space tracking, as well as the models developed to process the signals acquired, from raw data to the output parameters of the orbit determination of spacecraft. This work is framed in the context of lunar and planetary exploration missions by addressing the challenges in receiving and processing radiometric data for radio science investigations and navigation purposes. These challenges include the design of an appropriate back-end to read, convert, and store the antenna voltages, the definition of appropriate methodologies for pre-processing, calibration, and estimation of radiometric data for the extraction of information on the spacecraft state, and the definition and integration of accurate models of the spacecraft dynamics to evaluate the quality of the recorded signals. Additionally, the experimental design of acquisition strategies to perform direct comparison between ground stations is described and discussed. In particular, the evaluation of the differential performance between stations requires the design of a dedicated tracking campaign to maximize the overlap of the recorded datasets at the receivers, making it possible to correlate the received signals and isolate the contribution of the ground segment to the noise in the single link. Finally, in support of the methodologies and models presented, results from the validation and design work performed on the Deep Space Network (DSN) affiliated nodes DSS-69 and DSS-17 will also be reported.
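A minimal sketch of the differential-performance idea described above: when two stations track the same spacecraft over an overlapping pass, the common signal (spacecraft dynamics, propagation media) cancels in the difference of the Doppler residuals, leaving the combined ground-segment noise. All noise levels and series below are synthetic assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 3600                                            # 1 h of 1 s samples (assumed)
common = np.cumsum(1e-4 * rng.standard_normal(n))   # shared signal and media
res_a = common + 2e-3 * rng.standard_normal(n)      # station A residuals
res_b = common + 2e-3 * rng.standard_normal(n)      # station B residuals

diff = res_a - res_b                                # common mode cancels
# The variance of the difference is the sum of the two station variances,
# so for comparable stations each contributes var(diff) / 2.
per_station_rms = np.sqrt(np.var(diff) / 2)
print(f"estimated single-station noise RMS: {per_station_rms:.2e}")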
Abstract:
In medicine, innovation depends on a better knowledge of the mechanisms of the human body, which represents a complex system of multi-scale constituents. Unraveling the complexity underneath diseases proves to be challenging. A deep understanding of their inner workings requires dealing with a wealth of heterogeneous information. Exploring the molecular status and the organization of genes, proteins, and metabolites provides insights into what is driving a disease, from aggressiveness to curability. Molecular constituents, however, are only the building blocks of the human body and cannot currently tell the whole story of diseases. This is why attention is now growing towards the joint exploitation of multi-scale information, and holistic methods are drawing interest to address the problem of integrating heterogeneous data. The heterogeneity may derive from the diversity across data types and from the diversity within diseases. Here, four studies conducted data integration using custom-designed workflows that implement novel methods and views to tackle the heterogeneous characterization of diseases. The first study was devoted to determining shared gene regulatory signatures for onco-hematology, and it showed partial co-regulation across blood-related diseases. The second study focused on Acute Myeloid Leukemia and refined the unsupervised integration of genomic alterations, which turned out to better resemble clinical practice. In the third study, network integration for atherosclerosis demonstrated, as a proof of concept, the impact of network intelligibility when it comes to modelling heterogeneous data, which was shown to accelerate the identification of new potential pharmaceutical targets. Lastly, the fourth study introduced a new method to integrate multiple data types into a single heterogeneous latent representation, which facilitated the selection of important data types to predict the tumour stage of invasive ductal carcinoma. The results of these four studies laid the groundwork to ease the detection of new biomarkers, ultimately benefiting medical practice and the ever-growing field of Personalized Medicine.
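An illustrative Python sketch of integrating multiple data types into a single latent representation: standardize each data type, concatenate, and factorize jointly. A plain SVD stands in here for the heterogeneous-representation method introduced in the fourth study, which this does not reproduce; the modality names and dimensions are made up.

import numpy as np

rng = np.random.default_rng(0)
n_samples = 50
modalities = {
    "expression": rng.standard_normal((n_samples, 200)),
    "methylation": rng.standard_normal((n_samples, 300)),
    "clinical": rng.standard_normal((n_samples, 10)),
}

def standardize(x):
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9)

# Scale each block so that no single data type dominates the factorization.
blocks = [standardize(x) / np.sqrt(x.shape[1]) for x in modalities.values()]
joint = np.hstack(blocks)

# Shared latent factors across all data types (first 5 components).
u, s, _ = np.linalg.svd(joint, full_matrices=False)
latent = u[:, :5] * s[:5]
print(latent.shape)   # (50, 5): one integrated representation per sample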