864 results for Data analysis
Abstract:
The Gaia space mission is a major project for the European astronomical community. The processing and analysis of the huge data flow coming from Gaia is a challenging task, and is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of the different observing sites involved and the huge number of frames expected (≃100,000), it is essential to maintain maximum homogeneity in data quality, acquisition, and treatment. Particular care must be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and if necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few per cent with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. The field I was personally responsible for, however, was photometry, in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline for the pre-reduction of imaging SPSS data and the production of aperture-photometry catalogues ready to be used for further analysis. A series of semi-automated quality-control criteria are included in the pipeline at various levels, from pre-reduction to aperture photometry to light-curve production and analysis.
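As an illustration of the kind of aperture-photometry step such a pipeline automates, the following minimal Python sketch (using the photutils package; the file name, source positions, aperture radius, and quality threshold are hypothetical, not values from the thesis) extracts fluxes from a reduced frame and attaches a simple quality-control flag:

    from astropy.io import fits
    from photutils.aperture import CircularAperture, aperture_photometry

    # Load a pre-reduced (bias-subtracted, flat-fielded) frame; the file name is hypothetical.
    data = fits.getdata("spss_frame_reduced.fits")

    # Source positions (x, y) in pixels; in a real pipeline these come from a detection step.
    positions = [(512.3, 480.7), (130.1, 900.2)]
    apertures = CircularAperture(positions, r=8.0)  # aperture radius is an assumption

    catalog = aperture_photometry(data, apertures)

    # Simple quality-control flag: reject sources whose flux falls below an assumed threshold.
    catalog["qc_pass"] = catalog["aperture_sum"] > 1000.0
    print(catalog)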
Abstract:
The objective of this dissertation is to study the structure and behavior of the Atmospheric Boundary Layer (ABL) in stable conditions. This type of boundary layer is not yet completely understood, although it is very important for many practical applications, from forecast modeling to the atmospheric dispersion of pollutants. We analyzed data from the SABLES98 experiment (Stable Atmospheric Boundary Layer Experiment in Spain, 1998) and compared the behavior of these data with the Monin-Obukhov similarity functions for wind speed and potential temperature. Analyzing the vertical profiles of various variables, in particular the thermal and momentum fluxes, we identified two main contrasting structures describing two different states of the SBL: a traditional and an upside-down boundary layer. We determined the main features of these two states of the boundary layer in terms of vertical profiles of potential temperature and wind speed, turbulent kinetic energy, and fluxes, studying the time series and vertical structure of the atmosphere for two separate nights in the dataset, taken as case studies. We also developed an original classification of the SBL, in order to separate the influence of mesoscale phenomena from turbulent behavior, using the wind speed and the gradient Richardson number as parameters. We then compared these two formulations on the SABLES98 dataset, verifying their validity for different variables (wind speed and potential temperature, and their difference, at different heights) and with different stability parameters (ζ or Rg). Although these two classifications have completely different physical origins, we were able to find some common behavior, in particular under weak-stability conditions.
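For reference, the gradient Richardson number used here as a stability parameter follows the standard definition (textbook material, not a formula reproduced from the thesis):

    \mathrm{Ri}_g = \frac{(g/\bar{\theta})\,\partial\bar{\theta}/\partial z}{\left(\partial\bar{u}/\partial z\right)^2 + \left(\partial\bar{v}/\partial z\right)^2}

where θ̄ is the mean potential temperature and ū, v̄ are the mean horizontal wind components; large values indicate strong static stability relative to the shear that sustains turbulence.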
Abstract:
Supernovae are among the most energetic events occurring in the universe and are so far the only verified extrasolar source of neutrinos. As the explosion mechanism is still not well understood, recording a burst of neutrinos from such a stellar explosion would be an important benchmark for particle physics as well as for core-collapse models. The neutrino telescope IceCube is located at the geographic South Pole and monitors the Antarctic glacier for Cherenkov photons. Even though it was conceived for the detection of high-energy neutrinos, it is capable of identifying a burst of low-energy neutrinos ejected from a supernova in the Milky Way by exploiting the low photomultiplier noise in the Antarctic ice and extracting a collective rate increase. A signal Monte Carlo specifically developed for water Cherenkov telescopes is presented. With its help, we investigate how well IceCube can distinguish between core-collapse models and oscillation scenarios. In the second part, nine years of data taken with the IceCube precursor AMANDA are analyzed. Intensive data-cleaning methods are presented along with a background simulation. From the result, an upper limit on the expected occurrence of supernovae within the Milky Way is determined.
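A minimal sketch of the idea behind extracting a collective rate increase from many low-noise photomultipliers (all numbers, the injected excess, and the significance definition below are illustrative assumptions, not the detector's actual analysis):

    import numpy as np

    rng = np.random.default_rng(0)
    n_pmts, n_bins = 200, 600
    baseline = 500.0                       # assumed mean noise counts per PMT per time bin
    counts = rng.poisson(baseline, size=(n_pmts, n_bins)).astype(float)
    counts[:, 300:310] += 5.0              # inject a small collective excess (hypothetical signal)

    summed = counts.sum(axis=0)                          # collective rate over all PMTs
    mu, sigma = summed[:250].mean(), summed[:250].std()  # background from signal-free bins
    significance = (summed - mu) / sigma                 # deviation in units of background spread
    print("maximum significance:", significance.max())

The point of summing over channels is that a per-PMT excess far below the individual noise fluctuations becomes detectable in the collective rate.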
Abstract:
The discovery of the Cosmic Microwave Background (CMB) radiation in 1965 is one of the fundamental milestones supporting the Big Bang theory. The CMB is one of the most important sources of information in cosmology. The excellent accuracy of the recent CMB data from the WMAP and Planck satellites confirmed the validity of the standard cosmological model and set a new challenge for the data analysis processes and their interpretation. In this thesis we deal with several aspects and useful tools of the data analysis, focusing on their optimization in order to fully exploit the Planck data and contribute to the final published results. The issues investigated are: the change of coordinates of CMB maps using the HEALPix package; the problem of the aliasing effect in the generation of low-resolution maps; and the comparison of the Angular Power Spectrum (APS) extraction performance of the optimal QML method, implemented in the code called BolPol, and of the pseudo-Cl method, implemented in Cromaster. The QML method has then been applied to the Planck data at large angular scales to extract the CMB APS. The same method has also been applied to analyze the TT parity and Low Variance anomalies in the Planck maps, showing a consistent deviation from the standard cosmological model; the possible origins of these results are discussed. The Cromaster code, instead, has been applied to the 408 MHz and 1.42 GHz surveys, focusing on the analysis of the APS of selected regions of the synchrotron emission. The new generation of CMB experiments will be dedicated to polarization measurements, which require high-accuracy devices for separating the polarizations. Here a new technology, called Photonic Crystals, is exploited to develop a new polarization splitter device, and its performance is compared to that of the devices used nowadays.
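As a pointer to the kind of operation involved, a pseudo-Cl spectrum can be estimated from a HEALPix map in a few lines (a minimal sketch using the healpy package; the map file, trivial mask, resolution, and naive fsky correction are assumptions for illustration, and this is not the BolPol or Cromaster code):

    import numpy as np
    import healpy as hp

    nside = 64                              # hypothetical resolution
    m = hp.read_map("cmb_map.fits")         # hypothetical input, assumed to be at this nside
    mask = np.ones(hp.nside2npix(nside))    # trivial mask; a real analysis would cut the galaxy
    cl_pseudo = hp.anafast(m * mask, lmax=2 * nside)
    fsky = mask.mean()
    cl = cl_pseudo / fsky                   # crude fsky correction, not a full MASTER treatment
    print(cl[:10])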
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. The limitations of the standard approaches for large events emerge in this chapter; the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate, and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation to justify the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first is a threshold-based method which uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
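To make the threshold-based idea concrete, the sketch below flags stations whose real-time peak ground velocity exceeds a fixed threshold, outlining the potentially damaged zone (the coordinates, velocities, and threshold are invented for illustration and are not the thesis's calibrated parameters or its specific ground-motion measure):

    import numpy as np

    # Station coordinates (lon, lat) and real-time peak ground velocities in cm/s;
    # all values are invented for illustration.
    stations = np.array([[141.0, 38.3], [140.5, 37.9], [139.7, 35.7]])
    pgv = np.array([22.0, 9.5, 1.2])

    threshold = 5.0                      # assumed damage-potential threshold, not a calibrated value
    damaged = stations[pgv > threshold]  # stations flagged inside the potentially damaged zone
    print(damaged)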
Abstract:
The aging process is characterized by the progressive fitness decline experienced at all levels of physiological organization, from single molecules up to the whole organism. Studies have confirmed inflammaging, a chronic low-level inflammation, as a deeply intertwined partner of the aging process, which may provide the “common soil” upon which age-related diseases develop and flourish. Thus, although inflammation per se is a physiological process, it can rapidly become detrimental if it goes out of control, causing an excess of local and systemic inflammatory response, a striking risk factor for the elderly population. Developing interventions to counteract the establishment of this state is thus a top priority. Diet, among other factors, is a good candidate for regulating inflammation. Building on this consideration, the EU project NU-AGE is now trying to assess whether a Mediterranean diet, fortified for the needs of the elderly population, may help in modulating inflammaging. To do so, NU-AGE enrolled a total of 1250 subjects, half of whom followed a 1-year-long diet, and characterized them by means of the most advanced omics and non-omics analyses. The aim of this thesis was the development of a solid data-management pipeline able to cope efficiently with the results of these assays, which are now flowing into a centralized database, ready to be used to test a wide range of scientific hypotheses. At the same time, the work described here encompasses the data analysis of the GEHA project, which was focused on identifying the genetic determinants of longevity, with a particular focus on developing and applying a method for detecting epistatic interactions in human mtDNA. Finally, in an effort to propel the adoption of NGS technologies in everyday pipelines, we developed an NGS variant-calling pipeline devoted to solving the sequencing-related issues specific to mtDNA.
Abstract:
In particle physics, performing data analysis requires a large amount of computing and storage capacity. The LHC Computing Grid is a computing infrastructure on a global scale and, at the same time, a set of services developed by a large community of physicists and computer scientists, distributed across computing centres all over the world. This infrastructure has proved its value in the analysis of the data collected during Run-1 of the LHC, playing a fundamental role in the discovery of the Higgs boson. Today, Cloud computing is emerging as a new computing paradigm for accessing large amounts of resources shared by many scientific communities. Given the technical requirements of Run-2 (and beyond) of the LHC, the scientific community is interested in contributing to the development of Cloud technologies and in verifying whether they can provide a complementary approach, or even a valid alternative, to the existing technological solutions. The purpose of this thesis is to test a Cloud infrastructure and compare its performance with the LHC Computing Grid. Chapter 1 contains a general overview of the Standard Model. Chapter 2 describes the LHC accelerator and the experiments operating at it, with particular attention to the CMS experiment. Chapter 3 deals with computing in high-energy physics and examines the Grid and Cloud paradigms. Chapter 4, the last of this work, reports the results of my work on the comparative analysis of Grid and Cloud performance.
Abstract:
Over time, Twitter has become a fundamental source of information for news. As a step forward, researchers have tried to analyse whether tweets contain predictive power. In the financial field, a lot of research has been done in the past to propose a function which takes as input all the tweets for a particular stock or index s, analyses them, and predicts the stock or index price of s. In this work, we take an alternative approach: using the stock price and tweet information, we investigate the following questions. 1. Is there any relation between the number of tweets being generated and the volume of stocks being exchanged? 2. Is there any relation between the sentiment of the tweets and stock prices? 3. What is the structure of the graph that describes the relationships between users?
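A minimal sketch of how question 1 can be probed, pairing a daily tweet-count series with a daily trading-volume series (the numbers below are invented placeholders, not data from this study):

    import pandas as pd

    # Hypothetical daily series: tweet counts about a stock and its traded volume.
    df = pd.DataFrame({
        "tweets": [120, 340, 95, 410, 280],
        "volume": [1.1e6, 2.9e6, 0.8e6, 3.5e6, 2.2e6],
    })
    print(df["tweets"].corr(df["volume"]))  # Pearson correlation addressing question 1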
Abstract:
VIRTIS, on board Venus Express, is a spectrometer able to operate from 0.25 to 5 µm. Between 2006 and 2011 it collected an enormous amount of data, but to date the limb observations have been little used for the study of the clouds and hazes, especially at night. Limb spectra at mesospheric altitudes are dominated by radiance coming from the clouds and scattered towards the instrument by the hazes. The interpretation of limb spectra therefore cannot be separated from the characterization of the whole atmospheric column. The aim of this thesis is to carry out a statistical analysis of the nadir observations and to propose a methodology for characterizing the hazes by combining nadir and limb observations. The clouds were characterized on a sample of more than 3700 nadir observations. A large dataset of synthetic spectra was created by modifying, in an initial model, various cloud parameters such as chemical composition and particle number and size. A fitting process was applied to the observations to establish which model could describe the observed spectra, and a statistical analysis was then performed on the results of the sample. A very high sulphuric acid concentration was found in the lower clouds, equal to 96% by mass, which departs from the generally adopted value of 75%. The nadir results were then integrated with a targeted study of a few limb observations, selected so as to intercept at the tangent point the atmospheric column observed at nadir, in order to obtain information on the hazes. The results of a Monte Carlo model indicate that the number and size of the particles predicted by the baseline model must be reduced significantly. In particular, a lowering of the maximum altitude of the hazes is observed with respect to daytime observations.
Abstract:
Advances in the area of mobile and wireless communication for healthcare (m-Health), along with improvements in information science, allow the design and development of new patient-centric models for the provision of personalised healthcare services, the increase of patient independence, and the improvement of patients' self-control and self-management capabilities. This paper comprises a brief overview of m-Health applications for the self-management of individuals with diabetes mellitus and the enhancement of their quality of life. Furthermore, the design and development of a mobile phone application for Type 1 Diabetes Mellitus (T1DM) self-management is presented. The technical evaluation of the application, which permits the management of blood glucose measurements, blood pressure measurements, insulin dosage, food/drink intake, and physical activity, has shown that the use of mobile phone technologies along with data analysis methods might improve the self-management of T1DM.
Abstract:
Background: In Switzerland there are about 150,000 equestrians. Horse-related injuries, including head and spinal injuries, are frequently treated at our level I trauma centre. Objectives: To analyse injury patterns, protective factors, and risk factors related to horse riding, and to define groups of safer riders and those at greater risk. Methods: We present a retrospective and a case-control survey conducted at a tertiary trauma centre in Bern, Switzerland. Injured equestrians from July 2000 to June 2006 were retrospectively classified by injury pattern and neurological symptoms. Injured equestrians from July to December 2008 were prospectively collected using a questionnaire with 17 variables. The same questionnaire was applied to non-injured controls. Multiple logistic regression was performed, and combined risk factors were calculated using inference trees. Results: Retrospective survey: a total of 528 injuries occurred in 365 patients. The injury pattern was as follows: extremities (32%: upper 17%, lower 15%), head (24%), spine (14%), thorax (9%), face (9%), pelvis (7%), and abdomen (2%). Two injuries were fatal; one case resulted in quadriplegia and one in paraplegia. Case-control survey: 61 patients and 102 controls (patients: 72% female, 28% male; controls: 63% female, 37% male) were included. Falls were most frequent (65%), followed by horse kicks (19%) and horse bites (2%). Variables statistically significant for the controls were older age (p = 0.015), male gender (p = 0.04), and holding a diploma in horse riding (p = 0.004). Inference trees revealed typical groups less and more likely to suffer injury. Conclusions: Experience with riding and having passed a diploma in horse riding seem to be protective factors. Educational levels and injury risk should be graded within an educational-level/injury-risk index.
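As an illustration of the multiple-logistic-regression step used in such a case-control analysis (a minimal sketch with the statsmodels package; the predictor values and outcome below are randomly generated stand-ins, not the study's data):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 163                          # 61 patients + 102 controls, as in the survey
    age = rng.normal(35, 12, n)      # invented predictor values
    diploma = rng.integers(0, 2, n)  # 1 = holds a riding diploma (invented)
    injured = rng.integers(0, 2, n)  # outcome; random here, real data would drive this

    X = sm.add_constant(np.column_stack([age, diploma]))
    result = sm.Logit(injured, X).fit(disp=0)
    print(np.exp(result.params))     # odds ratios for constant, age, diploma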