10 results for Partial data fusion

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Abstract:

Environmental computer models are deterministic models devoted to predicting environmental phenomena such as air pollution or meteorological events. Numerical model output is given in terms of averages over grid cells, usually at high spatial and temporal resolution. However, these outputs are often biased, have unknown calibration, and carry no information about the associated uncertainty. Conversely, data collected at monitoring stations are more accurate, since they essentially provide the true levels. Given the leading role played by numerical models, it is now important to compare model output with observations. Statistical methods developed to combine numerical model output and station data are usually referred to as data fusion. In this work, we first combine ozone monitoring data with ozone predictions from the Eta-CMAQ air quality model in order to forecast in real time the current 8-hour average ozone level, defined as the average of the previous four hours, the current hour, and predictions for the next three hours. We propose a Bayesian downscaler model based on first differences, with a flexible coefficient structure and an efficient computational strategy for fitting the model parameters. Model validation for the eastern United States shows substantial improvement of our fully inferential approach over the current real-time forecasting system. Furthermore, we consider the introduction of temperature data from a weather forecast model into the downscaler, showing improved real-time ozone predictions. Finally, we introduce a hierarchical model to obtain spatially varying uncertainty associated with numerical model output. We show how such uncertainty can be learned through suitable stochastic data fusion modeling using external validation data. We illustrate our Bayesian model by providing the uncertainty map associated with a temperature output over the northeastern United States.
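As an illustration of the forecast target described in this abstract, the following is a minimal sketch of the eight-hour window; the function name and the ozone values (in ppb) are hypothetical, not taken from the thesis:

```python
import numpy as np

def current_8h_average(prev_4h_obs, current_obs, next_3h_forecast):
    """Real-time 8-hour average ozone: the mean of the previous four hourly
    observations, the current hour, and forecasts for the next three hours."""
    window = np.concatenate([prev_4h_obs, [current_obs], next_3h_forecast])
    assert window.size == 8, "the window must span exactly eight hours"
    return window.mean()

# Hypothetical hourly ozone values in ppb
print(current_8h_average([52.0, 55.0, 58.0, 60.0], 63.0, [61.0, 59.0, 56.0]))
# -> 58.0
```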

Relevance:

90.00%

Abstract:

In recent years, the use of Reverse Engineering systems has attracted considerable interest for a wide range of applications. Many research activities are therefore focused on improving the accuracy and precision of the acquired data and the post-processing phase. In this context, this PhD thesis defines two novel methods for data post-processing and for data fusion between physical and geometrical information. In particular, a technique has been defined to characterize the error in the 3D point coordinates acquired by an optical triangulation laser scanner, with the aim of identifying adequate correction arrays to apply under different acquisition parameters and operating conditions. The systematic error in the acquired data is thus compensated, increasing accuracy. Moreover, the definition of a 3D thermogram is examined: geometrical information about an object and its thermal properties, coming from a thermographic inspection, are combined so as to associate a temperature value with each recognizable point. Data acquired by the optical triangulation laser scanner are also used to normalize the temperature values and make the thermal data independent of the thermal camera's point of view.
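A hypothetical sketch of the correction-array idea: a pre-computed per-bin offset, indexed here by acquisition distance, compensates the systematic depth error. All names, bins and values are illustrative, not the method's actual parameters:

```python
import numpy as np

def compensate(points, correction_mm, bin_edges_mm):
    """points: (N, 3) xyz coordinates in mm; correction_mm: systematic bias
    per distance bin; bin_edges_mm: bin edges used during calibration."""
    corrected = points.copy()
    bins = np.clip(np.digitize(points[:, 2], bin_edges_mm) - 1,
                   0, len(correction_mm) - 1)
    corrected[:, 2] -= correction_mm[bins]  # remove the estimated bias
    return corrected

pts = np.array([[0.0, 0.0, 102.3], [1.0, 2.0, 148.9]])
print(compensate(pts, np.array([0.05, -0.02]), np.array([100.0, 125.0, 150.0])))
```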

Relevance:

80.00%

Abstract:

Ambient Intelligence (AmI) envisions a world where smart, electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even without being aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes: simple devices that typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and working with WSNs. To handle the large amount of data generated by a WSN, several multi-sensor data fusion techniques have been developed. The aim of multi-sensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: multimodal surveillance and activity recognition. Novel techniques are presented to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors; such techniques allow the detection of the number of people moving in the environment, their direction of movement and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to improve its people-tracking performance, and we embed a PIR sensor in the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental block in natural interfaces. A challenging objective is to design an activity recognition system able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure in which simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefits of such an architecture in terms of increased recognition performance and robustness to faults and noise, and we show how network lifetime can be extended through a performance-power trade-off. Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition; the dataset is freely available to the scientific community for research purposes and can be used as a test bench for developing, testing and comparing different activity recognition techniques.
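A minimal sketch of the fusion step described above (not the thesis architecture): a meta-classifier combining a varying number of node-level gesture labels by confidence-weighted voting, tolerating nodes that drop out of the network. Labels and confidences are made up:

```python
from collections import Counter

def fuse(node_outputs):
    """node_outputs: list of (label, confidence) pairs from the nodes that
    are currently alive; returns the fused label, or None if none reported."""
    if not node_outputs:
        return None
    scores = Counter()
    for label, confidence in node_outputs:
        scores[label] += confidence  # accumulate weighted votes per label
    return scores.most_common(1)[0][0]

print(fuse([("wave", 0.9), ("circle", 0.4), ("wave", 0.6)]))  # -> wave
```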

Relevance:

80.00%

Abstract:

The aim of this research is to improve the understanding of the factors that control the formation of karst porosity in hypogene settings and its associated patterns of void-conduit networks. Subsurface voids created by hypogene dissolution may span from a few microns to decametre-scale tubes, providing interconnected conduit systems and forming highly anisotropic permeability domains in many reservoirs. Characterizing the spatial-morphological organization of hypogene karst is a challenging task with major implications for industry, given that only partial data can be acquired from the subsurface by indirect techniques. Therefore, two outcropping cave analogues are examined: the Cavallone-Bove Cave in the Majella Massif (Italy) and the karst systems of the Salitre Formation (Brazil). In the latter, a distinctive example of hypogene speleogenesis associated with silicification has been studied, providing an analogue for many karstified reservoirs hosted in cherts or cherty carbonates within mixed sedimentary sequences. The first part of the thesis focuses on the relationships between fracture patterns and flow pathways in deformed units in: 1) a fold-and-thrust setting (Majella Massif); 2) a cratonic block (Brazil). These settings are favourable to the migration and accumulation of geofluids, where hypogene conduits may affect flow pathways, fluid storage and reservoir properties. The results indicate that localized deformation producing cross-formational fracture zones, associated with anticline hinges or fault damage zones, is critical for hypogene fluid migration and karstification. The second part of the thesis deals with the multidisciplinary study of hydrothermal silicification and hypogene dissolution in Calixto Cave (Brazil). Petrophysical analyses and a geochemical characterization of silica deposits are used to unravel the spatial-morphological organization of the conduit system and its speleogenesis. The novel results obtained from this cave shed new light on the relationship between hydrothermal silicification, hypogene dissolution and the development of multistorey cave systems in layered carbonate-siliciclastic sequences.

Relevance:

30.00%

Abstract:

This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias is perhaps the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, and the resulting approaches – Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) and Heckman's selection model (Heckman, 1979) – are widely accepted and used as the best fixes. These solutions to the bias that arises, in particular, from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows the presence of selection bias to be measured and tested in an automatic and multivariate way. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance to check whether the detected bias is significant, using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and preserving the multivariate nature of the data. Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution. The method is nonparametric: it does not call for modeling the data on the basis of some underlying theory or assumption about the selection process, but instead exploits the existing variability within the data, letting the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the observation that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods; attention is focused on Rubin's Potential Outcome Approach, matching methods and, briefly, Heckman's selection model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to test balance correctly. The third part contains the proposed original contribution, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss the results, draw conclusions and outline future perspectives.
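An illustrative sketch, not the thesis code: one standard way to express the dependence between a categorical covariate X and the treatment indicator T as inertia is the chi-square statistic of their contingency table divided by the sample size, which is zero under perfect balance. The data below are invented:

```python
import numpy as np
from scipy.stats import chi2_contingency

def inertia(x, t):
    """Inertia of the X-by-T contingency table: chi-square / n."""
    x, t = np.asarray(x), np.asarray(t)
    table = np.array([[np.sum((x == cx) & (t == ct)) for ct in np.unique(t)]
                      for cx in np.unique(x)])
    chi2 = chi2_contingency(table, correction=False)[0]
    return chi2 / len(x)

print(inertia(["a", "a", "b", "b", "b", "a"], [1, 1, 0, 0, 1, 0]))
```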

Relevance:

30.00%

Abstract:

This thesis work has been developed in the framework of a new experimental campaign, proposed by the NUCL-EX Collaboration (INFN III Group), aimed at progressing in the understanding of the statistical properties of light nuclei at excitation energies above the particle emission threshold, by measuring exclusive data from fusion-evaporation reactions. The main physics goals of this work are the determination of the nuclear level density in the A~20 region, the understanding of the statistical behavior of light nuclei at excitation energies of ~3 A.MeV, and the measurement of observables linked to the presence of cluster structures in excited nuclear levels. On the theory side, the contribution of this work to the project lies in the development of a dedicated Monte Carlo Hauser-Feshbach code for the evaporation of the compound nucleus. The experimental part of this thesis consisted of participation in the 12C+12C measurement at 95 MeV beam energy, performed at the Laboratori Nazionali di Legnaro - INFN using the GARFIELD + Ring Counter (RCo) set-up, from the beam-time request through data taking, data reduction and detector calibrations to data analysis. Different results of the data analysis are presented in this thesis, together with a theoretical study of the system performed with the new statistical decay code. As a result of this work, constraints are given on the nuclear level density at high excitation energy for light systems ranging from C up to Mg. Moreover, pre-equilibrium effects, tentatively interpreted as alpha-clustering effects, are brought into evidence both in the entrance channel of the reaction and in the dissipative dynamics along the path towards thermalisation.
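As a toy sketch of the Monte Carlo step at the heart of an evaporation code of this kind (not the thesis code): each decay channel is drawn with probability proportional to its partial width, Gamma_i / Gamma_tot. The widths below are made-up numbers for illustration:

```python
import random

def sample_channel(partial_widths, rng=random):
    """Draw a decay channel with probability Gamma_i / Gamma_tot."""
    total = sum(partial_widths.values())
    r = rng.random() * total
    cumulative = 0.0
    for channel, width in partial_widths.items():
        cumulative += width
        if r <= cumulative:
            return channel
    return channel  # guard against floating-point round-off

print(sample_channel({"n": 0.1, "p": 0.3, "alpha": 0.5, "gamma": 0.1}))
```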

Relevance:

30.00%

Abstract:

The cone penetration test (CPT), together with its more recent piezocone variant (CPTU), has become the most widely used in-situ testing technique for soil profiling and geotechnical characterization. The knowledge gained over the last decades on interpretation procedures in sands and clays is certainly wide, whilst very few contributions can be found regarding the analysis of CPTU data in intermediate soils. Indeed, it is widely accepted that at the standard rate of penetration (v = 20 mm/s) drained penetration occurs in sands while undrained penetration occurs in clays. However, a problem arises when the available interpretation approaches are applied to cone measurements in silts, sandy silts, or silty and clayey sands, since such intermediate geomaterials are often characterized by permeability values within the range in which partial drainage is very likely to occur. Hence, the application of the available and well-established interpretation procedures, developed for ‘standard’ clays and sands, may result in invalid estimates of soil parameters. This study aims at providing a better understanding of the interpretation of CPTU data in natural sand and silt mixtures, taking into account two main aspects: 1) investigating the effect of penetration rate on piezocone measurements, with the aim of identifying the drainage conditions when cone penetration is performed at the standard rate; this part of the thesis refers to a specific CPTU database recently collected in a liquefaction-prone area (Emilia-Romagna Region, Italy); 2) providing better insight into the interpretation of piezocone tests in the widely studied silty sediments of the Venetian lagoon (Italy), where research has focused on the calibration and verification of site-specific correlations, with special reference to the estimation of compressibility parameters for the assessment of long-term settlements of the Venetian coastal defences.
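A dimensionless index commonly used in the rate-effects literature to frame this drainage question is the normalized penetration velocity V = v·d/c_v; this formula and the regime thresholds below come from that literature, not from the abstract itself, and the thresholds are rough values that vary between published studies:

```python
def normalized_velocity(v_mm_s, d_mm, cv_mm2_s):
    """V = v * d / c_v: penetration rate times cone diameter over the
    coefficient of consolidation (consistent units, here mm and s)."""
    return v_mm_s * d_mm / cv_mm2_s

V = normalized_velocity(v_mm_s=20.0, d_mm=35.7, cv_mm2_s=10.0)  # standard cone
if V > 30:
    regime = "essentially undrained"
elif V < 0.05:
    regime = "essentially drained"
else:
    regime = "partially drained"
print(f"V = {V:.1f}: {regime}")  # V = 71.4: essentially undrained
```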

Relevance:

30.00%

Abstract:

The primary aim of the research activity presented in this PhD thesis was the development of an innovative hardware and software solution providing a single tool for kinematic and electromyographic analysis of the human body in an ecological setting. For this purpose, innovative algorithms have been proposed for different aspects of inertial and magnetic data processing: magnetometer calibration and magnetic field mapping (Chapter 2), data calibration (Chapter 3) and the sensor-fusion algorithm. Topics that may conflict with the confidentiality agreement between the University of Bologna and NCS Lab are not covered in this thesis. After developing and testing the wireless platform, the research activities focused on its clinical validation. The first clinical study evaluated the intra- and inter-observer reproducibility of three-dimensional humero-scapulo-thoracic kinematics in an outpatient setting (Chapter 4). A second study evaluated the effect of Latissimus Dorsi tendon transfer on shoulder kinematics and Latissimus Dorsi activation during humeral intra- and extra-rotation (Chapter 5). The results of both clinical studies demonstrate the suitability of the developed platform for daily clinical practice, providing useful information for patients' rehabilitation.
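Since the thesis's own sensor-fusion algorithm is covered by the confidentiality agreement, here is only a common baseline for inertial orientation estimation, a minimal complementary filter for a single tilt angle; names and sample values are illustrative:

```python
def fuse_tilt(gyro_rate_dps, accel_angle_deg, prev_angle_deg, dt, alpha=0.98):
    """Blend the integrated gyroscope rate (smooth but drifting) with the
    accelerometer-derived angle (noisy but drift-free)."""
    gyro_angle = prev_angle_deg + gyro_rate_dps * dt
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle_deg

angle = 0.0
for gyro_dps, accel_deg in [(1.2, 0.10), (1.1, 0.25), (0.9, 0.35)]:  # fake data
    angle = fuse_tilt(gyro_dps, accel_deg, angle, dt=0.01)
print(f"fused tilt estimate: {angle:.4f} deg")
```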

Relevance:

30.00%

Abstract:

Innovation in several industrial sectors has recently been characterized by the need to reduce operating temperatures, for either economic or environmental reasons. Promising technological solutions require fundamental knowledge in order to produce safe and robust systems, and in this sense reactive systems often represent the bottleneck. For these reasons, this work focused on the integration of chemical models (i.e., detailed kinetic mechanisms) and physical models (i.e., computational fluid dynamics). A theory-based kinetic mechanism mimicking the behaviour of oxygenated fuels and their intermediates under oxidative conditions, over a wide range of temperatures and pressures, was developed. Its validity was tested against experimental data collected in this work using a heat flux burner, as well as measurements retrieved from the current literature. In addition, estimates derived from existing models considered benchmarks in the combustion field were compared with the newly generated mechanism, which was found to be the most accurate for the investigated conditions and fuels. The species and reactions most influential on the combustion of butyl acetate were identified, and the corresponding thermodynamic parameters and rate coefficients were quantified through ab initio calculations. A reduced detailed kinetic mechanism was produced and implemented in an open-source computational fluid dynamics model, at first to characterize pool fires caused by the accidental release of aviation fuel and liquefied natural gas. Eventually, partial oxidation processes involving light alkenes were optimized following the quick, fair, and smooth (QFS) paradigm. The proposed procedure represents a comprehensive and multidisciplinary approach to the construction and validation of accurate models, allowing for the characterization of developing industrial sectors and techniques.
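For context on the rate coefficients mentioned above: detailed kinetic mechanisms typically express them in modified Arrhenius form, k(T) = A·T^n·exp(-Ea/RT). The sketch below evaluates that form; the parameter values are placeholders, not results from the thesis:

```python
import math

R = 8.314462618  # universal gas constant, J / (mol K)

def arrhenius(T, A, n, Ea):
    """Modified Arrhenius rate coefficient. T in K, Ea in J/mol;
    A carries the units of the resulting k."""
    return A * T**n * math.exp(-Ea / (R * T))

print(arrhenius(T=1000.0, A=1.0e13, n=0.0, Ea=150.0e3))
```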

Relevance:

30.00%

Abstract:

Nuclear cross sections are the pillars on which the transport simulation of particles and radiation is built. Since the production chain of nuclear data libraries is extremely complex and made up of different steps, stringent verification and validation (V&V) procedures must be applied to it. The work presented here has focused on the development of new Python-based software called JADE, whose objective is to significantly increase the level of automation and standardization of these procedures, in order to reduce the time between new library releases while increasing their quality. After an introduction to nuclear fusion (the field where the majority of the V&V effort has been concentrated so far) and to the simulation of particle and radiation transport, the motivations leading to JADE's development are discussed. Subsequently, the code's general architecture and the implemented benchmarks (both experimental and computational) are described. After that, the results of the major applications of JADE during the research years are presented. Finally, after a discussion of the objectives achieved by JADE, possible short-, mid- and long-term developments for the project are outlined.
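A hypothetical sketch of the kind of automated check a V&V tool such as JADE carries out (this is not JADE's actual API): comparing quantities simulated with a new library release against reference values and flagging deviations beyond a relative tolerance. Tally names and numbers are invented:

```python
def compare(results_new, results_ref, rel_tol=0.05):
    """results_*: dict mapping a tally name to its simulated value; returns
    the tallies whose relative deviation exceeds rel_tol (or are missing)."""
    failures = []
    for tally, ref in results_ref.items():
        new = results_new.get(tally)
        if new is None or abs(new - ref) > rel_tol * abs(ref):
            failures.append(tally)
    return failures

print(compare({"neutron_flux": 1.04, "dose_rate": 0.80},
              {"neutron_flux": 1.00, "dose_rate": 1.00}))  # -> ['dose_rate']
```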