949 results for Statistical Model
Abstract:
In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. Using a database of GPS measurements of individual paths (position, velocity and covered space at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we determine several empirical statistical laws that point to "universal" characteristics of human mobility. By developing simple stochastic models that suggest possible explanations of the empirical observations, we are able to indicate the key quantities and cognitive features that rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities performed, and those of the networks describing people's common use of space to the fractal dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We find that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of average travel times. We propose an assimilation model to resolve the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
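As a rough illustration of the Benford's-law check mentioned above (this is not code from the thesis; the variable names and the idea of reading inter-trip times from a plain text file are assumptions), one can compare the empirical leading-digit frequencies of the elapsed times with Benford's prediction P(d) = log10(1 + 1/d):

    import numpy as np

    def benford_expected():
        # Benford's law: P(leading digit = d) = log10(1 + 1/d), d = 1..9
        digits = np.arange(1, 10)
        return digits, np.log10(1.0 + 1.0 / digits)

    def leading_digit_frequencies(values):
        # Empirical frequencies of the first significant digit of positive values
        values = np.asarray(values, dtype=float)
        values = values[values > 0]
        leading = (values / 10.0 ** np.floor(np.log10(values))).astype(int)
        counts = np.bincount(leading, minlength=10)[1:10]
        return counts / counts.sum()

    # Hypothetical usage: inter-trip times (in seconds) loaded from a text file
    # times = np.loadtxt("elapsed_times_seconds.txt")
    # digits, expected = benford_expected()
    # observed = leading_digit_frequencies(times)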
Abstract:
The beta decay of free neutrons is a strongly over-determined process in the Standard Model (SM) of particle physics and is described by a multitude of observables. Some of these observables, for example the correlation coefficients of the particles involved, are sensitive to physics beyond the SM. The spectrometer aSPECT was designed to measure precisely the shape of the proton energy spectrum and to extract from it the electron-antineutrino angular correlation coefficient "a". A first test period (2005/2006) provided the proof of principle. The limiting influence of uncontrollable background conditions in the spectrometer made it impossible to extract a reliable value for the coefficient "a" (Baessler et al., 2008, Eur. Phys. J. A, 38, 17-26). A second measurement cycle (2007/2008) aimed to go below the relative accuracy of previous experiments (Stratowa et al., 1978; Byrne et al., 2002), da/a = 5%. I performed the analysis of the data taken there, which is the focus of this doctoral thesis. A central point is the study of background. The systematic impact of background on "a" was reduced to da/a(syst.) = 0.61%. The statistical accuracy of the analyzed measurements is da/a(stat.) = 1.4%. In addition, saturation effects of the detector electronics, which had been observed early on, were investigated; these turned out not to be correctable at a sufficient level. A practicable idea for avoiding these saturation effects is discussed in the last chapter.
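For reference, the coefficient "a" enters the decay rate of unpolarized neutrons through the standard Jackson-Treiman-Wyld form (textbook notation, not taken from this thesis):

    \mathrm{d}W \;\propto\; F(E_e)\, p_e E_e (E_0 - E_e)^2
        \left[ 1 + a\,\frac{\vec p_e \cdot \vec p_\nu}{E_e E_\nu} + b\,\frac{m_e}{E_e} \right]
        \mathrm{d}E_e\, \mathrm{d}\Omega_e\, \mathrm{d}\Omega_\nu ,
    \qquad
    a_{\mathrm{SM}} = \frac{1 - |\lambda|^2}{1 + 3|\lambda|^2}, \quad \lambda = \frac{g_A}{g_V},

where E_0 is the electron endpoint energy and b the Fierz interference term; measuring the proton spectrum gives access to "a" because the proton recoil reflects the electron-antineutrino angular correlation.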
Abstract:
Uncertainty in the determination of the stratigraphic profile of natural soils is one of the main problems in geotechnics, in particular for landslide characterization and modeling. This study deals with a new approach in geotechnical modeling which relies on the stochastic generation of different soil-layer distributions following a Boolean logic; the method has thus been called BoSG (Boolean Stochastic Generation). In this way, it is possible to randomize the presence of a specific material interdigitated in a uniform matrix. When building a geotechnical model it is common to discard some stratigraphic data in order to simplify the model itself, assuming that the significance of the modeling results is not affected. With the proposed technique it is possible to quantify the error associated with this simplification. Moreover, it can be used to determine the zones where further investigations and surveys would be most effective in improving the geotechnical model of the slope. The commercial software FLAC was used for the 2D and 3D geotechnical models. The distribution of the materials was randomized through a purpose-written MatLab program that automatically generates text files, each representing a specific soil configuration. In addition, a routine was designed to automate the FLAC computations over the different data files in order to maximize the number of samples. The methodology is applied to a simplified slope in 2D, a simplified slope in 3D and an actual landslide, the Mortisa mudslide (Cortina d'Ampezzo, BL, Italy), but it could be extended to numerous other cases, especially hydrogeological analyses and landslide stability assessments in different geological and geomorphological contexts.
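A minimal Python analogue of such a generator is sketched below; it is purely illustrative and is not the MatLab code used in the study, and the grid size, material probability and file naming are placeholders:

    import numpy as np

    def generate_bosg_configuration(n_rows, n_cols, p_material, rng):
        # Boolean grid: True where the interdigitated material replaces the
        # uniform matrix, drawn independently per cell with probability p_material
        return rng.random((n_rows, n_cols)) < p_material

    def write_configuration(path, grid):
        # Write one realization as a text file of 0/1 flags, one grid row per line
        # (a stand-in for the input files fed to the numerical-model runs)
        np.savetxt(path, grid.astype(int), fmt="%d")

    rng = np.random.default_rng(seed=0)
    for k in range(10):                      # number of realizations is arbitrary here
        grid = generate_bosg_configuration(40, 80, p_material=0.2, rng=rng)
        write_configuration(f"bosg_case_{k:03d}.txt", grid)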
Abstract:
The subject of this work is the study of the immigration phenomenon, with emphasis on the aspects related to the integration of an immigrant population into a hosting one. The aim of this work is to show the forecasting ability of a recent approach in which the behavior of integration quantifiers is analyzed and investigated with a mathematical model of statistical-physics origin (a generalization of the monomer-dimer model). After providing a detailed literature review of the model, we show that such a model is able not only to identify the social mechanism that drives a particular integration process, but also to provide correct forecasts. The research reported here shows that the proposed model of integration and its forecast framework are simple and effective tools to reduce uncertainties about how integration phenomena emerge and how they are likely to develop in response to increased migration levels in the future.
Abstract:
This study aims at a comprehensive understanding of aerosol-cloud interactions and their effects on cloud properties and climate using the chemistry-climate model EMAC. In this study, CCN activation is regarded as the dominant driver in aerosol-cloud feedback loops in warm clouds. The CCN activation is calculated prognostically using two different cloud droplet nucleation (CDN) parameterizations, the STN and HYB schemes. Both CDN schemes account for size and chemistry effects on droplet formation based on the same aerosol properties; the calculation of the solute effect (hygroscopicity) is the main difference between them. The kappa-method is for the first time incorporated into the Abdul-Razzak and Ghan (ARG) activation scheme to calculate the hygroscopicity and critical supersaturation of aerosols (HYB), and the performance of the modified scheme is compared with the osmotic coefficient model (STN), which is the standard in the ARG scheme. Reference simulations (REF) with prescribed cloud droplet number concentrations have also been carried out in order to understand the effects of aerosol-cloud feedbacks. In addition, since the calculated cloud cover is an important determinant of cloud radiative effects and influences the nucleation process, two cloud cover parameterizations (a relative humidity threshold scheme, RH-CLC, and a statistical cloud cover scheme, ST-CLC) have been examined together with the CDN schemes, and their effects on the simulated cloud properties and relevant climate parameters have been investigated. The distinct cloud droplet spectra show strong sensitivity to aerosol composition effects on cloud droplet formation across all particle sizes, especially for the Aitken mode. As Aitken particles are the major component of the total aerosol number concentration and of the CCN, and are most sensitive to the aerosol chemical composition (solute) effect on droplet formation, their activation contributes strongly to total cloud droplet formation, thereby producing different cloud droplet spectra. These different spectra influence cloud structure, cloud properties and climate, and show regionally varying sensitivity to meteorological and geographical conditions as well as to the spatiotemporal aerosol properties (i.e., particle size, number and composition). The changes in response to the different CDN schemes are more pronounced at lower than at higher altitudes. Among regions, the subarctic regions show the strongest changes, as the lower surface temperature amplifies the effects of the activated aerosols; in contrast, the Sahara desert, an extremely dry area, is less influenced by changes in the CCN number concentration. The aerosol-cloud coupling effects have been examined by comparing the prognostic CDN simulations (STN, HYB) with the reference simulation (REF). The most pronounced effects are found in the cloud droplet number concentration, the cloud water distribution and the cloud radiative effect. The aerosol-cloud coupling generally increases the cloud droplet number concentration; this decreases the efficiency of the formation of weak stratiform precipitation and increases the cloud water loading. These large-scale changes lead to larger cloud cover and longer cloud lifetime, and contribute to high optical thickness and strong cloud cooling effects. This cools the Earth's surface, increases atmospheric stability, and reduces convective activity.
These aerosol-cloud feedback responses are also simulated differently depending on the cloud cover scheme. The ST-CLC scheme is more sensitive to aerosol-cloud coupling, since it uses a tighter linkage between local dynamics and the cloud water distribution in the cloud formation process than the RH-CLC scheme. For the calculated total cloud cover, the RH-CLC scheme simulates a pattern closer to observations than the ST-CLC scheme does, but the overall properties (e.g., total cloud cover, cloud water content) in the RH simulations are overestimated, particularly over the oceans. This mainly originates from the difference in the simulated skewness of each scheme: the RH simulations produce negatively skewed distributions of cloud cover and the associated cloud water, similar to the observations, while the ST simulations yield positively skewed distributions, resulting in lower mean values than in the RH-CLC scheme. The underestimation of total cloud cover over the oceans, particularly over the intertropical convergence zone (ITCZ), relates to a systematic deficiency of the prognostic calculation of skewness in the current set-up of the ST-CLC scheme. Overall, the current EMAC model set-ups perform better over continents for all combinations of the cloud droplet nucleation and cloud cover schemes. When aerosol-cloud feedbacks are considered, the HYB scheme predicts cloud and climate parameters better than the STN scheme for both cloud cover schemes. The RH-CLC scheme offers a better simulation of total cloud cover and the relevant parameters with the HYB scheme and single-moment microphysics (REF) than the ST-CLC scheme does, but is not very sensitive to aerosol-cloud interactions.
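For orientation, the kappa-method referred to above is commonly summarized by the kappa-Koehler expression of Petters and Kreidenweis (2007) for the critical supersaturation S_c of a dry particle of diameter D_d (standard form, not reproduced from this thesis):

    S_c \;\approx\; \exp\!\left( \sqrt{\frac{4 A^3}{27\,\kappa\, D_d^3}} \right) - 1,
    \qquad
    A = \frac{4\,\sigma_{s/a}\, M_w}{R\, T\, \rho_w},

where kappa is the hygroscopicity parameter, sigma_{s/a} the surface tension of the solution-air interface, M_w and rho_w the molar mass and density of water, R the gas constant and T the temperature; larger kappa lowers the supersaturation needed for activation.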
Abstract:
The Curie-Weiss model is defined by a Hamiltonian according to which the spins interact. Depending on the values of the parameters, the sum of the spins under square-root normalization may or may not converge to a Gaussian distribution. In the thesis we investigate the connection between the behaviour of this sum and central limit theorems for interacting random variables.
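For concreteness, in a standard parametrization (not necessarily the one adopted in the thesis) the mean-field Hamiltonian and the two classical limiting regimes for the spin sum S_N = \sum_i \sigma_i read:

    H_N(\sigma) = -\frac{J}{2N}\Big(\sum_{i=1}^N \sigma_i\Big)^2 - h \sum_{i=1}^N \sigma_i,
    \qquad \sigma_i \in \{-1,+1\};

    h = 0,\ \beta J < 1:\quad \frac{S_N}{\sqrt{N}} \;\Longrightarrow\; \mathcal{N}\!\Big(0,\ \tfrac{1}{1-\beta J}\Big),
    \qquad
    h = 0,\ \beta J = 1:\quad \frac{S_N}{N^{3/4}} \;\Longrightarrow\; C\, e^{-x^4/12}\,\mathrm{d}x,

the latter being the well-known non-Gaussian limit of Ellis and Newman at the critical point, where the square-root normalization fails.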
Abstract:
Monomer-dimer models are amongst the models of statistical mechanics that have found application in many areas of science, ranging from biology to the social sciences. The model describes a many-body system in which monoatomic and diatomic particles subject to hard-core interactions are deposited on a graph. In our work we provide an extension of this model to higher-order particles. The aim of our work is threefold: first, we study the thermodynamic properties of the newly introduced model; we solve some regular cases analytically and find that, unlike the original model, our extension admits phase transitions. Then we tackle the inverse problem, from both an analytical and a numerical perspective. Finally, we propose an application to aggregation phenomena in virtual messaging services.
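For reference, the classical monomer-dimer partition function on a finite graph G = (V, E), with monomer activity x and dimer activity w, takes the standard form (the higher-order extension introduced in the thesis is not reproduced here):

    Z_G(x, w) \;=\; \sum_{M \in \mathcal{M}(G)} x^{\,|V| - 2|M|}\, w^{\,|M|},

where \mathcal{M}(G) is the set of matchings (dimer configurations) of G. By the Heilmann-Lieb theorem this classical model exhibits no phase transition, which is what makes the phase transitions found in the extension notable.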
Abstract:
Statistical shape models (SSMs) have been widely used as a basis for segmenting and interpreting complex anatomical structures. The robustness of these models is sensitive to the registration procedure, i.e., the establishment of a dense correspondence across a training data set. In this work, two SSMs based on the same training data set of scoliotic vertebrae but on different registration procedures were compared. The first model was constructed from the original binary masks without any image pre- or post-processing, and the second was obtained by applying a feature-preserving smoothing method to the original training data set, followed by a standard rasterization algorithm. The accuracy of the correspondences was assessed quantitatively by means of the maximum of the mean minimum distance (MMMD) and the Hausdorff distance (HD). The anatomical validity of the models was quantified by means of three different criteria: compactness, specificity, and model generalization ability. The objective of this study was to compare quasi-identical models based on standard metrics. Preliminary results suggest that the MMMD and the eigenvalues are not sensitive metrics for evaluating the performance and robustness of SSMs.
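The two distance metrics can be illustrated with a short sketch (assuming SciPy and two corresponding surfaces represented as point clouds; this is not the evaluation code of the study):

    import numpy as np
    from scipy.spatial.distance import cdist, directed_hausdorff

    def correspondence_metrics(a, b):
        # a, b: (n, 3) and (m, 3) arrays of surface points from two shape instances.
        # Returns the symmetric Hausdorff distance and the mean minimum distance;
        # the maximum of the latter over all shape pairs in a model gives the MMMD.
        d = cdist(a, b)                                   # all pairwise distances
        mean_min = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
        hausdorff = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
        return hausdorff, mean_min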
Abstract:
Purpose: Accurate three-dimensional (3D) models of lumbar vertebrae can enable image-based 3D kinematic analysis. The common approach to derive 3D models is direct segmentation of CT or MRI datasets. However, these have the disadvantages of being expensive, time-consuming and/or inducing high radiation doses to the patient. In this study, we present a technique to automatically reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image.
Methods: Our technique is based on a hybrid 2D/3D deformable registration strategy combining a landmark-to-ray registration with a statistical shape model-based 2D/3D reconstruction scheme. Fig. 1 shows the different stages of the reconstruction process. Four cadaveric lumbar spine segments (twelve lumbar vertebrae in total) were used to validate the technique. To evaluate the reconstruction accuracy, the surface models reconstructed from the lateral fluoroscopic images were compared to the associated ground truth data derived from a 3D CT-scan reconstruction technique. For each case, a surface-based matching was first used to recover the scale and the rigid transformation between the reconstructed surface model and the associated ground truth model.
Results: Our technique successfully reconstructed 3D surface models of all twelve vertebrae. After recovering the scale and the rigid transformation between the reconstructed surface models and the ground truth models, the average error of the 2D/3D surface model reconstruction over the twelve lumbar vertebrae was found to be 1.0 mm. The errors of reconstructing the surface models of all twelve vertebrae are shown in Fig. 2. The mean errors of the reconstructed surface models with respect to their associated ground truths after iterative scaled rigid registrations ranged from 0.7 mm to 1.3 mm, and the root-mean-squared (RMS) errors ranged from 1.0 mm to 1.7 mm. The average mean reconstruction error was 1.0 mm.
Conclusion: An accurate, scaled 3D reconstruction of a lumbar vertebra can be obtained from a single lateral fluoroscopic image using a statistical shape model-based 2D/3D reconstruction technique. Future work will focus on applying the reconstructed model to 3D kinematic analysis of lumbar vertebrae, an extension of our previously reported image-based kinematic analysis. The developed method also has potential applications in surgical planning and navigation.
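The scaled rigid (similarity) alignment used before computing the reconstruction errors can be sketched as follows, assuming point correspondences between the reconstructed and ground-truth surfaces are already available (e.g. from the surface-based matching). This is an illustrative Umeyama-style least-squares fit, not the implementation used in the study:

    import numpy as np

    def similarity_align(source, target):
        # Least-squares scaled rigid (similarity) alignment of corresponding
        # 3D point sets, source and target of shape (n, 3), after Umeyama (1991).
        mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
        src, tgt = source - mu_s, target - mu_t
        cov = tgt.T @ src / len(source)
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # avoid reflections
            S[2, 2] = -1.0
        R = U @ S @ Vt
        scale = np.trace(np.diag(D) @ S) / (src ** 2).sum() * len(source)
        t = mu_t - scale * R @ mu_s
        return scale, R, t

    def reconstruction_errors(source, target):
        # Mean and RMS point-to-point error after similarity alignment
        s, R, t = similarity_align(source, target)
        aligned = s * source @ R.T + t
        err = np.linalg.norm(aligned - target, axis=1)
        return err.mean(), np.sqrt((err ** 2).mean())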
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multi-cylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
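As an illustration of the kind of processing needed for transport delays and sensor lags (a generic, assumed approach, not the specific methods developed in this work), the delay can be estimated from the cross-correlation peak of two equally sampled signals, and a first-order sensor lag can be approximately inverted:

    import numpy as np

    def estimate_delay(reference, delayed, dt):
        # Estimate the transport delay (in seconds) of `delayed` relative to
        # `reference` by locating the peak of their cross-correlation.
        # Both signals are assumed to have the same length and sample time dt.
        a = reference - reference.mean()
        b = delayed - delayed.mean()
        xcorr = np.correlate(b, a, mode="full")
        lags = np.arange(-len(a) + 1, len(a))
        return lags[np.argmax(xcorr)] * dt

    def compensate_first_order_lag(signal, tau, dt):
        # Approximate inversion of a first-order sensor lag with time constant tau:
        # x_true ~ x_measured + tau * d(x_measured)/dt
        return signal + tau * np.gradient(signal, dt)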
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and the data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that, when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data are explained. The concept of constraining the distribution of statistical leverage, relative to the distribution of the starting solution, to prevent extrapolation during the optimization process is proposed and demonstrated. Separate from the issue of extrapolation is the need to prevent the search from being quasi-static. Second-order linear dynamic constraint models are proposed to prevent the search from returning solutions that would be feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters into actually achieved parameters, which then feed into the transient emission and torque models. Combined model inaccuracies are used to adjust the optimized solutions. To frame the optimization problem within a reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, which differs from the corresponding manual calibration strategy and yields lower emissions and improved efficiency, is intended to improve rather than replace the manual calibration process.
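For reference, the statistical leverage of a candidate point x_i under a linear model with design matrix X, and a generic second-order linear dynamic constraint between commanded and achieved parameters, take the textbook forms below (the specific constraint models of this work may differ):

    h_i = x_i^{\mathsf T} (X^{\mathsf T} X)^{-1} x_i
        \quad\text{(the $i$-th diagonal element of the hat matrix } H = X (X^{\mathsf T} X)^{-1} X^{\mathsf T}\text{)},
    \qquad
    \frac{Y_{\mathrm{achieved}}(s)}{U_{\mathrm{commanded}}(s)} = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}.

Leverage values that are large relative to those of the training data flag candidate solutions that would rely on extrapolation, while the second-order transfer function keeps commanded-to-achieved transitions dynamically realistic.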
Abstract:
Dahl salt-sensitive (DS) and salt-resistant (DR) inbred rat strains represent a well-established animal model for cardiovascular research. Upon prolonged administration of a high-salt diet, DS rats develop systemic hypertension and, as a consequence, left ventricular hypertrophy followed by heart failure. The aim of this work was to explore whether this animal model is suitable for identifying biomarkers that characterize defined stages of cardiac pathophysiological conditions. The work was planned in two stages: in the first part, the proteomic differences attributable to the two separate rat lines (DS and DR) had to be established, and in the second part, the development of heart failure caused by feeding the rats a high-salt diet had to be monitored. This work describes the results of the first stage, namely the protein expression profiles of left ventricular tissue of DS and DR rats kept on a low-salt diet. A substantial extent of quantitative and qualitative expression differences between the two Dahl rat strains was detected in heart tissue. Using Principal Component Analysis, Linear Discriminant Analysis and other statistical means, we have established sets of differentially expressed proteins as candidates for further molecular analysis of the heart failure mechanisms.
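The statistical workflow mentioned above can be illustrated with a short scikit-learn sketch; the placeholder random matrix stands in for the protein expression data and this is not the analysis code of the study:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Placeholder expression matrix: one row per animal, one column per protein spot.
    # Random numbers are used purely for illustration; they are not study data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 50))
    y = np.repeat([0, 1], 10)                            # 0 = DR, 1 = DS strain labels

    pc_scores = PCA(n_components=2).fit_transform(X)     # unsupervised overview of variance
    lda = LinearDiscriminantAnalysis().fit(X, y)         # supervised strain separation
    print(pc_scores.shape, lda.score(X, y))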
Abstract:
Ocular anatomy and radiation-associated toxicities pose unique challenges for external beam radiation therapy. For treatment planning, precise modeling of the organs at risk and of the tumor volume is crucial. The development of a precise eye model and the automatic adaptation of this model to the patient's anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling for external beam radiation therapy of intraocular tumors.
Abstract:
OBJECTIVES: Donation after circulatory declaration of death (DCDD) could significantly increase the number of cardiac grafts available for transplantation. Graft evaluation is particularly important in the setting of DCDD given that the conditions of cardio-circulatory arrest and warm ischaemia differ, leading to variable tissue injury. The aim of this study was to identify, at the time of heart procurement, means to predict contractile recovery following cardioplegic storage and reperfusion using an isolated rat heart model. Identification of reliable approaches to evaluate cardiac grafts is key to the development of protocols for heart transplantation with DCDD.
METHODS: Hearts isolated from anaesthetized male Wistar rats (n = 34) were exposed to various perfusion protocols. To simulate DCDD conditions, rats were exsanguinated and maintained at 37°C for 15-25 min (warm ischaemia). Isolated hearts were perfused with modified Krebs-Henseleit buffer for 10 min (unloaded), arrested with cardioplegia, stored for 3 h at 4°C and then reperfused for 120 min (unloaded for 60 min, then loaded for 60 min). Left ventricular (LV) function was assessed using an intraventricular micro-tip pressure catheter. Statistical significance was determined using the non-parametric Spearman rho correlation analysis.
RESULTS: After 120 min of reperfusion, recovery of LV work, measured as the developed pressure (DP)-heart rate (HR) product, ranged from 0 to 15 ± 6.1 mmHg·beats·min⁻¹·10⁻³ following warm ischaemia of 15-25 min. Several haemodynamic parameters measured during early, unloaded perfusion at the time of heart procurement, including HR and the peak systolic pressure-HR product, correlated significantly with contractile recovery after cardioplegic storage and 120 min of reperfusion (P < 0.001). Coronary flow, oxygen consumption and lactate dehydrogenase release also correlated significantly with contractile recovery following cardioplegic storage and 120 min of reperfusion (P < 0.05).
CONCLUSIONS: Haemodynamic and biochemical parameters measured at the time of organ procurement could serve as predictive indicators of contractile recovery. We believe that evaluation of graft suitability is feasible prior to transplantation with DCDD and may, consequently, increase donor heart availability.
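The correlation analysis named in the methods can be illustrated with a minimal SciPy sketch; the numbers below are placeholders, not data from the study:

    import numpy as np
    from scipy.stats import spearmanr

    # Placeholder values: heart rate during early unloaded perfusion and
    # contractile recovery (DP x HR product) after 120 min of reperfusion.
    hr_early = np.array([220.0, 250.0, 180.0, 300.0, 270.0, 240.0])
    recovery = np.array([4.0, 7.5, 1.0, 14.0, 11.0, 6.0])

    rho, p_value = spearmanr(hr_early, recovery)   # non-parametric rank correlation
    print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")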