10 results for linear calibration model
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of both the different observing sites involved and the huge number of frames expected (≃100000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the “instrument familiarization plan”) and to devise methods to keep under control, and where necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of SPSS imaging data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality-control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
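As an aside, the aperture-photometry stage of such a pipeline can be illustrated in a few lines. The following is a minimal sketch, assuming pre-reduced FITS frames and known source positions; the file path, aperture radius, saturation threshold, and quality-control rule are hypothetical stand-ins, not the pipeline's actual parameters.

```python
# Minimal sketch of an aperture-photometry step with a basic QC flag.
# Frame path, aperture radius and saturation level are illustrative only.
import numpy as np
from astropy.io import fits
from photutils.aperture import CircularAperture, aperture_photometry

def measure_frame(frame_path, positions, radius=8.0, saturation=60000.0):
    """Aperture photometry on one pre-reduced frame; positions are (x, y) pairs."""
    data = fits.getdata(frame_path).astype(float)
    apertures = CircularAperture(positions, r=radius)
    table = aperture_photometry(data, apertures)
    # Hypothetical quality-control criterion: flag sources whose central
    # pixels exceed the detector saturation level.
    table["saturated"] = [
        bool((data[int(y) - 2:int(y) + 3, int(x) - 2:int(x) + 3] > saturation).any())
        for x, y in positions
    ]
    return table
```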
Abstract:
Deformability is often crucial to the design of many civil-engineering structural elements, and design becomes all the more burdensome when both long- and short-term deformability have to be considered. In this thesis, long- and short-term deformability has been studied from both the material and the structural modelling points of view. Two materials have been handled: pultruded composites and concrete. A new finite element model for thin-walled beams has been introduced. As a main assumption, cross-sections are considered rigid in their plane; this hypothesis replaces that of the classical beam theory of plane cross-sections in the deformed state. It also reduces the total number of degrees of freedom, making analysis faster compared with two-dimensional finite elements. Warping in the longitudinal direction is left free, allowing the description of phenomena such as shear lag. The new finite element model has first been applied to concrete thin-walled beams (such as high-span roof girders or bridge girders) subject to instantaneous service loadings. Concrete in its cracked state has been considered through a smeared crack model for beams under bending. At a second stage, the FE model has been extended to the viscoelastic field and applied to pultruded composite beams under sustained loadings. The generalized Maxwell model has been adopted. As far as materials are concerned, long-term creep tests have been carried out on pultruded specimens. Both tension and shear tests have been executed. Some specimens have been strengthened with carbon fibre plies to reduce short- and long-term deformability. Tests have been carried out in a climate room, with specimens kept under constant load for 2 years. As for concrete, a model for tertiary creep has been proposed. The basic idea is to couple the UMLV linear creep model with a damage model in order to describe nonlinearity. An effective strain tensor, weighting the total and the elasto-damaged strain tensors, controls damage evolution through the damage loading function. Creep strains are related to the effective stresses (defined by damage models) and so associated with the intact material.
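For context, the generalized Maxwell model mentioned above expresses the relaxation modulus as a Prony series, E(t) = E∞ + Σᵢ Eᵢ exp(−t/τᵢ). A minimal sketch follows; the moduli and relaxation times are illustrative placeholders, not the values calibrated in the thesis.

```python
# Minimal sketch of a generalized Maxwell (Prony series) relaxation modulus.
# Branch moduli [Pa] and relaxation times [s] below are illustrative only.
import numpy as np

def relaxation_modulus(t, E_inf, branches):
    """E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    return E_inf + sum(E_i * np.exp(-t / tau_i) for E_i, tau_i in branches)

t = np.logspace(-1, 8, 200)                     # time points [s]
E_t = relaxation_modulus(t, E_inf=12.0e9,       # long-term modulus
                         branches=[(6.0e9, 1e3), (4.0e9, 1e6)])
```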
Abstract:
Introduction. The “eversion” technique for carotid endarterectomy (e-CEA), which involves the transection of the internal carotid artery at the carotid bulb and its eversion over the atherosclerotic plaque, has been associated with an increased risk of postoperative hypertension, possibly due to direct iatrogenic damage to the carotid sinus fibers. The aim of this study is to assess the long-term effect of e-CEA on arterial baroreflex and peripheral chemoreflex function in humans. Methods. A retrospective review was conducted on a prospectively compiled computerized database of 3128 CEAs performed on 2617 patients at our Center between January 2001 and March 2006. During this period, a total of 292 patients who had bilateral carotid stenosis ≥70% at the time of the first admission underwent staged bilateral CEAs. Of these, 93 patients had staged bilateral e-CEAs, 126 staged bilateral s-CEAs, and 73 had different procedures on each carotid. CEAs were performed with either the eversion or the standard technique, with routine Dacron patching in all cases. The study inclusion criteria were bilateral CEA with the same technique on both sides and an uneventful postoperative course after both procedures. We decided to enroll patients submitted to bilateral e-CEA to eliminate the background noise from contralateral carotid sinus fibers. Exclusion criteria were: age >70 years, diabetes mellitus, chronic pulmonary disease, symptomatic ischemic cardiac disease or medical therapy with β-blockers, cardiac arrhythmia, permanent neurologic deficits or an abnormal preoperative cerebral CT scan, carotid restenosis, and previous neck or chest surgery or irradiation. Young and age-matched healthy subjects were also recruited as controls. Patients were assessed by the 4 standard cardiovascular reflex tests: lying-to-standing, orthostatic hypotension, deep breathing, and the Valsalva maneuver. Indirect autonomic parameters were assessed with a non-invasive approach based on spectral analysis of EKG RR interval, systolic arterial pressure, and respiration variability, performed with ad hoc software. From the analysis of these parameters the software provides estimates of spontaneous baroreflex sensitivity (BRS). The ventilatory response to hypoxia was assessed in patients and controls by means of classic rebreathing tests. Results. A total of 29 patients (16 males, age 62.4±8.0 years) were enrolled. Overall, 13 patients had undergone bilateral e-CEA (44.8%) and 16 bilateral s-CEA (55.2%), with a mean interval between the procedures of 62±56 days. No patient showed signs or symptoms of autonomic dysfunction, including labile hypertension, tachycardia, palpitations, headache, inappropriate diaphoresis, pallor or flushing. The results of standard cardiovascular autonomic tests showed no evidence of autonomic dysfunction in any of the enrolled patients. At spectral analysis, residual baroreflex performance was shown in both patient groups, though reduced, as expected, compared to young controls. Notably, baroreflex function was better maintained in e-CEA compared to standard CEA (BRS at rest: young controls 19.93 ± 2.45 msec/mmHg; age-matched controls 7.75 ± 1.24; e-CEA 13.85 ± 5.14; s-CEA 4.93 ± 1.15; ANOVA P=0.001; BRS at stand: young controls 7.83 ± 0.66; age-matched controls 3.71 ± 0.35; e-CEA 7.04 ± 1.99; s-CEA 3.57 ± 1.20; ANOVA P=0.001). In all subjects, ventilation (V̇E) and oximetry data fitted a linear regression model with r values > 0.8.
One-way analysis of variance showed a significantly higher ΔVE/ΔSaO2 slope in controls compared with both patient groups, which did not differ from each other (-1.37 ± 0.33 compared with -0.33 ± 0.08 and -0.29 ± 0.13 l/min/%SaO2, p<0.05). Similar results were observed for ΔVE/ΔPetO2 (-0.20 ± 0.1 versus -0.01 ± 0.0 and -0.07 ± 0.02 l/min/mmHg, p<0.05). A regression model using treatment, age, baseline FiCO2, and minimum SaO2 achieved showed only treatment as a significant factor in explaining the variance in minute ventilation (R2 = 25%). Conclusions. Overall, we demonstrated that bilateral e-CEA does not imply carotid sinus denervation. As a result of some expected degree of iatrogenic damage, baroreflex performance was lower than that of controls; interestingly, though, it appeared better maintained in e-CEA than in s-CEA. This may be related to changes in the elastic properties of the carotid sinus vascular wall, as the patch is more rigid than the endarterectomized carotid wall that remains in e-CEA. These data confirm the safety of CEA irrespective of the surgical technique, and they have relevant clinical implications for the assessment of the frequent hemodynamic disturbances associated with carotid angioplasty and stenting.
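For illustration, the per-subject slope estimates behind these ΔVE/ΔSaO2 comparisons amount to simple linear regressions. The sketch below uses illustrative values, not the study's measurements.

```python
# Minimal sketch of one subject's ventilation-vs-saturation slope fit.
# The data points are illustrative, not taken from the study.
from scipy.stats import linregress

sao2 = [97.0, 94.5, 92.0, 89.5, 87.0]   # arterial O2 saturation [%]
ve = [8.2, 11.6, 15.1, 18.9, 22.4]      # minute ventilation [l/min]

fit = linregress(sao2, ve)              # slope is delta-VE / delta-SaO2
print(f"slope = {fit.slope:.2f} l/min/%SaO2, r = {fit.rvalue:.2f}")
# Group-wise slopes can then be compared across subjects with one-way ANOVA.
```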
Abstract:
In this thesis we will investigate some properties of one-dimensional quantum systems. From a theoretical point of view, quantum models in one dimension are particularly interesting because they are strongly interacting, since particles cannot avoid each other in their motion and collisions can never be ignored. Yet integrable models often generate new and non-trivial solutions, which could not be found perturbatively. In this dissertation we shall focus on two important aspects of integrable one-dimensional models: their entanglement properties at equilibrium and their dynamical correlators after a quantum quench. The first part of the thesis will therefore be devoted to the study of the entanglement entropy in one-dimensional integrable systems, with a special focus on the XYZ spin-1/2 chain, which, in addition to being integrable, is also an interacting model. We will derive its Rényi entropies in the thermodynamic limit, and their behaviour in different phases and for different values of the mass gap will be analysed. In the second part of the thesis we will instead study the dynamics of correlators after a quantum quench, which represent a powerful tool to measure how perturbations and signals propagate through a quantum chain. The emphasis will be on the transverse field Ising chain and the O(3) non-linear sigma model, which will both be studied by means of a semi-classical approach. Moreover, in the last chapter we will demonstrate a general result about the dynamics of correlation functions of local observables after a quantum quench in integrable systems. In particular, we will show that if there are no long-range interactions in the final Hamiltonian, then the dynamics of the model (non-equal-time correlations) is described by the same statistical ensemble that describes its static properties (equal-time correlations).
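For reference, the Rényi entropies mentioned above follow the standard definition for the reduced density matrix ρ_A of a subsystem A, reducing to the von Neumann entropy as α → 1:

```latex
S_\alpha = \frac{1}{1-\alpha}\,\log \operatorname{Tr} \rho_A^{\alpha},
\qquad
\lim_{\alpha \to 1} S_\alpha = -\operatorname{Tr}\!\left(\rho_A \log \rho_A\right).
```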
Abstract:
Finite element techniques for solving the problem of fluid-structure interaction of an elastic solid material in a laminar incompressible viscous flow are described. The mathematical problem consists of the Navier-Stokes equations in the Arbitrary Lagrangian-Eulerian formulation coupled with a non-linear structure model, considering the problem as one continuum. The coupling between the structure and the fluid is enforced inside a monolithic framework which solves simultaneously for the fluid and structure unknowns within a single solver. We used the well-known Crouzeix-Raviart finite element pair for discretization in space and the method of lines for discretization in time. A stability result using the backward Euler time-stepping scheme for both the fluid and solid parts and the finite element method for the space discretization has been proved. The resulting linear system has been solved by multilevel domain decomposition techniques. Our strategy is to solve several local subproblems over subdomain patches using the Schur complement or GMRES smoother within a multigrid iterative solver. For validation and evaluation of the accuracy of the proposed methodology, we present corresponding results for a set of two FSI benchmark configurations which describe the self-induced elastic deformation of a beam attached to a cylinder in a laminar channel flow, allowing stationary as well as periodically oscillating deformations, and for a benchmark proposed by COMSOL Multiphysics in which a narrow vertical structure attached to the bottom wall of a channel bends under the force due to both viscous drag and pressure. Then, as an example of fluid-structure interaction in biomedical problems, we considered the academic numerical test which consists of simulating the pressure wave propagation through a straight compliant vessel. All the tests show the applicability and the numerical efficiency of our approach for both two-dimensional and three-dimensional problems.
Abstract:
This doctoral thesis presents a project carried out in secondary schools located in the city of Ferrara with the primary objective of demonstrating the effectiveness of an intervention based on Well-Being Therapy (Fava, 2016) in reducing alcohol use and improving lifestyles. In the first part (chapters 1-3), an introduction on risky behaviors and unhealthy lifestyles in adolescence is presented, followed by an examination of the phenomenon of binge drinking and of the concept of psychological well-being. In the second part (chapters 4-6), the experimental study is presented. A three-arm cluster randomized controlled trial including three test periods was implemented. The study involved eleven classes that were randomly assigned to receive a well-being intervention (WBI), a lifestyle intervention (LI), or no intervention (NI). Results were analyzed by linear mixed models and mixed-effects logistic regression with the aim of testing the efficacy of WBI in comparison with LI and NI. The AUDIT-C total score increased more in NI than in WBI (p=0.008) and LI (p=0.003) at 6 months. The odds of being classified as an at-risk drinker were lower in WBI (OR 0.01; 95%CI 0.01–0.14) and LI (OR 0.01; 95%CI 0.01–0.03) than in NI at 6 months. The odds of using e-cigarettes at 6 months (OR 0.01; 95%CI 0.01–0.35) and cannabis at post-test (OR 0.01; 95%CI 0.01–0.18) were lower in WBI than in NI. Sleep hours at night decreased more in NI than in WBI (p = 0.029) and LI (p = 0.006) at 6 months. Internet addiction scores decreased more in WBI (p = 0.003) and LI (p = 0.004) than in NI at post-test. Conclusions about the obtained results, limitations of the study, and future implications are discussed. In the seventh chapter, the data of the project collected during the pandemic are presented and compared with those from recent literature.
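As an illustration of the first analysis named above, a linear mixed model for a cluster-randomized repeated-measures outcome might be specified as follows. This is a minimal sketch: the data file and column names (audit_c, time, arm, class_id) are hypothetical stand-ins, not the study's actual variables.

```python
# Minimal sketch of a linear mixed model for a cluster-randomized trial
# outcome; file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wbt_trial.csv")  # long format: one row per pupil per test period

# Fixed effects for test period, arm (WBI/LI/NI) and their interaction;
# random intercept for the randomization cluster (the class).
model = smf.mixedlm("audit_c ~ time * arm", data=df, groups=df["class_id"])
result = model.fit()
print(result.summary())
```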
Abstract:
The great challenges of today put great pressure on the food chain to provide safe and nutritious food that meets regulations and consumer health standards. In this context, Risk Analysis is used to produce an estimate of the risks to human health and to identify and implement effective risk-control measures. The aims of this work were 1) to describe how QRA is used to evaluate the risk to consumer health; 2) to address the methodology for obtaining models to apply in QMRA; and 3) to evaluate solutions to mitigate the risk. The application of a QCRA to the Italian milk industry enabled the assessment of aflatoxin M1 exposure and its impact on different population categories, and the comparison of risk-mitigation strategies. The results highlighted the most sensitive population categories and how more stringent sampling plans reduced risk. The application of a QMRA to Spanish fresh cheeses showed how the contamination of this product with Listeria monocytogenes may generate a risk for consumers. Two risk-mitigation actions were evaluated, i.e. reducing shelf life and domestic refrigerator temperature, both proving effective in reducing the risk of listeriosis. A description of the most commonly applied protocols for data generation for predictive model development was provided, to increase transparency and reproducibility and to provide the means for better QMRA. The development of a linear regression model describing the fate of Salmonella spp. in Italian salami during the production process and HPP was described. Alkaline electrolyzed water was evaluated for its potential use in reducing microbial loads on working surfaces, with results showing its effectiveness. This work showed the relevance of QRA, of predictive microbiology, and of new technologies to ensure food safety in a more integrated way. The filling of data gaps, the development of better models, and the inclusion of new risk-mitigation strategies may lead to improvements in the presented QRAs.
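To make the linear-regression primary model concrete, a typical predictive-microbiology fit relates log-transformed counts linearly to process time. The sketch below uses illustrative sampling times and counts, not the thesis data.

```python
# Minimal sketch of fitting a log-linear primary model for microbial counts
# over process time; the data points are illustrative only.
import numpy as np

days = np.array([0, 7, 14, 21, 28])                 # process time [d]
log10_counts = np.array([6.8, 5.9, 5.1, 4.2, 3.5])  # log10 CFU/g

# Linear regression: log10 N(t) = log10 N0 + k * t
k, log10_n0 = np.polyfit(days, log10_counts, 1)
print(f"rate k = {k:.3f} log10/day, initial load = {log10_n0:.2f} log10 CFU/g")
```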
Abstract:
The presented study carried out an analysis of rural landscape changes. In particular, the study focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review revealed a general lack of studies dealing with the modelling of the rural built environment; hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization determined the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the presented study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces that acted on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps covering the different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for the collection of data, the most suitable algorithm to be adopted in relation to the statistical theory and method used, and the calibration and evaluation of the model. A different combination of factors in various parts of the territory generated conditions more or less favourable to building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents which is not suitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of such driving conditions, since it represents the expression of the action of driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and an insight into their level of influence on the process. GIS software, by means of spatial analysis tools, allows the association of the concepts of presence and absence with point features, generating a point process. The presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real existing buildings; conversely, absences represent locations where buildings do not exist, and so they are generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed by following the methodology is applied to a case study to test the validity of the methodology. In particular, the study area for testing the methodology is the New District of Imola, characterized by a prevailing agricultural production vocation and where transformation dynamics intensively occurred. The development of the model involved the identification of predictive variables (related to the geomorphologic, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The calibration of the model is carried out on spatial data regarding the periurban and rural parts of the study area within the 1975-2005 time period, by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells assume probability values of building occurrence, ranging from 0 to 1, over the rural and periurban parts of the study area. The response variable thus assesses the changes in the rural built environment which occurred in this time interval and is correlated to the selected explanatory variables by means of a generalized linear model using logistic regression. Comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretative capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends which occurred in other study areas, and with reference to different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, together with the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short/medium-range future scenarios for the rural built environment distribution in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
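To sketch the calibration step described above, presence/absence of buildings can be fitted to candidate driving forces with a logit-link generalized linear model. The covariate names (slope, road_dist, town_dist) and the data file are hypothetical stand-ins for the thesis's explanatory variables.

```python
# Minimal sketch of presence/absence calibration via a binomial GLM
# (logistic regression); column and file names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("building_points.csv")  # sampled presence/absence points
X = sm.add_constant(df[["slope", "road_dist", "town_dist"]])
y = df["building_present"]               # 1 = existing building, 0 = generated absence

glm = sm.GLM(y, X, family=sm.families.Binomial())
fit = glm.fit()
# Predicted values are probabilities in [0, 1], mappable as a grid surface.
print(fit.summary())
```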
Abstract:
The instability of river banks can result in considerable human and land losses. The Po River is the most important river in Italy, characterized by main banks of significant and constantly increasing height. This study presents multilayer perceptron artificial neural network (ANN) models for the stability analysis of river banks along the Po River, under various river and groundwater boundary conditions. To this aim, a number of threshold logic unit networks are tested using different combinations of the input parameters. The factor of safety (FS), as an index of slope stability, is formulated in terms of several influencing geometrical and geotechnical parameters. In order to obtain a comprehensive geotechnical database, several cone penetration tests from the study site have been interpreted. The proposed models are developed upon stability analyses using a finite element code over different representative sections of the river embankments. For validity verification, the ANN models are employed to predict the FS values of a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability, and that they notably outperform the derived multiple linear regression models.
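A multilayer-perceptron FS predictor of the kind described can be sketched as below. The six input features and the random placeholder data are assumptions for illustration, not the thesis's actual parameter set or database.

```python
# Minimal sketch of an MLP regressor for the factor of safety (FS).
# Placeholder data stands in for the geotechnical database; in the study,
# targets would come from finite element stability analyses.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical inputs: e.g. bank height, slope angle, friction angle,
# cohesion, river level, groundwater level.
X_train = np.random.rand(200, 6)        # placeholder training inputs
y_train = 1.0 + np.random.rand(200)     # placeholder FS targets

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,),
                                   max_iter=5000, random_state=0))
model.fit(X_train, y_train)
fs_pred = model.predict(np.random.rand(5, 6))  # FS for unseen sections
```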
Abstract:
Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), arising when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region, in Italy, is characterized by an abundance of zero values and the right-skewness of the distribution of positive amounts. Rain gauge direct measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting the rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications differently addressing the COSP are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. The communication and evaluation of probabilistic, point, and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing the calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Rank Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
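The two-part semicontinuous structure described above (probit link for rain occurrence, Gamma regression for positive amounts, radar as covariate on the log scale) can be sketched in a non-Bayesian, non-spatial form as follows. Column and file names are hypothetical, and the latent spatial Gaussian effects and MCMC machinery of the thesis are deliberately omitted.

```python
# Minimal non-spatial sketch of a two-part semicontinuous rainfall model;
# file and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("gauge_radar.csv")        # hourly gauge/radar pairs
X = sm.add_constant(np.log1p(df["radar"]))  # radar on the log scale

# Part 1: probability of rain at the gauge (probit link).
occ = sm.GLM((df["rain"] > 0).astype(int), X,
             family=sm.families.Binomial(link=sm.families.links.Probit()))
occ_fit = occ.fit()

# Part 2: positive rainfall amounts (Gamma distribution, log link).
pos = df["rain"] > 0
amt = sm.GLM(df.loc[pos, "rain"], X[pos],
             family=sm.families.Gamma(link=sm.families.links.Log()))
amt_fit = amt.fit()

# Predictive mean combines the two parts: P(rain) * E[amount | rain > 0].
pred_mean = occ_fit.predict(X) * amt_fit.predict(X)
```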