925 results for Capability curves
Abstract:
The research is aimed at contributing to the identification of reliable, fully predictive Computational Fluid Dynamics (CFD) methods for the numerical simulation of equipment typically adopted in the chemical and process industries. The apparatuses selected for the investigation, namely membrane modules, stirred vessels and fluidized beds, are characterized by different and often complex fluid dynamic behaviour, and in some cases the momentum transfer phenomena are coupled with mass transfer or multiphase interactions. First, a novel CFD-based modelling approach for the prediction of the gas separation process in membrane modules for hydrogen purification is developed. The reliability of the numerically calculated gas velocity field is assessed by comparing the predictions with experimental velocity data collected by Particle Image Velocimetry, while the applicability of the model to predicting the separation process over a wide range of operating conditions is assessed through a strict comparison with permeation experimental data. Then, the effect of numerical issues on the RANS-based predictions of single-phase stirred tanks is analysed. The homogenisation process of a scalar tracer is also investigated, and simulation results are compared with original passive tracer homogenisation curves determined with Planar Laser Induced Fluorescence. The capability of a CFD approach based on the solution of the RANS equations to describe the fluid dynamic characteristics of the dispersion of organics in water is also investigated. Finally, an Eulerian-Eulerian fluid dynamic model is used to simulate mono-disperse suspensions of Geldart Group A particles fluidized by a Newtonian incompressible fluid, as well as binary segregating fluidized beds of particles differing in size and density. The results obtained under a number of different operating conditions are compared with literature experimental data, and the effect of numerical uncertainties on axial segregation is also discussed.
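For reference, the incompressible RANS momentum balance underlying the stirred-tank and dispersion simulations can be written in the standard form below; the abstract does not state which closure for the Reynolds stresses is adopted, so none is assumed here:

\[
\frac{\partial \overline{u}_i}{\partial t} + \overline{u}_j \frac{\partial \overline{u}_i}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \overline{p}}{\partial x_i}
+ \nu \frac{\partial^2 \overline{u}_i}{\partial x_j \partial x_j}
- \frac{\partial \overline{u_i' u_j'}}{\partial x_j},
\qquad
\frac{\partial \overline{u}_i}{\partial x_i} = 0,
\]

where the Reynolds stress term \(\overline{u_i' u_j'}\) must be closed by a turbulence model before the equations can be solved numerically.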
Abstract:
The present research aims at shedding light on the demanding puzzle characterizing the issue of child undernutrition in India. The so-called 'Indian development paradox' denotes the phenomenon whereby a higher level of income per capita is recorded alongside a sluggish reduction in the proportion of underweight children aged below three years. In the period from 2000 to 2005, real Gross Domestic Product per capita grew at 5.4% annually, whereas the proportion of underweight children declined from 47% to 46%, a mere one percentage point. This trend opens up space for discussing the traditionally assumed linkage between income-poverty and undernutrition, as well as food intervention as the main focus of policies designed to fight child hunger. It also invites an evaluation of an alternative economic approach to explaining undernutrition, the Capability Approach, which argues for widening the informational basis to account not only for resources, but also for variables related to liberties, opportunities and autonomy in pursuing what individuals value. The econometric analysis highlights the relevance of including behavioural factors when explaining child undernutrition. In particular, the ability of the mother to move freely in the community without having to ask permission from her husband or mother-in-law is statistically significant when included in the model, which also accounts for traditional confounding variables such as economic wealth and food security. Focusing on agency, the results indicate the need to measure autonomy across different domains and to improve the measurement scale for agency data, especially with regard to the domain of household duties. Finally, future research is required to investigate policy avenues for increasing the agency of women and of the communities they live in as a viable strategy for reducing child undernutrition in India.
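The abstract does not report the exact model, but a generic binary-outcome specification consistent with the variables it mentions (all variable names here are hypothetical placeholders) would be:

\[
\Pr(\text{underweight}_i = 1) = \Lambda\!\left(\beta_0 + \beta_1\,\text{freedom of movement}_i + \beta_2\,\text{wealth}_i + \beta_3\,\text{food security}_i + \mathbf{x}_i^{\top}\boldsymbol{\gamma}\right),
\]

where \(\Lambda(\cdot)\) is a logistic link and \(\mathbf{x}_i\) collects further controls; the reported finding is that the coefficient on the mother's freedom of movement remains statistically significant after conditioning on wealth and food security.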
Abstract:
Laser shock peening (LSP) is a technique similar to shot peening that imparts compressive residual stresses in materials to improve fatigue resistance. The ability to use a high-energy laser pulse to generate shock waves, inducing a compressive residual stress field in metallic materials, has applications in multiple fields such as turbo-machinery, airframe structures, and medical appliances. The transient nature of the LSP phenomenon and the high rate of the laser dynamics make real-time in-situ measurement of the laser/material interaction very challenging. For this reason, and because of the high cost of experimental tests, reliable analytical methods for predicting the detailed effects of LSP are needed to understand the potential of the process. The aim of this work was the prediction of the residual stress field after the laser peening process by means of Finite Element Modelling. The work was carried out in the Stress Methods department of Airbus Operations GmbH (Hamburg) and includes an investigation of the compressive residual stresses induced by laser shock peening, a mesh sensitivity study, optimization and tuning of the model using physical and numerical parameters, and validation of the model against experimental results. The model was built with the Abaqus/Explicit commercial software, starting from considerations made in previous works. FE analyses are mesh sensitive: by increasing the number of elements and decreasing their size, the software can resolve finer details of the real phenomenon; however, these details can also be a mere amplification of the real phenomenon, so the size and number of the mesh elements had to be optimized. A new model was created with a finer mesh in the through-thickness direction, which is the direction most involved in the process deformations. This increase in the global number of elements was offset by an in-plane size reduction of the elements far from the peened area in order to avoid excessive computational costs. The efficiency and stability of the analyses were improved by using bulk viscosity coefficients, a purely numerical parameter available in Abaqus/Explicit. A plastic rate sensitivity study was also carried out and a new set of Johnson-Cook model coefficients was chosen. These investigations led to a more controllable and reliable model, valid even for more complex geometries. Moreover, the study of the material properties highlighted a gap in the model concerning the simulation of the surface conditions. Modelling of the ablative layer employed during the real process was used to fill this gap. In the real process the ablative layer is a very thin sheet of pure aluminium stuck on the workpiece; in the simulation it was simply reproduced as a 100 µm layer made of a material with a yield stress of 10 MPa. All these new settings were applied to a set of analyses with different geometry models to verify the robustness of the model. The calibration of the model against the experimental results was based on stress and displacement measurements carried out both on the surface and in depth. The good correlation between the simulation and the experimental test results proved the model to be reliable.
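For context, the Johnson-Cook flow stress model whose coefficients are re-tuned in this work is commonly written as follows; the specific coefficient values chosen are not given in the abstract:

\[
\sigma_y = \left(A + B\,\varepsilon_p^{\,n}\right)\left(1 + C \ln \frac{\dot{\varepsilon}_p}{\dot{\varepsilon}_0}\right)\left(1 - T^{*m}\right),
\qquad
T^{*} = \frac{T - T_{\text{room}}}{T_{\text{melt}} - T_{\text{room}}},
\]

where \(A\), \(B\) and \(n\) describe strain hardening, \(C\) governs the strain-rate sensitivity investigated in the plastic rate sensitivity study, and \(m\) governs thermal softening.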
Abstract:
The diagnosis, grading and classification of tumours has benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types owing to its capability to detect active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast agent concentration curves versus time is a very simple yet operator-dependent procedure, so more objective approaches have been developed to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the time series raw data is used for tissue classification. The main issue with these schemes is that they do not have a direct interpretation in terms of the physiological properties of the tissues. On the other hand, model-based investigations typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest appropriately selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, non-linear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for the segmentation and classification of breast lesions. The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for a classic bi-compartmental kinetic model; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques trained for the segmentation and classification of breast lesions.
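As a reference for the bi-compartmental modelling mentioned above, the standard Tofts-type two-compartment formulation commonly used in DCE-MRI reads as follows; the abstract does not specify which parametrization variant the thesis adopts:

\[
C_t(t) = K^{\text{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}(t-\tau)}\, d\tau,
\qquad
k_{ep} = \frac{K^{\text{trans}}}{v_e},
\]

where \(C_t\) and \(C_p\) are the tissue and plasma contrast agent concentrations, \(K^{\text{trans}}\) is the transfer constant related to permeability and flow, and \(v_e\) is the extravascular-extracellular volume fraction; choosing to fit \((K^{\text{trans}}, k_{ep})\) rather than \((K^{\text{trans}}, v_e)\) is exactly the kind of parametrization choice whose influence is evaluated.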
Abstract:
This thesis provides efficient and robust algorithms for the computation of the intersection curve between a torus and a simple surface (e.g. a plane, a natural quadric or another torus), based on algebraic and numeric methods. The algebraic part includes the classification of the topological type of the intersection curve and the detection of degenerate situations such as embedded conic sections and singularities. Moreover, reference points for each connected intersection curve component are determined. The required computations are realised efficiently by solving polynomials of at most degree four, and exactly by using exact arithmetic. The numeric part includes algorithms for tracing each intersection curve component, starting from the previously computed reference points. Using interval arithmetic, accidental incorrectness such as jumping between branches or skipping parts of the curve is prevented. Furthermore, the neighbourhoods of singularities are treated correctly. Our algorithms are complete in the sense that any kind of input can be handled, including degenerate and singular configurations. They are verified, since the results are topologically correct and approximate the real intersection curve up to any given error bound. The algorithms are robust, since no human intervention is required, and they are efficient in that the treatment of high-degree algebraic equations is avoided.
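For illustration, a torus centred at the origin with axis along z, major radius R and minor radius r is a quartic implicit surface, so its intersection with a plane is an algebraic curve of degree at most four; this is what makes a classification based on solving at most quartic polynomials possible:

\[
F(x,y,z) \;=\; \left(x^2 + y^2 + z^2 + R^2 - r^2\right)^2 \;-\; 4R^2\left(x^2 + y^2\right) \;=\; 0.
\]

Substituting a parametrization of the second surface (a plane, a natural quadric or another torus) into \(F\) yields the algebraic equation whose topological type and degenerate cases (embedded conics, singular points) the algebraic part of the thesis classifies.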
Abstract:
Despite the scientific achievements of the last decades in the astrophysical and cosmological fields, the majority of the energy content of the Universe is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach this goal, Monte Carlo simulations are mandatory to estimate the background. To this end, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, both electromagnetic and nuclear. From the analysis of the simulations, the background level has been found to be fully acceptable for the purposes of the experiment: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross section has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with an agreement within less than a factor of two; such an agreement is fully acceptable considering the intrinsic differences between the two statistical methods. The thesis thus shows that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about 2 orders of magnitude with respect to current experiments.
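For reference, in Yellin's Maximum Gap method the upper limit on the expected number of signal events \(\mu\) is obtained from the probability that the largest gap between observed events (expressed in expected-event units, \(x\)) is smaller than the one actually observed:

\[
C_0(x,\mu) \;=\; \sum_{k=0}^{m} \frac{(kx-\mu)^k e^{-kx}}{k!}\left(1 + \frac{k}{\mu - kx}\right),
\qquad
m = \left\lfloor \mu/x \right\rfloor,
\]

and the 90% CL limit is the value of \(\mu\) for which \(C_0 = 0.90\); converting \(\mu\) into a WIMP-nucleon cross section through the exposure and detection efficiency then yields sensitivity figures of the kind quoted above.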
Abstract:
This thesis studies some fundamental properties of the Zeta and L functions associated with an elliptic curve. In particular, the rationality of the Zeta function and the Riemann hypothesis are proved for two specific families of elliptic curves. The problem of the existence of an analytic continuation to the complex plane of the L function of an elliptic curve with complex multiplication is then studied, through the direct analysis of two particular cases.
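As background for the properties studied, the Zeta function of a curve \(C\) over a finite field \(\mathbb{F}_q\) is defined from the point counts \(N_m = \#C(\mathbb{F}_{q^m})\) as

\[
Z(C/\mathbb{F}_q, T) \;=\; \exp\!\left(\sum_{m \ge 1} \frac{N_m}{m}\, T^m\right).
\]

Rationality means that \(Z\) is a rational function of \(T\); for an elliptic curve \(E\) one has \(Z(E/\mathbb{F}_q,T) = \dfrac{1 - a_q T + qT^2}{(1-T)(1-qT)}\), and the Riemann hypothesis in this setting is the statement \(|a_q| \le 2\sqrt{q}\), i.e. the reciprocal roots of the numerator have absolute value \(\sqrt{q}\).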
Abstract:
Arterial pressure-based cardiac output monitors (APCOs) are increasingly used as alternatives to thermodilution. Validation of these evolving technologies in high-risk surgery is still ongoing. In liver transplantation, FloTrac-Vigileo (Edwards Lifesciences) has limited correlation with thermodilution, whereas LiDCO Plus (LiDCO Ltd.) has not been tested intraoperatively. Our goal was to directly compare the 2 proprietary APCO algorithms as alternatives to pulmonary artery catheter thermodilution in orthotopic liver transplantation (OLT). The cardiac index (CI) was measured simultaneously in 20 OLT patients at prospectively defined surgical landmarks with the LiDCO Plus monitor (CI_L) and the FloTrac-Vigileo monitor (CI_V). LiDCO Plus was calibrated according to the manufacturer's instructions. FloTrac-Vigileo did not require calibration. The reference CI was derived from pulmonary artery catheter intermittent thermodilution (CI_TD). The CI_V - CI_TD bias ranged from -1.38 L/minute/m^2 (95% confidence interval = -2.02 to -0.75 L/minute/m^2, P = 0.02) to -2.51 L/minute/m^2 (95% confidence interval = -3.36 to -1.65 L/minute/m^2, P < 0.001), and the CI_L - CI_TD bias ranged from -0.65 L/minute/m^2 (95% confidence interval = -1.29 to -0.01 L/minute/m^2, P = 0.047) to -1.48 L/minute/m^2 (95% confidence interval = -2.37 to -0.60 L/minute/m^2, P < 0.01). For both APCOs, the bias to CI_TD was correlated with the systemic vascular resistance index, with a stronger dependence for FloTrac-Vigileo. The capability of the APCOs for tracking changes in CI_TD was assessed with a 4-quadrant plot for directional changes and with receiver operating characteristic curves for specificity and sensitivity. The performance of both APCOs was poor in detecting increases and fair in detecting decreases in CI_TD. In conclusion, the calibrated and uncalibrated APCOs perform differently during OLT. Although the calibrated APCO is less influenced by changes in the systemic vascular resistance, neither device can be used interchangeably with thermodilution to monitor cardiac output during liver transplantation.
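The abstract does not describe its statistical code, but a minimal sketch of the two analyses it reports (mean bias with its 95% confidence interval, and four-quadrant concordance of directional changes) could look like the following; the function names, the 0.5 L/minute/m^2 exclusion zone and the data layout are assumptions for illustration only:

    import numpy as np
    from scipy import stats

    def bias_with_ci(apco, td, alpha=0.05):
        """Mean bias (APCO - thermodilution) with its 95% confidence interval.

        apco, td : arrays of simultaneous cardiac index readings (L/minute/m^2).
        """
        diff = np.asarray(apco, dtype=float) - np.asarray(td, dtype=float)
        mean = diff.mean()
        sem = diff.std(ddof=1) / np.sqrt(diff.size)
        half = stats.t.ppf(1 - alpha / 2, diff.size - 1) * sem
        return mean, (mean - half, mean + half)

    def concordance_rate(d_apco, d_td, exclusion=0.5):
        """Four-quadrant concordance: fraction of paired changes with the same sign.

        d_apco, d_td : changes in CI between consecutive surgical landmarks.
        exclusion    : central zone (L/minute/m^2) excluded as clinically trivial.
        """
        d_apco, d_td = np.asarray(d_apco, dtype=float), np.asarray(d_td, dtype=float)
        keep = (np.abs(d_apco) > exclusion) & (np.abs(d_td) > exclusion)
        return float(np.mean(np.sign(d_apco[keep]) == np.sign(d_td[keep])))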
Abstract:
Little is known about the learning of the skills needed to perform ultrasound- or nerve stimulator-guided peripheral nerve blocks. The aim of this study was to compare the learning curves of residents trained in ultrasound guidance versus residents trained in nerve stimulation for axillary brachial plexus block. Ten residents with no previous experience of using ultrasound received ultrasound training, and another ten residents with no previous experience of using nerve stimulation received nerve stimulation training. The novices' learning curves were generated by retrospective analysis of data from our electronic anaesthesia database. Individual success rates were pooled, and the institutional learning curve was calculated using a bootstrapping technique in combination with a Monte Carlo simulation procedure. The skills required to perform successful ultrasound-guided axillary brachial plexus block can be learnt faster and lead to a higher final success rate than nerve stimulator-guided axillary brachial plexus block.
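The abstract does not detail the resampling procedure, but a minimal sketch of a bootstrapped institutional learning curve built from pooled per-resident success/failure sequences might look like this; the cumulative-success-rate summary, the number of resamples and the data layout are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def institutional_learning_curve(block_outcomes, n_boot=2000):
        """Bootstrap an institutional learning curve from pooled novice data.

        block_outcomes : list of per-resident sequences of 1 (success) / 0 (failure),
                         ordered by each resident's case number.
        Returns the mean cumulative success rate per case number and a 95% band.
        """
        n_cases = min(len(seq) for seq in block_outcomes)
        data = np.array([seq[:n_cases] for seq in block_outcomes], dtype=float)
        curves = []
        for _ in range(n_boot):
            # Resample residents with replacement (Monte Carlo over the cohort).
            sample = data[rng.integers(0, len(data), size=len(data))]
            # Cumulative success rate as a function of case number.
            cum_rate = sample.cumsum(axis=1) / np.arange(1, n_cases + 1)
            curves.append(cum_rate.mean(axis=0))
        curves = np.array(curves)
        mean = curves.mean(axis=0)
        lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)
        return mean, lo, hi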
Abstract:
As the number of solutions to the Einstein equations with realistic matter sources that admit closed time-like curves (CTCs) has grown drastically, some authors [10] have called for a physical interpretation of these seemingly exotic curves that could possibly allow for causality violations. A first step towards a physical interpretation is to understand how CTCs are created, because recent work [16] has suggested that, to follow a CTC, observers must counter-rotate with the rotating matter, contrary to the currently accepted explanation that CTCs are created by inertial frame dragging. The exact link between inertial frame dragging and CTCs is investigated by simulating particle geodesics and the precession of gyroscopes along CTCs and along backward-in-time oriented circular orbits in the van Stockum metric, which is known to have CTCs that could be traversable, so that the van Stockum cylinder could be exploited as a time machine. This study of gyroscope precession in the van Stockum metric supports the theory that CTCs are produced by inertial frame dragging due to rotating spacetime metrics.
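For reference, a commonly quoted form of the interior van Stockum dust solution, in cylindrical coordinates with angular-velocity parameter \(a\), is

\[
ds^2 = -\left(dt - a r^2\, d\varphi\right)^2 + r^2\, d\varphi^2 + e^{-a^2 r^2}\left(dr^2 + dz^2\right),
\]

so the azimuthal metric component is \(g_{\varphi\varphi} = r^2\left(1 - a^2 r^2\right)\), which becomes negative for \(ar > 1\): there the closed circles of constant \(t\), \(r\), \(z\) are time-like, i.e. CTCs, which is the regime probed by the simulated orbits and gyroscopes.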
Improvement of vulnerability curves using data from extreme events: debris flow event in South Tyrol