958 results for stochastic numerical methods
Abstract:
Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to the modelling and processing of biomedical signals in two different contexts: ultrasound medical imaging systems and the analysis and modelling of primate neural activity. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio-frequency signal measured from an ultrasonic transducer is derived. This model is then employed to develop, within a statistical framework, a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue, extracting different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, a bio-inspired dynamical network was developed to support studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in the parietal area 7a of primate monkeys during the execution of learned behavioural tasks.
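As an illustration of the regularized-deconvolution step, here is a generic Tikhonov-style sketch in Python; it is not the thesis's actual estimator, and the Gaussian pulse, noise level and regularization weight lam are all assumptions:

```python
import numpy as np
from scipy.linalg import toeplitz

# Minimal Tikhonov-regularized deconvolution (illustrative only).
# Model: y = H x + noise, with H the convolution matrix of a pulse h.
rng = np.random.default_rng(0)
n = 200
h = np.exp(-0.5 * ((np.arange(20) - 10) / 3.0) ** 2)         # hypothetical transducer pulse
x_true = np.zeros(n)
x_true[[40, 90, 150]] = [1.0, -0.7, 0.5]                     # sparse tissue reflectivity
H = toeplitz(np.r_[h, np.zeros(n - h.size)],                 # causal convolution matrix
             np.r_[h[0], np.zeros(n - 1)])
y = H @ x_true + 0.01 * rng.standard_normal(n)               # noisy RF trace

lam = 1e-2                                                   # regularization weight (assumed)
x_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)  # regularized estimate
```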
Abstract:
Stress recovery techniques have been an active research topic in the last few years, since Zienkiewicz and Zhu proposed in 1992 a procedure called Superconvergent Patch Recovery (SPR). This procedure is a least-squares fit of stresses at superconvergent points over patches of elements, and it leads to enhanced stress fields that can be used for evaluating finite element discretization errors. In subsequent years, numerous improved forms of this procedure have been proposed, attempting to add equilibrium constraints to improve its performance. Later, another superconvergent technique, called Recovery by Equilibrium in Patches (REP), was proposed. In this case the idea is to impose equilibrium in a weak form over patches and solve the resulting equations by a least-squares scheme. In recent years another procedure, based on minimization of complementary energy, called Recovery by Compatibility in Patches (RCP), has been proposed. This procedure can in many ways be seen as the dual form of REP, as it essentially imposes compatibility in a weak form among a set of self-equilibrated stress fields. In this thesis new insight into RCP is presented, and the procedure is improved with the aim of obtaining convergent second-order derivatives of the stress resultants. In order to achieve this result, two different strategies and their combination have been tested. The first is to consider larger patches, in the spirit of what is proposed in [4]; the second is to perform a second recovery on the recovered stresses. Some numerical tests in plane stress conditions are presented, showing the effectiveness of these procedures. Afterwards, a new recovery technique called Least Squares Displacements (LSD) is introduced. This new procedure is based on a least-squares interpolation of the nodal displacements resulting from the finite element solution. In fact, it has been observed that the major part of the error affecting the stress resultants is introduced when the shape functions are differentiated in order to obtain strain components from displacements. This procedure proves to be ultraconvergent and is extremely cost-effective, as it requires as input only the nodal displacements coming directly from the finite element solution, avoiding any other post-processing needed to obtain stress resultants with the traditional method. Numerical tests in plane stress conditions are then presented, showing that the procedure is ultraconvergent and leads to convergent first- and second-order derivatives of the stress resultants. Finally, the reconstruction of transverse stress profiles using First-order Shear Deformation Theory for laminated plates and the three-dimensional equilibrium equations is presented. The accuracy of this reconstruction depends on the accuracy of the first and second derivatives of the stress resultants, which is not guaranteed by most available low-order plate finite elements. The RCP and LSD procedures are then used to compute convergent first- and second-order derivatives of the stress resultants, ensuring convergence of the reconstructed transverse shear and normal stress profiles, respectively. Numerical tests are presented and discussed, showing the effectiveness of both procedures.
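A toy Python sketch of the least-squares fit at the core of SPR-type recovery; the sampling points, stress samples and linear polynomial basis are hypothetical:

```python
import numpy as np

# Illustrative least-squares recovery over a 2-D patch (SPR flavour).
# Fit sigma(x, y) ~ a0 + a1*x + a2*y to stresses sampled at superconvergent points.
pts = np.array([[0.2, 0.2], [0.8, 0.2], [0.2, 0.8], [0.8, 0.8]])  # hypothetical sampling points
sigma = np.array([1.05, 1.42, 0.98, 1.37])                        # sampled stresses (hypothetical)

P = np.c_[np.ones(len(pts)), pts]            # polynomial basis [1, x, y] at each point
coef, *_ = np.linalg.lstsq(P, sigma, rcond=None)

node = np.array([0.5, 0.5])                  # patch assembly node
sigma_recovered = coef @ np.r_[1.0, node]    # recovered (smoothed) nodal stress
```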
Abstract:
Magnetic resonance imaging (MRI) is today precluded for patients bearing active implantable medical devices (AIMDs). The great advantages of this diagnostic modality, together with the increasing number of people benefiting from implantable devices, in particular pacemakers (PMs) and implantable cardioverter-defibrillators (ICDs), are prompting the scientific community to study the possibility of extending MRI to implanted patients as well. The MRI-induced specific absorption rate (SAR) and the consequent heating of biological tissues are among the major concerns that make patients bearing metallic structures contraindicated for MRI scans. To date, both in-vivo and in-vitro studies have demonstrated the potentially dangerous temperature increase caused by the radiofrequency (RF) field generated during MRI procedures in the tissues surrounding thin metallic implants. On the other hand, the technical evolution of MRI scanners and of AIMDs, together with published data on the lack of adverse events, has reopened interest in this field and suggests that, under given conditions, MRI can be safely performed also on implanted patients. With a better understanding of the hazards of performing MRI scans on implanted patients, as well as the development of MRI-safe devices, we may soon enter an era in which this imaging modality can be more widely used to assist in the appropriate diagnosis of patients with devices. In this study both experimental measurements and numerical analyses were performed. The aim of the study is to systematically investigate the effects of the MRI RF field on implantable devices and to identify the elements that play a major role in the induced heating. Furthermore, we aimed at developing a realistic numerical model able to simulate the interactions between an RF coil for MRI and biological tissues implanted with a PM, and to predict the induced SAR as a function of the particular path of the PM lead. The methods developed and validated during the PhD program led to the design of an experimental framework for the accurate measurement of PM lead heating induced by MRI systems. In addition, numerical models based on Finite-Difference Time-Domain (FDTD) simulations were validated to obtain a general tool for investigating the large number of parameters and factors involved in this complex phenomenon. The results obtained demonstrate that MRI-induced heating of metallic implants is a real risk and represents a contraindication to extending MRI scans to patients bearing a PM, an ICD, or other thin metallic objects. On the other hand, both the experimental data and the numerical results show that, under particular conditions, MRI procedures might be considered reasonably safe also for an implanted patient. The complexity and the large number of variables involved make it difficult to define a unique set of such conditions: when the benefits of an MRI investigation cannot be obtained using other imaging techniques, the possibility of performing the scan should not be immediately excluded, but careful case-by-case considerations are always needed.
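For orientation, the first-order link between SAR and heating that underlies such studies: over short exposures, before conduction and perfusion matter, the temperature rise is roughly adiabatic, ΔT ≈ SAR·t/c. A back-of-the-envelope sketch with assumed values (not data from this work):

```python
# Back-of-the-envelope adiabatic heating estimate (assumed values, not thesis data).
sar = 10.0    # W/kg, assumed local SAR near a lead tip
t = 360.0     # s, assumed scan duration (6 minutes)
c = 3600.0    # J/(kg*K), assumed specific heat of soft tissue

delta_T = sar * t / c   # ~1 K; conduction and perfusion reduce this in practice
print(f"Adiabatic temperature rise estimate: {delta_T:.2f} K")
```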
Abstract:
Ion channels are pore-forming proteins that regulate the flow of ions across biological cell membranes. Ion channels are fundamental in generating and regulating the electrical activity of cells in the nervous system and the contraction of muscle cells. Solid-state nanopores are nanometer-scale pores located in electrically insulating membranes. They can be adopted as detectors of specific molecules in electrolytic solutions. Permeation of ions from one electrolytic solution to another, through a protein channel or a synthetic pore, is a process of considerable importance, and realistic analyses of the main dependencies of the ion current on the geometrical and compositional characteristics of these structures are highly desirable. The project described in this thesis is an effort to improve the understanding of ion channels by devising computer simulation methods that can predict channel conductance from channel structure. The thesis describes the theory, algorithms and implementation techniques used to develop a novel 3-D numerical simulator of ion channels and synthetic nanopores based on the Brownian Dynamics technique. This numerical simulator could represent a valid tool for the study of protein ion channels and synthetic nanopores, making it possible to investigate at the atomic level the complex electrostatic interactions that determine channel conductance and ion selectivity. Moreover, it will provide insight into how parameters such as temperature, applied voltage, and pore shape influence ion translocation dynamics. Furthermore, it will help predict the conductance of given channel structures and will provide quantities such as the electrostatic potential and ionic concentrations throughout the simulation domain, aiding the understanding of ion flow through membrane pores.
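A minimal sketch of the kind of update a Brownian Dynamics simulator is built on: an Euler-Maruyama step of the overdamped Langevin equation for a single ion. The field, diffusion coefficient and time step below are illustrative, not values from the thesis:

```python
import numpy as np

# One Euler-Maruyama step of overdamped Brownian Dynamics for an ion (illustrative).
# dr = (D/kT) * F dt + sqrt(2 D dt) * xi,  with xi ~ N(0, I)
rng = np.random.default_rng(1)
kT = 4.11e-21        # J, thermal energy at ~298 K
D = 1.33e-9          # m^2/s, assumed diffusion coefficient (K+ in water)
dt = 2e-12           # s, time step
q = 1.602e-19        # C, ion charge

r = np.zeros(3)                     # ion position
E = np.array([0.0, 0.0, 1e7])       # V/m, assumed local electric field in the pore

for _ in range(1000):
    F = q * E                                    # electrostatic force on the ion
    r += (D / kT) * F * dt + np.sqrt(2 * D * dt) * rng.standard_normal(3)
```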
Abstract:
This thesis presents new methods to simulate systems with hydrodynamic and electrostatic interactions. Part 1 is devoted to computer simulations of Brownian particles with hydrodynamic interactions. The main influence of the solvent on the dynamics of Brownian particles is that it mediates hydrodynamic interactions. In the method, this is simulated by numerical solution of the Navier-Stokes equation on a lattice. To this end, the Lattice-Boltzmann method is used, namely its D3Q19 version. This model is capable of simulating compressible flow, which gives us the advantage of treating dense systems, in particular away from thermal equilibrium. The Lattice-Boltzmann equation is coupled to the particles via a friction force. In addition to this force, acting on point particles, we construct another coupling force, which comes from the pressure tensor. The coupling is purely local, i.e. the algorithm scales linearly with the total number of particles. In order to be able to map the physical properties of the Lattice-Boltzmann fluid onto a Molecular Dynamics (MD) fluid, the case of an almost incompressible flow is considered. The Fluctuation-Dissipation theorem for the hybrid coupling is analyzed, and a geometric interpretation of the friction coefficient in terms of a Stokes radius is given. Part 2 is devoted to the simulation of charged particles. We present a novel method for obtaining Coulomb interactions as the potential of mean force between charges which are dynamically coupled to a local electromagnetic field. This algorithm scales linearly, too. We focus on the Molecular Dynamics version of the method and show that it is intimately related to the Car-Parrinello approach, while being equivalent to solving Maxwell's equations with a freely adjustable speed of light. The Lagrangian formulation of the coupled particles-fields system is derived. The quasi-Hamiltonian dynamics of the system is studied in great detail. For implementation on the computer, the equations of motion are discretized with respect to both space and time. The discretization of the electromagnetic fields on a lattice, as well as the interpolation of the particle charges onto the lattice, is given. The algorithm is as local as possible: only nearest-neighbor sites of the lattice interact with a charged particle. Unphysical self-energies arise as a result of the lattice interpolation of charges, and are corrected by a subtraction scheme based on the exact lattice Green's function. The method allows easy parallelization using standard domain decomposition. Some benchmark results of the algorithm are presented and discussed.
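A sketch of the frictional coupling idea in Part 1; the interpolation of the fluid velocity and the full D3Q19 lattice update are omitted, and the numbers are placeholders:

```python
import numpy as np

# Frictional particle-fluid coupling (illustrative; the D3Q19 LB update is omitted).
# The particle feels F = -gamma * (v_particle - u_fluid(r)); momentum conservation
# requires that -F be deposited back on the fluid at the particle position.
gamma = 0.5                      # friction coefficient (lattice units, assumed)
v = np.array([0.1, 0.0, 0.0])    # particle velocity
u = np.array([0.02, 0.0, 0.0])   # fluid velocity interpolated at the particle site

F_particle = -gamma * (v - u)    # drag force on the particle
F_fluid = -F_particle            # equal and opposite force given to the lattice
```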
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models for semiconductors that are later simulated in 2D. In this class of models, the flow of charged particles, that is, of negatively charged electrons and of so-called holes, which are quasi-particles of positive charge, as well as their energy distributions, is described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary - the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. The continuity of the normal fluxes is the most important property of this discretization from the user's perspective. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators. At that stage a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
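Schematically, such estimator-driven adaptivity follows a solve-estimate-mark-refine loop. Below is a sketch with placeholder solve/estimate/refine callbacks; the bulk (Dörfler) marking used here is one common strategy and an assumption, not necessarily the thesis's choice:

```python
import numpy as np

# Schematic solve-estimate-mark-refine loop (placeholders, not the thesis code).
def adaptive_loop(mesh, solve, estimate, refine, theta=0.5, max_iter=10, tol=1e-4):
    for _ in range(max_iter):
        u = solve(mesh)                      # discrete (mixed FE) solution
        eta = estimate(mesh, u)              # elementwise a posteriori indicators
        total = np.sqrt(np.sum(eta**2))
        if total < tol:
            break
        # Dörfler (bulk) marking: smallest set carrying a fraction theta of the error
        order = np.argsort(eta)[::-1]
        cum = np.cumsum(eta[order]**2)
        marked = order[: np.searchsorted(cum, theta**2 * total**2) + 1]
        mesh = refine(mesh, marked)          # refine the marked elements
    return mesh, u
```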
Abstract:
A way to investigate turbulence is through experiments in which hot-wire measurements are performed. The aim of this thesis work is to analyze the influence of a temperature gradient on hot-wire measurements in turbulence. To the author's knowledge, this investigation is the first attempt to document, understand and ultimately correct the effect of temperature gradients on turbulence statistics. A numerical approach is used, since instantaneous temperature and streamwise velocity fields are required to evaluate this effect. A channel flow simulation at Re_tau = 180 is analyzed to make a first evaluation of the amount of error introduced by the temperature gradient inside the domain. The hot-wire data field is obtained by processing the numerical flow field through a suitable version of King's law, which connects voltage, velocity and temperature. A drift in the mean streamwise velocity profile and in the rms is observed when the temperature correction is performed by means of the centerline temperature. A correct mean velocity profile is achieved by correcting the temperature through its mean value at each wall-normal position, but a non-negligible error is still present in the rms. The key point for properly correcting the velocity sensed by the hot wire is knowledge of the instantaneous temperature field. For this purpose three correction methods are proposed. Finally, a numerical simulation at Re_tau = 590 is also evaluated in order to confirm the results discussed earlier.
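A small sketch of the inversion involved: King's law with the standard temperature rescaling of the anemometer voltage. The calibration constants and temperatures are illustrative, not values from the thesis:

```python
# King's law with temperature correction (illustrative coefficients).
# E^2 = A + B * U^n, with the voltage rescaled for the difference between the
# wire and the local fluid temperature: E_corr^2 = E^2 * (T_wire - T_ref)/(T_wire - T)
A, B, n_exp = 1.2, 0.8, 0.45          # assumed calibration constants
T_wire, T_ref = 500.0, 300.0          # K, wire and reference (calibration) temperatures

def velocity(E, T):
    E2 = E**2 * (T_wire - T_ref) / (T_wire - T)   # temperature-corrected voltage squared
    return ((E2 - A) / B) ** (1.0 / n_exp)        # invert King's law for velocity

print(velocity(E=1.6, T=305.0))       # sensed velocity for a hypothetical sample
```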
Abstract:
The goal of this thesis is the application of opto-electronic numerical simulation to heterojunction silicon solar cells featuring an all-back-contact architecture (Interdigitated Back Contact Hetero-Junction, IBC-HJ). The studied structure exhibits both metal contacts, emitter and base, at the back surface of the cell, with the objective of reducing the optical losses due to the shadowing by the front contact of conventional photovoltaic devices. Overall, IBC-HJ cells are promising low-cost alternatives to monocrystalline wafer-based solar cells featuring front- and back-contact schemes; in fact, for IBC-HJ the high-concentration doping diffusions are replaced by low-temperature deposition processes of thin amorphous silicon layers. Furthermore, another advantage of IBC solar cells with respect to conventional architectures is the possibility of enabling low-cost assembly of photovoltaic modules, since all contacts are on the same side. A preliminary extensive literature survey helped to highlight the specific critical aspects of IBC-HJ solar cells as well as the state of the art of their modeling and processing and the performance of practical devices. In order to perform the analysis of IBC-HJ devices, a two-dimensional (2-D) numerical simulation flow has been set up. A commercial device simulator based on the finite-difference method to numerically solve the whole set of equations governing electrical transport in semiconductor materials (Sentaurus Device by Synopsys) has been adopted. The first activity carried out during this work was the definition of a 2-D geometry corresponding to the simulation domain and the specification of the electrical and optical properties of the materials. In order to calculate the main figures of merit of the investigated solar cells, the spatially resolved photon absorption rate map has been calculated by means of an optical simulator. Optical simulations have been performed using two different methods, depending on the geometrical features of the front interface of the solar cell: the transfer matrix method (TMM) and ray tracing (RT). The first method allows light propagation to be modelled by plane waves within one-dimensional spatial domains, under the assumption of devices exhibiting stacks of parallel layers with planar interfaces. In addition, TMM is suitable for the simulation of thin multilayer anti-reflection coatings for the reduction of the amount of light reflected at the front interface. Ray tracing is required for three-dimensional optical simulations of upright-pyramid textured surfaces, which are widely adopted to significantly reduce the reflection at the front surface. The optical generation profiles are interpolated onto the electrical grid adopted by the device simulator, which solves the carrier transport equations coupled with the Poisson and continuity equations in a self-consistent way. The main figures of merit are calculated by post-processing the output data from the device simulation. After validating the simulation methodology by comparing simulation results with literature data, the ultimate efficiency of the IBC-HJ architecture has been calculated. By accounting for all optical losses, IBC-HJ solar cells yield a theoretical maximum efficiency above 23.5% (without texturing at the front interface), higher than that of both standard homojunction crystalline silicon (Homogeneous Emitter, HE) and front-contact heterojunction (Heterojunction with Intrinsic Thin layer, HIT) solar cells. However, the critical aspects of this structure are mainly due to the defect density and to the poor carrier transport mobility in the amorphous silicon layers. Lastly, the influence of the most critical geometrical and physical parameters on the main figures of merit has been investigated by applying the numerical simulation tool set up during the first part of the present thesis. Simulations have highlighted that the carrier mobility and the defect levels in amorphous silicon may lead to a potentially significant reduction of the conversion efficiency.
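As a concrete example of the TMM ingredient mentioned above, here is a compact sketch computing the normal-incidence reflectance of a planar layer stack; the anti-reflection stack and refractive indices are assumed, not the thesis's IBC-HJ structure:

```python
import numpy as np

# Transfer-matrix reflectance of a planar layer stack at normal incidence
# (illustrative stack, not the thesis's IBC-HJ device).
def tmm_reflectance(n_list, d_list, lam):
    # n_list: refractive indices [ambient, layers..., substrate]
    # d_list: thicknesses of the inner layers, same units as lam
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_list[1:-1], d_list):
        phi = 2 * np.pi * n * d / lam                       # phase thickness
        M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
    n0, ns = n_list[0], n_list[-1]
    num = n0 * (M[0, 0] + M[0, 1] * ns) - (M[1, 0] + M[1, 1] * ns)
    den = n0 * (M[0, 0] + M[0, 1] * ns) + (M[1, 0] + M[1, 1] * ns)
    return abs(num / den) ** 2

# 75 nm coating (n = 2.0) on silicon (n = 3.8) in air: near-zero reflectance at 600 nm
print(tmm_reflectance([1.0, 2.0, 3.8], [75.0], lam=600.0))
```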
Abstract:
For the detection of hidden objects by low-frequency electromagnetic imaging the Linear Sampling Method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfills the assumptions for the fully justified variant of the Linear Sampling Method, the so-called Factorization Method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds.
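For context, a schematic sketch of how a Factorization Method indicator is typically evaluated from a discretized measurement operator; the operator F and the test vectors are assumed given, and this is not code from the work itself:

```python
import numpy as np

# Schematic Factorization Method indicator from a discretized measurement
# operator F (sketch; the discretization and test functions are assumed given).
def fm_indicator(F, test_vectors):
    # test_vectors: one column per sampling point z, holding the discretized
    # test function phi_z. Picard criterion: z lies inside the scatterer iff
    # sum_j |<psi_j, phi_z>|^2 / s_j stays bounded, so 1/series peaks inside.
    U, s, _ = np.linalg.svd(F)
    coeffs = U.conj().T @ test_vectors           # expansion in singular vectors
    series = (np.abs(coeffs) ** 2 / s[:, None]).sum(axis=0)
    return 1.0 / series                          # large inside, small outside
```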
Abstract:
This Ph.D. thesis focuses on iterative regularization methods for linear and nonlinear ill-posed problems. Regarding linear problems, three new stopping rules for the Conjugate Gradient method applied to the normal equations are proposed and tested in many numerical simulations, including some tomographic image reconstruction problems. Regarding nonlinear problems, convergence and convergence rate results are provided for a Newton-type method, with a modified version of the Landweber iteration as an inner iteration, in a Banach space setting.
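For context, a sketch of CG on the normal equations (CGLS) stopped by the classical discrepancy principle; the thesis's three new stopping rules are not reproduced here:

```python
import numpy as np

# CGLS with a discrepancy-principle stopping rule (a classical rule, shown for
# context; not one of the thesis's new rules).
def cgls(A, b, noise_level, tau=1.01, max_iter=500):
    x = np.zeros(A.shape[1])
    r = b.copy()                   # residual b - A x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tau * noise_level:   # discrepancy principle
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```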
Abstract:
This work presents a comprehensive methodology for the reduction of analytical or numerical stochastic models characterized by uncertain input parameters or boundary conditions. The technique, based on Polynomial Chaos Expansion (PCE) theory, represents a versatile solution for direct or inverse problems related to the propagation of uncertainty. The potential of the methodology is assessed by investigating different application contexts related to groundwater flow and transport scenarios, such as global sensitivity analysis, risk analysis and model calibration. This is achieved by implementing a numerical code, developed in the MATLAB environment, presented here in its main features and tested on literature examples. The procedure has been conceived under flexibility and efficiency criteria in order to ensure its adaptability to different fields of engineering; it has been applied to different case studies related to flow and transport in porous media. Each application is associated with innovative elements, such as (i) new analytical formulations describing the motion and displacement of non-Newtonian fluids in porous media, (ii) the application of global sensitivity analysis to a high-complexity numerical model inspired by a real case of risk of radionuclide migration in the subsurface environment, and (iii) the development of a novel sensitivity-based strategy for parameter calibration and experiment design in laboratory-scale tracer transport.
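A minimal non-intrusive PCE sketch for a single Gaussian input, fitted by least squares; the toy model stands in for a groundwater flow/transport code and this is not the thesis's MATLAB implementation:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

# Non-intrusive PCE sketch for one standard-Gaussian input (illustrative).
rng = np.random.default_rng(2)

def model(xi):                       # hypothetical expensive model
    return np.exp(0.3 * xi) + 0.1 * xi**2

deg = 4
xi = rng.standard_normal(200)        # samples of the Gaussian germ
Y = model(xi)
# Design matrix of probabilists' Hermite polynomials He_0 .. He_deg
Psi = np.column_stack([hermeval(xi, np.eye(deg + 1)[k]) for k in range(deg + 1)])
coef, *_ = np.linalg.lstsq(Psi, Y, rcond=None)

mean = coef[0]                                                   # E[Y] = c_0
var = sum(factorial(k) * coef[k]**2 for k in range(1, deg + 1))  # since E[He_k^2] = k!
```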
Abstract:
Environmental computer models are deterministic models devoted to predicting environmental phenomena such as air pollution or meteorological events. Numerical model output is given in terms of averages over grid cells, usually at high spatial and temporal resolution. However, these outputs are often biased, with unknown calibration, and not equipped with any information about the associated uncertainty. Conversely, data collected at monitoring stations are more accurate, since they essentially provide the true levels. Due to the leading role played by numerical models, it is now important to compare model output with observations. Statistical methods developed to combine numerical model output and station data are usually referred to as data fusion. In this work, we first combine ozone monitoring data with ozone predictions from the Eta-CMAQ air quality model in order to forecast in real time the current 8-hour average ozone level, defined as the average of the previous four hours, the current hour, and the predictions for the next three hours. We propose a Bayesian downscaler model based on first differences, with a flexible coefficient structure and an efficient computational strategy to fit the model parameters. Model validation for the eastern United States shows a considerable improvement of our fully inferential approach compared with the current real-time forecasting system. Furthermore, we consider the introduction of temperature data from a weather forecast model into the downscaler, showing improved real-time ozone predictions. Finally, we introduce a hierarchical model to obtain spatially varying uncertainty associated with numerical model output. We show how one can learn about such uncertainty through suitable stochastic data fusion modeling using some external validation data. We illustrate our Bayesian model by providing the uncertainty map associated with a temperature output over the northeastern United States.
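In its simplest form, the calibration idea behind a downscaler can be sketched as a linear regression of station observations on collocated model output; the thesis's actual model is Bayesian, built on first differences, with spatially varying coefficients, none of which is reproduced here:

```python
import numpy as np

# Stripped-down downscaler idea: calibrate model output against station data
# with a linear regression (spatially constant coefficients, synthetic data).
rng = np.random.default_rng(3)
model_out = rng.uniform(20, 80, size=100)                 # CMAQ-like ozone output (hypothetical)
obs = 5.0 + 0.8 * model_out + rng.normal(0, 3, size=100)  # synthetic station observations

X = np.c_[np.ones_like(model_out), model_out]
(alpha, beta), *_ = np.linalg.lstsq(X, obs, rcond=None)
calibrated = alpha + beta * model_out                     # bias-corrected predictions
```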
Abstract:
Decomposition-based approaches are reviewed from both the primal and the dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated-versus-disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required per iteration. This trade-off is explored for several sets of instances, and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation to public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is then addressed. Here, an undirected graph with edge costs is given, together with a discrete set of balance matrices representing different supply/demand scenarios. The goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible for every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
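A generic sketch of the Frank-Wolfe iteration underlying the route assignment step; in traffic assignment the linear subproblem is an all-or-nothing shortest-path assignment, replaced here by a toy simplex oracle:

```python
import numpy as np

# Generic Frank-Wolfe iteration (illustrative; in route assignment the linear
# subproblem would be an all-or-nothing shortest-path assignment).
def frank_wolfe(grad, linear_oracle, x0, iters=100):
    x = x0.copy()
    for k in range(iters):
        s = linear_oracle(grad(x))        # minimize <grad, s> over the feasible set
        step = 2.0 / (k + 2.0)            # classical diminishing step size
        x = (1 - step) * x + step * s     # move toward the extreme point s
    return x

# Toy instance: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.1, 0.6, 0.3])
oracle = lambda g: np.eye(len(g))[np.argmin(g)]   # best vertex of the simplex
x_star = frank_wolfe(lambda x: 2 * (x - c), oracle, np.ones(3) / 3)
```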
Abstract:
The topic of this work is nonparametric permutation-based methods aiming to find a ranking (stochastic ordering) of a given set of groups (populations), gathering together information from multiple variables under more than one experimental design. The problem of ranking populations arises in several fields of science from the need to compare G > 2 given groups or treatments when the main goal is to find an order while taking several aspects into account. As can be imagined, this problem is not only of theoretical interest, but also has a recognised relevance in several fields, such as industrial experiments or the behavioural sciences, and this is reflected by the vast literature on the topic, although the problem is sometimes associated with different keywords such as "stochastic ordering", "ranking", "construction of composite indices", or even "ranking probabilities" outside the strictly statistical literature. The properties of the proposed method are empirically evaluated by means of an extensive simulation study, in which several aspects of interest are allowed to vary within a reasonable practical range. These aspects comprise: sample size, number of variables, number of groups, and distribution of the noise/error. The flexibility of the approach lies mainly in the several available choices for the test statistic and in the different types of experimental design that can be analysed. This renders the method able to be tailored to the specific problem and to the nature of the data at hand. To perform the analyses, an R package called SOUP (Stochastic Ordering Using Permutations) has been written; it is available on CRAN.
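The elementary building block of such permutation-based comparisons is the two-sample permutation test, sketched below; the full SOUP methodology combines many such comparisons across variables and designs, which this sketch does not attempt:

```python
import numpy as np

# Minimal two-sample permutation test (the building block behind permutation-
# based ranking; illustrative, not the SOUP implementation).
def perm_test(a, b, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    obs = a.mean() - b.mean()             # observed difference in means
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)               # random relabelling of the pooled sample
        stat = pooled[: len(a)].mean() - pooled[len(a):].mean()
        if stat >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)     # one-sided p-value for "a > b"
```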