982 results for Simulations, Quantum Models, Resonant Tunneling Diode
Abstract:
Computational models for the investigation of flows in deformable tubes are developed and implemented in the open-source computing environment OpenFOAM. Various simulations for Newtonian and non-Newtonian fluids under various flow conditions are carried out and analyzed. First, simulations are performed to investigate the flow of a shear-thinning, non-Newtonian fluid in a collapsed elastic tube, and comparisons are made with experimental data. The fluid is modeled by means of the Bird-Carreau viscosity law. The computational domain of the deformed tube is constructed from data obtained via computer tomography imaging. Comparison of the computed velocity fields with ultrasound Doppler velocity profile measurements shows good agreement, as does the adjusted pressure drop along the tube's axis. Analysis of the shear rates shows that the shear-thinning effect of the fluid becomes relevant in the cross-sections with the largest deformation. The peristaltic motion is simulated by means of upper and lower rollers squeezing the fluid along a tube. Two frames of reference are considered: in the moving frame the computational domain is fixed and the coordinate system moves with the roller speed, while in the fixed frame the roller is represented by a deforming mesh. Several two-dimensional simulations are carried out for Newtonian and non-Newtonian fluids, and the effect of the shear-thinning behavior of the fluid on the transport efficiency is examined. In addition, the influence of the roller speed and the gap width between the rollers on the transport efficiency is discussed. Comparison with experimental data is also presented and different types of moving waves are implemented.
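For reference, the Bird-Carreau viscosity law mentioned above is usually written as (a standard textbook form, not reproduced from the thesis; the thesis's parameter values are its own):

\[ \eta(\dot{\gamma}) = \eta_\infty + (\eta_0 - \eta_\infty)\left[1 + (\lambda\dot{\gamma})^2\right]^{\frac{n-1}{2}}, \]

where \eta_0 and \eta_\infty are the zero- and infinite-shear-rate viscosities, \lambda is a time constant, \dot{\gamma} the shear rate, and n < 1 the power-law index that produces the shear-thinning behaviour.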
Abstract:
Many types of nanoscale materials are currently used in everyday life. The production and use of products based on engineered nanomaterials have raised concerns about the possible risks and hazards associated with these nanomaterials. In order to evaluate and gain a better understanding of their effects on living organisms, we have performed first-principles quantum mechanical calculations and molecular dynamics simulations. Specifically, we investigate the interaction of nanomaterials, including semiconducting quantum dots and metallic nanoparticles, with various biological molecules such as dopamine, DNA nucleobases and lipid membranes. First, interactions of semiconducting CdSe/CdS quantum dots (QDs) with dopamine and DNA nucleobase molecules are investigated using a quantum mechanical approach similar to the one used for the metallic nanoparticles. A variety of interaction sites are explored. Our results show that small-sized Cd4Se4 and Cd4S4 QDs interact strongly with a DNA nucleobase if it carries an amide or hydroxyl chemical group. These results indicate that these QDs are suitable for detecting subcellular structures, as also reported by experiments. The next two chapters describe the preparation required for the simulation of nanoparticles interacting with membranes, leading to accurate structure models for the membranes. We develop a method for the molecular crystalline structure prediction of 1,2-dimyristoyl-sn-glycero-3-phosphorylcholine (DMPC), 1,2-dimyristoyl-sn-glycero-3-phosphorylethanolamine (DMPE) and a cyclic di-amino acid peptide using first-principles methods. Since an accurate determination of the structure of an organic crystal is usually an extremely difficult task due to the large number of possible conformers, we propose a new computational scheme that applies knowledge of symmetry, structural chemistry and chemical bonding to reduce the sampling size of the conformation space. The interaction of metal nanoparticles with cell membranes is finally studied by molecular dynamics simulations, and the results are reported in the last chapter. A new force field is developed which accurately describes the interaction forces between the clusters representing small-sized metal nanoparticles and the lipid bilayer molecules. The permeation of nanoparticles into the cell membrane is analyzed together with the RMSD values of the membrane modeled by a lipid bilayer. The simulation results suggest that the AgNPs could cause the same amount of membrane deformation, and hence dysfunction, as the AuNPs.
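As rough orientation (these generic forms are assumptions for illustration, not the thesis's actual parametrization), nanoparticle-lipid force fields of this kind typically combine pairwise Lennard-Jones and Coulomb terms,

\[ V(r_{ij}) = 4\varepsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right] + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}}, \]

and the membrane deformation is monitored through the root-mean-square deviation from a reference structure,

\[ \mathrm{RMSD}(t) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\lVert \mathbf{r}_i(t) - \mathbf{r}_i^{\mathrm{ref}} \right\rVert^2}. \]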
Abstract:
Vapor sensors have been used for many years. Their applications range from the detection of toxic gases and dangerous chemicals in industrial environments, and of landmines and other explosives, to the monitoring of atmospheric conditions. Microelectromechanical systems (MEMS) fabrication technologies provide a way to fabricate sensitive devices. One type of MEMS vapor sensor is based on detecting a change in mass; such sensors have a functional chemical coating that absorbs the chemical vapor of interest. The principle of the resonant mass sensor is that the small mass of absorbed vapor produces a measurable change in the resonant frequency. This thesis builds analytical micro-cantilever and micro-tilting-plate models, which make design optimization more efficient. Several objectives need to be accomplished: (1) build an analytical model of a MEMS resonant mass sensor based on a micro-tilting plate with the effects of air damping; (2) perform design optimization of the micro-tilting plate with a hole in the center; (3) build an analytical model of a MEMS resonant mass sensor based on a micro-cantilever with the effects of air damping; (4) perform design optimization of the micro-cantilever in COMSOL. Analytical models of the micro-tilting plate with a hole in the center are compared with a COMSOL simulation model and show good agreement. The analytical models have been used to perform design optimization that maximizes sensitivity. The micro-cantilever analytical model does not show good agreement with a COMSOL simulation model. To investigate further, the air damping pressures at several points on the micro-cantilever have been compared between the analytical model and the COMSOL model. The analytical model is inadequate for two reasons. First, the model's boundary condition assumption is not realistic. Second, the deflection shape of the cantilever changes with the hole size, and the model does not account for this. Design optimization of the micro-cantilever is therefore done in COMSOL.
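The working principle quoted above can be summarized by the standard lumped spring-mass relation (a generic sketch, not the thesis's damped-plate model): the resonant frequency is

\[ f_0 = \frac{1}{2\pi}\sqrt{\frac{k_{\mathrm{eff}}}{m_{\mathrm{eff}}}}, \]

so a small absorbed mass \Delta m shifts it by approximately

\[ \Delta f \approx -\frac{f_0}{2}\,\frac{\Delta m}{m_{\mathrm{eff}}}, \]

which is why maximizing sensitivity amounts to maximizing |\Delta f / \Delta m| for the chosen geometry.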
Abstract:
Suppose two or more variables are jointly normally distributed. If there is a common relationship between these variables, it is important to quantify it by a parameter called the correlation coefficient, which measures its strength; this parameter can be used to develop a prediction equation and, ultimately, to draw testable conclusions about the parent population. This research focused on the correlation coefficient ρ for the bivariate and trivariate normal distribution when equal variances and equal covariances are assumed. In particular, we derived the maximum likelihood estimators (MLEs) of the distribution parameters, assuming all of them are unknown, and we studied the properties and asymptotic distribution of the MLE of ρ. Using this asymptotic normality, we were able to construct confidence intervals for the correlation coefficient ρ and to test hypotheses about ρ. With a series of simulations, the performance of our new estimators was studied and compared with that of estimators that already exist in the literature. The results indicated that the MLE performs as well as or better than the others.
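A minimal Monte Carlo sketch of the kind of comparison described above, in Python, assuming the equal-variance bivariate case where the constrained MLE has the closed form rho_hat = 2*S12/(S11+S22) (the thesis's general bivariate/trivariate derivation may differ in detail):

import numpy as np

def constrained_mle_rho(x, y):
    # MLE of rho when both variables share one unknown variance:
    # sigma2_hat = (S11 + S22) / 2 and rho_hat = 2*S12 / (S11 + S22),
    # with Sij the (biased) sample (co)variances.
    xc, yc = x - x.mean(), y - y.mean()
    s11, s22, s12 = np.mean(xc**2), np.mean(yc**2), np.mean(xc * yc)
    return 2.0 * s12 / (s11 + s22)

rng = np.random.default_rng(1)
rho, n, reps = 0.6, 50, 5000
cov = np.array([[1.0, rho], [rho, 1.0]])
mle, pearson = [], []
for _ in range(reps):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    mle.append(constrained_mle_rho(x, y))
    pearson.append(np.corrcoef(x, y)[0, 1])

print("constrained MLE: bias=%+.4f  var=%.5f" % (np.mean(mle) - rho, np.var(mle)))
print("sample corr    : bias=%+.4f  var=%.5f" % (np.mean(pearson) - rho, np.var(pearson)))

The constrained estimator exploits the shared variance, so in this toy setting it typically shows a slightly smaller variance than the ordinary sample correlation.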
Abstract:
A compact, all-room-temperature, widely tunable, continuous-wave laser source in the green spectral region (502.1–544.2 nm) with a maximum output power of 14.7 mW is demonstrated. This was made possible by utilizing second-harmonic generation (SHG) in a periodically poled potassium titanyl phosphate (PPKTP) crystal waveguide pumped by a quantum-well external-cavity fiber-coupled diode laser and exploiting the multimode-matching approach in nonlinear crystal waveguides. Dual-wavelength SHG in the wavelength region between 505.4 and 537.7 nm (with a wavelength difference ranging from 1.8 to 32.3 nm) and sum-frequency generation in a PPKTP waveguide are also demonstrated.
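As background for the wavelength bookkeeping (standard nonlinear-optics relations, not specific to this paper): SHG halves the pump wavelength, sum-frequency generation (SFG) combines two pump wavelengths, and the periodic poling provides quasi-phase-matching,

\[ \lambda_{\mathrm{SHG}} = \frac{\lambda_p}{2}, \qquad \frac{1}{\lambda_{\mathrm{SFG}}} = \frac{1}{\lambda_1} + \frac{1}{\lambda_2}, \qquad \Delta k = k(2\omega) - 2k(\omega) - \frac{2\pi m}{\Lambda} = 0, \]

where \Lambda is the poling period and m the quasi-phase-matching order; pumping near 1004–1088 nm therefore corresponds to the reported 502.1–544.2 nm second-harmonic output.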
Abstract:
Symmetrization of topologically ordered wave functions is a powerful method for constructing new topological models. Here we study wave functions obtained by symmetrizing quantum double models of a group G in the projected entangled pair states (PEPS) formalism. We show that symmetrization naturally gives rise to a larger symmetry group G̃ which is always non-Abelian. We prove that by symmetrizing on sufficiently large blocks, one can always construct wave functions in the same phase as the double model of G̃. In order to understand the effect of symmetrization on smaller patches, we carry out numerical studies for the toric code model, where we find strong evidence that symmetrizing on individual spins gives rise to a critical model which is at the phase transitions of two inequivalent toric codes, obtained by anyon condensation from the double model of G̃.
Abstract:
A subfilter-scale (SFS) stress model is developed for large-eddy simulation (LES) and is tested on various benchmark problems in both wall-resolved and wall-modelled LES. The basic ingredients of the proposed model are the model length-scale and the model parameter. The model length-scale is defined as a fraction of the integral scale of the flow, decoupled from the grid. The portion of resolved scales (the LES resolution) appears as a user-defined model parameter, with the advantage that the user decides the LES resolution. The model parameter is determined based on a measure of LES resolution, the SFS activity. The user chooses a value for the SFS activity (based on the affordable computational budget and expected accuracy), and the model parameter is calculated dynamically. Depending on how the SFS activity is enforced, two SFS models are proposed. In one approach the user assigns the global (volume-averaged) contribution of the SFS to the transport (global model), while in the second model (local model) the SFS activity is prescribed locally (locally averaged). The models are tested on isotropic turbulence, channel flow, a backward-facing step and a separating boundary layer. In wall-resolved LES, both global and local models perform quite accurately. Due to their near-wall behaviour, they result in accurate prediction of the flow on coarse grids. The backward-facing step also highlights the advantage of decoupling the model length-scale from the mesh: despite the sharply refined grid near the step, the proposed SFS models yield a smooth yet physically consistent filter-width distribution, which minimizes errors when grid discontinuity is present. Finally, the model is extended to wall-modelled LES and is tested on channel flow and a separating boundary layer. Given the coarse resolution used in wall-modelled LES, near the wall most of the eddies become subfilter-scale and the SFS activity must be increased locally. The results are in very good agreement with the data for the channel. Errors in the prediction of separation and reattachment are observed in the separated flow; these are somewhat reduced by modifications to the wall-layer model.
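For orientation only (an illustrative eddy-viscosity formulation and the usual subgrid/SFS-activity measure from the LES literature; the thesis's exact definitions may differ): with a model length-scale \ell = c_\ell L_{\mathrm{int}} tied to the integral scale rather than the mesh, a Smagorinsky-type closure reads

\[ \tau_{ij}^{\mathrm{sfs}} - \tfrac{1}{3}\tau_{kk}^{\mathrm{sfs}}\delta_{ij} = -2\,(C\,\ell)^2\,|\bar{S}|\,\bar{S}_{ij}, \]

and the SFS activity can be quantified as the fraction of dissipation carried by the model,

\[ s = \frac{\langle \varepsilon_{\mathrm{sfs}} \rangle}{\langle \varepsilon_{\mathrm{sfs}} \rangle + \langle \varepsilon_{\nu} \rangle} \in [0,1], \]

with the averaging taken globally (global model) or locally (local model) and C adjusted dynamically so that s matches the user-specified value.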
Abstract:
The application of 3D grain-based modelling techniques is investigated in both small- and large-scale 3DEC models in order to simulate brittle fracture processes in low-porosity crystalline rock. Mesh dependency in 3D grain-based models (GBMs) is examined through a number of cases comparing Voronoi and tetrahedral grain assemblages. Various methods are used for the generation of tessellations, each with a number of issues and advantages. A number of comparative UCS test simulations capture the distinct failure mechanisms, strength profiles, and progressive damage development of the various Voronoi and tetrahedral GBMs. Relative calibration requirements are outlined to generate similar macro-strength and damage profiles for all the models. The results confirm a number of inherent model behaviours that arise due to mesh dependency. In Voronoi models, inherent tensile failure mechanisms are produced by internal wedging and rotation of Voronoi grains, which results in a combined dependence on frictional and cohesive strength. In tetrahedral models, the increased kinematic freedom of grains and an abundance of straight, connected failure pathways cause a preference for shear failure; this results in an inability to develop significant normal stresses and hence a dependence on cohesive strength. In general, Voronoi models require high relative contact tensile strength values, with lower contact stiffness and contact cohesive strength compared to tetrahedral tessellations. Upscaling of 3D GBMs is investigated for both Voronoi and tetrahedral tessellations using a case study from AECL's Mine-by Experiment at the Underground Research Laboratory. By adjusting the cohesive strength, an upscaled tetrahedral model was able to reasonably simulate the damage development that forms a notch geometry in the roof. An upscaled Voronoi model underestimated the damage development in the roof and floor, and overestimated the damage in the side-walls; this was attributed to limitations of the discretization resolution.
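As a generic illustration of the Voronoi grain-generation step (3DEC and dedicated tessellation tools have their own generators; this Python/SciPy sketch is only a stand-in showing how random seeds define candidate grains):

import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(42)
seeds = rng.uniform(0.0, 1.0, size=(500, 3))   # random grain centres in a unit cube
vor = Voronoi(seeds)                           # Qhull-based 3D Voronoi tessellation

# Cells touching the hull are unbounded (they contain vertex index -1) and would
# need clipping against the model boundary before being used as grains.
bounded = [region for region in vor.regions if region and -1 not in region]
print(f"{len(bounded)} bounded candidate grains out of {len(seeds)} seeds")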
Abstract:
The mechanical behaviour and performance of a ductile iron component are highly dependent on the local variations in solidification conditions during the casting process. Here we show a framework which combines a previously developed closed chain of simulations for cast components with a micro-scale finite element method (FEM) simulation of the behaviour and performance of the microstructure. A casting process simulation, including modelling of solidification and mechanical material characterization, provides the basis for a macro-scale FEM analysis of the component. A critical region is identified, to which the micro-scale FEM simulation of a representative microstructure, generated using X-ray tomography, is applied. The mechanical behaviour of the different microstructural phases is determined using a surrogate-model-based optimisation routine and experimental data. It is discussed how the approach links solidification and microstructure models with simulations of both component and microstructural behaviour, and how it can contribute new understanding of the behaviour and performance of different microstructural phases and morphologies in industrial ductile iron components in service.
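A minimal sketch of what a surrogate-model-based calibration loop of this kind can look like (everything here is hypothetical: the toy stress response, parameter names and bounds merely stand in for the paper's actual micro-scale FEM model and measured data):

import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def fem_response(params, strain=0.002):
    # Hypothetical stand-in for one expensive micro-scale FEM evaluation:
    # elastic/perfectly-plastic stress for phase parameters (E, sigma_y).
    E, sigma_y = params
    return min(E * strain, sigma_y)

measured_stress = 210.0                          # hypothetical measurement [MPa]

rng = np.random.default_rng(0)
lo, hi = np.array([150e3, 150.0]), np.array([230e3, 400.0])
samples = rng.uniform(lo, hi, size=(40, 2))      # design of experiments
mismatch = np.array([(fem_response(p) - measured_stress) ** 2 for p in samples])

surrogate = RBFInterpolator(samples, mismatch, smoothing=1e-6)  # cheap emulator
x0 = samples[np.argmin(mismatch)]
result = minimize(lambda p: float(surrogate(p.reshape(1, -1))[0]), x0,
                  bounds=list(zip(lo, hi)))
print("calibrated (E, sigma_y):", result.x)

The key idea is that the optimiser iterates on the cheap surrogate of the mismatch rather than on the expensive FEM model itself.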
Abstract:
Myocardial fibrosis detected via delayed-enhancement magnetic resonance imaging (MRI) has been shown to be a strong indicator of ventricular tachycardia (VT) inducibility. However, little is known regarding how inducibility is affected by the details of the fibrosis extent, morphology, and border zone configuration. The objective of this article is to systematically study the arrhythmogenic effects of fibrosis geometry and extent, specifically on VT inducibility and maintenance. We present a set of methods for constructing patient-specific computational models of human ventricles using in vivo MRI data for patients suffering from hypertension, hypercholesterolemia, and chronic myocardial infarction. Additional synthesized models with morphologically varied extents of fibrosis and gray zone (GZ) distribution were derived to study the alterations in arrhythmia induction and reentry patterns. Detailed electrophysiological simulations demonstrated that (1) VT morphology was highly dependent on the extent of fibrosis, which acts as a structural substrate, (2) reentry tended to be anchored to the fibrosis edges and showed transmural conduction of activations through narrow channels formed within the fibrosis, and (3) increasing the extent of GZ within the fibrosis tended to destabilize the structural reentry sites and aggravate the VT compared to fibrotic regions of the same size and shape but with less or no GZ. The approach and findings represent a significant step toward patient-specific cardiac modeling as a reliable tool for VT prediction and patient management. Sensitivities to approximation nuances in the modeling of structural pathology by image-based reconstruction techniques are also highlighted.
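For context (a generic formulation; the article's specific ionic model and the conductivity values assigned to fibrotic and GZ tissue are its own), such electrophysiological simulations typically solve a monodomain reaction-diffusion equation for the transmembrane voltage V,

\[ \frac{\partial V}{\partial t} = \nabla \cdot (\mathbf{D}\,\nabla V) - \frac{I_{\mathrm{ion}}(V, \mathbf{u})}{C_m}, \qquad \frac{d\mathbf{u}}{dt} = \mathbf{f}(V, \mathbf{u}), \]

where D is the conductivity tensor (reduced or removed in dense fibrosis, intermediate in the gray zone), I_ion the ionic current from a cell model with state variables u, and C_m the membrane capacitance.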
Abstract:
This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field regained popularity over the last few years and is still undergoing, like statistical analysis in general, a transformation to high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or more recently developed m-approximability conditions which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
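For readers unfamiliar with the terminology, the classical univariate building block behind these extensions is the CUSUM process (shown here only for orientation; the thesis works with its Hilbert-space and panel generalizations): for observations X_1, ..., X_n,

\[ Z_n(k) = \frac{1}{\sqrt{n}}\left(\sum_{i=1}^{k} X_i - \frac{k}{n}\sum_{i=1}^{n} X_i\right), \qquad 1 \le k < n, \]

and \max_k |Z_n(k)|/\hat{\sigma} converges under the no-change null to the supremum of a Brownian bridge, while Darling-Erdős-type versions divide Z_n(k) by \sqrt{\tfrac{k}{n}\left(1-\tfrac{k}{n}\right)} and then require an additional additive/multiplicative normalization of the maximum, which yields a Gumbel limit.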
Abstract:
Recently, there has been great interest in studying the flow characteristics of suspensions in different environmental and industrial applications, such as snow avalanches, debris flows, hydrotransport systems, and material casting processes. Regarding rheological aspects, the majority of these suspensions, such as fresh concrete, behave mostly as non-Newtonian fluids. Concrete is the most widely used construction material in the world. Due to the limitations of normal concrete in terms of workability and formwork-filling ability, a new class of concrete was developed that is able to flow under its own weight, especially through narrow gaps in congested areas of the formwork. Accordingly, self-consolidating concrete (SCC) is a novel construction material that is gaining market acceptance in various applications. The higher fluidity of SCC enables it to be used in a number of special applications, such as densely reinforced sections. However, the higher flowability of SCC makes it more sensitive to segregation of coarse particles during flow (i.e., dynamic segregation) and thereafter at rest (i.e., static segregation). Dynamic segregation can increase when SCC flows over a long distance or in the presence of obstacles. Therefore, there is always a need to establish a trade-off between the flowability, passing ability, and stability properties of SCC suspensions. This should be taken into consideration when designing the casting process and the mixture proportioning of SCC; this is called the “workability design” of SCC. An efficient and inexpensive workability design approach consists of predicting and optimizing the workability of the concrete mixtures for the selected construction processes, such as transportation, pumping, casting, compaction, and finishing. Indeed, the mixture proportioning of SCC should ensure the construction quality demands, such as the demanded levels of flowability, passing ability, filling ability, and stability (dynamic and static). It is therefore necessary to develop theoretical tools to assess under which conditions these construction quality demands are satisfied. Accordingly, this thesis is dedicated to analytical and numerical simulations to predict the flow performance of SCC under different casting processes, such as pumping and tremie applications, or casting using buckets. The L-Box and T-Box set-ups can evaluate the flow performance properties of SCC (e.g., flowability, passing ability, filling ability, shear-induced and gravitational dynamic segregation) in the casting of wall and beam elements. The specific objective of the study is to relate the numerical results of flow simulations of SCC in the L-Box and T-Box test set-ups, reported in this thesis, to the flow performance properties of SCC during casting. Accordingly, the SCC is modeled as a heterogeneous material. Furthermore, an analytical model is proposed to predict the flow performance of SCC in the L-Box set-up using dam-break theory. On the other hand, the results of the numerical simulation of SCC casting in a reinforced beam are verified against experimental free-surface profiles. The results of numerical simulations of SCC casting (modeled as a single homogeneous fluid) are used to determine the critical zones corresponding to higher risks of segregation and blocking.
The effects of rheological parameters, density, particle content, distribution of reinforcing bars, and particle-bar interactions on the flow performance of SCC are evaluated using CFD simulations of SCC flow in the L-Box and T-Box test set-ups (modeled as a heterogeneous material). Two new approaches are proposed to classify SCC mixtures based on their filling ability and performability properties, combining the flowability, passing ability, and dynamic stability of SCC.
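As background on the rheological parameters referred to above (fresh SCC is commonly described by a Bingham or Herschel-Bulkley law; whether this thesis uses exactly these forms is an assumption here):

\[ \tau = \tau_0 + \mu_p \dot{\gamma} \quad \text{(Bingham)}, \qquad \tau = \tau_0 + K \dot{\gamma}^{\,n} \quad \text{(Herschel-Bulkley)}, \]

where \tau_0 is the yield stress governing when the material starts or stops flowing, \mu_p the plastic viscosity, and K, n the consistency and power-law index; parameters of this kind are what "rheological parameters" usually denotes in such CFD studies.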
Abstract:
Understanding how virus strains offer protection against closely related emerging strains is vital for creating effective vaccines. For many viruses, including Foot-and-Mouth Disease Virus (FMDV) and the Influenza virus, where multiple serotypes often co-circulate, in vitro testing of large numbers of vaccines can be infeasible. Therefore the development of an in silico predictor of cross-protection between strains is important to help optimise vaccine choice. Vaccines will offer cross-protection against closely related strains, but not against those that are antigenically distinct. To be able to predict cross-protection we must understand the antigenic variability within a virus serotype and within distinct lineages of a virus, and identify the antigenic residues and evolutionary changes that cause the variability. In this thesis we present a family of sparse hierarchical Bayesian models for detecting relevant antigenic sites in virus evolution (SABRE), as well as an extended version of the method, the extended SABRE (eSABRE) method, which better takes into account the data collection process. The SABRE methods are a family of sparse Bayesian hierarchical models that use spike and slab priors to identify sites in the viral protein which are important for the neutralisation of the virus. In this thesis we demonstrate how the SABRE methods can be used to identify antigenic residues within different serotypes and show how the SABRE method outperforms established methods, namely mixed-effects models based on forward variable selection or l1 regularisation, on both synthetic and viral datasets. In addition we test a number of different versions of the SABRE method, comparing conjugate and semi-conjugate prior specifications as well as an alternative to the spike and slab prior, the binary mask model. We also propose novel proposal mechanisms for the Markov chain Monte Carlo (MCMC) simulations, which improve mixing and convergence over the established component-wise Gibbs sampler. The SABRE method is then applied to datasets from FMDV and the Influenza virus in order to identify a number of known antigenic residues and to provide hypotheses about other potentially antigenic residues. We also demonstrate how the SABRE methods can be used to create accurate predictions of the important evolutionary changes of the FMDV serotypes. In this thesis we also provide an extended version of the SABRE method, the eSABRE method, based on a latent variable model. The eSABRE method further takes into account the structure of the datasets for FMDV and the Influenza virus through the latent variable model and gives an improvement in the modelling of the error. We show how the eSABRE method outperforms the SABRE methods in simulation studies and propose a new information criterion for selecting the random effects factors that should be included in the eSABRE method: the block integrated Widely Applicable Information Criterion (biWAIC). We demonstrate how biWAIC performs comparably to two other methods for selecting the random effects factors and combine it with the eSABRE method to apply it to two large Influenza datasets. Inference in these large datasets is computationally infeasible with the SABRE methods, but as a result of the improved structure of the likelihood, the eSABRE method offers a computational improvement that allows it to be used on these datasets.
The results of the eSABRE method show that it can be used in a fully automatic manner to identify a large number of antigenic residues on a variety of the antigenic sites of two Influenza serotypes, as well as to make predictions of a number of nearby sites that may also be antigenic and are worthy of further experimental investigation.
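For readers unfamiliar with the terminology, a spike and slab prior on a regression coefficient \beta_j (shown in its generic form; the SABRE papers define their own hierarchical variant) places a point mass at zero against a diffuse "slab":

\[ \beta_j \mid \gamma_j \sim (1-\gamma_j)\,\delta_0 + \gamma_j\,\mathcal{N}(0, \sigma_\beta^2), \qquad \gamma_j \sim \mathrm{Bernoulli}(\pi), \]

so the posterior inclusion probability \Pr(\gamma_j = 1 \mid \text{data}) indicates whether residue j matters for neutralisation; the binary mask model mentioned above instead sets \beta_j = \gamma_j \tilde{\beta}_j with \tilde{\beta}_j always drawn from the slab.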
Abstract:
The Shockley diode equation is the basis of the single-diode model, which is widely used for characterizing photovoltaic cell output and behavior. The standard equation includes the series resistance (Rs) and shunt resistance (Rsh) along with several other parameters, and most previous simulation and modeling work on single-diode photovoltaic cells has used this equation. However, there is another form of the standard equation that omits the series resistance (Rs) and shunt resistance (Rsh), justified by the fact that the shunt resistance is much larger than the load resistance and the load resistance is much larger than the series resistance; as a consequence, only a very small power loss occurs within the photovoltaic cell. This research focuses on the comparison of these two forms of the basic Shockley diode equation. The analysis provides a deeper understanding of the photovoltaic cell, as well as of the behavior of the series resistance (Rs) and shunt resistance (Rsh) in the cell. Estimating a real-time photovoltaic system requires fast calculation, and the equation without series and shunt resistance is appropriate for the real-time environment. Error functions for both the series resistance (Rs) and shunt resistance (Rsh) have been analyzed, showing that the overall system is not significantly affected by the behavior of these two parameters.
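For reference, the two forms being compared are, in the usual textbook notation (a sketch of the standard single-diode model, not copied from the thesis):

\[ I = I_{ph} - I_0\left[\exp\!\left(\frac{V + I R_s}{n V_T}\right) - 1\right] - \frac{V + I R_s}{R_{sh}}, \]

and, taking R_s \to 0 and R_{sh} \to \infty,

\[ I \approx I_{ph} - I_0\left[\exp\!\left(\frac{V}{n V_T}\right) - 1\right], \]

where I_{ph} is the photocurrent, I_0 the diode saturation current, n the ideality factor and V_T = kT/q the thermal voltage; the second, explicit form avoids the implicit dependence on I and is therefore faster to evaluate in real-time estimation.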