873 results for Inverse computational method
Abstract:
Copper is the main interconnect material in microelectronic devices, and a 2 nm-thick continuous Cu seed layer needs to be deposited to produce microelectronic devices with the smallest features and greater functionality. Atomic layer deposition (ALD) is the most suitable method to deposit such thin films. However, the reaction mechanism and the surface chemistry of copper ALD remain unclear, which hinders the development of better precursors and the design of new ALD processes. In this thesis, we study the surface chemistries during ALD of copper by means of density functional theory (DFT). To understand the effect of temperature and pressure on the composition of copper–substrate interfaces, we used ab initio atomistic thermodynamics to obtain a phase diagram of the Cu(111)/SiO2(0001) interface. We found that the interfacial oxide Cu2O phases prefer high oxygen pressure and low temperature, while the silicide phases are stable at low oxygen pressure and high temperature for the Cu/SiO2 interface, in good agreement with experimental observations. Understanding precursor adsorption on surfaces is important for understanding the surface chemistry and reaction mechanism of the Cu ALD process. Focusing on two common Cu ALD precursors, Cu(dmap)2 and Cu(acac)2, we studied precursor adsorption on Cu surfaces by means of van der Waals (vdW) inclusive DFT methods. We found that the adsorption energies and adsorption geometries depend on the adsorption sites and on the method used to include vdW in the DFT calculation. Both precursor molecules are partially decomposed and the Cu cations are partially reduced in their chemisorbed structure. It is found that clean cleavage of the ligand–metal bond is one of the requirements when selecting precursors for ALD of metals. Bonding between the surface and an atom in the ligand that is not coordinated with the Cu may result in impurities in the thin film.
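As a sketch of the ab initio atomistic thermodynamics step, the oxygen chemical potential can be written as a function of temperature and pressure and combined with DFT total energies to rank interface phases. All energies, areas, and the reference chemical potential below are illustrative placeholders, not values computed in the thesis:

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def mu_O(T, p, mu_O_T_p0=-0.3, p0=1.0):
    """Oxygen chemical potential per atom (eV), referenced to 1/2 E(O2).

    mu_O_T_p0 is the tabulated temperature-dependent part at the
    reference pressure p0 (illustrative value, not from the thesis).
    """
    return mu_O_T_p0 + 0.5 * K_B * T * math.log(p / p0)

def interface_free_energy(E_int, E_ref, n_O, T, p, area):
    """Grand-potential-style free energy per unit area (eV/Angstrom^2)."""
    return (E_int - E_ref - n_O * mu_O(T, p)) / area

# Illustrative comparison: an oxide-like interface contains excess O,
# so it is stabilised at high oxygen pressure and low temperature.
gamma_oxide_hot = interface_free_energy(-102.0, -100.0, 4, 1000.0, 1e-10, 50.0)
gamma_oxide_cold = interface_free_energy(-102.0, -100.0, 4, 300.0, 1e-10, 50.0)
assert gamma_oxide_cold < gamma_oxide_hot  # oxide favoured at low T
```

The lowest free energy at a given (T, p) identifies the stable phase, which is how such a phase diagram is assembled point by point.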
To gain insight into the reaction mechanism of a full Cu ALD cycle, we proposed reaction pathways based on activation energies and reaction energies for a range of surface reactions between Cu(dmap)2 and Et2Zn. The butane formation and desorption steps are found to be extremely exothermic, explaining the ALD reaction scheme of the original experimental work. Endothermic ligand diffusion and re-ordering steps may result in residual dmap ligands blocking surface sites at the end of the Et2Zn pulse, and in residual Zn being reduced and incorporated as an impurity. This may lead to a very slow growth rate, as was the case in the experimental work. By investigating the reduction of CuO to metallic Cu, we elucidated the role of the reducing agent in indirect ALD of Cu. We found that CuO bulk is protected from reduction during vacuum annealing by the CuO surface and that H2 is required to reduce that surface, which shows that the strength of the reducing agent is important for obtaining fully reduced metal thin films during indirect ALD processes. Overall, in this thesis, we studied the surface chemistries and reaction mechanisms of Cu ALD processes and the nucleation of Cu to form a thin film.
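The link between activation energies and the proposed pathway can be illustrated with transition-state-theory rate constants; the two barriers below are hypothetical round numbers chosen only to show the scale separation, not the thesis's computed values:

```python
import math

K_B = 8.617333e-5       # Boltzmann constant, eV/K
H_PLANCK = 4.135667e-15  # Planck constant, eV*s

def eyring_rate(Ea, T):
    """Transition-state-theory rate constant (1/s) from an activation
    energy in eV: k = (k_B T / h) exp(-Ea / k_B T)."""
    return (K_B * T / H_PLANCK) * math.exp(-Ea / (K_B * T))

T = 400.0  # a typical ALD deposition temperature, K
# Hypothetical barriers (eV) chosen for illustration only:
k_butane = eyring_rate(0.3, T)     # facile, strongly exothermic step
k_diffusion = eyring_rate(1.1, T)  # slow ligand diffusion/re-ordering

# The re-ordering step is many orders of magnitude slower, consistent
# with residual ligands blocking surface sites and a slow growth rate.
assert k_butane / k_diffusion > 1e6
```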
Abstract:
Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.
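A minimal stand-in for the directionality such an inference recovers is a lagged dependence score on two simulated channels. The full algorithm scores entire networks with a Bayesian criterion over discretized data; this correlation-based sketch only illustrates why time-lagged structure reveals the direction of information flow:

```python
import numpy as np

def lagged_score(x, y, lag=1):
    """Score putative information flow x -> y as the squared correlation
    between x(t) and y(t+lag); a stand-in for a DBN edge score."""
    x0, y1 = x[:-lag], y[lag:]
    r = np.corrcoef(x0, y1)[0, 1]
    return r * r

rng = np.random.default_rng(0)
n = 2000
a = rng.normal(size=n)
b = np.empty(n)
b[0] = 0.0
for t in range(1, n):            # b is driven by a with a one-step delay
    b[t] = 0.8 * a[t - 1] + 0.2 * rng.normal()

assert lagged_score(a, b) > lagged_score(b, a)  # recovers a -> b direction
```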
Abstract:
Transcriptional regulation has been studied intensively in recent decades. One important aspect of this regulation is the interaction between regulatory proteins, such as transcription factors (TFs) and nucleosomes, and the genome. Different high-throughput techniques have been invented to map these interactions genome-wide, including ChIP-based methods (ChIP-chip, ChIP-seq, etc.), nuclease digestion methods (DNase-seq, MNase-seq, etc.), and others. However, a single experimental technique often provides only partial and noisy information about the whole picture of protein-DNA interactions. Therefore, the overarching goal of this dissertation is to provide computational developments for jointly modeling different experimental datasets to achieve a holistic inference of the protein-DNA interaction landscape.
We first present a computational framework that can incorporate the protein binding information in MNase-seq data into a thermodynamic model of protein-DNA interaction. We use a correlation-based objective function to model the MNase-seq data and a Markov chain Monte Carlo method to maximize the function. Our results show that the inferred protein-DNA interaction landscape is concordant with the MNase-seq data and provides a mechanistic explanation for the experimentally collected MNase-seq fragments. Our framework is flexible and can easily incorporate other data sources. To demonstrate this flexibility, we use prior distributions to integrate experimentally measured protein concentrations.
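The optimisation loop can be sketched as follows, with a toy Gaussian occupancy profile standing in for the dissertation's thermodynamic model and a Metropolis-style random walk standing in for its Markov chain Monte Carlo scheme; all shapes and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def predicted_occupancy(center, width, x):
    """Toy occupancy model: a Gaussian binding profile (a stand-in for
    the dissertation's partition-function model of protein-DNA binding)."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def objective(center, width, x, data):
    """Correlation-based objective against the MNase-seq-like signal."""
    return np.corrcoef(predicted_occupancy(center, width, x), data)[0, 1]

x = np.linspace(0, 100, 200)
data = predicted_occupancy(60.0, 8.0, x) + 0.05 * rng.normal(size=x.size)

# Metropolis-style random walk that maximises the correlation objective.
center, width = 40.0, 15.0
score = objective(center, width, x, data)
best = (center, width, score)
for _ in range(4000):
    c = center + rng.normal(scale=1.0)
    w = abs(width + rng.normal(scale=0.5)) + 1e-6
    s = objective(c, w, x, data)
    if s > score or rng.random() < np.exp(40.0 * (s - score)):
        center, width, score = c, w, s
        if s > best[2]:
            best = (c, w, s)

assert best[2] > 0.9              # near-perfect fit to the noisy signal
assert abs(best[0] - 60.0) < 5.0  # recovered the true binding position
```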
We also study the ability of DNase-seq data to position nucleosomes. Traditionally, DNase-seq has only been widely used to identify DNase hypersensitive sites, which tend to be open chromatin regulatory regions devoid of nucleosomes. We reveal for the first time that DNase-seq datasets also contain substantial information about nucleosome translational positioning, and that existing DNase-seq data can be used to infer nucleosome positions with high accuracy. We develop a Bayes-factor-based nucleosome scoring method to position nucleosomes using DNase-seq data. Our approach utilizes several effective strategies to extract nucleosome positioning signals from the noisy DNase-seq data, including jointly modeling data points across the nucleosome body and explicitly modeling the quadratic and oscillatory DNase I digestion pattern on nucleosomes. We show that our DNase-seq-based nucleosome map is highly consistent with previous high-resolution maps. We also show that the oscillatory DNase I digestion pattern is useful in revealing the nucleosome rotational context around TF binding sites.
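A Bayes-factor score of the kind described can be sketched as a Poisson likelihood ratio across the 147 bp nucleosome body, comparing a protected, oscillatory digestion profile against a flat open-chromatin background. The rates below are invented for illustration and are not the calibrated DNase I profile from the dissertation:

```python
import math

def log_poisson(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def log_bayes_factor(counts, nuc_rates, bg_rate):
    """Log Bayes factor for a nucleosome at this position: joint Poisson
    likelihood of cut counts across the 147 bp body under a nucleosome
    digestion profile vs a flat background."""
    ll_nuc = sum(log_poisson(k, r) for k, r in zip(counts, nuc_rates))
    ll_bg = sum(log_poisson(k, bg_rate) for k in counts)
    return ll_nuc - ll_bg

n = 147
# Quadratic-plus-oscillatory digestion profile: protected centre, with a
# ~10 bp periodicity from the DNA helical twist on the histone surface.
nuc_rates = [max(0.05, 0.2 + 0.002 * (abs(i - 73) - 40)
                 + 0.1 * math.cos(2 * math.pi * i / 10.2)) for i in range(n)]
bg_rate = 1.0

# Counts mimicking a protected (nucleosomal) region vs open chromatin.
protected = [0, 1, 0] * 49
open_chromatin = [1, 2, 1] * 49

assert log_bayes_factor(protected, nuc_rates, bg_rate) > 0
assert log_bayes_factor(open_chromatin, nuc_rates, bg_rate) < 0
```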
Finally, we present a state-space model (SSM) for jointly modeling different kinds of genomic data to provide an accurate view of the protein-DNA interaction landscape. We also provide an efficient expectation-maximization algorithm to learn model parameters from data. We first show in simulation studies that the SSM can effectively recover underlying true protein binding configurations. We then apply the SSM to model real genomic data (both DNase-seq and MNase-seq data). Through incrementally increasing the types of genomic data in the SSM, we show that different data types can contribute complementary information for the inference of protein binding landscape and that the most accurate inference comes from modeling all available datasets.
This dissertation provides a foundation for future research by taking a step toward the genome-wide inference of protein-DNA interaction landscape through data integration.
Abstract:
For pt.I. see ibid. vol.1, p.301 (1985). In the first part of this work a general definition of an inverse problem with discrete data has been given and an analysis in terms of singular systems has been performed. The problem of the numerical stability of the solution, which in that paper was only briefly discussed, is the main topic of this second part. When the condition number of the problem is too large, a small error on the data can produce an extremely large error on the generalised solution, which therefore has no physical meaning. The authors review most of the methods which have been developed for overcoming this difficulty, including numerical filtering, Tikhonov regularisation, iterative methods, the Backus-Gilbert method and so on. Regularisation methods for the stable approximation of generalised solutions obtained through minimisation of suitable seminorms (C-generalised solutions), such as the method of Phillips (1962), are also considered.
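Tikhonov regularisation, one of the reviewed remedies, can be illustrated on a synthetic ill-conditioned problem; the operator, solution, and noise level below are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an ill-conditioned inverse problem with a known singular spectrum.
n = 40
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = np.logspace(0, -8, n)               # condition number 1e8
A = U @ np.diag(s) @ V.T

# A "smooth" solution living mostly in the large-singular-value subspace.
x_true = V @ np.exp(-0.5 * np.arange(n))
b = A @ x_true + 1e-3 * rng.normal(size=n)   # small data error

def tikhonov(A, b, lam):
    """Tikhonov-regularised solution: minimises ||Ax - b||^2 + lam*||x||^2,
    i.e. x = (A^T A + lam*I)^{-1} A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

x_naive = np.linalg.solve(A, b)   # data error amplified by ~1/s_min
x_reg = tikhonov(A, b, 1e-4)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
assert err_reg < 1e-2 * err_naive  # regularisation tames the instability
```

The generalised solution is destroyed by the amplified data error exactly as the abstract describes, while the regularised one remains physically meaningful.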
Abstract:
A new general cell-centered solution procedure, based upon the conventional control or finite volume (CV or FV) approach, has been developed for numerical heat transfer and fluid flow; it encompasses both structured and unstructured meshes for any kind of mixed polygonal cell. Unlike conventional FV methods for structured and block-structured meshes, and both FV and FE methods for unstructured meshes, the irregular control volume (ICV) method does not require the shape of the element or cell to be predefined, because it simply exploits the concept of fluxes across cell faces. That is, the ICV method enables meshes employing mixtures of triangular, quadrilateral, and any other higher-order polygonal cells to be exploited using a single solution procedure. The ICV approach otherwise preserves all the desirable features of conventional FV procedures for a structured mesh; in the current implementation, collocation of variables at cell centers is used with a Rhie and Chow interpolation (to suppress pressure oscillation in the flow field) in the context of the SIMPLE pressure-correction solution procedure. In fact, all other FV structured-mesh-based methods may be perceived as a subset of the ICV formulation. The new ICV formulation is benchmarked using two standard computational fluid dynamics (CFD) problems, i.e., the moving-lid cavity and the natural-convection-driven cavity. Both cases were solved with a variety of structured and unstructured meshes, the latter exploiting mixed polygonal cell meshes. The polygonal mesh experiments show a higher degree of accuracy than equivalent meshes (in nodal-density terms) using triangular or quadrilateral cells; these results may be interpreted in a manner similar to the CUPID scheme used in structured meshes for reducing numerical diffusion in flows with changing direction.
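The core ICV idea, that only face geometry is needed rather than a predefined cell shape, can be sketched for a constant velocity field; the example cells below are arbitrary:

```python
def face_fluxes(vertices, velocity):
    """Net outward flux of a constant velocity field through the faces of
    an arbitrary polygonal cell (vertices in counter-clockwise order).
    Only face geometry is used, never the cell's shape type."""
    ux, uy = velocity
    total = 0.0
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        # Outward normal of the face times its length: (dy, -dx) for CCW.
        nx, ny = (y1 - y0), -(x1 - x0)
        total += ux * nx + uy * ny
    return total

def cell_area(vertices):
    """Shoelace formula; works for any simple polygon."""
    n = len(vertices)
    return 0.5 * abs(sum(vertices[i][0] * vertices[(i + 1) % n][1]
                         - vertices[(i + 1) % n][0] * vertices[i][1]
                         for i in range(n)))

tri = [(0, 0), (1, 0), (0, 1)]
pent = [(0, 0), (2, 0), (3, 1), (1, 2), (-1, 1)]

# A divergence-free (constant) field has zero net flux through any cell,
# whether it is a triangle, a pentagon, or any other polygon.
assert abs(face_fluxes(tri, (1.0, 2.0))) < 1e-12
assert abs(face_fluxes(pent, (1.0, 2.0))) < 1e-12
assert abs(cell_area(tri) - 0.5) < 1e-12
```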
Abstract:
A semi-Lagrangian finite volume scheme for solving viscoelastic flow problems is presented. A staggered grid arrangement is used in which the dependent variables are located at different mesh points in the computational domain. The convection terms in the momentum and constitutive equations are treated using a semi-Lagrangian approach in which particles on a regular grid are traced backwards over a single time-step. The method is applied to the 4 : 1 planar contraction problem for an Oldroyd B fluid for both creeping and inertial flow conditions. The development of vortex behaviour with increasing values of We is analyzed.
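The backward particle trace used for the convection terms can be sketched for a constant velocity on a uniform grid (the paper's staggered-grid, viscoelastic setting is of course richer); grid size, time step, and the advected profile below are arbitrary:

```python
import numpy as np

def bilinear(field, x, y, dx):
    """Bilinear interpolation on a uniform grid with spacing dx."""
    i, j = int(x // dx), int(y // dx)
    fx, fy = x / dx - i, y / dx - j
    return ((1 - fx) * (1 - fy) * field[i, j] + fx * (1 - fy) * field[i + 1, j]
            + (1 - fx) * fy * field[i, j + 1] + fx * fy * field[i + 1, j + 1])

def semi_lagrangian_step(field, u, v, dt, dx):
    """Advect 'field' by tracing each grid node backwards one time step
    along the (here constant) velocity and interpolating the field at
    the departure point."""
    n = field.shape[0]
    out = field.copy()
    for i in range(n):
        for j in range(n):
            xd, yd = i * dx - u * dt, j * dx - v * dt  # departure point
            if 0 <= xd <= (n - 2) * dx and 0 <= yd <= (n - 2) * dx:
                out[i, j] = bilinear(field, xd, yd, dx)
    return out

n, dt, u = 64, 0.01, 1.0
dx = 1.0 / (n - 1)
xs = np.linspace(0, 1, n)
field = np.exp(-100 * (xs[:, None] - 0.3) ** 2) * np.ones((1, n))

advected = semi_lagrangian_step(field, u, 0.0, dt, dx)
# The Gaussian pulse has moved downstream by u*dt.
assert advected[20, n // 2] > field[20, n // 2]
```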
Abstract:
We consider the load-balancing problems which arise from parallel scientific codes containing multiple computational phases, or loops over subsets of the data, which are separated by global synchronisation points. We motivate, derive and describe the implementation of an approach which we refer to as the multiphase mesh partitioning strategy to address such issues. The technique is tested on example meshes containing multiple computational phases and it is demonstrated that our method can achieve high quality partitions where a standard mesh partitioning approach fails.
Abstract:
Sound waves are propagating pressure fluctuations, which are typically several orders of magnitude smaller than the pressure variations in the flow field that account for flow acceleration. On the other hand, these fluctuations travel at the speed of sound in the medium, not as a transported fluid quantity. Due to these two properties, the Reynolds averaged Navier–Stokes equations do not resolve the acoustic fluctuations. This paper discusses a defect correction method for this type of multi-scale problem in aeroacoustics. Numerical examples in one and two dimensions are used to illustrate the concept. Copyright (C) 2002 John Wiley & Sons, Ltd.
Abstract:
Sound waves are propagating pressure fluctuations and are typically several orders of magnitude smaller than the pressure variations in the flow field that account for flow acceleration. On the other hand, these fluctuations travel at the speed of sound in the medium, not as a transported fluid quantity. Due to the above two properties, the Reynolds averaged Navier-Stokes (RANS) equations do not resolve the acoustic fluctuations. Direct numerical simulation of turbulent flow is still a prohibitively expensive tool to perform noise analysis. This paper proposes the acoustic correction method, an alternative and affordable tool based on a modified defect correction concept, which leads to an efficient algorithm for computational aeroacoustics and noise analysis.
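The defect correction concept underlying the method can be illustrated in its generic linear-algebra form, where a cheap approximate operator is solved repeatedly against the residual of the accurate one; the matrices below are random stand-ins, not a RANS/acoustic splitting:

```python
import numpy as np

def defect_correction(A, solve_approx, b, iters=40):
    """Defect (residual) correction: repeatedly solve the *approximate*
    operator for the residual of the *accurate* one and update,
        x <- x + B^{-1} (b - A x)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + solve_approx(b - A @ x)
    return x

rng = np.random.default_rng(2)
n = 30
A = np.eye(n) * 4.0 + 0.2 * rng.normal(size=(n, n))  # accurate operator
B = np.diag(np.diag(A))                              # cheap approximation
b = rng.normal(size=n)

x = defect_correction(A, lambda r: np.linalg.solve(B, r), b)
assert np.linalg.norm(A @ x - b) < 1e-8  # converged to the accurate solution
```

The iteration converges to the solution of the accurate operator even though only the cheap one is ever inverted, which is what makes the approach affordable.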
Abstract:
The overall objective of this work is to develop a computational model of particle degradation during dilute-phase pneumatic conveying. A key feature of such a model is the prediction of particle breakage due to particle–wall collisions in pipeline bends. This paper presents a method for calculating particle impact degradation propensity under a range of particle velocities and particle sizes. It is based on interpolation on impact data obtained in a new laboratory-scale degradation tester. The method is tested and validated against experimental results for degradation at a 90° impact angle of a full-size distribution sample of granulated sugar. In a subsequent work, the calculation of degradation propensity is coupled with a flow model of the solids and gas phases in the pipeline.
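The interpolation step can be sketched as bilinear interpolation on a table of impact-tester results indexed by velocity and particle size; the table below is invented for illustration and is not the paper's granulated-sugar data:

```python
import numpy as np

# Hypothetical degradation-tester data: fraction of particles broken at
# 90 degree impact, tabulated over impact velocity (m/s) and size (mm).
velocities = np.array([5.0, 10.0, 15.0, 20.0])
sizes = np.array([0.5, 1.0, 1.5])
broken = np.array([[0.02, 0.01, 0.01],
                   [0.08, 0.05, 0.04],
                   [0.20, 0.14, 0.11],
                   [0.38, 0.30, 0.25]])

def degradation_propensity(v, d):
    """Bilinear interpolation on the tabulated impact data to estimate
    breakage at an arbitrary velocity/size within the tested range."""
    i = np.clip(np.searchsorted(velocities, v) - 1, 0, len(velocities) - 2)
    j = np.clip(np.searchsorted(sizes, d) - 1, 0, len(sizes) - 2)
    fv = (v - velocities[i]) / (velocities[i + 1] - velocities[i])
    fd = (d - sizes[j]) / (sizes[j + 1] - sizes[j])
    return ((1 - fv) * (1 - fd) * broken[i, j]
            + fv * (1 - fd) * broken[i + 1, j]
            + (1 - fv) * fd * broken[i, j + 1]
            + fv * fd * broken[i + 1, j + 1])

# Reproduces table values at the nodes and interpolates between them.
assert abs(degradation_propensity(10.0, 1.0) - 0.05) < 1e-9
assert abs(degradation_propensity(12.5, 1.25) - 0.085) < 1e-9
```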
Abstract:
The present work uses the discrete element method (DEM) to describe assemblies of particulate bulk materials. Working numerical descriptions of entire processes using this scheme are infeasible because of the very large number of elements (10^12 or more in a moderately sized industrial silo). However it is possible to capture much of the essential bulk mechanics through selective DEM on important regions of an assembly, thereafter using the information in continuum numerical descriptions of particulate processes. The continuum numerical model uses population balances of the various components in bulk solid mixtures. It depends on constitutive relationships for the internal transfer, creation and/or destruction of components within the mixture. In this paper we show the means of generating such relationships for two important flow phenomena – segregation, whereby particles differing in some important property (often size) separate into discrete phases, and degradation, whereby particles break into sub-elements, through impact on each other or shearing. We perform DEM simulations under a range of representative conditions, extracting the important parameters for the relevant transfer, creation and/or destruction of particles in certain classes within the assembly over time. Continuum predictions of segregation and degradation using this scheme are currently being successfully validated against bulk experimental data and are beginning to be used in schemes to improve the design and operation of bulk solids process plants.
Abstract:
The growth of computer power allows the solution of complex problems related to compressible flow, which is an important class of problems in modern day CFD. Over the last 15 years or so, many review works on CFD have been published. This book concerns both mathematical and numerical methods for compressible flow. In particular, it provides a clear-cut introduction as well as an in-depth treatment of modern numerical methods in CFD. This book is organised in two parts. The first part consists of Chapters 1 and 2, and is mainly devoted to theoretical discussions and results. Chapter 1 concerns fundamental physical concepts and theoretical results in gas dynamics. Chapter 2 describes the basic mathematical theory of compressible flow using the inviscid Euler equations and the viscous Navier–Stokes equations. Existence and uniqueness results are also included. The second part consists of modern numerical methods for the Euler and Navier–Stokes equations. Chapter 3 is devoted entirely to the finite volume method for the numerical solution of the Euler equations and covers fundamental concepts such as order of numerical schemes, stability and high-order schemes. The finite volume method is illustrated for 1-D as well as multidimensional Euler equations. Chapter 4 covers the theory of the finite element method and its application to compressible flow. A section is devoted to the combined finite volume–finite element method, and its background theory is also included. Throughout the book numerous examples have been included to demonstrate the numerical methods. The book provides a good insight into the numerical schemes, theoretical analysis, and validation of test problems. It is a very useful reference for applied mathematicians, numerical analysts, and practising engineers. It is also an important reference for postgraduate researchers in the field of scientific computing and CFD.
Abstract:
A multi-phase framework is typically required for the CFD modelling of metals reduction processes. Such processes typically involve the interaction of liquid metals, a gas (often air) top space, liquid droplets in the top space and injection of both solid particles and gaseous bubbles into the bath. The exchange of mass, momentum and energy between the phases is fundamental to these processes. Multi-phase algorithms are complex and can be unreliable in terms of either or both convergence behaviour or in the extent to which the physics is captured. In this contribution, we discuss these multi-phase flow issues and describe an example of each of the main “single phase” approaches to modelling this class of problems (i.e., Eulerian–Lagrangian and Eulerian–Eulerian). Their utility is illustrated in the context of two problems – one involving the injection of sparging gases into a steel continuous slab caster and the other based on the development of a novel process for aluminium electrolysis. In the steel caster, the coupling of the Lagrangian tracking of the gas phase with the continuum enables the simulation of the transient motion of the metal–flux interface. The model of the electrolysis process employs a novel method for the calculation of slip velocities of oxygen bubbles, resulting from the dissolution of alumina, which allows the efficiency of the process to be predicted.
Abstract:
A parallel time-domain algorithm is described for the time-dependent nonlinear Black-Scholes equation, which may be used to build financial analysis tools to help traders make rapid and systematic evaluations of buy/sell contracts. The algorithm is particularly suitable for problems that do not require fine details at each intermediate time step, and hence the method applies well to the present problem.
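As a sketch of the underlying discretisation, the linear Black-Scholes PDE for a European call can be solved with an explicit finite-difference march backwards from the payoff; the paper's contribution, a parallel time-domain algorithm for a nonlinear variant, is not reproduced here:

```python
import numpy as np

def black_scholes_explicit(K, r, sigma, T, S_max=200.0, ns=200, nt=20000):
    """Explicit finite-difference solve of the (linear) Black-Scholes PDE
    for a European call, marching backwards from the payoff at expiry."""
    S = np.linspace(0.0, S_max, ns + 1)
    dS = S[1] - S[0]
    dt = T / nt          # small enough for explicit-scheme stability
    V = np.maximum(S - K, 0.0)                # payoff at t = T
    for _ in range(nt):
        delta = (V[2:] - V[:-2]) / (2 * dS)
        gamma = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS ** 2
        V[1:-1] += dt * (0.5 * sigma ** 2 * S[1:-1] ** 2 * gamma
                         + r * S[1:-1] * delta - r * V[1:-1])
        V[0] = 0.0
        V[-1] = S_max - K * np.exp(-r * T)    # crude far-field boundary
    return S, V

S, V = black_scholes_explicit(K=100.0, r=0.05, sigma=0.2, T=1.0)
price_at_100 = np.interp(100.0, S, V)
assert 9.0 < price_at_100 < 12.0  # analytic Black-Scholes value is ~10.45
```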
Abstract:
A number of two-dimensional staggered unstructured discretisation schemes for the solution of fluid flow and heat transfer problems have been developed. All schemes store and solve velocity vector components at cell faces, with scalar variables solved at cell centres. The velocity is resolved into face-normal and face-parallel components, and the various schemes investigated differ in the treatment of the parallel component. Steady-state and time-dependent fluid flow and thermal energy equations are solved with the well known pressure correction scheme, SIMPLE, employed to couple continuity and momentum. The numerical methods developed are tested on well known benchmark cases: the Lid-Driven Cavity, Natural Convection in a Cavity and Melting of Gallium in a rectangular domain. The results obtained are shown to be comparable to the benchmarks, with accuracy dependent on scheme selection.