966 results for Numerical Modeling
Abstract:
This paper proposes the implementation of different non-local Planetary Boundary Layer (PBL) schemes within the Regional Atmospheric Modeling System (RAMS) model. The two selected PBL parameterizations are the Medium-Range Forecast (MRF) PBL and its updated version, known as the Yonsei University (YSU) PBL. YSU is a first-order scheme that uses non-local eddy diffusivity coefficients to compute turbulent fluxes. It is based on the MRF, and improves it with an explicit treatment of entrainment. With the aim of evaluating the RAMS results for these PBL parameterizations, a series of numerical simulations have been performed and contrasted with the results obtained using the Mellor and Yamada (MY) scheme, which is also widely used and is the standard PBL scheme in the RAMS model. The numerical study carried out here focuses on mesoscale circulation events during the summer, as these meteorological situations dominate this season of the year on the Western Mediterranean coast. In addition, the sensitivity of these PBL parameterizations to the initial soil moisture content is also evaluated. The results show a warmer and moister PBL for the YSU scheme compared to both MRF and MY. The model also shows a tendency to overestimate the observed temperature and to underestimate the observed humidity for all PBL schemes when a low initial soil moisture content is used. In addition, the bias between the model and the observations is significantly reduced by increasing the initial soil moisture of the corresponding run. Thus, varying this parameter has a positive effect and improves the simulated results in relation to the observations. However, there is still a significant overestimation of the wind speed over flatter terrain, independently of the PBL scheme and the initial soil moisture used, even though RAMS reproduces it with a different degree of accuracy across the different sensitivity tests.
Abstract:
Population balances of polymer species in terms of discrete transforms with respect to counts of groups lead to tractable first-order partial differential equations when all rate constants are independent of chain length and loop formation is negligible [1]. Average molecular weights in the absence of gelation have long been known to be readily found through integration of an initial value problem. The extension to size distribution prediction is also feasible, but its performance is often inferior to that of methods based upon the real chain length domain [2]. Moreover, the absence of a good starting procedure and a higher numerical sensitivity have decisively impaired its application to non-linear reversibly deactivated polymerizations, namely NMRP [3].
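As a minimal illustration of the initial-value-problem route to average molecular weights mentioned above, the following Python sketch (not from the paper; the rate constant and concentrations are invented) integrates the first three moments of the chain-length distribution for an idealized living polymerization with chain-length-independent propagation and no termination:

```python
from scipy.integrate import solve_ivp

kp = 1.0e2   # propagation rate constant, L/(mol s) -- assumed
M0 = 1.0     # initial monomer concentration, mol/L -- assumed
R0 = 1.0e-3  # concentration of living chains, mol/L -- assumed

def moments(t, y):
    # y = [M, mu0, mu1, mu2]: monomer plus the first three moments
    # of the chain-length distribution of the living chains.
    M, mu0, mu1, mu2 = y
    dM   = -kp * M * mu0
    dmu0 = 0.0                        # chain count conserved (no termination)
    dmu1 = kp * M * mu0               # each propagation step adds one unit
    dmu2 = kp * M * (2.0 * mu1 + mu0)
    return [dM, dmu0, dmu1, dmu2]

# Chains start as unit-length species, so all moments equal R0 initially.
sol = solve_ivp(moments, (0.0, 50.0), [M0, R0, R0, R0], rtol=1e-8)
M, mu0, mu1, mu2 = sol.y[:, -1]
print(f"number-average DP = {mu1/mu0:.1f}, dispersity = {mu2*mu0/mu1**2:.3f}")
```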
Abstract:
Stratum corneum (SC) desorption experiments have yielded higher calculated steady-state fluxes than those obtained by epidermal penetration studies. A possible explanation of this result is a variable diffusion or partition coefficient across the SC. We therefore developed the diffusion model for percutaneous penetration and desorption to study the effects of either a variable diffusion coefficient or a variable partition coefficient in the SC over the diffusion path length. Steady-state flux, lag time, and mean desorption time were obtained from Laplace domain solutions. Numerical inversion of the Laplace domain solutions was used for simulations of solute concentration-distance and amount penetrated (desorbed)-time profiles. Heterogeneity of the diffusion and partition coefficients was examined using six different models. The effect of heterogeneity on predicted flux from desorption studies was compared with that obtained in permeation studies. Partition coefficient heterogeneity had a more profound effect on predicted fluxes than diffusion coefficient heterogeneity. Concentration-distance profiles show an even larger dependence on heterogeneity, which is consistent with experimental tape-stripping data reported for clobetasol propionate and other solutes. The clobetasol propionate tape-stripping data were most consistent with the partition coefficient decreasing exponentially for half the SC and then becoming a constant for the remaining SC. (C) 2004 Wiley-Liss, Inc.
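As a hedged sketch of the numerical Laplace inversion step described above, the Python code below applies the Gaver-Stehfest algorithm to the Laplace-domain cumulative amount penetrated in the homogeneous (constant-coefficient) limit of such membrane models; the diffusivity, path length, and surface concentration are assumed values, not the paper's:

```python
import numpy as np
from math import factorial, log

def stehfest_weights(N=12):
    # Gaver-Stehfest weights; N must be even.
    V = np.zeros(N)
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V[k - 1] = (-1.0) ** (k + N // 2) * s
    return V

def laplace_invert(F, t, N=12):
    # Approximate f(t) from its Laplace transform F(s) at a single time t.
    V, a = stehfest_weights(N), log(2.0) / t
    return a * sum(V[k] * F((k + 1) * a) for k in range(N))

D, h, KCv = 1.0e-9, 1.0e-3, 1.0  # assumed: diffusivity (cm^2/s), SC path
                                 # length (cm), partitioned surface conc.

def Q_hat(s):
    # Cumulative amount penetrated per unit area, in the Laplace domain, for a
    # homogeneous membrane with constant donor concentration and sink receptor.
    q = np.sqrt(s / D)
    return D * KCv * q / (s**2 * np.sinh(q * h))

t = 5000.0  # s, well past the lag time h^2/(6D) ~ 167 s
print(laplace_invert(Q_hat, t))              # inverted numerically
print(D * KCv / h * (t - h**2 / (6.0 * D)))  # steady-state asymptote, for comparison
```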
Abstract:
We explore both the rheology and complex flow behavior of monodisperse polymer melts. Adequate quantities of monodisperse polymer were synthesized so that both the material's rheology and its microprocessing behavior could be established. In parallel, we employ a molecular theory for the polymer rheology that is suitable for comparison with experimental rheometric data and with numerical simulation of microprocessing flows. The model is capable of matching both shear and extensional data with minimal parameter fitting. Experimental data for the processing behavior of monodisperse polymers are presented for the first time as flow birefringence and pressure difference data obtained using a Multipass Rheometer with an 11:1 constriction entry and exit flow. Matching of the experimental processing data was obtained using the constitutive equation with the Lagrangian numerical solver, FLOWSOLVE. The results show the direct coupling between molecular constitutive response and macroscopic processing behavior, and differentiate flow effects that arise separately from orientation and stretch. (c) 2005 The Society of Rheology.
Abstract:
A critical assessment is presented of the existing fluid flow models used for dense medium cyclones (DMCs) and hydrocyclones. As the present discussion indicates, the understanding of dense medium cyclone flow is still far from complete. However, its similarity to the hydrocyclone provides a basis for improved understanding of fluid flow in DMCs. The complexity of fluid flow in DMCs is basically due to the presence of the medium as well as the dominance of turbulent particle size and density effects on separation. Both theoretical and experimental analyses are considered with respect to two-phase motion and solid phase flow in hydrocyclones and DMCs. A detailed discussion is presented of the empirical, semiempirical, and numerical models available in the literature, based upon both the vorticity-stream function approach and the Navier-Stokes equations in primitive variables in cylindrical coordinates. The existing equations describing turbulence and multiphase flows in cyclones are also critically reviewed.
Abstract:
Due to complex field/tissue interactions, high-field magnetic resonance (MR) images suffer significant image distortions that result in compromised diagnostic quality. A new method that attempts to remove these distortions is proposed in this paper, based on the use of transceive phased arrays. The proposed system uses, in the examples presented herein, a shielded four-element transceive phased array head coil and involves performing two separate scans of the same slice, with each scan using different excitations during transmission. By optimizing the amplitudes and phases for each scan, antipodal signal profiles can be obtained, and by combining the two images together, the image distortion can be reduced several fold. A combined hybrid method of moments (MoM)/finite element method (FEM) and finite-difference time-domain (FDTD) technique is proposed and used to elucidate the concept of the new method and to accurately evaluate the electromagnetic field (EMF) in a human head model. In addition, the proposed method is used in conjunction with the generalized auto-calibrating partially parallel acquisitions (GRAPPA) reconstruction technique to enable rapid imaging of the two scans. Simulation results reported herein for 11-T (470-MHz) brain imaging applications show that the new method with GRAPPA reconstruction theoretically results in improved image quality and that the proposed combined hybrid MoM/FEM and FDTD technique is suitable for high-field magnetic resonance imaging (MRI) numerical analysis.
Abstract:
The development of models in Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from being finalized. Therefore there is a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. Once verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme or the Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, which is a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation best suited to the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
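To make the Data/PDE-class description concrete, here is the kind of minimal escript script its user documentation shows for the Finley backend: a Poisson problem on a unit rectangle with homogeneous Dirichlet conditions on two sides. This is a sketch based on the published escript documentation; module paths and class names may differ between escript versions.

```python
from esys.escript import *                   # Data class and utility functions
from esys.escript.linearPDEs import Poisson  # a predefined linear PDE object
from esys.finley import Rectangle            # Finley finite element domain

# Unit square discretized into 40 x 40 elements by the Finley library.
mydomain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=40)
x = mydomain.getX()

# Characteristic function of the boundary portions x0 = 0 and x1 = 0,
# used as the mask where the homogeneous Dirichlet condition applies.
gammaD = whereZero(x[0]) + whereZero(x[1])

# Solve -laplace(u) = 1 with u = 0 on the masked boundary.
mypde = Poisson(domain=mydomain)
mypde.setValue(f=1, q=gammaD)
u = mypde.getSolution()

saveVTK("u.vtu", sol=u)                      # export the solution field
```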
Abstract:
The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing, with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of photons, whose mobility is used to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequence used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both of the proposed setups are based on state-of-the-art technological achievements. By extensive numerical experiments we show that their performance is remarkably good, even for the implementation of the long sequences of gates used to simulate interesting physical models. Therefore, the systems examined here are really promising building blocks of future scalable architectures and can be used for proof-of-principle experiments of quantum information processing and quantum simulation.
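The density-matrix simulations referred to above amount to integrating a Lindblad master equation. As a hedged, self-contained illustration (not the authors' code; the Rabi frequency and dephasing rate are invented), the sketch below propagates a single driven qubit subject to pure dephasing:

```python
import numpy as np
from scipy.integrate import solve_ivp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega = 2.0 * np.pi            # Rabi frequency of the control pulse -- assumed
gamma = 0.05                   # pure-dephasing rate -- assumed
H = 0.5 * omega * sx           # resonant drive in the rotating frame
L = np.sqrt(gamma / 2.0) * sz  # Lindblad (dephasing) operator

def master_eq(t, y):
    # Lindblad master equation acting on the vectorized 2x2 density matrix.
    rho = y.reshape(2, 2)
    drho = -1j * (H @ rho - rho @ H)
    drho += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho
                                          + rho @ L.conj().T @ L)
    return drho.ravel()

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # qubit initialized in |0>
sol = solve_ivp(master_eq, (0.0, 0.5), rho0.ravel(), max_step=1e-3)
rho_T = sol.y[:, -1].reshape(2, 2)
# Pulse duration 0.5 = pi/omega, i.e. a pi pulse; population falls short of 1
# by the amount lost to dephasing.
print("excited-state population:", rho_T[1, 1].real)
```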
Abstract:
A consequence of a loss of coolant accident is that the local insulation material is damaged and may be transported to the containment sump, where it can penetrate and/or block the sump strainers. An experimental and theoretical study, which examines the transport of mineral wool fibers via single- and multi-effect experiments, is being performed. This paper focuses on the experiments and simulations performed for validation of numerical models of sedimentation and resuspension of mineral wool fiber agglomerates in a racetrack-type channel. Three velocity conditions are used to test the response of two dispersed-phase fiber agglomerates to two drag correlations and to two turbulent dispersion coefficients. The Eulerian multiphase flow model is applied with either one or two dispersed phases.
Abstract:
The paper presents a comparison between the different drag models for granular flows developed in the literature and the effect of each one of them on the fast pyrolysis of wood. The process takes place in a 100 g/h lab-scale bubbling fluidized bed reactor located at Aston University. FLUENT 6.3 is used as the modeling framework for the fluidized bed hydrodynamics, while the fast pyrolysis of the discrete wood particles is incorporated as an external user-defined function (UDF) hooked to FLUENT's main code structure. Three different drag models for granular flows are compared, namely the Gidaspow, Syamlal-O'Brien, and Wen-Yu models, already incorporated in FLUENT's main code, and their impact on particle trajectory, heat transfer, degradation rate, product yields, and char residence time is quantified. The Eulerian approach is used to model the bubbling behavior of the sand, which is treated as a continuum. Biomass reaction kinetics is modeled according to the literature using a two-stage, semi-global model that takes into account secondary reactions.
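For reference, the Wen-Yu correlation (one of the three drag models compared) can be written compactly as below. This is a hedged sketch following the commonly published form of the model, with illustrative parameter values; the exact FLUENT implementation may differ in detail.

```python
def wen_yu_beta(eps_g, rho_g, mu_g, d_p, slip):
    """Gas-solid momentum exchange coefficient (kg/m^3/s), Wen-Yu model.

    eps_g: gas volume fraction; rho_g: gas density (kg/m^3);
    mu_g: gas viscosity (Pa s); d_p: particle diameter (m);
    slip: |u_gas - u_solid| (m/s).
    """
    eps_s = 1.0 - eps_g
    # Gas-fraction-weighted particle Reynolds number, guarded against zero slip.
    Re = max(eps_g * rho_g * slip * d_p / mu_g, 1.0e-12)
    Cd = 24.0 / Re * (1.0 + 0.15 * Re**0.687) if Re < 1000.0 else 0.44
    return 0.75 * Cd * eps_g * eps_s * rho_g * slip / d_p * eps_g**-2.65

# e.g. 500-micron sand fluidized by hot nitrogen (illustrative values only):
print(wen_yu_beta(eps_g=0.6, rho_g=0.45, mu_g=3.6e-5, d_p=5.0e-4, slip=0.3))
```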
Abstract:
A consequence of a loss of coolant accident is the damage of adjacent insulation materials (IM). IM may then be transported to the containment sump strainers, where water is drawn into the ECCS (emergency core cooling system). Blockage of the strainers by IM leads to an increased pressure drop acting on the operating ECCS pumps. IM can also penetrate the strainers, enter the reactor coolant system, and then accumulate in the reactor pressure vessel. An experimental and theoretical study that concentrates on mineral wool fiber transport in the containment sump and the ECCS is being performed. The study entails fiber generation and the assessment of fiber transport in single- and multi-effect experiments. The experiments include measurement of the terminal settling velocity, the strainer pressure drop, fiber sedimentation and resuspension in a channel flow, and jet flow in a rectangular tank. An integrated test facility is also operated to assess the compounded effects. Each experimental facility is used to provide data for the validation of equivalent computational fluid dynamics models. The channel flow facility allows the determination of the steady-state distribution of the fibers at different flow velocities. The fibers are modeled in the Eulerian-Eulerian reference frame as spherical wetted agglomerates. The fiber agglomerate size, the density, the relative viscosity of the fluid-fiber mixture, and the turbulent dispersion of the fibers all affect the steady-state accumulation of fibers at the channel base. In the current simulations, two fiber phases are considered separately. The particle size is kept constant while the density is modified, which affects both the terminal velocity and the volume fraction. The relative viscosity is only significant at higher concentrations. The numerical model finds that the fibers accumulate at the channel base even at high velocities; therefore, modifications to the drag and turbulent dispersion forces can be made to reduce fiber accumulation.
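As a small, hedged companion to the terminal settling velocity measurements mentioned above (all property values are assumed, and real agglomerates are only approximately spherical), the following fixed-point iteration balances gravity, buoyancy, and Schiller-Naumann drag for a single wetted agglomerate treated as a sphere:

```python
def terminal_velocity(d, rho_p, rho_f=998.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity (m/s) of a sphere of diameter d (m) and
    density rho_p (kg/m^3) in water, via fixed-point iteration on the drag law."""
    u = 0.01                                       # initial guess, m/s
    for _ in range(100):
        Re = max(rho_f * u * d / mu, 1.0e-12)      # particle Reynolds number
        Cd = 24.0 / Re * (1.0 + 0.15 * Re**0.687) if Re < 1000.0 else 0.44
        u_new = (4.0 * g * d * (rho_p - rho_f) / (3.0 * Cd * rho_f)) ** 0.5
        if abs(u_new - u) < 1.0e-8:                # converged drag/weight balance
            break
        u = u_new
    return u

# A 1 mm wetted agglomerate barely denser than water (illustrative values):
print(f"{terminal_velocity(d=1.0e-3, rho_p=1050.0):.4f} m/s")
```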
Abstract:
WiMAX has been introduced as a competitive alternative for metropolitan broadband wireless access technologies. It is connection-oriented and can provide very high data rates, large service coverage, and flexible quality of service (QoS). Due to the large number of connections and the flexible QoS supported by WiMAX, uplink access in WiMAX networks is very challenging, since the medium access control (MAC) protocol must efficiently manage the bandwidth and the related channel allocations. In this paper, we propose and investigate a cost-effective WiMAX bandwidth management scheme, named the WiMAX partial sharing scheme (WPSS), in order to provide good QoS while achieving better bandwidth utilization and network throughput. The proposed bandwidth management scheme is compared with a simple but inefficient scheme, named the WiMAX complete sharing scheme (WCPS). A maximum entropy (ME) based analytical model (MEAM) is proposed for the performance evaluation of the two bandwidth management schemes. The reason for using the MEAM for the performance evaluation is that it can efficiently model a large-scale system in which the number of stations or connections is generally very high, while the traditional simulation and analytical (e.g., Markov model) approaches cannot perform well due to their high computational complexity. We model the bandwidth management scheme as a queuing network model (QNM) that consists of interacting multiclass queues for the different service classes. Closed-form expressions for the state and blocking probability distributions are derived for these schemes. Simulation results verify the MEAM numerical results and show that WPSS can significantly improve the network's performance compared to WCPS.
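The maximum-entropy analysis itself is beyond a short sketch, but the basic trade-off between completely sharing and partitioning bandwidth units can be illustrated with a plain Erlang-B calculation. This is not the paper's MEAM model; the channel count and per-class loads below are invented for illustration.

```python
def erlang_b(servers, load):
    # Blocking probability for `servers` channels offered `load` Erlangs,
    # computed with the numerically stable Erlang-B recursion.
    b = 1.0
    for n in range(1, servers + 1):
        b = load * b / (n + load * b)
    return b

C = 32                  # total bandwidth units -- assumed
loads = [10.0, 8.0]     # offered load per service class, Erlangs -- assumed

shared = erlang_b(C, sum(loads))              # one fully shared channel pool
split = [erlang_b(C // 2, a) for a in loads]  # static 50/50 partition
print(f"complete sharing blocking: {shared:.4f}")
print(f"partitioned blocking:      {split[0]:.4f}, {split[1]:.4f}")
```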
On the numerical solution of a Cauchy problem in an elastostatic half-plane with a bounded inclusion
Abstract:
We propose an iterative procedure for the inverse problem of determining the displacement vector on the boundary of a bounded planar inclusion given the displacement and stress fields on an infinite (planar) line-segment. At each iteration step mixed boundary value problems in an elastostatic half-plane containing the bounded inclusion are solved. For efficient numerical implementation of the procedure these mixed problems are reduced to integral equations over the bounded inclusion. Well-posedness and numerical solution of these boundary integral equations are presented, and a proof of convergence of the procedure for the inverse problem to the original solution is given. Numerical investigations are presented both for the direct and inverse problems, and these results show in particular that the displacement vector on the boundary of the inclusion can be found in an accurate and stable way with small computational cost.
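The elastostatic mixed problems themselves require a boundary-element discretization too long to reproduce here, but the stabilizing role of iteration in procedures of this kind can be seen on a toy problem. The sketch below is entirely illustrative and swaps in a different, simpler technique: Landweber iteration with early stopping applied to a severely smoothing linear operator (the kernel, noise level, and data are invented, and this is not the paper's mixed-BVP formulation).

```python
import numpy as np

n = 60
x = np.linspace(0.0, 1.0, n)
# A smoothing (hence ill-conditioned) kernel standing in for the forward map
# from inclusion-boundary displacement to line-segment data -- assumed shape.
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02) / n
g_true = np.sin(2.0 * np.pi * x)                  # quantity to reconstruct
rng = np.random.default_rng(0)
f = A @ g_true + 1.0e-4 * rng.standard_normal(n)  # noisy indirect data

g = np.zeros(n)
omega = 1.0 / np.linalg.norm(A, 2) ** 2           # step size ensuring convergence
for k in range(2000):                             # early stopping regularizes
    g = g + omega * A.T @ (f - A @ g)

print("relative error:", np.linalg.norm(g - g_true) / np.linalg.norm(g_true))
```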