947 results for Numerical surface modeling


Relevance: 30.00%

Abstract:

The suction profile of a desiccating soil depends on the water table depth, the soil-water retention characteristics, and the climatic conditions. In this paper, an unsaturated flow model, which simulates both liquid and vapour flow, was used to investigate the effects of varying the water table depth and the evaporation rate on the evaporative fluxes from a desiccating tailings deposit under steady-state conditions. The results showed that beyond a critical evaporation rate, at which evaporation ceases to be dictated by climatic conditions, the matric suction profiles remain essentially unchanged. The critical evaporation rate varies inversely with the water table depth and corresponds to the maximum evaporative flux that can be extracted from a soil under steady-state conditions. The time required to establish steady-state conditions is directly proportional to the water table depth and reaches its maximum at the critical evaporation rate. A detailed investigation of the movement of the drying front demonstrated the significance of attaining a matric suction of about 3000 kPa for the contribution of flow in the vapour phase.
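
A minimal liquid-flow-only sketch of the steady-state picture described above (vapour flow and the full retention curve are ignored): assuming a Gardner-type exponential conductivity function, the height to which a steady upward flux can be sustained above the water table is integrated numerically, and the critical (maximum) evaporation rate for a given water table depth is found by root finding. The conductivity model and all parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

K_SAT = 1.0e-7   # saturated hydraulic conductivity [m/s] (illustrative)
ALPHA = 2.0      # Gardner exponent [1/m] (illustrative)

def conductivity(psi):
    """Gardner-type unsaturated conductivity; psi = matric suction head [m]."""
    return K_SAT * np.exp(-ALPHA * psi)

def max_height(q):
    """Maximum height above the water table to which a steady upward flux q can be drawn.

    From Darcy's law with z upward, dz/dpsi = 1 / (q/K + 1) = K / (q + K);
    integrating over suction from the water table (psi = 0) to a very dry state.
    """
    integrand = lambda psi: conductivity(psi) / (q + conductivity(psi))
    z, _ = quad(integrand, 0.0, 500.0, limit=200)   # 500 m head ~ 5000 kPa, ample upper bound
    return z

def critical_evaporation_rate(water_table_depth):
    """Largest steady evaporative flux the soil can deliver from the given water table depth."""
    f = lambda q: max_height(q) - water_table_depth
    return brentq(f, 1e-12, 1e-3)    # bracket chosen for the illustrative parameters above

for depth in (1.0, 2.0, 5.0):        # water table depths [m]
    e_crit = critical_evaporation_rate(depth)
    print(f"water table depth {depth:4.1f} m -> critical evaporation rate {e_crit:.3e} m/s")
```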

Relevance: 30.00%

Abstract:

The development of models in the Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. There is therefore a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. Once verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as large-scale meshes typically have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library that solves a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define a model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme or the Crank-Nicolson scheme), or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow data to be generated, held, accessed, and manipulated in such a way that the representation best suited to the particular context remains transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
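
A minimal usage sketch in the spirit of the layered design described above: a LinearPDE object (mathematical layer) is set up on a Finley mesh (numerical algorithm layer) and solved, with Data-like objects hiding the underlying representation. The module paths and argument names follow the publicly documented esys-escript conventions but may differ between versions; treat this as an assumption-laden illustration rather than code from the talk.

```python
# Solve -div(grad u) = 1 on the unit square, with u = 0 on the left edge,
# using the layered escript/Finley design sketched in the abstract.
from esys.escript import kronecker, whereZero, inf, sup
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

# numerical-algorithm layer: Finley provides the finite element mesh
domain = Rectangle(n0=40, n1=40, l0=1.0, l1=1.0)

# mathematical layer: the model is expressed as coefficients of a generic linear PDE
pde = LinearPDE(domain)
x = domain.getX()
pde.setValue(A=kronecker(domain),      # -div(A grad u) ...
             Y=1.0,                    # ... = Y
             q=whereZero(x[0]),        # constrain u where x0 == 0 ...
             r=0.0)                    # ... to the value 0

u = pde.getSolution()                  # handled by the linear solver behind the PDE class
print("solution range:", inf(u), sup(u))
```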

Relevance: 30.00%

Abstract:

The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer based on spin systems. The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which combines the best characteristics of photons, whose mobility is exploited to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators. The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequence used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both of the proposed setups are based on state-of-the-art technological achievements. By extensive numerical experiments we show that their performance is remarkably good, even for the implementation of long sequences of gates used to simulate interesting physical models. Therefore, the systems examined here are really promising building blocks of future scalable architectures and can be used for proof-of-principle experiments of quantum information processing and quantum simulation.
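
As an illustration of the kind of time-evolution study described above, the sketch below integrates a Lindblad master equation for two spin qubits with a switchable Ising-like coupling and pure dephasing. QuTiP is used purely as an assumption of this example (the thesis does not name its software), and all parameter values are illustrative.

```python
# Sketch: master-equation simulation of a switchable two-qubit interaction with dephasing.
import numpy as np
from qutip import basis, tensor, sigmax, sigmaz, qeye, mesolve

wz = 1.0          # single-qubit splitting (arbitrary units, illustrative)
J = 0.05          # effective qubit-qubit coupling while the "linker" switch is closed
gamma = 1e-3      # pure-dephasing rate modelling decoherence

sz1 = tensor(sigmaz(), qeye(2))
sz2 = tensor(qeye(2), sigmaz())
H = 0.5 * wz * (sz1 + sz2) + J * sz1 * sz2     # Hamiltonian with the interaction switched on

# collapse operators: pure dephasing acting on each qubit
c_ops = [np.sqrt(gamma) * sz1, np.sqrt(gamma) * sz2]

# start in |+>|0> and watch the conditional phase accumulate during a controlled-phase step
plus = (basis(2, 0) + basis(2, 1)).unit()
rho0 = tensor(plus, basis(2, 0))

tlist = np.linspace(0.0, np.pi / (2 * J), 200)
result = mesolve(H, rho0, tlist, c_ops, e_ops=[tensor(sigmax(), qeye(2))])

print("final <sx> on qubit 1:", result.expect[0][-1])
```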

Relevance: 30.00%

Abstract:

This thesis concerns mixed flows (characterized by the simultaneous occurrence of free-surface and pressurized flow in sewers, tunnels, culverts or under bridges) and contributes to the improvement of the existing numerical tools for modelling these phenomena. The classic Preissmann slot approach is selected due to its simplicity and its capability of predicting results comparable to those of a more recent and complex two-equation model, as shown here with reference to a laboratory test case. In order to enhance the computational efficiency, a local time stepping strategy is implemented in a shock-capturing Godunov-type finite volume numerical scheme for the integration of the de Saint-Venant equations. The results of different numerical tests show that local time stepping reduces run time significantly compared to conventional global time stepping (between 29% and 85% less CPU time for the test cases considered), especially when only a small region of the flow field is surcharged, while solution accuracy and mass conservation are not impaired. The second part of this thesis is devoted to the modelling of the hydraulic effects of potentially pressurized structures, such as bridges and culverts, inserted in open channel domains. To this end, a two-dimensional mixed flow model is developed first. The classic conservative formulation of the 2D shallow water equations for free-surface flow is adapted by assuming that two fictitious vertical slots, intersecting at right angles, are added on the ceiling of each integration element. Numerical results show that this schematization is suitable for the prediction of 2D flooding phenomena in which the pressurization of crossing structures can be expected. Given that the Preissmann model does not allow for the possibility of bridge overtopping, a one-dimensional model is also presented in this thesis to handle this particular condition. The flows below and above the deck are considered as parallel and are linked to the upstream and downstream reaches of the channel by introducing suitable internal boundary conditions. The comparison with experimental data and with the results of HEC-RAS simulations shows that the proposed model can be a useful and effective tool for predicting overtopping and backwater effects induced by the presence of bridges and culverts.
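
A small sketch of the classic Preissmann slot idea used in the first part of the thesis: a narrow fictitious slot on the conduit crown, of width T_slot = g*A_full/a^2, lets the free-surface equations represent pressurized flow, with the water level in the slot acting as the pressure head. The pressure-wave celerity and the pipe geometry below are illustrative assumptions.

```python
import math

G = 9.81

def preissmann_slot_width(area_full, celerity):
    """Slot width chosen so that the gravity wave speed sqrt(g*A/T) of the slotted
    section equals the desired pressure-wave celerity once the conduit runs full."""
    return G * area_full / celerity**2

def wave_speed(area, top_width):
    """Gravity wave celerity of the free-surface (or slotted) cross-section."""
    return math.sqrt(G * area / top_width)

# illustrative circular pipe
diameter = 1.0
a = 100.0                              # target pressure-wave celerity [m/s]
A_full = math.pi * diameter**2 / 4.0
T_slot = preissmann_slot_width(A_full, a)
print(f"slot width: {T_slot * 1000:.2f} mm")

# once pressurized, the extra depth stored in the slot represents the pressure head
surcharge_area = 0.005 * A_full        # fictitious extra wetted area held in the slot
pressure_head = surcharge_area / T_slot
print(f"pressure head above the crown: {pressure_head:.2f} m")
print(f"wave speed when surcharged: {wave_speed(A_full + surcharge_area, T_slot):.1f} m/s")
```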

Relevance: 30.00%

Abstract:

The calcitonin gene-related peptide (CGRP) receptor is a heterodimer of a family B G-protein-coupled receptor, the calcitonin receptor-like receptor (CLR), and the accessory protein receptor activity-modifying protein 1. It couples to Gs, but it is not known which intracellular loops mediate this. We have identified the boundaries of the second intracellular loop (ICL2) based on the relative position and length of the juxtamembrane transmembrane regions 3 and 4. The loop has been analyzed by systematic mutagenesis of all residues to alanine, measuring cAMP accumulation, CGRP affinity, and receptor expression. Unlike in rhodopsin, ICL2 of the CGRP receptor plays a part in the conformational switch after agonist interaction. His-216 and Lys-227 were essential for a functional CGRP-induced cAMP response. The effect of (H216A)CLR is due to a disruption of the cell surface transport or surface stability of the mutant receptor. In contrast, (K227A)CLR had wild-type expression and agonist affinity, suggesting a direct disruption of the downstream signal transduction mechanism of the CGRP receptor. Modeling suggests that the loop undergoes a significant shift in position during receptor activation, exposing a potential G-protein binding pocket. Lys-227 changes position to point into the pocket, potentially allowing it to interact with bound G-proteins. His-216 occupies a position similar to that of Tyr-136 in bovine rhodopsin, part of the DRY motif of the latter receptor. This is the first comprehensive analysis of an entire intracellular loop within the calcitonin family of G-protein-coupled receptors. These data help to define the structural and functional characteristics of the CGRP receptor and of family B G-protein-coupled receptors in general. © 2006 by The American Society for Biochemistry and Molecular Biology, Inc.

Relevance: 30.00%

Abstract:

A consequence of a loss of coolant accident is that the local insulation material is damaged and may be transported to the containment sump, where it can penetrate and/or block the sump strainers. An experimental and theoretical study, which examines the transport of mineral wool fibers via single- and multi-effect experiments, is being performed. This paper focuses on the experiments and simulations performed for the validation of numerical models of sedimentation and resuspension of mineral wool fiber agglomerates in a racetrack-type channel. Three velocity conditions are used to test the response of two dispersed-phase fiber agglomerates to two drag correlations and two turbulent dispersion coefficients. The Eulerian multiphase flow model is applied with either one or two dispersed phases.
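
The abstract does not name the two drag correlations tested; as a hedged illustration of how such correlations can differ, the sketch below compares the Stokes drag coefficient with the widely used Schiller-Naumann correlation over particle Reynolds numbers relevant to small wetted agglomerates. Both laws and the Reynolds-number range are assumptions of this example.

```python
import numpy as np

def cd_stokes(re):
    """Stokes drag coefficient, strictly valid only for creeping flow (Re << 1)."""
    return 24.0 / re

def cd_schiller_naumann(re):
    """Schiller-Naumann correlation for spheres, commonly applied up to Re ~ 1000."""
    return np.where(re < 1000.0,
                    24.0 / re * (1.0 + 0.15 * re**0.687),
                    0.44)

re = np.logspace(-1, 3, 9)   # particle Reynolds numbers of interest
for r, c_st, c_sn in zip(re, cd_stokes(re), cd_schiller_naumann(re)):
    print(f"Re = {r:8.2f}   Cd(Stokes) = {c_st:8.3f}   Cd(Schiller-Naumann) = {c_sn:8.3f}")
```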

Relevance: 30.00%

Abstract:

This paper presents a novel approach to water pollution detection from remotely sensed, low-platform-mounted visible band camera images. We examine the feasibility of unsupervised segmentation for slick (oily spills on the water surface) region labelling. Adaptive and non-adaptive filtering is combined with density modeling of the obtained textural features. Particular effort is concentrated on textural feature extraction from raw intensity images using filter banks, and on adaptive feature extraction from the obtained output coefficients. Segmentation in the extracted feature space is achieved using Gaussian mixture models (GMM).
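
A minimal sketch of the pipeline described above: textural features are built from the magnitudes of a Gabor filter-bank response and segmented with a two-component Gaussian mixture model. The scikit-image and scikit-learn calls, and the stock test image standing in for a camera frame, are assumptions of this illustration; the paper does not specify its implementation.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gabor
from sklearn.mixture import GaussianMixture

# stand-in for a visible-band camera frame of the water surface (downsampled for speed)
image = img_as_float(data.camera())[::2, ::2]

# textural features: Gabor filter-bank response magnitudes at several
# frequencies and orientations, one feature channel per filter
features = []
for frequency in (0.1, 0.2, 0.4):
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, imag = gabor(image, frequency=frequency, theta=theta)
        features.append(np.hypot(real, imag))
features = np.stack(features, axis=-1)            # H x W x n_filters
X = features.reshape(-1, features.shape[-1])      # one feature vector per pixel

# unsupervised segmentation: two-component Gaussian mixture ("slick" vs background)
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X).reshape(image.shape)

print("pixels per class:", np.bincount(labels.ravel()))
```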

Relevance: 30.00%

Abstract:

PURPOSE. A methodology for noninvasively characterizing the three-dimensional (3-D) shape of the complete human eye is not currently available for research into ocular diseases that have a structural substrate, such as myopia. A novel application of a magnetic resonance imaging (MRI) acquisition and analysis technique is presented that, for the first time, allows the 3-D shape of the eye to be investigated fully. METHODS. The technique involves the acquisition of a T2-weighted MRI, which is optimized to reveal the fluid-filled chambers of the eye. Automatic segmentation and meshing algorithms generate a 3-D surface model, which can be shaded with morphologic parameters such as distance from the posterior corneal pole and deviation from sphericity. Full details of the method are illustrated with data from 14 eyes of seven individuals. The spatial accuracy of the calculated models is demonstrated by comparing the MRI-derived axial lengths with values measured in the same eyes using interferometry. RESULTS. The color-coded eye models showed substantial variation in the absolute size of the 14 eyes. Variations in the sphericity of the eyes were also evident, with some appearing approximately spherical whereas others were clearly oblate and one was slightly prolate. Nasal-temporal asymmetries were noted in some subjects. CONCLUSIONS. The MRI acquisition and analysis technique allows a novel way of examining 3-D ocular shape. The ability to stratify and analyze eye shape, ocular volume, and sphericity will further extend the understanding of which specific biometric parameters predispose emmetropic children subsequently to develop myopia. Copyright © Association for Research in Vision and Ophthalmology.
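
A hedged sketch of one of the morphologic parameters mentioned above, deviation from sphericity: a sphere is fitted to surface vertices by linear least squares and each vertex is assigned its signed radial deviation. The synthetic oblate "globe" and all dimensions are illustrative; this is not the authors' segmentation or meshing pipeline.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit using the linearized form |p|^2 = 2 p.c + (R^2 - |c|^2)."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, d = sol[:3], sol[3]
    radius = np.sqrt(d + centre @ centre)
    return centre, radius

def sphericity_deviation(points):
    """Signed radial deviation of each surface vertex from the best-fit sphere."""
    centre, radius = fit_sphere(points)
    return np.linalg.norm(points - centre, axis=1) - radius

# illustrative "eye" surface: a slightly oblate ellipsoid sampled at random vertices
rng = np.random.default_rng(0)
u, v = rng.uniform(0, np.pi, 2000), rng.uniform(0, 2 * np.pi, 2000)
axes = np.array([12.0, 12.0, 11.0])   # semi-axes in mm; shorter axial length -> oblate globe
pts = np.column_stack([axes[0] * np.sin(u) * np.cos(v),
                       axes[1] * np.sin(u) * np.sin(v),
                       axes[2] * np.cos(u)])

dev = sphericity_deviation(pts)
print(f"deviation from sphericity: min {dev.min():.2f} mm, max {dev.max():.2f} mm")
```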

Relevance: 30.00%

Abstract:

The paper presents a comparison between the different drag models for granular flows developed in the literature and the effect of each of them on the fast pyrolysis of wood. The process takes place in a 100 g/h lab-scale bubbling fluidized bed reactor located at Aston University. FLUENT 6.3 is used as the modeling framework for the fluidized bed hydrodynamics, while the fast pyrolysis of the discrete wood particles is incorporated as an external user-defined function (UDF) hooked to FLUENT's main code structure. Three different drag models for granular flows are compared, namely the Gidaspow, Syamlal-O'Brien, and Wen-Yu models, already incorporated in FLUENT's main code, and their impact on particle trajectory, heat transfer, degradation rate, product yields, and char residence time is quantified. The Eulerian approach is used to model the bubbling behavior of the sand, which is treated as a continuum. Biomass reaction kinetics is modeled according to the literature using a two-stage, semi-global model that takes into account secondary reactions.
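
For reference, the sketch below implements two of the three interphase momentum-exchange models compared (Wen-Yu, and Gidaspow, which blends Wen-Yu with the Ergun equation in dense regions); the Syamlal-O'Brien model is omitted for brevity. The formulas follow the standard forms used in FLUENT-style Eulerian granular models, and the bed conditions are illustrative assumptions.

```python
import numpy as np

def reynolds(rho_g, d_s, slip, mu_g):
    """Particle Reynolds number based on the gas-solid slip velocity."""
    return rho_g * d_s * slip / mu_g

def k_wen_yu(alpha_g, rho_g, mu_g, d_s, slip):
    """Wen-Yu gas-solid momentum exchange coefficient [kg/(m^3 s)]."""
    alpha_s = 1.0 - alpha_g
    re = reynolds(rho_g, d_s, slip, mu_g)
    cd = 24.0 / (alpha_g * re) * (1.0 + 0.15 * (alpha_g * re) ** 0.687)
    return 0.75 * cd * alpha_s * alpha_g * rho_g * slip / d_s * alpha_g ** (-2.65)

def k_gidaspow(alpha_g, rho_g, mu_g, d_s, slip):
    """Gidaspow model: Wen-Yu for dilute flow, Ergun equation when alpha_g <= 0.8."""
    alpha_s = 1.0 - alpha_g
    if alpha_g > 0.8:
        return k_wen_yu(alpha_g, rho_g, mu_g, d_s, slip)
    return (150.0 * alpha_s**2 * mu_g / (alpha_g * d_s**2)
            + 1.75 * rho_g * alpha_s * slip / d_s)

# illustrative bubbling-bed conditions (sand fluidized by hot gas)
rho_g, mu_g = 0.45, 3.5e-5          # gas density [kg/m^3] and viscosity [Pa s]
d_s, slip = 440e-6, 0.3             # sand diameter [m] and slip velocity [m/s]

for alpha_g in (0.45, 0.6, 0.8, 0.95):
    print(f"alpha_g = {alpha_g:.2f}  "
          f"K_WenYu = {k_wen_yu(alpha_g, rho_g, mu_g, d_s, slip):10.1f}  "
          f"K_Gidaspow = {k_gidaspow(alpha_g, rho_g, mu_g, d_s, slip):10.1f}")
```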

Relevance: 30.00%

Abstract:

The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high-efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial-scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high-efficiency cyclones for char removal, and a disk-and-doughnut quench column combined with a wet-walled electrostatic precipitator, which is mounted directly on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling of the reaction system was undertaken, enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity of 12.1 m/s between the heated surface and the reacting biomass particle. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a throughput of approximately 30 kg/hr. This is an area that should be considered in the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry-wood-fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and if longer operating runs had been attempted to offset the product losses that occur when collecting all available product from a large-scale collection unit. The liquids collection system was highly efficient, and modelling determined a liquid collection efficiency of above 99% on a mass basis. This was validated by a dry ice/acetone condenser and a cotton wool filter placed downstream of the collection unit, which enabled mass measurements of the condensable product exiting the product collection unit and showed that the collection efficiency was indeed in excess of 99% on a mass basis.
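
A hedged illustration of the chained, modular design approach described above, in which the ablation-rate module feeds the reactor-capacity module. The simple capacity relation (throughput proportional to wood density, ablation rate and particle-to-surface contact area) and the contact-area values are assumptions of this sketch, not figures from the thesis; only the 0.63 mm/s ablation rate echoes the reported result.

```python
# Output of the ablation-rate module feeds the reactor-capacity module.
WOOD_DENSITY = 450.0        # kg/m^3, typical pine (assumed)
ABLATION_RATE = 0.63e-3     # m/s, as reported for pine at 525 C and 12.1 m/s relative velocity

def reactor_capacity(contact_area_m2, utilisation=1.0):
    """Dry-biomass throughput [kg/hr] for a given particle-to-surface contact area
    (capacity relation assumed for illustration only)."""
    mass_rate = WOOD_DENSITY * ABLATION_RATE * contact_area_m2 * utilisation  # kg/s
    return mass_rate * 3600.0

# hypothetical contact areas, showing how throughput scales with reactor size
for area_cm2 in (10.0, 100.0, 300.0):
    print(f"contact area {area_cm2:6.1f} cm^2 -> {reactor_capacity(area_cm2 * 1e-4):7.1f} kg/hr")
```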

Relevance: 30.00%

Abstract:

A consequence of a loss of coolant accident is the damage of adjacent insulation materials (IM). IM may then be transported to the containment sump strainers, where water is drawn into the ECCS (emergency core cooling system). Blockage of the strainers by IM leads to an increased pressure drop acting on the operating ECCS pumps. IM can also penetrate the strainers, enter the reactor coolant system and then accumulate in the reactor pressure vessel. An experimental and theoretical study that concentrates on mineral wool fiber transport in the containment sump and the ECCS is being performed. The study entails fiber generation and the assessment of fiber transport in single- and multi-effect experiments. The experiments include measurement of the terminal settling velocity, the strainer pressure drop, fiber sedimentation and resuspension in a channel flow, and jet flow in a rectangular tank. An integrated test facility is also operated to assess the compounded effects. Each experimental facility is used to provide data for the validation of equivalent computational fluid dynamics models. The channel flow facility allows the determination of the steady-state distribution of the fibers at different flow velocities. The fibers are modeled in the Eulerian-Eulerian reference frame as spherical wetted agglomerates. The fiber agglomerate size, density, the relative viscosity of the fluid-fiber mixture and the turbulent dispersion of the fibers all affect the steady-state accumulation of fibers at the channel base. In the current simulations, two fiber phases are considered separately. The particle size is kept constant while the density is modified, which affects both the terminal velocity and the volume fraction. The relative viscosity is only significant at higher concentrations. The numerical model finds that the fibers accumulate at the channel base even at high velocities; therefore, modifications to the drag and turbulent dispersion forces can be made to reduce fiber accumulation.
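
A minimal sketch of the terminal settling velocity quantity being modelled: the settling velocity of a wetted spherical agglomerate is obtained by fixed-point iteration with the Schiller-Naumann drag law. Treating the agglomerate as a sphere of modified density mirrors the modelling assumption stated above, but the sizes, densities and the particular drag law are illustrative assumptions.

```python
import numpy as np

RHO_WATER = 998.0      # kg/m^3
MU_WATER = 1.0e-3      # Pa s
G = 9.81

def drag_coefficient(re):
    """Schiller-Naumann drag law for spheres (Cd = 0.44 beyond Re ~ 1000)."""
    return 24.0 / re * (1.0 + 0.15 * re**0.687) if re < 1000.0 else 0.44

def terminal_velocity(diameter, density, tol=1e-9, max_iter=200):
    """Terminal settling velocity of a wetted spherical agglomerate, by fixed-point iteration
    of the force balance v = sqrt(4 g d (rho_p - rho_f) / (3 Cd rho_f))."""
    v = 1e-3  # initial guess [m/s]
    for _ in range(max_iter):
        re = max(RHO_WATER * diameter * v / MU_WATER, 1e-12)
        cd = drag_coefficient(re)
        v_new = np.sqrt(4.0 * G * diameter * (density - RHO_WATER) / (3.0 * cd * RHO_WATER))
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# two illustrative agglomerate classes: same size, different wetted density
for rho_agg in (1050.0, 1150.0):
    vt = terminal_velocity(diameter=2.0e-3, density=rho_agg)
    print(f"density {rho_agg:6.1f} kg/m^3 -> terminal velocity {vt * 1000:6.2f} mm/s")
```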

Relevance: 30.00%

Abstract:

We present experimental studies and numerical modeling, based on a combination of the Bidirectional Beam Propagation Method and Finite Element Modeling, that completely describe the wavelength spectra of point-by-point femtosecond-laser-inscribed fiber Bragg gratings, showing excellent agreement with experiment. We have investigated the dependence of different spectral parameters, such as the insertion loss and all dominant cladding and ghost modes and their shape, on the position of the fiber Bragg grating in the core of the fiber. Our model is validated by comparing model predictions with experimental data and allows for predictive modeling of the gratings. We expand our analysis to more complicated structures, where we introduce symmetry breaking; this highlights the importance of centered gratings and how maintaining symmetry contributes to the overall spectral quality of the inscribed Bragg gratings. Finally, the numerical modeling is applied to superstructure gratings, and a comparison with experimental results reveals a capability for dealing with complex grating structures that can be designed with particular wavelength characteristics.
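
The paper's model couples a bidirectional beam propagation method to finite element modeling; as a much simpler baseline, the sketch below computes only the core-mode reflection spectrum of a uniform grating with the coupled-mode transfer-matrix method (no cladding or ghost modes). Parameter values are illustrative assumptions.

```python
import numpy as np

def fbg_reflection_spectrum(wavelengths, n_eff, period, kappa, length, n_sections=1):
    """Core-mode reflection spectrum of a (piecewise-uniform) FBG via the transfer-matrix
    form of coupled-mode theory. kappa is the AC coupling coefficient [1/m]."""
    lam_bragg = 2.0 * n_eff * period
    dz = length / n_sections
    R = np.empty_like(wavelengths)
    for i, lam in enumerate(wavelengths):
        sigma = 2.0 * np.pi * n_eff * (1.0 / lam - 1.0 / lam_bragg)   # detuning from Bragg
        gamma = np.sqrt(complex(kappa**2 - sigma**2))
        T = np.eye(2, dtype=complex)
        for _ in range(n_sections):                                   # uniform sections
            c, s = np.cosh(gamma * dz), np.sinh(gamma * dz)
            M = np.array([[c - 1j * sigma / gamma * s, -1j * kappa / gamma * s],
                          [1j * kappa / gamma * s,      c + 1j * sigma / gamma * s]])
            T = M @ T
        R[i] = abs(T[1, 0] / T[0, 0]) ** 2
    return R

lam = np.linspace(1549.0e-9, 1551.0e-9, 801)
R = fbg_reflection_spectrum(lam, n_eff=1.447, period=1550e-9 / (2 * 1.447),
                            kappa=200.0, length=5e-3)
print(f"peak reflectivity {R.max():.3f} (analytic tanh^2(kL) = {np.tanh(200.0 * 5e-3)**2:.3f})")
```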

Relevance: 30.00%

Abstract:

WiMAX has been introduced as a competitive alternative for metropolitan broadband wireless access technologies. It is connection oriented and can provide very high data rates, large service coverage, and flexible quality of service (QoS). Due to the large number of connections and the flexible QoS supported by WiMAX, uplink access in WiMAX networks is very challenging, since the medium access control (MAC) protocol must efficiently manage the bandwidth and related channel allocations. In this paper, we propose and investigate a cost-effective WiMAX bandwidth management scheme, named the WiMAX partial sharing scheme (WPSS), in order to provide good QoS while achieving better bandwidth utilization and network throughput. The proposed bandwidth management scheme is compared with a simple but inefficient scheme, named the WiMAX complete sharing scheme (WCPS). A maximum entropy (ME) based analytical model (MEAM) is proposed for the performance evaluation of the two bandwidth management schemes. The reason for using MEAM for the performance evaluation is that it can efficiently model a large-scale system in which the number of stations or connections is generally very high, while traditional simulation and analytical (e.g., Markov model) approaches cannot perform well due to their high computational complexity. We model the bandwidth management scheme as a queueing network model (QNM) that consists of interacting multiclass queues for different service classes. Closed-form expressions for the state and blocking probability distributions are derived for those schemes. Simulation results verify the MEAM numerical results and show that WPSS can significantly improve the network's performance compared to WCPS.
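
The maximum-entropy queueing network model itself is not reproduced here; as a hedged illustration of the bandwidth-sharing trade-off such schemes manage, the sketch below contrasts complete sharing with complete partitioning of a cell's bandwidth units between two traffic classes using the Erlang-B recursion. Loads, capacities and the single-unit-per-connection assumption are illustrative only.

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability via the standard numerically stable recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# two service classes, each connection assumed to occupy one bandwidth unit
load_a, load_b = 12.0, 6.0      # offered loads in Erlangs (illustrative)
capacity = 24                   # total bandwidth units in the cell

# complete sharing: both classes compete for the whole capacity
shared = erlang_b(capacity, load_a + load_b)

# complete partitioning: capacity is split into fixed portions per class
cap_a = 16
partitioned_a = erlang_b(cap_a, load_a)
partitioned_b = erlang_b(capacity - cap_a, load_b)

print(f"complete sharing      : blocking = {shared:.4f} (both classes)")
print(f"complete partitioning : class A = {partitioned_a:.4f}, class B = {partitioned_b:.4f}")
```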

Relevance: 30.00%

Abstract:

We present experimental results on the performance of a series of coated, D-shaped optical fiber sensors that display high spectral sensitivities to the external refractive index. Sensitivity to the chosen index regime and coupling of the fiber core mode to the surface plasmon resonance (SPR) are enhanced by using specific materials as part of a multi-layered coating. We present strong evidence that this effect is enhanced by subsequent ultraviolet irradiation of the lamellar coating, which results in the formation of a nano-scale surface relief corrugation structure; this generates an index perturbation within the fiber core that in turn enhances the coupling. We have found reasonable agreement between these observations and our modeling of the fiber device. It was found that the SPR devices operate in air with high coupling efficiency in excess of 40 dB, with spectral sensitivities that outperform a typical long period grating; one device yielded a wavelength spectral sensitivity of 12000 nm/RIU in the important aqueous index regime. The devices generate SPRs over a very large wavelength range (visible to 2 µm) by alternating the polarization state of the illuminating light.

Relevance: 30.00%

Abstract:

Deflections of jets discharged into a reservoir with a free surface are investigated numerically. The jets are known to deflect towards either the free surface or the bottom, and under some experimental conditions the direction is not uniquely determined, i.e. multiple stable states are realizable under the same conditions. The origin of these multiple stable states is explored by utilizing homotopy transformations in which the top boundary of the reservoir is transformed from a rigid to a free boundary and the location of the outlet throat is continuously moved from mid-height to the top. By compiling the data of the numerical simulations we constructed bifurcation diagrams of the flow, from which we identified the origin as an imperfect pitchfork bifurcation and obtained insight into the mechanism by which the deflection direction is determined. The parameter region where such multiple stable states are possible is also delimited. © 2011 The Japan Society of Fluid Mechanics and IOP Publishing Ltd.
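
A generic sketch of the imperfect pitchfork identified above, using the normal form dx/dt = h + mu*x - x^3: h plays the role of a small asymmetry (e.g. the outlet offset) and mu the control parameter, and for small h two stable equilibria coexist over a range of mu, mirroring the multiple stable deflection directions. This is a normal-form illustration only, not the paper's flow model.

```python
import numpy as np

def equilibria(mu, h):
    """Real equilibria of the imperfect pitchfork normal form dx/dt = h + mu*x - x**3."""
    roots = np.roots([1.0, 0.0, -mu, -h])          # x^3 - mu*x - h = 0
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

def is_stable(x, mu):
    """An equilibrium is stable when d/dx (h + mu*x - x^3) = mu - 3*x^2 < 0."""
    return mu - 3.0 * x**2 < 0.0

h = 0.05                                            # small imperfection breaking the symmetry
for mu in np.linspace(-0.5, 1.0, 7):
    states = equilibria(mu, h)
    labels = ["stable" if is_stable(x, mu) else "unstable" for x in states]
    print(f"mu = {mu:+.2f}: " + ", ".join(f"x = {x:+.3f} ({lab})"
                                          for x, lab in zip(states, labels)))
```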