897 results for Numerical Analysis and Scientific Computing
Abstract:
A unique hand-held gene gun is employed to ballistically deliver biomolecules to key cells in the skin and mucosa in the treatment of major diseases. One such device, called the Contoured Shock Tube (CST), delivers powdered micro-particles to the skin with a narrow and highly controllable velocity distribution and a nominally uniform spatial distribution. In this paper, we apply a numerical approach to gain new insights into the behavior of the CST prototype device. The drag correlations proposed by Henderson (1976), Igra and Takayama (1993) and Kurian and Das (1997) were applied to predict the micro-particle transport in a numerically simulated gas flow. Simulated pressure histories agree well with the corresponding static and Pitot pressure measurements, validating the CFD approach. The calculated velocity distributions show good agreement, with the best prediction from the Igra & Takayama correlation (maximum discrepancy of 5%). Key features of the gas dynamics and gas-particle interaction are discussed. Statistical analyses show that a tight free-jet particle velocity distribution is achieved (570 ± 14.7 m/s) for polystyrene particles (39 ± 1 μm) representative of a drug payload.
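A minimal sketch of the kind of particle-transport calculation described above: integrating a drag-law ODE for a single 39 μm polystyrene particle in a given gas stream. The Schiller-Naumann correlation and the gas conditions below are illustrative stand-ins, not the paper's method; the paper uses the Henderson, Igra & Takayama and Kurian & Das correlations coupled to a full CFD gas solution.

```python
import numpy as np

# Illustrative constants (hypothetical gas state, not from the paper)
rho_g = 1.2        # gas density, kg/m^3
mu_g = 1.8e-5      # gas dynamic viscosity, Pa*s
u_g = 600.0        # local gas velocity, m/s
d_p = 39e-6        # particle diameter, m (39-micron polystyrene, as in the abstract)
rho_p = 1050.0     # polystyrene density, kg/m^3
m_p = rho_p * np.pi * d_p**3 / 6.0   # particle mass, kg

def drag_coefficient(Re):
    """Schiller-Naumann correlation, a standard stand-in for the
    Henderson / Igra & Takayama / Kurian & Das correlations."""
    if Re < 1e-9:
        return 0.0
    return 24.0 / Re * (1.0 + 0.15 * Re**0.687) if Re < 1000.0 else 0.44

def particle_velocity(u_p=0.0, dt=1e-7, steps=20000):
    """Explicit-Euler integration of the particle momentum equation."""
    for _ in range(steps):
        u_rel = u_g - u_p
        Re = rho_g * abs(u_rel) * d_p / mu_g
        area = np.pi * d_p**2 / 4.0
        F = 0.5 * rho_g * drag_coefficient(Re) * area * u_rel * abs(u_rel)
        u_p += F / m_p * dt
    return u_p

print(f"particle velocity after 2 ms of drag: {particle_velocity():.0f} m/s")
```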
Abstract:
A simplified scheme of a black-box optical regenerator (without a phase modulator) is proposed, in which appropriate nonlinear propagation is used to enhance regeneration. Applying semi-theoretical models, the authors optimise the scheme and demonstrate the feasibility of error-free long-distance transmission at 40 Gbit/s.
Abstract:
The paper reports preliminary results of ongoing research aimed at developing an automatic procedure for recognizing the discourse-compositional structure of scientific and technical texts, which is required in many NLP applications. The procedure exploits as discourse markers various domain-independent words and expressions that are specific to scientific and technical texts and organize scientific discourse. The paper discusses features of scientific discourse and the common scientific lexicon comprising such words and expressions. Methodological issues in developing a computer dictionary of the common scientific lexicon are addressed, and the basic principles of its organization are described. The main steps of the discourse-analyzing procedure, based on the dictionary and surface syntactic analysis, are outlined.
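A toy sketch of the marker-lookup step such a procedure might perform. The marker list and labels below are hypothetical; the dictionary described in the paper is far richer and is combined with surface syntactic analysis.

```python
import re

# Hypothetical common-scientific-lexicon entries mapped to discourse roles;
# the actual dictionary described in the paper is much larger.
MARKERS = {
    "in this paper": "GOAL",
    "we propose": "CONTRIBUTION",
    "for example": "ILLUSTRATION",
    "in conclusion": "CONCLUSION",
    "however": "CONTRAST",
}

def tag_discourse(sentences):
    """Assign a coarse discourse label to each sentence by marker lookup."""
    tagged = []
    for s in sentences:
        low = s.lower()
        label = next((role for m, role in MARKERS.items() if m in low), "BODY")
        tagged.append((label, s))
    return tagged

text = ("In this paper we propose an automatic procedure. "
        "However, the evaluation is still preliminary.")
for label, sentence in tag_discourse(re.split(r"(?<=[.!?])\s+", text)):
    print(f"{label:12s} {sentence}")
```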
Abstract:
One major drawback of coherent optical orthogonal frequency-division multiplexing (CO-OFDM) that hitherto remains unsolved is its vulnerability to nonlinear fiber effects due to its high peak-to-average power ratio. Several digital signal processing techniques have been investigated for the compensation of fiber nonlinearities, e.g., digital back-propagation, nonlinear pre- and post-compensation, and nonlinear equalizers (NLEs) based on the inverse Volterra-series transfer function (IVSTF). Alternatively, nonlinearities can be mitigated using nonlinear decision classifiers such as artificial neural networks (ANNs) based on a multilayer perceptron. In this paper, an ANN-NLE is presented for a 16QAM CO-OFDM system. The capability of the proposed approach to compensate fiber nonlinearities is numerically demonstrated for up to 100 Gb/s over 1000 km and compared to the benchmark IVSTF-NLE. Results show that, in terms of Q-factor, for 100-Gb/s transmission over 1000 km the ANN-NLE outperforms linear equalization and the IVSTF-NLE by 3.2 dB and 1 dB, respectively.
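A small sketch of the ANN-equalizer idea: train a multilayer perceptron to map received (distorted) constellation points back to the transmitted symbols. The memoryless cubic phase distortion and every parameter below are toy assumptions, not the paper's CO-OFDM setup, where the nonlinearity has memory along the fiber.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic 16QAM symbols distorted by a memoryless nonlinear phase
# rotation plus noise (a toy stand-in for fiber nonlinearity).
levels = np.array([-3.0, -1.0, 1.0, 3.0])
tx = rng.choice(levels, size=(20000, 2))             # transmitted I and Q
c = tx[:, 0] + 1j * tx[:, 1]
rx = c * np.exp(1j * 0.02 * np.abs(c) ** 2)          # power-dependent phase shift
rx += 0.1 * (rng.standard_normal(c.shape) + 1j * rng.standard_normal(c.shape))
X = np.column_stack([rx.real, rx.imag])

# Train on a block of known symbols, then equalize the rest.
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
ann.fit(X[:15000], tx[:15000])
eq = ann.predict(X[15000:])

# Hard decision to the nearest 16QAM level and symbol error rate.
hard = levels[np.argmin(np.abs(eq[..., None] - levels), axis=-1)]
ser = np.mean(np.any(hard != tx[15000:], axis=1))
print(f"symbol error rate after ANN equalization: {ser:.4f}")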
Abstract:
In this dissertation, we study the behavior of exciton-polariton quasiparticles in semiconductor microcavities under sourceless and lossless conditions. First, we simplify the original model by removing the photon dispersion term, effectively turning the PDE system into an ODE system, and investigate the behavior of the resulting system, including the equilibrium points and the wave functions of the excitons and the photons. Second, we add the dispersion term for the excitons to the original model and prove that the band of discontinuous solitons now becomes a band of dark solitons. Third, we apply the Strang splitting method to our system of PDEs and prove first-order and second-order error bounds in the $H^1$ and $L_2$ norms, respectively. Using this numerical result, we analyze the stability of the steady-state bright soliton solution. This solution revolves around the $x$-axis as time progresses; the perturbed soliton also rotates around the $x$-axis and tracks the exact solution closely in amplitude, but lags behind it. Our numerical results show orbital stability but no $L_2$ stability.
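A minimal, self-contained illustration of the Strang splitting scheme analyzed in the thesis, applied here to the focusing cubic NLS $i u_t = -u_{xx} - 2|u|^2 u$ with bright-soliton initial data (an assumed stand-in; the actual exciton-polariton system is different). Both substeps are solved exactly and are unitary, so the $L_2$ mass should be conserved to machine precision.

```python
import numpy as np

# Grid and step sizes are illustrative choices.
N, L, dt, nsteps = 256, 40.0, 1e-3, 2000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
u = 1.0 / np.cosh(x)                 # bright soliton of i u_t = -u_xx - 2|u|^2 u

def strang_step(u):
    u = u * np.exp(2j * np.abs(u) ** 2 * (dt / 2))   # half nonlinear step (exact)
    u = np.fft.ifft(np.exp(-1j * k ** 2 * dt) * np.fft.fft(u))  # full linear step
    return u * np.exp(2j * np.abs(u) ** 2 * (dt / 2))  # half nonlinear step

for _ in range(nsteps):
    u = strang_step(u)

# Sanity checks: conserved L2 mass (= 2 for sech) and soliton amplitude ~ 1.
mass = np.sum(np.abs(u) ** 2) * (L / N)
print(f"relative mass drift: {abs(mass / 2.0 - 1.0):.2e}, peak |u| = {np.abs(u).max():.4f}")
```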
Field data, numerical simulations and probability analyses to assess lava flow hazards at Mount Etna
Abstract:
Improving lava flow hazard assessment is one of the most important and challenging tasks in volcanology, and has an immediate and practical impact on society. Here, we present a methodology for the quantitative assessment of lava flow hazards based on a combination of field data, numerical simulations and probability analyses. With the extensive data available on historic eruptions of Mt. Etna, going back over 2000 years, it has been possible to construct two hazard maps, one for flank and the other for summit eruptions, allowing a quantitative analysis of the most likely future courses of lava flows. The effective use of hazard maps of Etna may help minimize the damage from volcanic eruptions through correct land use in densely urbanized areas with a population of almost one million people. Although this study was conducted on Mt. Etna, the approach used is designed to be applicable to other volcanic areas.
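A toy sketch of the probabilistic combination step behind such hazard maps: each simulated flow yields a boolean inundation footprint on the map grid, and weighting the footprints by the estimated probability of the corresponding eruptive scenario gives an inundation-probability map. Grid size, scenario count, weights and the rectangular "flows" below are all hypothetical placeholders for the paper's field-calibrated simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

ny, nx, n_scenarios = 100, 100, 50
weights = rng.dirichlet(np.ones(n_scenarios))   # scenario probabilities, sum to 1

hazard = np.zeros((ny, nx))
for w in weights:
    footprint = np.zeros((ny, nx), dtype=bool)  # stand-in for one simulated flow
    i, j = rng.integers(0, ny), rng.integers(0, nx)
    footprint[max(0, i - 15):i + 15, max(0, j - 3):j + 3] = True
    hazard += w * footprint                     # accumulate weighted inundation

print(f"maximum inundation probability on the grid: {hazard.max():.3f}")
```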
Abstract:
During the epoch when the first collapsed structures formed (6 < z < 50), our Universe went through an extended period of changes. Some of the radiation from the first stars and accreting black holes in those structures escaped and changed the state of the Intergalactic Medium (IGM). The era of this global phase change, in which the state of the IGM was transformed from cold and neutral to warm and ionized, is called the Epoch of Reionization. In this thesis we focus on numerical methods to calculate the effects of this escaping radiation. We start by considering the performance of the cosmological radiative transfer code C2-Ray. We find that although this code efficiently and accurately solves for the changes in the ionized fractions, it can yield inaccurate results for the temperature changes. We introduce two new elements to improve the code. The first element, an adaptive time step algorithm, quickly determines an optimal time step by only considering the computational cells relevant for this determination. The second element, asynchronous evolution, allows different cells to evolve with different time steps. An important constituent of methods to calculate the effects of ionizing radiation is the transport of photons through the computational domain, or "ray-tracing". We devise a novel ray-tracing method called PYRAMID which uses a new geometry, the pyramidal geometry. This geometry shares properties with both the standard Cartesian and spherical geometries, making it on the one hand easy to use in conjunction with a Cartesian grid and on the other hand ideally suited to trace radiation from a radially emitting source. A time-dependent photoionization calculation not only requires tracing the path of photons but also solving the coupled set of photoionization and thermal equations. Several different solvers for these equations are in use in cosmological radiative transfer codes. We conduct a detailed and quantitative comparison of four different standard solvers in which we evaluate how their accuracy depends on the choice of the time step. This comparison shows that their performance can be characterized by two simple parameters and that the C2-Ray solver generally performs best.
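A hedged sketch of the idea behind an adaptive time step for photoionization: pick the largest global step such that no cell's neutral fraction is expected to change by more than a small fraction of its value. The rate balance and all values below are simplified assumptions; the actual C2-Ray algorithm also pre-selects the cells relevant to this determination, which is what makes it fast.

```python
import numpy as np

def adaptive_timestep(x_HI, gamma, alpha_ne, dt_max, eps=0.05):
    """Global time step limiting each cell's relative change in neutral
    fraction to eps per step.
    x_HI:     neutral hydrogen fractions per cell
    gamma:    photoionization rates per cell [1/s]
    alpha_ne: recombination coefficient times electron density [1/s]"""
    rate = np.abs(gamma * x_HI - alpha_ne * (1.0 - x_HI))  # |d x_HI / dt|
    dt_cells = eps * x_HI / np.maximum(rate, 1e-300)       # per-cell limits
    return min(dt_max, dt_cells.min())

x_HI = np.full(64, 0.9)              # mostly neutral cells (hypothetical)
gamma = np.logspace(-14, -10, 64)    # photoionization rates (hypothetical)
print(f"adaptive dt = {adaptive_timestep(x_HI, gamma, alpha_ne=1e-13, dt_max=1e13):.2e} s")
```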
Abstract:
We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature as a dynamic boundary condition to a diagnostic ocean model having an additional spatial dimension. In this work, we give deeper insight than previous papers in the literature, mainly with respect to the pioneering 1990 model by Watts and Morantine. We take into consideration the latent heat of the two-phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected to occur in this class of models, generalizing previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory (WENO) reconstruction and a total variation diminishing Runge–Kutta scheme for time integration.
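For reference, a minimal sketch of the third-order strong-stability-preserving (TVD) Runge–Kutta step of the kind named in the abstract, demonstrated on linear advection with a first-order upwind flux. The test problem and first-order flux are illustrative assumptions; the paper pairs the integrator with a WENO reconstruction of the energy balance model instead.

```python
import numpy as np

def ssp_rk3_step(u, dt, rhs):
    """Shu-Osher third-order TVD (strong-stability-preserving) Runge-Kutta."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

# Illustration: u_t + u_x = 0 on a periodic grid with upwind differencing.
N = 200
dx = 1.0 / N
x = np.arange(N) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)       # Gaussian pulse

def rhs(u):
    return -(u - np.roll(u, 1)) / dx      # periodic first-order upwind

dt = 0.4 * dx                             # CFL-limited step
for _ in range(int(0.2 / dt)):
    u = ssp_rk3_step(u, dt, rhs)
print(f"peak after advection: {u.max():.3f} (diffused by the low-order flux)")
```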
Abstract:
We explore the recently developed snapshot-based dynamic mode decomposition (DMD) technique, a matrix-free Arnoldi-type method, to predict 3D linear global flow instabilities. We apply the DMD technique to flows confined in an L-shaped cavity and compare the resulting modes to their counterparts issued from classic, matrix-forming, linear instability analysis (i.e. the BiGlobal approach) and direct numerical simulations. Results show that the DMD technique, which uses snapshots generated by a 3D non-linear incompressible discontinuous Galerkin Navier–Stokes solver, provides very similar results to classical linear instability analysis techniques. In addition, we compare DMD results issued from non-linear and linearised Navier–Stokes solvers, showing that linearisation is not necessary (i.e. a base flow is not required) to obtain linear modes, as long as the analysis is restricted to the exponential growth regime, that is, the flow regime governed by the linearised Navier–Stokes equations, and showing the potential of this type of snapshot-based analysis for general-purpose CFD codes, with no modifications required. Finally, this work shows that the DMD technique can provide three-dimensional direct and adjoint modes through snapshots provided by the linearised and adjoint linearised Navier–Stokes equations advanced in time. Subsequently, these modes are used to provide structural sensitivity maps and sensitivity to base flow modification information for 3D flows and complex geometries, at an affordable computational cost. The information provided by the sensitivity study is used to modify the L-shaped geometry and control the most unstable 3D mode.
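A compact sketch of snapshot-based DMD in its standard form (after Schmid 2010), verified on synthetic "snapshots" built from one neutrally stable oscillating structure and one decaying structure. The synthetic data and rank are assumptions for illustration; the paper applies this to 3D Navier–Stokes snapshots.

```python
import numpy as np

def dmd(X, r):
    """Dynamic mode decomposition of a snapshot matrix X (columns are
    successive snapshots). Returns eigenvalues and modes at rank r."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # linear operator in POD basis
    eigvals, W = np.linalg.eig(Atilde)
    modes = (X2 @ Vh.conj().T / s) @ W           # "exact" DMD modes
    return eigvals, modes

# Synthetic snapshots sampled at dt = 0.1; expected growth rates: 0 and -0.5.
dt = 0.1
t = np.arange(50) * dt
xg = np.linspace(0.0, 1.0, 128)[:, None]
X = (np.sin(np.pi * xg) * np.exp(0.3j * t)
     + 0.5 * np.sin(3 * np.pi * xg) * np.exp(-0.5 * t))

eigvals, modes = dmd(X, r=2)
print("recovered growth rates:", np.sort(np.log(np.abs(eigvals)) / dt))
```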
Abstract:
The continual eruptive activity, occurrence of an ancestral catastrophic collapse, and inherent geologic features of Pacaya volcano (Guatemala) demand an evaluation of potential collapse hazards. This thesis merges field and laboratory techniques for a better rock mass characterization of volcanic slopes and slope stability evaluation. New field geological, structural, rock mechanical and geotechnical data on Pacaya are reported and integrated with laboratory tests to better define the physical-mechanical rock mass properties. Additionally, these data are used in numerical models for the quantitative evaluation of lateral instability of large sector collapses and shallow landslides. Regional tectonics and local structures indicate that the local stress regime is transtensional, with an ENE-WSW σ₃ stress component. Aligned features trending NNW-SSE can be considered an expression of this weakness zone that favors magma upwelling to the surface. Numerical modeling suggests that a large-scale collapse could be triggered by reasonable ranges of magma pressure (≥ 7.7 MPa if constant along a central dyke) and seismic acceleration (≥ 460 cm/s²), and that a layer of pyroclastic deposits beneath the edifice could have been a factor controlling the ancestral collapse. Finally, the formation of shear cracks within zones of maximum shear strain could provide conduits for lateral flow, which would account for the long lava flows erupted at lower elevations.
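For a sense of how seismic acceleration enters a stability calculation, here is a textbook pseudostatic factor-of-safety check for an infinite slope, far simpler than the numerical models used in the thesis. All strength and geometry inputs are hypothetical; only the ~460 cm/s² (≈ 0.47 g) threshold comes from the abstract.

```python
import math

def pseudostatic_fos(c, phi_deg, gamma, H, beta_deg, k_h):
    """Infinite-slope factor of safety with horizontal seismic coefficient k_h.
    c: cohesion [kPa], phi: friction angle [deg], gamma: unit weight [kN/m^3],
    H: failure-plane depth [m], beta: slope angle [deg]."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    W = gamma * H                                      # column weight per unit area
    driving = W * (math.sin(beta) + k_h * math.cos(beta))
    normal = W * (math.cos(beta) - k_h * math.sin(beta))
    return (c + normal * math.tan(phi)) / driving

# Hypothetical slope, shaken at the abstract's ~0.47 g triggering threshold:
fos = pseudostatic_fos(c=100.0, phi_deg=35.0, gamma=25.0, H=50.0,
                       beta_deg=30.0, k_h=0.47)
print(f"pseudostatic factor of safety: {fos:.2f} ({'unstable' if fos < 1 else 'stable'})")
```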
Abstract:
Due to increased interest in miniaturization, great attention has been given in the past decade to micro heat-exchanging systems. A literature survey suggests that there is still a limited understanding of gas flows in such systems. The aim of the current thesis is to further the understanding of fluid flow and heat transfer phenomena inside such geometries when a compressible working fluid is utilized. A combined experimental and numerical approach has been adopted in order to overcome the lack of employable sensors for micro-dimensional channels. After a detailed comparison of the various data reduction methodologies employed in the literature, the methodology best suited to gas microflow experimentalists is proposed. A transitional turbulence model is extensively validated against the experimental results for microtubes and microchannels under adiabatic wall conditions. Heat transfer analysis of single microtubes showed that, when a compressible working fluid is used, Nusselt number results are in partial disagreement with conventional theory in the highly turbulent flow regime for microtubes with a hydraulic diameter of less than 250 microns. Experimental and numerical analysis of a prototype double-layer microchannel heat exchanger showed that compressibility is detrimental to thermal performance. It has been found that compressibility effects in micro heat exchangers are significant when the average Mach number at the outlet of the microchannel is greater than 0.1, compared to the adiabatic limit of 0.3. Lastly, to avoid the staggering amount of computational power needed to simulate micro heat-exchanging systems with hundreds of microchannels, a reduced-order model based on a porous medium has been developed that accounts for the compressibility of the gas inside the microchannels. Validation of the proposed model against experimental results for average thermal effectiveness and pressure loss showed an excellent match.
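A small sketch of applying the Mach-number criterion quoted above: estimate the average outlet Mach number from the mass flow rate, assuming ideal-gas air at the outlet state. The channel size, flow rate and outlet conditions are hypothetical; only the Ma > 0.1 threshold comes from the abstract.

```python
import math

def outlet_mach(m_dot, T, p_out, A, gamma=1.4, R=287.0):
    """Average outlet Mach number from mass flow rate (ideal-gas air).
    m_dot [kg/s], T [K], p_out [Pa], A: channel cross-section [m^2]."""
    rho = p_out / (R * T)               # ideal-gas density, kg/m^3
    u = m_dot / (rho * A)               # mean outlet velocity, m/s
    return u / math.sqrt(gamma * R * T)

A = (250e-6) ** 2                       # hypothetical 250-micron square channel
Ma = outlet_mach(m_dot=3e-6, T=320.0, p_out=101325.0, A=A)
print(f"outlet Mach number: {Ma:.2f} -> compressibility "
      f"{'significant' if Ma > 0.1 else 'negligible'} by the abstract's criterion")
```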
Abstract:
The enzyme purine nucleoside phosphorylase from Schistosoma mansoni (SmPNP) is an attractive molecular target for the development of novel drugs against schistosomiasis, a neglected tropical disease that affects about 200 million people worldwide. In the present work, enzyme kinetic studies were carried out in order to determine the potency and mechanism of inhibition of a series of SmPNP inhibitors. In addition to the biochemical investigations, crystallographic and molecular modeling studies revealed important molecular features for binding affinity towards the target enzyme, leading to the development of structure-activity relationships (SAR).
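For context, a minimal sketch of the classic model fitted in enzyme kinetic studies of this kind: the Michaelis-Menten rate law under competitive inhibition. All parameter values below are made up for illustration and are not the SmPNP results.

```python
import numpy as np

def velocity(S, I, Vmax, Km, Ki):
    """Michaelis-Menten rate with a competitive inhibitor:
    v = Vmax * S / (Km * (1 + I/Ki) + S)."""
    return Vmax * S / (Km * (1.0 + I / Ki) + S)

S = np.array([5.0, 10.0, 25.0, 50.0, 100.0])   # substrate, micromolar
for I in (0.0, 2.0, 10.0):                      # inhibitor, micromolar
    v = velocity(S, I, Vmax=1.0, Km=20.0, Ki=1.5)
    print(f"[I] = {I:5.1f} uM  v = {np.round(v, 3)}")
```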
Abstract:
The HACCP system is increasingly used to ensure food safety. This study investigated the validation of control measures in order to establish performance indicators for the HACCP system in the manufacturing process of Lasagna Bolognese (meat lasagna). Samples were collected along the whole manufacturing process, before and after the CCPs. The following microorganism indicators (MIs) were assessed: total mesophile and faecal coliform counts. The same MIs were analyzed in the final product, as well as the microbiological standards required by current legislation. A significant reduction in the total mesophile count was observed after cooking (p < 0.001). After storage, there was a numerical, though non-significant, change in the MI count. Faecal coliform counts were also significantly reduced (p < 0.001) after cooking. We were able to demonstrate that the HACCP system allowed us to meet the standards set by both the company and the Brazilian regulations, as shown by the reduction in the established indicators.
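A hedged sketch of how the significance of such a reduction is typically checked: a paired t-test on log-transformed counts before and after the cooking CCP. The counts generated below are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic log10 CFU/g total mesophile counts before and after cooking.
before = rng.normal(6.0, 0.3, size=12)            # ~10^6 CFU/g before cooking
after = before - rng.normal(3.5, 0.4, size=12)    # ~3.5-log reduction

t_stat, p_value = stats.ttest_rel(before, after)  # paired t-test on log counts
print(f"mean log reduction: {np.mean(before - after):.2f}, p = {p_value:.2e}")
```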