952 results for Numerical example
Abstract:
A fully relativistic four-component Dirac-Fock-Slater program for diatomics, with numerically given AOs as basis functions, is presented. We discuss the errors due to the finite basis set and those due to the influence of the negative-energy solutions of the Dirac Hamiltonian. The negative-continuum contributions are found to be very small.
Abstract:
While most data analysis and decision support tools use numerical aspects of the data, Conceptual Information Systems focus on their conceptual structure. This paper discusses how both approaches can be combined.
Abstract:
We consider numerical methods for the compressible time-dependent Navier-Stokes equations, discussing the spatial discretization by Finite Volume and Discontinuous Galerkin methods, the time integration by time-adaptive implicit Runge-Kutta and Rosenbrock methods, and the solution of the resulting nonlinear and linear systems of equations by preconditioned Jacobian-free Newton-Krylov methods, as well as multigrid methods. As applications, thermal fluid-structure interaction and other unsteady flow problems are considered. The text is aimed at both mathematicians and engineers.
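A minimal sketch of the Jacobian-free Newton-Krylov idea mentioned in this abstract, under stated assumptions: a generic residual function F stands in for a discretized flow problem, the Jacobian-vector product is approximated by a finite difference, and SciPy's GMRES serves as the (unpreconditioned) Krylov solver. This is an illustration of the technique, not the text's actual solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(F, u, eps=1e-7):
    """One Jacobian-free Newton step: solve J(u) du = -F(u) with GMRES,
    approximating J(u) @ v by the finite difference (F(u+eps*v)-F(u))/eps."""
    Fu = F(u)
    J = LinearOperator((u.size, u.size),
                       matvec=lambda v: (F(u + eps * v) - Fu) / eps)
    du, info = gmres(J, -Fu)          # Krylov solve; info == 0 on success
    return u + du

# Toy usage on a small nonlinear system F(u) = 0 (solution u = (1, 2)):
F = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
u = np.array([1.0, 1.0])
for _ in range(8):
    u = jfnk_step(F, u)
print(u, F(u))
```

The point of the pattern is that only residual evaluations are needed; the Jacobian is never formed, which is what makes the approach attractive for large Finite Volume or Discontinuous Galerkin discretizations.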
Abstract:
To study the behaviour of composite beam-to-column connections, more sophisticated finite element models are required, since the component model has some severe limitations. In this research, a generic finite element model for composite beam-to-column joints with welded connections is developed using the current state of the art in local modelling. By applying a mechanically consistent scaling method, it can provide the constitutive relationship for a plane rectangular macro element with beam-type boundaries. This macro element, which preserves local behaviour and allows for the transfer of five independent states between local and global models, can then be implemented in high-accuracy frame analysis with the possibility of limit state checks. So that the macro element for the scaling method can be used in a practical manner, a generic geometry program, proposed in this study as a new idea, is also developed for this finite element model. With generic programming, a set of global geometric variables can be input to generate a specific instance of the connection without much effort. The proposed finite element model generated by this generic programming is validated against test results from the University of Kaiserslautern. Finally, two illustrative examples for applying this macro element approach are presented. The first example demonstrates how to obtain the constitutive relationships of the macro element. With certain assumptions for a typical composite frame, the constitutive relationships can be represented by bilinear laws for the macro bending and shear states, which are then coupled by a two-dimensional surface law with yield and failure surfaces. In the second example, a scaling concept that combines sophisticated local models with a frame analysis using a macro element approach is presented as a practical application of this numerical model.
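As a hedged illustration of the bilinear laws mentioned above (not the actual constitutive model from the thesis), a bilinear moment-rotation relation can be coded in a few lines; all stiffness and yield values here are made-up placeholders.

```python
def bilinear_law(phi, k_elastic=50e3, k_hardening=5e3, m_yield=120.0):
    """Bilinear moment-rotation law: elastic branch up to the yield
    moment m_yield, then a reduced hardening stiffness.

    phi in rad; k_* in kNm/rad; m_yield in kNm. All numbers are
    illustrative placeholders, not values from the thesis.
    """
    phi_y = m_yield / k_elastic          # rotation at yield
    if abs(phi) <= phi_y:
        return k_elastic * phi           # elastic branch
    sign = 1.0 if phi > 0 else -1.0
    return sign * (m_yield + k_hardening * (abs(phi) - phi_y))

print(bilinear_law(0.001), bilinear_law(0.01))   # elastic vs. hardening branch
```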
Abstract:
The ongoing depletion of the coastal aquifer in the Gaza strip due to groundwater overexploitation has led to seawater intrusion, which is becoming an increasingly serious problem in Gaza, as the seawater has invaded further along many sections of the coastal shoreline. As a first step to get a hold on the problem, an artificial neural network (ANN) model has been applied as a new approach and an attractive tool to study and predict groundwater levels without physically based hydrologic parameters, to improve the understanding of complex groundwater systems, and to show the effects of hydrologic, meteorological, and anthropogenic impacts on groundwater conditions. Predicting the future behaviour of the seawater intrusion process in the Gaza aquifer is thus of crucial importance to safeguard the already scarce groundwater resources in the region. In this study, the coupled three-dimensional groundwater flow and density-dependent solute transport model SEAWAT, as implemented in Visual MODFLOW, is applied to the Gaza coastal aquifer system to simulate the location and the dynamics of the saltwater-freshwater interface in the aquifer in the period 2000-2010. Very good agreement between simulated and observed TDS salinities is obtained, with correlation coefficients of 0.902 and 0.883 for the steady-state and transient calibrations, respectively. After successful calibration of the solute transport model, simulations of future management scenarios for the Gaza aquifer have been carried out in order to get a more comprehensive view of the effects of the artificial recharge planned in the Gaza strip, with the goal of forestalling, or even remedying, the presently existing adverse aquifer conditions, namely low groundwater heads and high salinity, by the end of the target simulation period, year 2040. To that end, numerous management scenarios are examined to maintain the groundwater system and to control the salinity distribution within the target period 2011-2040. In the first, pessimistic scenario, it is assumed that pumping from the aquifer continues to increase in the near future to meet the rising water demand, and that there is no further recharge to the aquifer beyond what is provided by natural precipitation. The second, optimistic scenario assumes that treated surficial wastewater can be used as a source of additional artificial recharge to the aquifer, which, in principle, should not only lead to an increased sustainable yield, but could, in the best of all cases, even revert some of the adverse present-day conditions in the aquifer, i.e., seawater intrusion. This scenario has been run with three different cases, which differ by the locations and extents of the injection fields for the treated wastewater. The results obtained with the first (do-nothing) scenario indicate that there will be ongoing negative impacts on the aquifer, such as a higher propensity for strong seawater intrusion into the Gaza aquifer. This scenario illustrates that, compared with the 2010 situation of the baseline model, by the end of the simulation period, year 2040, the amount of saltwater intrusion into the coastal aquifer will have increased by about 35 %, and the salinity by 34 %. In contrast, all three cases of the second (artificial recharge) scenario group can partly revert the present seawater intrusion.
From the water budget point of view, compared with the first (do-nothing) scenario, for year 2040, the water added to the aquifer by artificial recharge will reduce the amount of water entering the aquifer by seawater intrusion by 81, 77, and 72 % for the three recharge cases, respectively. Meanwhile, the salinity in the Gaza aquifer will decrease by 15, 32, and 26 % for the three cases, respectively.
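The ANN approach mentioned above can be sketched, under stated assumptions, with a standard multilayer perceptron regressor; the feature names (rainfall, pumping, previous head) and the synthetic data below are hypothetical, chosen only to show the workflow of fitting groundwater levels from hydrologic and anthropogenic inputs rather than from physically based parameters.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical monthly inputs: rainfall (mm), pumping (Mm^3), previous head (m)
X = rng.uniform([0, 2, -6], [120, 12, 0], size=(240, 3))
# Synthetic target: head rises with rainfall, falls with pumping (toy rule)
y = 0.01 * X[:, 0] - 0.4 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.2, 240)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)          # learn heads from driving inputs
print("R^2 on training data:", model.score(scaler.transform(X), y))
```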
Abstract:
We are currently at the cusp of a revolution in quantum technology that relies not just on the passive use of quantum effects, but on their active control. At the forefront of this revolution is the implementation of a quantum computer. Encoding information in quantum states as “qubits” makes it possible to use entanglement and quantum superposition to perform calculations that are infeasible on classical computers. The fundamental challenge in the realization of quantum computers is to avoid decoherence, the loss of quantum properties, due to unwanted interaction with the environment. This thesis addresses the problem of implementing entangling two-qubit quantum gates that are robust with respect to both decoherence and classical noise. It covers three aspects: the use of efficient numerical tools for the simulation and optimal control of open and closed quantum systems, the role of advanced optimization functionals in facilitating robustness, and the application of these techniques to two of the leading implementations of quantum computation, trapped atoms and superconducting circuits. After a review of the theoretical and numerical foundations, the central part of the thesis starts with the idea of using ensemble optimization to achieve robustness with respect to both classical fluctuations in the system parameters and decoherence. For the example of a controlled phase gate implemented with trapped Rydberg atoms, this approach is demonstrated to yield a gate that is at least one order of magnitude more robust than the best known analytic scheme. Moreover, this robustness is maintained even for gate durations significantly shorter than those obtained in the analytic scheme. Superconducting circuits are a particularly promising architecture for the implementation of a quantum computer. Their flexibility is demonstrated by performing optimizations for both diagonal and non-diagonal quantum gates. In order to achieve robustness with respect to decoherence, it is essential to implement quantum gates in the shortest possible amount of time. This may be facilitated by using an optimization functional that targets an arbitrary perfect entangler, based on a geometric theory of two-qubit gates. For the example of superconducting qubits, it is shown that this approach leads to significantly shorter gate durations, higher fidelities, and faster convergence than optimization towards specific two-qubit gates. Performing the optimization in Liouville space in order to properly take decoherence into account poses significant numerical challenges, as the dimension scales quadratically compared to Hilbert space. However, it can be shown that for a unitary target, the optimization only requires the propagation of at most three states, instead of a full basis of Liouville space. Both for the example of trapped Rydberg atoms and for superconducting qubits, the successful optimization of quantum gates is demonstrated, at a numerical cost significantly below what was previously thought possible. Together, the results of this thesis point towards a comprehensive framework for the optimization of robust quantum gates, paving the way for the future realization of quantum computers.
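A minimal sketch of the ensemble idea described above, under strong simplifications: a single-qubit toy gate is propagated for an ensemble of fluctuating detunings, and the average gate fidelity over the ensemble serves as the robustness figure of merit that an ensemble optimization would maximize. The Hamiltonian and parameter spread are illustrative, not the Rydberg or transmon models of the thesis.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
target = expm(-1j * (np.pi / 2) * sx)        # toy target: an X half-rotation

def ensemble_fidelity(omega, deltas):
    """Average gate fidelity |Tr(U_target^dag U)|^2 / d^2 over an
    ensemble of classical detuning fluctuations."""
    fids = []
    for d in deltas:
        H = 0.5 * omega * sx + 0.5 * d * sz   # fluctuating detuning d
        U = expm(-1j * H * np.pi / omega)     # evolve for the nominal duration
        fids.append(abs(np.trace(target.conj().T @ U)) ** 2 / 4)
    return np.mean(fids)

deltas = np.linspace(-0.05, 0.05, 11)         # classical noise ensemble
print(ensemble_fidelity(1.0, deltas))         # 1.0 only for a robust pulse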
Abstract:
The aim of this thesis is to theoretically investigate optical/plasmonic antennas for biosensing applications. Full 3-D numerical electromagnetic simulations have been performed using the finite integration technique (FIT). The electromagnetic properties of devices based on surface plasmon polaritons (SPPs) and localized surface plasmons (LSPs) are studied for sensing purposes. Surface plasmon resonance (SPR) biosensors offer high refractive-index sensitivity at a fixed wavelength but are not sufficient for the detection of low concentrations of molecules. It has been demonstrated that the sensitivity of SPR sensors can be increased by employing the transverse magneto-optic Kerr effect (TMOKE) in combination with SPPs. Sensors based on the combination of TMOKE and SPPs are known as magneto-optic SPR (MOSPR) sensors. An optimized MOSPR sensor is analyzed that provides 8 times higher sensitivity than the SPR sensor and will thus be able to detect lower concentrations of molecules. However, the range of refractive-index detection is limited, due to the rapid decay of the amplitude of the MOSPR signal with increasing refractive index. LSP-based sensors, in contrast, can detect lower concentrations of molecules, but their sensitivity is small at a fixed wavelength. Therefore, another device configuration, known as a perfect plasmonic absorber (PPA), is investigated, which is based on a metal-insulator-metal (MIM) waveguide. The PPA consists of a periodic array of gold nanoparticles and a thick gold film separated by a dielectric spacer. The electromagnetic modes of the PPA system are analyzed for sensing purposes. The second-order mode of the PPA at a fixed wavelength is proposed for the first time for biosensing applications. The PPA-based sensor combines the properties of the LSPR sensor and the SPR sensor: it raises the sensitivity of the LSPR sensor to a level comparable to the SPR sensor and can detect lower concentrations of molecules due to the presence of nanoparticles.
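For reference, the bulk refractive-index sensitivity and figure of merit commonly used to compare such sensors can be written as follows (a generic textbook definition, not a formula quoted from the thesis):

```latex
S_B = \frac{\Delta\lambda_{\mathrm{res}}}{\Delta n}, \qquad
\mathrm{FOM} = \frac{S_B}{\mathrm{FWHM}}
```

where \Delta\lambda_{\mathrm{res}} is the shift of the resonance wavelength for a change \Delta n in the analyte refractive index, and FWHM is the spectral width of the resonance.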
Abstract:
Mesh generation is an important step in many numerical methods. We present the “Hierarchical Graph Meshing” (HGM) method as a novel approach to mesh generation, based on algebraic graph theory. The HGM method can be used to systematically construct configurations exhibiting multiple hierarchies and complex symmetry characteristics. The hierarchical description of structures provided by the HGM method can be exploited to increase the efficiency of multiscale and multigrid methods. In this paper, the HGM method is employed for the systematic construction of super carbon nanotubes of arbitrary order, which present a pertinent example of structurally and geometrically complex, yet highly regular, structures. The HGM algorithm is computationally efficient and exhibits good scaling characteristics. In particular, it scales linearly for super carbon nanotube structures and works much faster than geometry-based methods employing neighborhood search algorithms. Its modular character makes it conducive to automation. For the generation of a mesh, the information about the geometry of the structure in a given configuration is added in a way that relates geometric symmetries to structural symmetries. The intrinsically hierarchic description of the resulting mesh greatly reduces the effort of determining mesh hierarchies for multigrid and multiscale applications and helps to exploit symmetry-related methods in the mechanical analysis of complex structures.
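A hedged sketch of the algebraic-graph idea: hierarchical structure can be generated by graph products, here NetworkX's Cartesian product assembling a two-level lattice from small building blocks. This illustrates the principle only; it is not the HGM algorithm itself, and the hexagon/path motifs are merely illustrative.

```python
import networkx as nx

# Level-1 building block: a hexagonal ring, as in a nanotube wall motif
ring = nx.cycle_graph(6)
# Level-2 backbone: a short path along which the blocks are repeated
backbone = nx.path_graph(4)

# The Cartesian product replicates the ring along the backbone and wires
# corresponding nodes together, yielding a hierarchic, highly regular mesh.
mesh = nx.cartesian_product(backbone, ring)

print(mesh.number_of_nodes(), mesh.number_of_edges())   # 24 nodes, 42 edges
# Node labels are tuples (backbone_node, ring_node), so both hierarchy
# levels remain explicit in the resulting mesh description.
```

Because the node labels retain the level structure, mesh hierarchies for multigrid-style coarsening can be read off directly instead of being recovered by neighborhood searches.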
Abstract:
In this work, we present an atomistic-continuum model for simulations of ultrafast laser-induced melting processes in semiconductors, using the example of silicon. The kinetics of transient non-equilibrium phase transition mechanisms is addressed with the MD method on the atomic level, whereas the laser light absorption, the strong generated electron-phonon nonequilibrium, fast heat conduction, and photo-excited free-carrier diffusion are accounted for with a continuum TTM-like model (called nTTM). First, we independently consider the applications of nTTM and MD for the description of silicon, and then construct the combined MD-nTTM model. Its development and thorough testing are followed by a comprehensive computational study of fast nonequilibrium processes induced in silicon by ultrashort laser irradiation. The new model allowed us to investigate the effect of laser-induced pressure and lattice temperature on the melting kinetics. Two competing melting mechanisms, heterogeneous and homogeneous, were identified in our large-scale simulations. Apart from the classical heterogeneous melting mechanism, the homogeneous nucleation of the liquid phase inside the material contributes significantly to the melting process. The simulations showed that, due to the open diamond structure of the crystal, the laser-generated internal compressive stresses reduce the crystal stability against homogeneous melting. Consequently, the latter can take on a massive character within several picoseconds of the laser heating. Due to the large negative volume of melting of silicon, the material contracts upon the phase transition, relaxing the compressive stresses, and the subsequent melting proceeds heterogeneously until the excess thermal energy is consumed. A series of simulations for a range of absorbed fluences allowed us to find the threshold fluence at which homogeneous liquid nucleation starts contributing to the classical heterogeneous propagation of the solid-liquid interface. A series of simulations for a range of material thicknesses showed that the sample width chosen in our simulations (800 nm) corresponds to a thick sample. Additionally, in order to support the main conclusions, the results were verified for a different interatomic potential. Possible improvements of the model to account for nonthermal effects are discussed, and certain restrictions on suitable interatomic potentials are identified. As a first step towards the inclusion of these effects into MD-nTTM, we performed nanometer-scale MD simulations with a new interatomic potential, designed to reproduce ab initio calculations at a laser-induced electronic temperature of 18946 K. The simulations demonstrated that, similarly to thermal melting, the nonthermal phase transition occurs through nucleation. A series of simulations showed that a higher (lower) initial pressure reinforces (hinders) the creation and growth of nonthermal liquid nuclei. For the example of Si, the laser melting kinetics of semiconductors was found to be noticeably different from that of metals with a face-centered cubic crystal structure. The results of this study therefore have important implications for the interpretation of experimental data on the melting kinetics of semiconductors.
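A minimal sketch of a two-temperature (TTM-like) continuum model in zero dimensions, assuming constant coefficients and a Gaussian laser source: the electrons absorb the pulse and relax toward the lattice through the electron-phonon coupling G. All parameter values are illustrative placeholders, not the silicon parametrization of the nTTM model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (not the paper's silicon parametrization)
Ce, Cl = 2.0e4, 1.7e6      # electron / lattice heat capacities, J m^-3 K^-1
G = 5.0e17                 # electron-phonon coupling, W m^-3 K^-1
S0, t0, tau = 2.0e21, 1e-12, 100e-15   # source peak W m^-3, delay s, width s

def rhs(t, T):
    Te, Tl = T
    S = S0 * np.exp(-((t - t0) / tau) ** 2)   # Gaussian laser source
    dTe = (-G * (Te - Tl) + S) / Ce           # electrons absorb the pulse
    dTl = (G * (Te - Tl)) / Cl                # lattice heats via coupling
    return [dTe, dTl]

sol = solve_ivp(rhs, (0, 10e-12), [300.0, 300.0], max_step=1e-14)
print("peak T_e = %.0f K, final T_l = %.0f K" % (sol.y[0].max(), sol.y[1][-1]))
```

The full nTTM model adds, among other things, spatial heat conduction and free-carrier diffusion; this 0-D sketch only shows the electron-lattice relaxation at the core of any TTM-like description.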
Abstract:
KAM is a computer program that can automatically plan, monitor, and interpret numerical experiments with Hamiltonian systems with two degrees of freedom. The program has recently helped solve an open problem in hydrodynamics. Unlike other approaches to qualitative reasoning about physical system dynamics, KAM embodies a significant amount of knowledge about nonlinear dynamics. KAM's ability to control numerical experiments arises from the fact that it not only produces pictures for us to see, but also looks at (sic---in its mind's eye) the pictures it draws to guide its own actions. KAM is organized in three semantic levels: orbit recognition, phase-space searching, and parameter-space searching. Within each level, spatial properties and relationships that are not explicitly represented in the initial representation are extracted by iteratively applying three operations: (1) aggregation, (2) partition, and (3) classification.
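A toy illustration, under loose assumptions, of the lowest semantic level (orbit recognition): points of a Chirikov standard-map orbit are generated and a naive occupancy statistic separates regular from chaotic-looking orbits. KAM's actual aggregation/partition/classification machinery is far richer than this heuristic.

```python
import numpy as np

def standard_map_orbit(theta0, p0, k=0.9, n=2000):
    """Iterate the Chirikov standard map and return the orbit points."""
    pts = np.empty((n, 2))
    theta, p = theta0, p0
    for i in range(n):
        p = (p + k * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        pts[i] = theta, p
    return pts

def looks_regular(pts, bins=40, thresh=0.1):
    """Naive classifier: a regular orbit lies on a curve and fills few
    phase-space cells; a chaotic one spreads over an area (heuristic only)."""
    H, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=bins)
    return (H > 0).mean() < thresh

print(looks_regular(standard_map_orbit(0.5, 2.0)))   # typically a torus: True
print(looks_regular(standard_map_orbit(0.1, 0.0)))   # separatrix layer: False
```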
Abstract:
We present an example-based learning approach for locating vertical frontal views of human faces in complex scenes. The technique models the distribution of human face patterns by means of a few view-based "face" and "non-face" prototype clusters. At each image location, the local pattern is matched against the distribution-based model, and a trained classifier determines, based on the local difference measurements, whether or not a human face exists at the current image location. We provide an analysis that helps identify the critical components of our system.
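A schematic sketch of the distribution-based idea, under simplifying assumptions: prototype clusters are fitted with k-means, each candidate window is encoded by its distances to all prototypes (the "difference measurements"), and a trained classifier makes the face/non-face decision. Synthetic vectors stand in for real 19x19 image windows, and k-means plus logistic regression are stand-ins for the paper's specific clustering and classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Stand-ins for flattened 19x19 windows: synthetic face / non-face vectors
faces = rng.normal(0.6, 0.1, size=(200, 361))
nonfaces = rng.normal(0.3, 0.2, size=(200, 361))
X = np.vstack([faces, nonfaces])
y = np.array([1] * 200 + [0] * 200)

# A few view-based prototype clusters for each class
proto_face = KMeans(n_clusters=3, n_init=10, random_state=0).fit(faces)
proto_nonface = KMeans(n_clusters=3, n_init=10, random_state=0).fit(nonfaces)

# Difference measurements: distances from each window to every prototype
D = np.hstack([proto_face.transform(X), proto_nonface.transform(X)])

clf = LogisticRegression(max_iter=1000).fit(D, y)   # trained decision stage
print("training accuracy:", clf.score(D, y))
```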
Abstract:
Image analysis and graphics synthesis can be achieved with learning techniques that use image examples directly, without physically based 3D models. In our technique: -- the mapping from novel images to a vector of "pose" and "expression" parameters can be learned from a small set of example images using a function approximation technique that we call an analysis network; -- the inverse mapping from input "pose" and "expression" parameters to output images can be synthesized from a small set of example images and used to produce new images using a similar synthesis network. The techniques described here have several applications in computer graphics, special effects, interactive multimedia, and very low bandwidth teleconferencing.
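A small sketch of the analysis-network direction under stated assumptions: a radial-basis-function approximator maps image vectors to pose/expression parameters, learned from a handful of examples. The toy image generator, the 64-D image size, and the parameter values are synthetic placeholders; the RBF interpolator is one generic choice of function approximation technique.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
mix = rng.normal(size=(2, 64))                 # fixed toy image generator
params = rng.uniform(-1, 1, size=(10, 2))      # known (pose, expression)
images = np.tanh(params @ mix)                 # 10 example "images", 64-D

# Analysis network: an RBF approximator mapping images -> parameters
analysis = RBFInterpolator(images, params, kernel='linear')

true = np.array([[0.3, -0.2]])
novel = np.tanh(true @ mix)                    # a novel image, same generator
print("estimated parameters:", analysis(novel), "true:", true)
```

The synthesis network is simply the same construction with inputs and outputs swapped, approximating the map from parameters back to images.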
Abstract:
Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. Such a descriptor, based on a set of oriented Gaussian derivative filters, is used in our recognition system. We report here an evaluation of several techniques for orientation estimation to achieve rotation invariance of the descriptor. We also describe feature selection based on a single training image: virtual images are generated by rotating and rescaling the image, and robust features are selected. The results confirm robust performance in cluttered scenes, in the presence of partial occlusions, and when the object is embedded in different backgrounds.
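One common way to estimate a dominant orientation, sketched here under the assumption of a generic gradient-histogram technique (the paper evaluates several such techniques; this is just one illustrative variant, not necessarily the one used there):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def dominant_orientation(patch, nbins=36):
    """Estimate a patch's dominant orientation from a smoothed histogram
    of gradient angles, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                     # angles in (-pi, pi]
    hist, edges = np.histogram(ang, bins=nbins,
                               range=(-np.pi, np.pi), weights=mag)
    hist = gaussian_filter1d(hist, sigma=1, mode='wrap')   # circular smoothing
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])       # bin-center angle, radians

# The descriptor is then computed in this reference frame, so rotating the
# image rotates the frame with it and the description stays invariant.
rng = np.random.default_rng(3)
patch = rng.random((32, 32)).cumsum(axis=1)      # toy patch with an x-gradient
print(np.degrees(dominant_orientation(patch)))   # close to 0 degrees
```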
Abstract:
The Kineticist's Workbench is a program that simulates chemical reaction mechanisms by predicting, generating, and interpreting numerical data. Prior to simulation, it analyzes a given mechanism to predict its behavior; it then simulates the mechanism numerically; and afterward, it interprets and summarizes the data it has generated. In performing these tasks, the Workbench uses a variety of techniques: graph-theoretic algorithms (for analyzing mechanisms), traditional numerical simulation methods, and algorithms that examine simulation results and reinterpret them in qualitative terms. The Workbench thus serves as a prototype for a new class of scientific computational tools: tools that provide symbiotic collaborations between qualitative and quantitative methods.
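A toy sketch of the two halves the Workbench combines, under illustrative assumptions: a reaction mechanism viewed as a graph (here with NetworkX) and the same mechanism integrated numerically (here mass-action ODEs for A -> B -> C with SciPy). The final check only hints at the qualitative re-interpretation layer; the mechanism and rate constants are invented for the example.

```python
import numpy as np
import networkx as nx
from scipy.integrate import solve_ivp

# Graph view of a toy mechanism A -> B -> C (illustrative, not from the text)
mech = nx.DiGraph([("A", "B"), ("B", "C")])
print("species on a path to C:", sorted(nx.ancestors(mech, "C")))

# Quantitative view: mass-action kinetics with invented rate constants
k1, k2 = 1.0, 0.5
def rates(t, c):
    a, b, cc = c
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

sol = solve_ivp(rates, (0, 20), [1.0, 0.0, 0.0], dense_output=True)

# A crude qualitative summary of the numeric data: does B rise, then fall?
b = sol.sol(np.linspace(0, 20, 400))[1]
print("B behaves as an intermediate:", b.argmax() not in (0, 399))
```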