934 results for Two-level Atom
Abstract:
Gough, John, (2004) 'Holevo-Ordering and the Continuous-Time Limit for Open Floquet Dynamics', Letters in Mathematical Physics 67(3) pp.207-221 RAE2008
Abstract:
Greaves, George; Sen, S., (2007) 'Inorganic glasses, glass-forming liquids and amorphizing solids', Advances in Physics 56(1) pp.1-166 RAE2008
Abstract:
We consider the motion of ballistic electrons in a miniband of a semiconductor superlattice (SSL) under the influence of an external, time-periodic electric field. We use the semi-classical balance-equation approach which incorporates elastic and inelastic scattering (as dissipation) and the self-consistent field generated by the electron motion. The coupling of electrons in the miniband to the self-consistent field produces a cooperative nonlinear oscillatory mode which, when interacting with the oscillatory external field and the intrinsic Bloch-type oscillatory mode, can lead to complicated dynamics, including dissipative chaos. For a range of values of the dissipation parameters we determine the regions in the amplitude-frequency plane of the external field in which chaos can occur. Our results suggest that for terahertz external fields of the amplitudes achieved by present-day free electron lasers, chaos may be observable in SSLs. We clarify the nature of this novel nonlinear dynamics in the superlattice-external field system by exploring analogies to the Dicke model of an ensemble of two-level atoms coupled with a resonant cavity field and to Josephson junctions.
Abstract:
A recent quantum computing paper (G. S. Uhrig, Phys. Rev. Lett. 98, 100504 (2007)) analytically derived optimal pulse spacings for a multiple spin echo sequence designed to remove decoherence in a two-level system coupled to a bath. The spacings in what has been called a "Uhrig dynamic decoupling (UDD) sequence" differ dramatically from the conventional, equal pulse spacing of a Carr-Purcell-Meiboom-Gill (CPMG) multiple spin echo sequence. The UDD sequence was derived for a model that is unrelated to magnetic resonance, but was recently shown theoretically to be more general. Here we show that the UDD sequence has theoretical advantages for magnetic resonance imaging of structured materials such as tissue, where diffusion in compartmentalized and microstructured environments leads to fluctuating fields on a range of different time scales. We also show experimentally, both in excised tissue and in a live mouse tumor model, that optimal UDD sequences produce different T2-weighted contrast than do CPMG sequences with the same number of pulses and total delay, with substantial enhancements in most regions. This permits improved characterization of low-frequency spectral density functions in a wide range of applications.
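The UDD pulse positions have the closed form t_j = T sin²(jπ/(2N + 2)) given in Uhrig's paper, versus the uniform CPMG positions t_j = T(2j − 1)/(2N). A minimal sketch comparing the two timing patterns (function names are illustrative, not from the paper):

```python
import math

def udd_times(n_pulses, total_time):
    """Uhrig spacing: j-th pi-pulse at T * sin^2(pi*j / (2N + 2))."""
    return [total_time * math.sin(math.pi * j / (2 * (n_pulses + 1))) ** 2
            for j in range(1, n_pulses + 1)]

def cpmg_times(n_pulses, total_time):
    """CPMG spacing: equally spaced pi-pulses at T * (2j - 1) / (2N)."""
    return [total_time * (2 * j - 1) / (2 * n_pulses)
            for j in range(1, n_pulses + 1)]
```

For a single pulse both reduce to the Hahn echo at T/2; for larger N the UDD pulses crowd toward the ends of the interval, which is the dramatic difference from CPMG noted in the abstract.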
Abstract:
The classical Purcell vector method for constructing solutions to dense systems of linear equations is extended to a flexible orthogonalisation procedure. Some properties of the orthogonalisation procedure are revealed in relation to the classical Gauss-Jordan elimination with or without pivoting. Additional properties that are not shared by the classical Gauss-Jordan elimination are exploited. Further properties related to distributed computing are discussed with applications to panel element equations in subsonic compressible aerodynamics. Using an orthogonalisation procedure within panel methods enables a functional decomposition of the sequential panel methods and leads to a two-level parallelism.
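The abstract compares the orthogonalisation procedure against classical Gauss-Jordan elimination with pivoting. As a baseline reference only (this is the textbook algorithm, not the paper's Purcell-method code), a minimal Gauss-Jordan solver with partial pivoting:

```python
def gauss_jordan_solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting.

    A is a list of n rows of n floats; b is a list of n floats.
    """
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry into place.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Eliminate the column everywhere else (full reduction, not just below).
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n] for row in M]
```

Unlike an orthogonalisation procedure, each elimination step here couples every remaining row, which is one reason the paper's functional decomposition is attractive for distributed computing.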
Abstract:
Temperature distributions involved in some metal-cutting or surface-milling processes may be obtained by solving a non-linear inverse problem. A two-level concept of parallelism is introduced to compute such temperature distributions. The primary level is based on a problem-partitioning concept driven by the nature and properties of the non-linear inverse problem. Such partitioning results in a coarse-grained parallel algorithm. A simplified 2-D metal-cutting process is used as an example to illustrate the concept. A secondary-level exploitation of further parallel properties based on the concept of domain-data parallelism is explained and implemented using MPI. Some experiments were performed on a network of loosely coupled machines consisting of SUN Sparc Classic workstations and a network of tightly coupled processors, namely the Origin 2000.
Abstract:
Virtual manufacturing and design assessment increasingly involve the simulation of interacting phenomena, i.e. multi-physics, an activity which is very computationally intensive. This chapter describes an attempt to address the parallel issues associated with a multi-physics simulation approach based upon a range of compatible procedures operating on one mesh using a single database - the distinct physics solvers can operate separately or coupled on sub-domains of the whole geometric space. Moreover, the finite volume unstructured mesh solvers use different discretization schemes (and, particularly, different ‘nodal’ locations and control volumes). A two-level approach to the parallelization of this simulation software is described: the code is restructured into parallel form on the basis of the mesh partitioning alone, that is, without regard to the physics. However, at run time, the mesh is partitioned to achieve a load balance, by considering the load per node/element across the whole domain. The latter is, of course, determined by the problem-specific physics at a particular location.
Abstract:
Numerical solutions of realistic 2-D and 3-D inverse problems may require a very large amount of computation. A two-level concept on parallelism is often used to solve such problems. The primary level uses the problem partitioning concept which is a decomposition based on the mathematical/physical problem. The secondary level utilizes the widely used data partitioning concept. A theoretical performance model is built based on the two-level parallelism. The observed performance results obtained from a network of general purpose Sun Sparc stations are compared with the theoretical values. Restrictions of the theoretical model are also discussed.
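The abstract does not state the form of the theoretical performance model; purely as an illustration of what a two-level (problem partitioning × data partitioning) speedup model can look like, an Amdahl-style sketch in which both the serial fraction and the overhead term are assumed parameters, not the paper's:

```python
def two_level_speedup(serial_frac, p, q, overhead=0.0):
    """Illustrative Amdahl-style model of two-level parallelism.

    serial_frac: fraction of the work that cannot be parallelized.
    p: number of primary-level problem partitions.
    q: number of secondary-level data partitions per problem partition.
    overhead: fixed communication/synchronisation cost (as a time fraction).
    """
    parallel_time = (1.0 - serial_frac) / (p * q)
    return 1.0 / (serial_frac + parallel_time + overhead)
```

The model reproduces the familiar ceiling 1/serial_frac as p*q grows, which is the kind of restriction a comparison against measured workstation-network results would expose.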
Abstract:
Financial modelling in the area of option pricing involves understanding the correlations between asset prices and buy/sell movements in order to reduce investment risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluation of buy/sell contracts can be made. In turn, analysis tools rely on fast numerical algorithms for the solution of financial mathematical models. There are many different financial activities apart from shares buy/sell activities. The main aim of this chapter is to discuss a distributed algorithm for the numerical solution of a European option. Both linear and non-linear cases are considered. The algorithm is based on the concept of the Laplace transform and its numerical inverse. The scalability of the algorithm is examined. Numerical tests are used to demonstrate the effectiveness of the algorithm for financial analysis. Time-dependent functions for volatility and interest rates are also discussed. Applications of the algorithm to the non-linear Black-Scholes equation, where the volatility and the interest rate are functions of the option value, are included. Some qualitative results on the convergence behaviour of the algorithm are examined. This chapter also examines the various computational issues of the Laplace transformation method in terms of distributed computing. The idea of using a two-level temporal mesh in order to achieve distributed computation along the temporal axis is introduced. Finally, the chapter ends with some conclusions.
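The abstract does not name the numerical inversion algorithm used; as one widely used possibility, a sketch of the Gaver-Stehfest method, which approximates f(t) from samples of its Laplace transform F(s) at real s - the coefficients below follow Stehfest's published formula, but the choice of this method here is an assumption:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inverse Laplace transform at time t > 0.

    F: Laplace transform, callable on real s > 0.  N: even number of terms.
    """
    assert N % 2 == 0
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for i in range(1, N + 1):
        # Stehfest coefficient c_i (exact integer arithmetic via factorials).
        c = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            c += (k ** half * math.factorial(2 * k)
                  / (math.factorial(half - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        c *= (-1) ** (i + half)
        total += c * F(i * ln2 / t)
    return ln2 / t * total
```

The method only needs F on the real axis, which suits pricing formulas derived in the Laplace domain; its alternating coefficients grow quickly, so N beyond about 14-16 loses accuracy in double precision.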
Abstract:
This paper studies a two-level supply chain consisting of a components supplier and a product assembly manufacturer, in which the manufacturer shares the investment in shortening supply lead time. The objective of this research is to investigate the benefits of a cost-sharing strategy and of adopting component commonality. The results of numerical analysis demonstrate that using component commonality can help reduce the total cost, especially when the manufacturer shares a higher fraction of the cost of investment in shortening supply lead time.
Abstract:
This paper reports a study carried out to develop a self-compacting slurry infiltrated fibre concrete (SIFCON) containing a high fibre content. The SIFCON was developed with 10% of steel fibres which are infiltrated by self-compacting cement slurry without any vibration. Traditionally, the infiltration of the slurry into the layer of fibres is carried out under intensive vibration. A two-level fractional factorial design was used to optimise the properties of cement-based slurries with four independent variables: dosage of silica fume, dosage of superplasticiser, sand content, and water/cement ratio (W/C). Rheometer, mini-slump test, Lombardi plate cohesion meter, J-fibre penetration test, and induced bleeding were used to assess the behaviour of fresh cement slurries. The compressive strengths at 7 and 28 days were also measured. The statistical models are valid for slurries made with W/C of 0.40 to 0.50, 50 to 100% of sand by mass of cement, 5 to 10% of silica fume by mass of cement, and SP dosage of 0.6 to 1.2% by mass of cement. This model makes it possible to evaluate the effect of individual variables on measured parameters of fresh cement slurries. The proposed models offered useful information to understand trade-offs between mix variables and compare the responses obtained from various test methods in order to optimise self-compacting SIFCON.
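A two-level fractional factorial design runs only a chosen fraction of the 2^k full factorial, aliasing one factor to an interaction of the others. A generic sketch of a half-fraction 2^(4-1) design matrix with the common defining relation D = ABC (an illustration of the design class, not the authors' actual run plan):

```python
from itertools import product

def half_fraction_2k(k=4):
    """2^(k-1) half-fraction design: full factorial in the first k-1
    factors, with the last factor set to their product (e.g. D = ABC).
    Levels are coded -1 (low) and +1 (high); returns a list of runs."""
    runs = []
    for levels in product((-1, 1), repeat=k - 1):
        last = 1
        for v in levels:
            last *= v  # alias the k-th factor to the (k-1)-way interaction
        runs.append(levels + (last,))
    return runs
```

With four factors this gives 8 runs instead of 16, which is why such designs are popular for slurry mix optimisation with several independent variables; each factor column stays balanced between its two levels.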
Abstract:
We show that two qubits can be entangled by local interactions with an entangled two-mode continuous variable state. This is illustrated by the evolution of two two-level atoms interacting with a two-mode squeezed state. Two modes of the squeezed field are injected respectively into two spatially separate cavities and the atoms are then sent into the cavities to interact resonantly with the cavity field. We find that the atoms may be entangled even by a two-mode squeezed state which has been decohered while penetrating into the cavity.
Abstract:
Slurries with high penetrability for production of Self-consolidating Slurry Infiltrated Fiber Concrete (SIFCON) were investigated in this study. Factorial experimental design was adopted in this investigation to assess the combined effects of five independent variables on the mini-slump test, plate cohesion meter, induced bleeding test, J-fiber penetration test and compressive strength at 7 and 28 days. The independent variables investigated were the proportions of limestone powder (LSP) and sand, the dosages of superplasticiser (SP) and viscosity agent (VA), and the water-to-binder ratio (w/b). A two-level fractional factorial statistical method was used to model the influence of key parameters on properties affecting the behaviour of fresh cement slurry and compressive strength. The models are valid for mixes with 10 to 50% LSP as replacement of cement, 0.02 to 0.06% VA by mass of cement, 0.6 to 1.2% SP and 50 to 150% sand (% mass of binder) and 0.42 to 0.48 w/b. The influences of LSP, SP, VA, sand and w/b were characterised and analysed using polynomial regression, which identifies the primary factors and their interactions on the measured properties. Mathematical polynomials were developed for mini-slump, plate cohesion meter, J-fiber penetration test, induced bleeding and compressive strength as functions of LSP, SP, VA, sand and w/b. The estimated results of the mini-slump, induced bleeding test and compressive strength from the derived models are compared with results obtained from previously proposed models that were developed for cement paste. The proposed response models of the self-consolidating SIFCON offer useful information regarding mix optimization to secure a highly penetrable slurry with low compressive strength.
Abstract:
In this paper the parameters of cement grout affecting rheological behaviour and compressive strength are investigated. Factorial experimental design was adopted in this investigation to assess the combined effects of the following factors on fluidity, rheological properties, induced bleeding and compressive strength: water/binder ratio (W/B), dosage of superplasticiser (SP), dosage of viscosity agent (VA), and proportion of limestone powder as replacement of cement (LSP). The mini-slump test, Marsh cone, Lombardi plate cohesion meter, induced bleeding test and coaxial rotating cylinder viscometer were used to evaluate the rheology of the cement grout, and the compressive strengths at 7 and 28 days were measured. A two-level fractional factorial statistical model was used to model the influence of key parameters on properties affecting the fluidity, the rheology and compressive strength. The models are valid for mixes with 0.35-0.42 W/B, 0.3-1.2% SP, 0.02-0.7% VA (percentage of binder) and 12-45% LSP as replacement of cement. The influences of W/B, SP, VA and LSP were characterised and analysed using polynomial regression, which can identify the primary factors and their interactions on the measured properties. Mathematical polynomials were developed for mini-slump, plate cohesion meter, induced bleeding, yield value, plastic viscosity and compressive strength as functions of W/B, SP, VA and proportion of LSP. The statistical approach used highlighted the effects of limestone powder and of the dosages of SP and VA on the various rheological characteristics of the cement grout.
Abstract:
Intense, few-femtosecond pulse technology has enabled studies of the fastest vibrational relaxation processes. The hydrogen group vibrations can be imaged and manipulated using intense infrared pulses. Through numerical simulation, we demonstrate an example of ultrafast coherent control that could be effected with current experimental facilities, and observed using high-resolution time-of-flight spectroscopy. The proposal is a pump-probe-type technique to manipulate the D2+ ion with ultrashort pulse sequences. The simulations presented show that vibrational selection can be achieved through pulse delay. We find that the vibrational system can be purified to a two-level system thus realizing a vibrational qubit. A novel scheme for the selective transfer of population between these two levels, based on a Raman process and conditioned upon the delay time of a second control-pulse is outlined, and may enable quantum encoding with this system.