929 results for Coupled Oscillators System
Abstract:
In this work, we investigate the quantum dynamics of a model for two single-mode Bose-Einstein condensates which are coupled via Josephson tunnelling. Using direct numerical diagonalization of the Hamiltonian, we compute the time evolution of the expectation value of the relative particle number across a wide range of couplings. Our analysis shows that the system exhibits rich and complex behaviours, varying between harmonic and non-harmonic oscillations, particularly around the threshold coupling between the delocalized and self-trapping phases. We show that these behaviours depend on both the initial state of the system and the coupling regime. In addition, a detailed study of the dynamics of the variance of the relative particle number and of the entanglement for different initial states is presented.
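A minimal sketch of the computation described above: build a standard two-mode Bose-Hubbard (Josephson) Hamiltonian in the Fock basis, diagonalize it directly, and follow the expectation value of the relative particle number in time. The Hamiltonian form, the parameter values and the chosen initial state are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: direct diagonalization of a two-mode Bose-Hubbard (Josephson)
# Hamiltonian and time evolution of <n1 - n2>. Hamiltonian form, parameters and
# initial state are illustrative assumptions.
import numpy as np

N = 40            # total particle number (conserved)
k = 1.0 / N       # interaction strength (assumed)
EJ = 1.0          # Josephson tunnelling amplitude (assumed)

dim = N + 1       # Fock basis |n, N - n>, n = 0..N
n = np.arange(dim)

# Diagonal part: interaction ~ (n1 - n2)^2
H = np.diag(0.125 * k * (2 * n - N) ** 2)
# Off-diagonal tunnelling: -(EJ/2) (a1^dag a2 + a2^dag a1)
hop = -0.5 * EJ * np.sqrt((n[:-1] + 1) * (N - n[:-1]))
H += np.diag(hop, 1) + np.diag(hop, -1)

# Diagonalize once, then evolve any initial state exactly
evals, evecs = np.linalg.eigh(H)

psi0 = np.zeros(dim)
psi0[N] = 1.0                    # all particles in mode 1 (assumed initial state)
c0 = evecs.T @ psi0              # expansion in the energy eigenbasis

times = np.linspace(0.0, 20.0, 400)
n_rel = np.empty_like(times)
for i, t in enumerate(times):
    psi_t = evecs @ (np.exp(-1j * evals * t) * c0)
    n_rel[i] = np.real(np.vdot(psi_t, (2 * n - N) * psi_t))   # <n1 - n2>(t)

print(n_rel[:5])
```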
Abstract:
In this work we investigate the energy gap between the ground state and the first excited state in a model of two single-mode Bose-Einstein condensates coupled via Josephson tunnelling. The energy gap is never zero when the tunnelling interaction is non-zero. The gap exhibits no local minimum below a threshold coupling which separates a delocalized phase from a self-trapping phase that occurs in the absence of the external potential. Above this threshold point one minimum occurs close to the Josephson regime, and a set of minima and maxima appear in the Fock regime. Expressions for the positions of these minima and maxima are obtained. The connection between these minima and maxima and the dynamics of the expectation value of the relative number of particles is analysed in detail. We find that the dynamics of the system changes as the coupling crosses these points.
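The gap between the two lowest levels can be traced with the same kind of assumed two-mode Hamiltonian as the tunnelling strength is varied; a hedged sketch follows (particle number and parameter range are arbitrary choices, not the paper's).

```python
# Hedged sketch: gap between the ground and first excited state of an assumed
# two-mode Bose-Hubbard Hamiltonian as the tunnelling strength is varied.
import numpy as np

def two_mode_hamiltonian(N, k, EJ):
    """Two-mode Bose-Hubbard Hamiltonian in the Fock basis |n, N - n> (assumed form)."""
    n = np.arange(N + 1)
    H = np.diag(0.125 * k * (2 * n - N) ** 2)
    hop = -0.5 * EJ * np.sqrt((n[:-1] + 1) * (N - n[:-1]))
    return H + np.diag(hop, 1) + np.diag(hop, -1)

N, k = 40, 1.0 / 40
for EJ in np.linspace(0.05, 2.0, 8):
    evals = np.linalg.eigvalsh(two_mode_hamiltonian(N, k, EJ))
    gap = evals[1] - evals[0]
    print(f"EJ = {EJ:5.2f}   gap = {gap:.6f}")
```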
Abstract:
The planning and management of water resources in the Pioneer Valley, north-eastern Australia requires a tool for assessing the impact of groundwater and stream abstractions on water supply reliabilities and environmental flows in Sandy Creek (the main surface water system studied). Consequently, a fully coupled stream-aquifer model has been constructed using the code MODHMS, calibrated to near-stream observations of watertable behaviour and multiple components of gauged stream flow. This model has been tested using other methods of estimation, including stream depletion analysis and radon isotope tracer sampling. The coarseness of spatial discretisation, which is required for practical reasons of computational efficiency, limits the model's capacity to simulate small-scale processes (e.g., near-stream groundwater pumping, bank storage effects), and alternative approaches are required to complement the model's range of applicability. Model predictions of groundwater influx to Sandy Creek are compared with baseflow estimates from three different hydrograph separation techniques, which were found to be unable to reflect the dynamics of Sandy Creek stream-aquifer interactions. The model was also used to infer changes in the water balance of the system caused by historical land use change. This led to constraints on the recharge distribution which can be implemented to improve model calibration performance. (c) 2006 Elsevier B.V. All rights reserved.
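For context on the baseflow comparison, the sketch below implements one widely used hydrograph separation technique, the one-parameter Lyne-Hollick recursive digital filter, on a synthetic hydrograph. The paper does not name the three techniques it compares here, so this particular filter and its parameter value are assumptions made for illustration.

```python
# Hedged sketch: Lyne-Hollick one-parameter digital filter for baseflow separation,
# applied to a synthetic streamflow series. Filter choice and alpha are illustrative.
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """Single forward pass of the Lyne-Hollick filter; returns the baseflow series."""
    quick = np.zeros_like(q, dtype=float)
    for k in range(1, len(q)):
        f = alpha * quick[k - 1] + 0.5 * (1 + alpha) * (q[k] - q[k - 1])
        quick[k] = min(max(f, 0.0), q[k])     # quickflow constrained to [0, total flow]
    return q - quick

# Illustrative synthetic hydrograph: slow recession plus two storm peaks
q = 2.0 + np.exp(-np.arange(60) / 20.0) * 5.0
q[10:14] += [8, 15, 9, 4]
q[35:38] += [6, 10, 5]

baseflow = lyne_hollick_baseflow(q)
print("baseflow index:", baseflow.sum() / q.sum())
```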
Abstract:
In a recent paper Yu and Eberly [Phys. Rev. Lett. 93, 140404 (2004)] have shown that two initially entangled qubits that do not interact afterwards can become completely disentangled in a finite time. We study transient entanglement between two qubits coupled collectively to a multimode vacuum field, assuming that the two-qubit system is initially prepared in an entangled state produced by the two-photon coherences, and find the unusual feature that the irreversible spontaneous decay can lead to a revival of entanglement that has already been destroyed. The results show that this feature is independent of the coherent dipole-dipole interaction between the atoms but depends critically on whether or not collective damping is present.
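Entanglement in such two-qubit studies is commonly tracked with the Wootters concurrence; the sketch below computes it for an illustrative X-shaped density matrix carrying a two-photon coherence. The state parametrization and numbers are assumptions and are not the master-equation solution used in the paper.

```python
# Hedged sketch: Wootters concurrence for a two-qubit "X" state of the kind produced
# by two-photon coherences. Density-matrix values are illustrative assumptions.
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (4x4, basis |ee>,|eg>,|ge>,|gg>)."""
    sy = np.array([[0, -1j], [1j, 0]])
    syy = np.kron(sy, sy)
    rho_tilde = syy @ rho.conj() @ syy
    evals = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sqrt(np.sort(np.abs(evals))[::-1])       # descending square roots
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Example X state: populations on |ee>, |eg>, |ge>, |gg> plus a coherence between
# |ee> and |gg> (all numbers illustrative; chosen so the matrix is a valid state).
p_ee, p_eg, p_ge, p_gg, coh = 0.2, 0.1, 0.1, 0.6, 0.25
rho = np.diag([p_ee, p_eg, p_ge, p_gg]).astype(complex)
rho[0, 3] = rho[3, 0] = coh
print(concurrence(rho))
```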
Abstract:
We propose an asymmetric multi-processor SoC architecture, featuring a master CPU running uClinux, and multiple loosely-coupled slave CPUs running real-time threads assigned by the master CPU. Real-time SoC architectures often demand a compromise between a generic platform for different applications and application-specific customizations to achieve performance requirements. Our proposed architecture offers a generic platform running a conventional embedded operating system, providing a traditional software-oriented development approach, while multiple slave CPUs act as dedicated, independent real-time thread execution units running in parallel with the master CPU to achieve performance requirements. In this paper the architecture is described, including the application/threading development environment. The performance of the architecture with several standard benchmark routines is also analysed.
Abstract:
In mantle convection models it has become common to make use of a modified (pressure sensitive, Boussinesq) von Mises yield criterion to limit the maximum stress the lithosphere can support. This approach allows the viscous, cool thermal boundary layer to deform in a relatively plate-like mode even in a fully Eulerian representation. In large-scale models with embedded continental crust where the mobile boundary layer represents the oceanic lithosphere, the von Mises yield criterion for the oceans ensures that the continents experience a realistic broad-scale stress regime. In detailed models of crustal deformation it is, however, more appropriate to choose a Mohr-Coulomb yield criterion based upon the idea that frictional slip occurs on whichever one of many randomly oriented planes happens to be favorably oriented with respect to the stress field. As coupled crust/mantle models become more sophisticated it is important to be able to use whichever failure model is appropriate to a given part of the system. We have therefore developed a way to represent Mohr-Coulomb failure within a code which is suited to mantle convection problems coupled to large-scale crustal deformation. Our approach uses an orthotropic viscous rheology (a different viscosity for pure shear to that for simple shear) to define a preferred plane for slip given the local stress field. The simple-shear viscosity and the deformation can then be iterated to ensure that the yield criterion is always satisfied. We again assume the Boussinesq approximation, neglecting any effect of dilatancy on the stress field. An additional criterion is required to ensure that deformation occurs along the plane aligned with the maximum shear strain-rate rather than the perpendicular plane, which is formally equivalent in any symmetric formulation. It is also important to allow strain-weakening of the material. The material should remember both the accumulated failure history and the direction of failure. We have included this capacity in a Lagrangian-Integration-point finite element code and will show a number of examples of extension and compression of a crustal block with a Mohr-Coulomb failure criterion, and comparisons between mantle convection models using the von Mises versus the Mohr-Coulomb yield criteria. The formulation itself is general and applies to 2D and 3D problems, although it is somewhat more complicated to identify the slip plane in 3D.
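To illustrate the Mohr-Coulomb ingredient described above, the sketch below takes a 2D stress state, locates the favourably oriented slip plane (whose normal lies at 45° + φ/2 from the most compressive principal stress), and tests the frictional yield condition τ ≥ C + μσn. The cohesion, friction angle and stress values are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: Mohr-Coulomb check for a 2D, compression-positive stress state.
# Cohesion, friction angle and the stress tensor are illustrative assumptions.
import numpy as np

C = 10e6                      # cohesion, Pa (assumed)
phi = np.radians(30.0)        # friction angle (assumed)
mu = np.tan(phi)              # friction coefficient

stress = np.array([[120e6, 30e6],
                   [30e6,  60e6]])          # Pa, compression positive (illustrative)

sig = np.linalg.eigvalsh(stress)            # principal stresses, ascending
sig3, sig1 = sig[0], sig[1]                 # least / most compressive

# Mohr-circle construction: the critical plane's normal lies at beta = 45 deg + phi/2
# from the sigma1 direction; evaluate normal and shear stress on that plane.
beta = np.pi / 4 + phi / 2
sigma_n = 0.5 * (sig1 + sig3) + 0.5 * (sig1 - sig3) * np.cos(2 * beta)
tau = 0.5 * (sig1 - sig3) * np.sin(2 * beta)

print("slip-plane normal stress:", sigma_n)
print("slip-plane shear stress :", tau)
print("at yield?", tau >= C + mu * sigma_n)
```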
Abstract:
The relevant length and time scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM), the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how it is used to implement a simulation code for interacting fault systems. We also show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
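A minimal sketch of the escript workflow outlined above: a linearPDE object solved on a finley mesh, with the solution returned as a Data object. The class, function and argument names follow the escript user documentation as recalled here and should be treated as assumptions rather than a verified interface.

```python
# Hedged sketch of the escript/finley workflow: define -div(A grad u) = Y with a
# Dirichlet condition and solve it with the finley FEM backend. Names and argument
# conventions are assumptions based on the escript documentation.
from esys.escript import kronecker, whereZero
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

domain = Rectangle(n0=40, n1=40, l0=1.0, l1=1.0)   # 2D finite element mesh (finley)

pde = LinearPDE(domain)
x = domain.getX()
pde.setValue(A=kronecker(domain),      # -div(A grad u) = Y
             Y=1.0,
             q=whereZero(x[0]),        # Dirichlet mask: fix u on the x0 = 0 face
             r=0.0)                    # prescribed value on the masked nodes

u = pde.getSolution()                  # Data object holding the FEM solution
print(u)
```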
Abstract:
Purpose - To develop a systems strategy for supply chain management in aerospace maintenance, repair and overhaul (MRO). Design/methodology/approach - A standard systems development methodology has been followed to produce a process model (i.e. the AMSCR model); an information model (i.e. business rules) and a computerised information management capability (i.e. automated optimisation). Findings - The proof of concept for this web-based MRO supply chain system has been established through collaboration with a sample of the different types of supply chain members. The proven benefits comprise new potential to minimise the stock holding costs of the whole supply chain whilst also minimising non-flying time of the aircraft that the supply chain supports. Research limitations/implications - The scale of change needed to successfully model and automate the supply chain is vast. This research is a limited-scale experiment intended to show the power of process analysis and automation, coupled with strategic use of management science techniques, to derive tangible business benefit. Practical implications - This type of system is now vital in an industry that has continuously decreasing profit margins; which in turn means pressure to reduce servicing times and increase the mean time between them. Originality/value - Original work has been conducted at several levels: process, information and automation. The proof-of-concept system has been applied to an aircraft MRO supply chain. This is an area of research that has been neglected, and as a result is not well served by current systems solutions. © Emerald Group Publishing Limited.
Abstract:
We present, for the first time to our knowledge, experimental evidence showing that superimposed blazed fiber Bragg gratings may be fabricated and used to extend the dynamic range of a grating-based spectrometer. Blazed gratings of 4° and 8° were superimposed in germanosilicate fiber by ultraviolet inscription and used in conjunction with a coated charge-coupled device array to interrogate a wavelength-division-multiplexing sensor array. We show that the system can be used to monitor strain and temperature sensors simultaneously with a usable bandwidth which is extendable to 70 nm.
Abstract:
Respiration is a complex activity. If the relationship between all neurological and skeletomuscular interactions were perfectly understood, an accurate dynamic model of the respiratory system could be developed and the interaction between different inputs and outputs could be investigated in a straightforward fashion. Unfortunately, this is not the case and does not appear to be viable at this time. In addition, the provision of appropriate sensor signals for such a model would be a considerably invasive task. Useful quantitative information with respect to respiratory performance can be gained from non-invasive monitoring of chest and abdomen motion. Currently available devices are not well suited to spirometric measurement in ambulatory monitoring. A sensor matrix measurement technique is investigated to identify suitable sensing elements on which to base an upper-body surface measurement device that monitors respiration. This thesis is divided into two main areas of investigation: model-based and geometry-based surface plethysmography. In the first instance, chapter 2 deals with an array of tactile sensors used as a progression of existing, previously investigated volumetric measurement schemes based on models of respiration. Chapter 3 details a non-model-based geometrical approach to surface (and hence volumetric) profile measurement. Later sections of the thesis concentrate upon the development of a functioning prototype sensor array. To broaden the application area, the study has been conducted as it would be for a generically configured sensor array. In experimental form, the system's performance on volume estimation compares favourably with that of existing systems. In addition, it provides continuous transient measurement of respiratory motion within an acceptable accuracy using approximately 20 sensing elements. Because of the potential size and complexity of the system, it is possible to deploy it as a fully mobile ambulatory monitoring device which may be used outside of the laboratory. It provides a means by which to isolate coupled physiological functions and thus allows individual contributions to be analysed separately, facilitating greater understanding of respiratory physiology and diagnostic capabilities. The outcome of the study is the basis for a three-dimensional surface contour sensing system that is suitable for respiratory function monitoring and has the prospect, with future development, of being incorporated into a garment-based clinical tool.
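As a rough illustration of the generic calibration problem above, the sketch below fits a linear map from an array of about 20 surface-motion sensors to a reference volume signal by least squares, using synthetic data. The thesis's actual model-based and geometric estimation schemes are more involved; every name and number here is an assumption made for illustration.

```python
# Hedged sketch: least-squares calibration of ~20 surface sensors against a reference
# (spirometer-like) volume signal, on synthetic data.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_sensors = 500, 20

true_weights = rng.uniform(0.0, 1.0, n_sensors)            # unknown sensor-to-volume map
sensors = rng.normal(size=(n_samples, n_sensors))           # surface displacement readings
volume = sensors @ true_weights + 0.05 * rng.standard_normal(n_samples)  # reference signal

# Calibration: fit weights by least squares, then predict volume from the sensors alone
w, *_ = np.linalg.lstsq(sensors, volume, rcond=None)
prediction = sensors @ w
print("RMS error:", np.sqrt(np.mean((prediction - volume) ** 2)))
```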
Abstract:
We explore the dynamics of a periodically driven Duffing resonator coupled elastically to a van der Pol oscillator in the case of 1:1 internal resonance, for both weak and strong coupling. Whilst strong coupling leads to dominating synchronization, the weak coupling case leads to a multitude of complex behaviours. A two-time-scales method is used to obtain the frequency-amplitude modulation. The internal resonance leads to an antiresonance response of the Duffing resonator and a stagnant response (a small shoulder in the curve) of the van der Pol oscillator. The stability of the dynamic motions is also analyzed. The coupled system shows a hysteretic response pattern and symmetry-breaking facets. Chaotic behaviour of the coupled system is also observed, and the dependence of the system dynamics on the parameters is studied using bifurcation analysis.
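A hedged sketch of the kind of system studied: a periodically driven Duffing resonator elastically coupled to a van der Pol oscillator, integrated numerically. The equations of motion and all parameter values are illustrative choices tuned near 1:1 resonance, not the exact model or parameters of the paper.

```python
# Hedged sketch: numerical integration of a driven Duffing resonator elastically
# coupled to a van der Pol oscillator. Equations and parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

delta, alpha, beta = 0.05, 1.0, 0.5    # Duffing damping, linear and cubic stiffness (assumed)
eps = 0.1                              # van der Pol nonlinearity (assumed)
kappa = 0.02                           # weak elastic coupling (assumed)
F, omega = 0.3, 1.0                    # forcing amplitude and frequency (assumed)

def rhs(t, s):
    x, xd, y, yd = s
    xdd = -delta * xd - alpha * x - beta * x**3 - kappa * (x - y) + F * np.cos(omega * t)
    ydd = eps * (1.0 - y**2) * yd - y - kappa * (y - x)
    return [xd, xdd, yd, ydd]

sol = solve_ivp(rhs, (0.0, 500.0), [0.1, 0.0, 0.0, 0.1], max_step=0.05, rtol=1e-8)
x, y = sol.y[0], sol.y[2]
print(x[-5:], y[-5:])
```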
Abstract:
This work concerns the development of a proton-induced X-ray emission (PIXE) analysis system and a multi-sample scattering chamber facility. The characteristics of the beam pulsing system and its counting rate capabilities were evaluated by observing the ion-induced X-ray emission from pure thick copper targets, with and without beam pulsing operation. The characteristic X-rays were detected with a high resolution Si(Li) detector coupled to a multi-channel analyser. The removal of the pile-up continuum by the use of the on-demand beam pulsing is clearly demonstrated in this work. This new on-demand pulsing system, with its counting rate capability of 25, 18 and 10 kPPS corresponding to main amplifier time constants of 2, 4 and 8 µs respectively, enables thick targets to be analysed more readily. The reproducibility of the on-demand beam pulsing system operation was checked by repeated measurements of the system throughput curves, with and without beam pulsing. The reproducibility of the analysis performed using this system was also checked by repeated measurements of the intensity ratios from a number of standard binary alloys during the experimental work. A computer programme has been developed to calculate the X-ray yields from thick targets bombarded by protons, taking into account the secondary X-ray yield produced by characteristic X-ray fluorescence when the characteristic X-ray energy of one element lies above the absorption edge energy of the other element present in the target. This effect was studied on metallic binary alloys such as Fe/Ni and Cr/Fe. The quantitative analysis of Fe/Ni and Cr/Fe alloy samples to determine their elemental composition, taking this enhancement into account, has been demonstrated in this work. Furthermore, the usefulness of the Rutherford backscattering (R.B.S.) technique to obtain the depth profiles of the elements in the upper micron of the sample is discussed.
Abstract:
The requirement for systems to continue to operate satisfactorily in the presence of faults has led to the development of techniques for the construction of fault tolerant software. This thesis addresses the problem of error detection and recovery in distributed systems which consist of a set of communicating sequential processes. A method is presented for the 'a priori' design of conversations for this class of distributed system. Petri nets are used to represent the state and to solve state reachability problems for concurrent systems. The dynamic behaviour of the system can be characterised by a state-change table derived from the state reachability tree. Systematic conversation generation is possible by defining a closed boundary on any branch of the state-change table. By relating the state-change table to process attributes, the method ensures that all necessary processes are included in the conversation. The method also ensures properly nested conversations. An implementation of the conversation scheme using the concurrent language occam is proposed. The structure of the conversation is defined using the special features of occam. The proposed implementation gives a structure which is independent of the application and of the number of processes involved. Finally, the integrity of inter-process communications is investigated. The basic communication primitives used in message passing systems are seen to have deficiencies when applied to systems with safety implications. Using a Petri net model, a boundary for a time-out mechanism is proposed which will increase the integrity of a system which involves inter-process communications.
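The state-space exploration underlying such a state-change table can be sketched directly: breadth-first firing of the transitions of a small place/transition net to enumerate the reachable markings and the state changes between them. The example net below is invented purely for illustration.

```python
# Hedged sketch: reachability exploration of a small place/transition Petri net by
# breadth-first search, recording (marking, transition, next marking) rows.
from collections import deque

# Each transition: (pre, post) token vectors over 4 places (invented example net)
transitions = {
    "t1": ((1, 0, 0, 0), (0, 1, 0, 0)),
    "t2": ((0, 1, 0, 0), (0, 0, 1, 0)),
    "t3": ((0, 0, 1, 0), (0, 0, 0, 1)),
    "t4": ((0, 0, 0, 1), (1, 0, 0, 0)),
}
m0 = (1, 0, 0, 0)   # initial marking

def fire(marking, pre, post):
    """Return the successor marking, or None if the transition is not enabled."""
    if all(m >= p for m, p in zip(marking, pre)):
        return tuple(m - p + q for m, p, q in zip(marking, pre, post))
    return None

# BFS over markings: the reachability tree flattened into a state-change table
state_changes = []
seen, queue = {m0}, deque([m0])
while queue:
    m = queue.popleft()
    for name, (pre, post) in transitions.items():
        m2 = fire(m, pre, post)
        if m2 is not None:
            state_changes.append((m, name, m2))
            if m2 not in seen:
                seen.add(m2)
                queue.append(m2)

for row in state_changes:
    print(row)
```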
Abstract:
The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major issue of concern in this environment is the operating system activity of resource management for the different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. The research conducted in this field so far has been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms is developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results produced suggest that the performance of simple dynamic policies is scalable, but that they lack the load stability of more complex global average algorithms.
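A toy version of such a simulation is sketched below: a discrete-time, threshold-based (sender-initiated) load balancing policy on a ring of processors with an explicit migration delay. The policy, workload and parameters are invented for illustration and are much simpler than the thesis's model.

```python
# Hedged sketch: discrete-time simulation of threshold-based dynamic load balancing on
# a ring of processors with a migration delay. All parameters are illustrative.
import random

random.seed(1)
P = 16                 # number of processors (assumed)
threshold = 4          # queue length above which a node offloads work (assumed)
delay = 2              # time steps a migrated task spends in transit (network delay)
steps = 200

queues = [0] * P
in_transit = []        # (arrival_step, destination) pairs

for step in range(steps):
    # task arrivals: a skewed workload so imbalance actually develops
    for i in range(P):
        if random.random() < (0.8 if i < P // 4 else 0.2):
            queues[i] += 1
    # service: each processor completes at most one task per step
    for i in range(P):
        if queues[i]:
            queues[i] -= 1
    # sender-initiated balancing: overloaded nodes push one task to a ring neighbour
    for i in range(P):
        if queues[i] > threshold:
            queues[i] -= 1
            in_transit.append((step + delay, (i + 1) % P))
    # deliver migrated tasks whose transit delay has elapsed
    arrived = [(t, d) for (t, d) in in_transit if t <= step]
    in_transit = [(t, d) for (t, d) in in_transit if t > step]
    for _, dest in arrived:
        queues[dest] += 1

print("final queue lengths:", queues)
print("imbalance (max - min):", max(queues) - min(queues))
```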
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study of both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, as well as the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis will be to prove that much more effective and powerful analysis of MEG can be achieved if one were to assume the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
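One basic dynamical-systems step for single-channel, unaveraged recordings is a time-delay (Takens) embedding of the scalar series into a reconstructed state space; a minimal sketch on a synthetic signal is given below. The delay and embedding dimension are arbitrary here; in practice they would be chosen with tools such as mutual information and false-nearest-neighbour tests.

```python
# Hedged sketch: time-delay (Takens) embedding of a single-channel scalar series.
# The signal is synthetic; delay and dimension are arbitrary illustrative choices.
import numpy as np

def delay_embed(x, dim, tau):
    """Embed a scalar series into delay vectors [x(t), x(t + tau), ..., x(t + (dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

rng = np.random.default_rng(0)
t = np.arange(5000) * 0.001
signal = (np.sin(2 * np.pi * 10 * t)
          + 0.5 * np.sin(2 * np.pi * 23 * t)
          + 0.2 * rng.standard_normal(t.size))   # synthetic noisy two-tone signal

X = delay_embed(signal, dim=5, tau=20)
print(X.shape)       # (number of state vectors, embedding dimension)
```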