945 results for Stochastic integrals
Abstract:
Despite apparent overwhelming benefits, implementation of the Household Responsibility System (HRS) in China contained a number of flaws. The Two-Farmland System (TFS), which originated in Pingdu City in Shandong Province, sought to address the twin problems of land fragmentation and economies of size. A stochastic frontier production function analysis that isolates the impacts of land allocation reforms suggests that the TFS increased efficiency by around 7%. This article highlights the need for empirical analysis to assess objectively the merits or otherwise of particular reforms.
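As a pointer for readers unfamiliar with the method, here is a minimal sketch of maximum-likelihood estimation for the textbook normal/half-normal stochastic frontier (the Aigner-Lovell-Schmidt specification) on simulated data; the specification and all names and values are illustrative assumptions, not the article's model.

```python
# Maximum-likelihood sketch of the textbook normal/half-normal stochastic
# frontier y_i = x_i'beta + v_i - u_i (Aigner-Lovell-Schmidt), on simulated
# data. Illustrative only; the article's frontier isolating land-allocation
# effects is richer.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, y, X):
    k = X.shape[1]
    beta, s_v, s_u = theta[:k], np.exp(theta[k]), np.exp(theta[k + 1])
    sigma = np.hypot(s_v, s_u)               # sqrt(s_v^2 + s_u^2)
    lam = s_u / s_v
    eps = y - X @ beta                       # composed error v - u
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # e.g. log inputs
y = X @ np.array([1.0, 0.5]) + rng.normal(0, 0.3, n) - np.abs(rng.normal(0, 0.5, n))
res = minimize(neg_loglik, np.zeros(4), args=(y, X), method="BFGS")
print(res.x[:2], np.exp(res.x[2:]))          # beta, then (sigma_v, sigma_u)
```

Observation-level technical efficiency is then conventionally recovered with the Jondrow et al. (1982) conditional estimator of exp(-u) given the composed residual.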
Abstract:
Unit-efficiency homodyne detection of the resonance fluorescence of a two-level atom collapses the quantum state of the atom to a stochastically moving point on the Bloch sphere. Recently, Hofmann, Mahler, and Hess [Phys. Rev. A 57, 4877 (1998)] showed that by making part of the coherent driving proportional to the homodyne photocurrent one can stabilize the state to any point on the lower half of the sphere. Here we reanalyze their proposal using the technique of stochastic master equations, allowing their results to be generalized in two ways. First, we show that any point on the upper or lower half, but not the equator, of the sphere may be stabilized. Second, we consider non-unit-efficiency detection, and quantify the effectiveness of the feedback by calculating the maximal purity obtainable in any particular direction in Bloch space.
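The stochastic-master-equation technique mentioned here can be illustrated numerically. The sketch below is an Euler-Maruyama integration of the homodyne SME for a resonantly driven two-level atom with the feedback term omitted; the operators, rates, and time step are illustrative assumptions only.

```python
# Euler-Maruyama integration of the homodyne stochastic master equation (SME)
# for a resonantly driven two-level atom, *without* the feedback term: a
# minimal sketch of the technique only. Rates and time step are illustrative.
import numpy as np

rng = np.random.default_rng(1)
gamma, eta, Omega = 1.0, 1.0, 2.0            # decay rate, efficiency, Rabi frequency
dt, steps = 1e-4, 50_000

sm = np.array([[0, 0], [1, 0]], complex)     # sigma_minus in the (|e>, |g>) basis
sx = np.array([[0, 1], [1, 0]], complex)
c = np.sqrt(gamma) * sm                      # collapse (measurement) operator
H = 0.5 * Omega * sx                         # resonant driving Hamiltonian

def D(c, rho):                               # Lindblad dissipator D[c]rho
    cd = c.conj().T
    return c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)

def Hcal(c, rho):                            # homodyne measurement superoperator
    s = c @ rho + rho @ c.conj().T
    return s - np.trace(s).real * rho

rho = np.array([[1, 0], [0, 0]], complex)    # start in the excited state
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))        # Wiener increment (innovation)
    rho = rho + (-1j * (H @ rho - rho @ H) + D(c, rho)) * dt \
              + np.sqrt(eta) * Hcal(c, rho) * dW
    rho = 0.5 * (rho + rho.conj().T)         # guard against roundoff
    rho /= np.trace(rho).real
x, y = 2 * rho[0, 1].real, -2 * rho[0, 1].imag
z = (rho[0, 0] - rho[1, 1]).real             # Bloch vector of the conditioned state
print(x, y, z)
```

Tracking the conditioned Bloch vector over many such trajectories is the kind of calculation that underlies the purity results described in the abstract.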
Abstract:
Applied econometricians often fail to impose economic regularity constraints in the exact form economic theory prescribes. We show how the Singular Value Decomposition (SVD) Theorem and Markov Chain Monte Carlo (MCMC) methods can be used to rigorously impose time- and firm-varying equality and inequality constraints. To illustrate the technique we estimate a system of translog input demand functions subject to all the constraints implied by economic theory, including observation-varying symmetry and concavity constraints. Results are presented in the form of characteristics of the estimated posterior distributions of functions of the parameters.
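A minimal sketch of the core mechanics, under strong simplifying assumptions (a linear model, one equality constraint, a positivity inequality, flat prior, known noise variance): equality constraints are imposed exactly by reparameterizing through an SVD-derived null-space basis, and inequality constraints by rejection inside a random-walk Metropolis sampler. The article's translog system with observation-varying symmetry and concavity constraints is far more elaborate.

```python
# Equality constraint R beta = r imposed exactly via an SVD null-space
# reparameterization; inequality constraint (beta >= 0) enforced by rejection
# inside random-walk Metropolis. All names and values are illustrative.
import numpy as np
from scipy.linalg import lstsq, null_space

rng = np.random.default_rng(2)
n, k, sig = 200, 3, 0.1
X = rng.normal(size=(n, k))
beta_true = np.array([0.5, 0.3, 0.2])        # satisfies both constraints
y = X @ beta_true + rng.normal(0, sig, n)

R, r = np.ones((1, k)), np.array([1.0])      # equality constraint: sum(beta) = 1
beta0 = lstsq(R, r)[0]                       # a particular solution
N = null_space(R)                            # SVD-based basis of null(R)
# every beta = beta0 + N @ g satisfies R beta = r exactly, for any g

def logpost(g):                              # flat prior, Gaussian likelihood
    beta = beta0 + N @ g
    if np.any(beta < 0):                     # inequality constraint: reject
        return -np.inf
    resid = y - X @ beta
    return -0.5 * resid @ resid / sig**2

g = np.zeros(N.shape[1])
lp = logpost(g)
draws = []
for _ in range(20_000):
    prop = g + 0.05 * rng.normal(size=g.size)    # random-walk proposal
    lpp = logpost(prop)
    if np.log(rng.uniform()) < lpp - lp:         # Metropolis acceptance
        g, lp = prop, lpp
    draws.append(beta0 + N @ g)
print(np.mean(draws[5000:], axis=0))             # constrained posterior mean
```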
Abstract:
Loss networks have long been used to model various types of telecommunication network, including circuit-switched networks. Such networks often use admission controls, such as trunk reservation, to optimize revenue or stabilize the behaviour of the network. Unfortunately, an exact analysis of such networks is not usually possible, and reduced-load approximations such as the Erlang Fixed Point (EFP) approximation have been widely used. The performance of these approximations is typically very good for networks without controls, under several regimes. There is evidence, however, that in networks with controls, these approximations will in general perform less well. We propose an extension to the EFP approximation that gives marked improvement for a simple ring-shaped network with trunk reservation. It is based on the idea of considering pairs of links together, thus making greater allowance for dependencies between neighbouring links than does the EFP approximation, which only considers links in isolation.
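As background, the baseline EFP scheme the authors extend can be sketched in a few lines: per-link blocking probabilities solve a fixed point in which each link sees its offered traffic thinned by blocking on the other links of each route, under a link-independence assumption. The topology, capacities, and loads below are illustrative, and trunk reservation is not modelled.

```python
# Baseline Erlang fixed-point (reduced-load) iteration for a symmetric ring of
# links in which each route uses two adjacent links. This is the
# link-independence scheme the paper extends by treating link pairs jointly.
import numpy as np
from math import lgamma, exp, log

def erlang_b(a, c):
    """Erlang-B blocking probability for offered load a on c circuits (log-space)."""
    terms = [k * log(a) - lgamma(k + 1) for k in range(c + 1)]
    m = max(terms)
    denom = sum(exp(t - m) for t in terms)
    return exp(terms[c] - m) / denom

J, C, nu = 10, 20, 8.0                         # links, circuits per link, load per route
routes = [(j, (j + 1) % J) for j in range(J)]  # route j uses links j and j+1

B = np.zeros(J)                                # per-link blocking probabilities
for _ in range(500):
    B_new = np.empty(J)
    for l in range(J):
        # reduced load: traffic on each route through l, thinned by blocking elsewhere
        a = sum(nu * np.prod([1 - B[m] for m in rt if m != l])
                for rt in routes if l in rt)
        B_new[l] = erlang_b(a, C)
    if np.max(np.abs(B_new - B)) < 1e-12:
        break
    B = 0.5 * B + 0.5 * B_new                  # damped update to aid convergence
# route loss probability under the link-independence approximation
loss = 1 - np.prod([1 - B[m] for m in routes[0]])
print(B[0], loss)
```

The paper's contribution, in effect, replaces the link-independence step with a joint treatment of neighbouring link pairs.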
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics 44(2), 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. A naive implementation can be computationally very inefficient. To reduce the computational cost, a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to the diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
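The structure of the procedure can be sketched under strong simplifications: a two-component bivariate mixture with diagonal covariances, so that each bin integral factors into one-dimensional normal CDFs and the E-step's within-bin moments are analytic truncated-normal moments. Everything below (grid, parameters, data) is an illustrative assumption; the general case requires the genuine multidimensional integrals the paper discusses.

```python
# EM for binned data, simplified: two-component bivariate Gaussian mixture with
# *diagonal* covariances, so bin integrals factor into 1-D normal CDFs and the
# within-bin moments are analytic truncated-normal moments.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
X = np.vstack([rng.normal([0, 0], 1.0, (2000, 2)),
               rng.normal([4, 3], 0.7, (1000, 2))])
edges = np.linspace(-4, 8, 25)                       # 24 bins per axis
counts, _, _ = np.histogram2d(X[:, 0], X[:, 1], bins=[edges, edges])
lo, hi = edges[:-1], edges[1:]

def bin_stats(mu, sd):
    """Per-axis bin mass, truncated mean, truncated second moment."""
    a, b = (lo - mu) / sd, (hi - mu) / sd
    Z = norm.cdf(b) - norm.cdf(a) + 1e-300
    m1 = mu + sd * (norm.pdf(a) - norm.pdf(b)) / Z
    var = sd**2 * (1 + (a * norm.pdf(a) - b * norm.pdf(b)) / Z
                   - ((norm.pdf(a) - norm.pdf(b)) / Z) ** 2)
    return Z, m1, var + m1**2

pi = np.array([0.5, 0.5])
mu = np.array([[0.0, 1.0], [3.0, 2.0]])
sd = np.ones((2, 2))
ones = np.ones(len(lo))
for _ in range(200):
    mass, Ex, Ex2 = [], [], []
    for k in range(2):                               # E-step, per component
        Zx, m1x, m2x = bin_stats(mu[k, 0], sd[k, 0])
        Zy, m1y, m2y = bin_stats(mu[k, 1], sd[k, 1])
        mass.append(np.outer(Zx, Zy))                # component mass in each bin
        Ex.append((np.outer(m1x, ones), np.outer(ones, m1y)))
        Ex2.append((np.outer(m2x, ones), np.outer(ones, m2y)))
    post = np.stack([pi[k] * mass[k] for k in range(2)])
    post /= post.sum(axis=0, keepdims=True) + 1e-300  # bin responsibilities
    for k in range(2):                               # M-step, count-weighted
        w = counts * post[k]
        W = w.sum()
        pi[k] = W / counts.sum()
        for d in range(2):
            mu[k, d] = (w * Ex[k][d]).sum() / W
            sd[k, d] = np.sqrt((w * Ex2[k][d]).sum() / W - mu[k, d] ** 2)
print(pi, mu, sd, sep="\n")
```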
Abstract:
A model is introduced for two reduced BCS systems which are coupled through the transfer of Cooper pairs between the systems. The model may thus be used in the analysis of the Josephson effect arising from pair tunneling between two strongly coupled small metallic grains. At a particular coupling strength the model is integrable and explicit results are derived for the energy spectrum, conserved operators, integrals of motion, and wave function scalar products. It is also shown that form factors can be obtained for the calculation of correlation functions. Furthermore, a connection with perturbed conformal field theory is made.
Abstract:
Motivation: A consensus sequence for a family of related sequences is, as the name suggests, a sequence that captures the features common to most members of the family. Consensus sequences are important in various DNA sequencing applications and are a convenient way to characterize a family of molecules. Results: This paper describes a new algorithm for finding a consensus sequence, using the popular optimization method known as simulated annealing. Unlike the conventional approach of finding a consensus sequence by first forming a multiple sequence alignment, this algorithm searches for a sequence that minimises the sum of pairwise distances to each of the input sequences. The resulting consensus sequence can then be used to induce a multiple sequence alignment. The time required by the algorithm scales linearly with the number of input sequences and quadratically with the length of the consensus sequence. We present results demonstrating the high quality of the consensus sequences and alignments produced by the new algorithm. For comparison, we also present similar results obtained using ClustalW. The new algorithm outperforms ClustalW in many cases.
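A minimal sketch of the general approach (not the paper's exact algorithm): simulated annealing over candidate sequences, with single-character substitution, insertion, and deletion moves, minimising the sum of Levenshtein distances to the inputs. The move set, cooling schedule, and toy sequences are assumptions for illustration.

```python
# Simulated-annealing search for a sequence minimising the sum of Levenshtein
# distances to the input sequences. Illustrative sketch only.
import math
import random

def edit_distance(s, t):
    """Levenshtein distance by dynamic programming (row by row)."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

def cost(cand, seqs):
    return sum(edit_distance(cand, s) for s in seqs)

def anneal(seqs, alphabet="ACGT", steps=5000, t0=2.0, alpha=0.999, seed=4):
    rng = random.Random(seed)
    cand = list(rng.choice(seqs))                   # start from one input sequence
    best, best_c = cand[:], cost(cand, seqs)
    c, T = best_c, t0
    for _ in range(steps):
        new = cand[:]
        ops = ("sub", "ins", "del") if len(new) > 1 else ("sub", "ins")
        op, i = rng.choice(ops), rng.randrange(len(new))
        if op == "sub":
            new[i] = rng.choice(alphabet)
        elif op == "ins":
            new.insert(i, rng.choice(alphabet))
        else:
            del new[i]
        nc = cost(new, seqs)
        # Metropolis rule: accept improvements, sometimes accept worse moves
        if nc <= c or rng.random() < math.exp((c - nc) / T):
            cand, c = new, nc
            if c < best_c:
                best, best_c = cand[:], c
        T *= alpha                                   # geometric cooling
    return "".join(best), best_c

seqs = ["ACGTACGT", "ACGTTCGT", "ACCTACGT", "ACGTACGA"]
print(anneal(seqs))
```

Each candidate evaluation is one dynamic-programming pass per input sequence, so the work per step scales linearly with the number of sequences, consistent with the abstract's stated complexity.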
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measures proposed and studied in recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L, a new result that has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck, a result known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error grows at most linearly in L.
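The basic mechanism of an exponential change of measure in importance sampling can be illustrated on a simpler target than the paper's tandem network: for a single M/M/1 queue, the probability of reaching level L before emptying, estimated under the classical twisted measure that exchanges arrival and service rates. The paper's measure additionally depends on the first buffer's content and is not reproduced here.

```python
# Importance sampling with the classical exponential change of measure for an
# M/M/1 overflow event: P(queue reaches L before emptying, starting from 1).
import numpy as np

rng = np.random.default_rng(5)
lam, mu, L = 0.3, 1.0, 30                    # arrival rate, service rate, level

def one_run():
    """One embedded-chain trajectory under the twisted measure; returns the
    likelihood ratio if the overflow event occurs, else 0."""
    q, logw = 1, 0.0
    while 0 < q < L:
        if rng.uniform() < mu / (lam + mu):  # twisted arrival probability
            q += 1
            logw += np.log(lam / mu)         # true/twisted odds for an arrival
        else:
            q -= 1
            logw += np.log(mu / lam)         # true/twisted odds for a departure
    return np.exp(logw) if q == L else 0.0

est = np.mean([one_run() for _ in range(100_000)])
r = mu / lam
print(est, (r - 1) / (r**L - 1))             # IS estimate vs exact gambler's ruin
```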
Abstract:
Field quantization in unstable optical systems is treated by expanding the vector potential in terms of non-Hermitean (Fox-Li) modes. We define non-Hermitean modes and their adjoints in both the cavity and external regions and make use of the important bi-orthogonality relationships that exist within each mode set. We employ a standard canonical quantization procedure involving the introduction of generalized coordinates and momenta for the electromagnetic (EM) field. Three-dimensional systems are treated, making use of the paraxial and monochromaticity approximations for the cavity non-Hermitean modes. We show that the quantum EM field is equivalent to a set of quantum harmonic oscillators (QHOs), associated with either the cavity or the external region non-Hermitean modes, thus confirming the validity of the photon model in unstable optical systems. Unlike in the conventional (Hermitean mode) case, the annihilation and creation operators we define for each QHO are not Hermitean adjoints. It is shown that the quantum Hamiltonian for the EM field is the sum of non-commuting cavity and external region contributions, each of which can be expressed as a sum of independent QHO Hamiltonians for each non-Hermitean mode, except that the external field Hamiltonian also includes a coupling term responsible for external non-Hermitean mode photon exchange processes. The non-commutativity of certain cavity and external region annihilation and creation operators is associated with cavity energy gain and loss processes, and may be described in terms of surface integrals involving cavity and external region non-Hermitean mode functions on the cavity-external region boundary. Using the essential states approach and the rotating wave approximation, our results are applied to the spontaneous decay of a two-level atom inside an unstable cavity. We find that atomic transitions leading to cavity non-Hermitean mode photon absorption are associated with a coupling constant different from that for transitions leading to photon emission, a feature consequent on the use of non-Hermitean mode functions. We show that under certain conditions the spontaneous decay rate is enhanced by the Petermann factor.
Abstract:
A laser, be it an optical laser or an atom laser, is an open quantum system that produces a coherent beam of bosons (photons or atoms, respectively). Far above threshold, the stationary state rho(ss) of the laser mode is a mixture of coherent-field states with random phase, or, equivalently, a Poissonian mixture of number states. This paper answers the question: can descriptions such as these, of rho(ss) as a stationary ensemble of pure states, be physically realized? Here physical realization is as defined previously by us [H. M. Wiseman and J. A. Vaccaro, Phys. Lett. A 250, 241 (1998)]: an ensemble of pure states for a particular system can be physically realized if, without changing the dynamics of the system, an experimenter can (in principle) know at any time that the system is in one of the pure-state members of the ensemble. Such knowledge can be obtained by monitoring the baths to which the system is coupled, provided that coupling is describable by a Markovian master equation. Using a family of master equations for the (atom) laser, we solve for the physically realizable (PR) ensembles. We find that for any finite self-energy chi of the bosons in the laser mode, the coherent-state ensemble is not PR; the closest one can come to it is an ensemble of squeezed states. This is particularly relevant for atom lasers, where the self-energy arising from elastic collisions is expected to be large. By contrast, the number-state ensemble is always PR. As the self-energy chi increases, the states in the PR ensemble closest to the coherent-state ensemble become increasingly squeezed. Nevertheless, there are values of chi for which states with well-defined coherent amplitudes are PR, even though the atom laser is not coherent (in the sense of having a Bose-degenerate output). We discuss the physical significance of this anomaly in terms of conditional coherence (and hence conditional Bose degeneracy).
Abstract:
We develop a systematic theory of critical quantum fluctuations in the driven parametric oscillator. Our analytic results agree well with stochastic numerical simulations. We also compare the results obtained in the positive-P representation, as a fully quantum-mechanical calculation, with the truncated Wigner phase-space equation, also known as the semiclassical theory. We show where these results agree and where they differ once the calculations are taken beyond the linearized approximation. We find that the optimal broadband noise reduction occurs just above threshold. In this region, where there are large quantum fluctuations in the conjugate variance and macroscopic quantum superposition states might be expected, we find that the quantum predictions correspond very closely to the semiclassical theory.
Abstract:
We develop a systematic theory of quantum fluctuations in the driven optical parametric oscillator, including the region near threshold. This allows us to treat the limits imposed by nonlinearities to quantum squeezing and noise reduction in this nonequilibrium quantum phase transition. In particular, we compute the squeezing spectrum near threshold and calculate the optimum value. We find that the optimal noise reduction occurs at different driving fields, depending on the ratio of damping rates. The largest spectral noise reductions are predicted to occur with a very high-Q second-harmonic cavity. Our analytic results agree well with stochastic numerical simulations. We also compare the results obtained in the positive-P representation, as a fully quantum-mechanical calculation, with the truncated Wigner phase-space equation, also known as the semiclassical theory.
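For intuition about the stochastic simulations these two abstracts refer to, the sketch below integrates the linearized intracavity quadrature equations of a degenerate parametric oscillator below threshold in the Wigner (semiclassical) picture. The quadrature conventions, the scaling (vacuum variance 1/2), and the parameter values are assumptions for illustration; the papers' full nonlinear positive-P treatment is needed precisely where this linearization fails, near threshold.

```python
# Euler-Maruyama sketch of *linearized* intracavity quadratures of a degenerate
# parametric oscillator below threshold, Wigner picture:
#   dX = -(g - e) X dt + sqrt(g) dW1,   dY = -(g + e) Y dt + sqrt(g) dW2,
# with vacuum variance 1/2 in this (assumed) scaling. The squeezed variance
# g/(2(g+e)) -> 1/4 as e -> g: the 50% intracavity limit.
import numpy as np

rng = np.random.default_rng(6)
g, e = 1.0, 0.8                     # cavity decay rate, pump parameter (e < g)
dt, steps, burn = 1e-3, 500_000, 50_000
X = Y = 0.0
xs, ys = [], []
for i in range(steps):
    X += -(g - e) * X * dt + np.sqrt(g * dt) * rng.normal()
    Y += -(g + e) * Y * dt + np.sqrt(g * dt) * rng.normal()
    if i >= burn:
        xs.append(X)
        ys.append(Y)
# X slows down critically as e -> g, so its variance estimate converges slowly
print(np.var(xs), g / (2 * (g - e)))   # antisqueezed quadrature vs analytic 2.5
print(np.var(ys), g / (2 * (g + e)))   # squeezed quadrature vs analytic ~0.278
```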
Abstract:
We compare two different approaches to the control of the dynamics of a continuously monitored open quantum system. The first is Markovian feedback, as introduced in quantum optics by Wiseman and Milburn [Phys. Rev. Lett. 70, 548 (1993)]. The second is feedback based on an estimate of the system state, developed recently by Doherty and Jacobs [Phys. Rev. A 60, 2700 (1999)]. Here we choose to call it, for brevity, Bayesian feedback. For systems with nonlinear dynamics, we expect these two methods of feedback control to give markedly different results. The simplest possible nonlinear system is a driven and damped two-level atom, so we choose this as our model system. The monitoring is taken to be homodyne detection of the atomic fluorescence, and the control is by modulating the driving. The aim of the feedback in both cases is to stabilize the internal state of the atom as close as possible to an arbitrarily chosen pure state, in the presence of inefficient detection and other forms of decoherence. Our results (obtained without recourse to stochastic simulations) prove that Bayesian feedback is never inferior, and is usually superior, to Markovian feedback. However, it would be far more difficult to implement than Markovian feedback and it loses its superiority when obvious simplifying approximations are made. It is thus not clear which form of feedback would be better in the face of inevitable experimental imperfections.
Abstract:
The splitting method is a simulation technique for the estimation of very small probabilities. In this technique, the sample paths are split into multiple copies, at various stages in the simulation. Of vital importance to the efficiency of the method is the Importance Function (IF). This function governs the placement of the thresholds or surfaces at which the paths are split. We derive a characterisation of the optimal IF and show that for multi-dimensional models the natural choice for the IF is usually not optimal. We also show how nearly optimal splitting surfaces can be derived or simulated using reverse time analysis. Our numerical experiments illustrate that by using the optimal IF, one can obtain a significant improvement in simulation efficiency.
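A fixed-splitting sketch for a one-dimensional toy problem, where the natural importance function, the height of the process itself, works well: estimate the probability that a negatively drifted random walk reaches a high level before dropping below zero. Thresholds, drift, and splitting factor are illustrative assumptions; the paper's reverse-time construction of near-optimal surfaces is not reproduced.

```python
# Fixed-splitting estimate of P(random walk reaches L before going below 0),
# with the walk's height as the importance function -- the 'natural choice'
# the paper shows is generally suboptimal in higher dimensions.
import numpy as np

rng = np.random.default_rng(7)
L, n_stages, R = 12.0, 6, 5                  # target level, stages, split factor
thresholds = np.linspace(L / n_stages, L, n_stages)

def advance(x, target):
    """Run the walk until it crosses target (return x) or dies below 0 (None)."""
    while 0.0 <= x < target:
        x += rng.normal(-0.3, 1.0)           # negative drift makes the event rare
    return x if x >= target else None

paths = [0.5] * 1000                          # initial replications
estimate = 1.0
for t in thresholds:
    survivors = [y for y in (advance(x, t) for x in paths) if y is not None]
    if not survivors:
        estimate = 0.0
        break
    estimate *= len(survivors) / len(paths)  # conditional survival at this stage
    paths = [x for x in survivors for _ in range(R)]  # split each survivor R ways
print(estimate)                               # product of conditional probabilities
```

Replacing the height with a better importance function, or placing the thresholds along near-optimal surfaces obtained by reverse-time analysis as the paper proposes, changes only the advance and threshold steps of this loop.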
Abstract:
This article examines the productivity performance of Australia's manufacturing sector by decomposing its output growth into input growth, technological progress and gains in technical efficiency. This three-way decomposition is carried out with an improved version of the stochastic frontier model, using two-digit industry-level data for eight industries from 1968/69 to 1994/95. Empirical evidence shows that input growth fueled output growth from 1968/69 to 1973/74, but since then total factor productivity (TFP) growth has been the main contributor to output growth. While the trend of TFP growth was found to be promising for most industries, with positive and increasing technological progress, the negative contribution of technical efficiency change over time is, however, cause for concern.