881 results for "Minimization of open stack problem"
Abstract:
We are currently at the cusp of a revolution in quantum technology that relies not just on the passive use of quantum effects, but on their active control. At the forefront of this revolution is the implementation of a quantum computer. Encoding information in quantum states as “qubits” makes it possible to use entanglement and quantum superposition to perform calculations that are infeasible on classical computers. The fundamental challenge in the realization of quantum computers is to avoid decoherence – the loss of quantum properties – due to unwanted interaction with the environment. This thesis addresses the problem of implementing entangling two-qubit quantum gates that are robust with respect to both decoherence and classical noise. It covers three aspects: the use of efficient numerical tools for the simulation and optimal control of open and closed quantum systems, the role of advanced optimization functionals in facilitating robustness, and the application of these techniques to two of the leading implementations of quantum computation, trapped atoms and superconducting circuits. After a review of the theoretical and numerical foundations, the central part of the thesis starts with the idea of using ensemble optimization to achieve robustness with respect to both classical fluctuations in the system parameters and decoherence. For the example of a controlled phase gate implemented with trapped Rydberg atoms, this approach is demonstrated to yield a gate that is at least one order of magnitude more robust than the best known analytic scheme. Moreover, this robustness is maintained even for gate durations significantly shorter than those obtained in the analytic scheme. Superconducting circuits are a particularly promising architecture for the implementation of a quantum computer. Their flexibility is demonstrated by performing optimizations for both diagonal and non-diagonal quantum gates. In order to achieve robustness with respect to decoherence, it is essential to implement quantum gates in the shortest possible amount of time. This may be facilitated by using an optimization functional that targets an arbitrary perfect entangler, based on a geometric theory of two-qubit gates. For the example of superconducting qubits, it is shown that this approach leads to significantly shorter gate durations, higher fidelities, and faster convergence than optimization towards specific two-qubit gates. Performing the optimization in Liouville space in order to properly take decoherence into account poses significant numerical challenges, as the dimension of Liouville space scales quadratically with that of Hilbert space. However, it can be shown that for a unitary target the optimization requires the propagation of at most three states, instead of a full basis of Liouville space. For both trapped Rydberg atoms and superconducting qubits, the successful optimization of quantum gates is demonstrated at a significantly lower numerical cost than was previously thought possible. Together, the results of this thesis point towards a comprehensive framework for the optimization of robust quantum gates, paving the way for the future realization of quantum computers.
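The quadratic scaling mentioned in the abstract can be made explicit. As a generic illustration (the standard vectorization of open-system dynamics, not an equation quoted from the thesis): for a Hilbert space of dimension $N$, so $N = 2^n$ for $n$ qubits, a density matrix $\hat\rho$ has $N^2$ entries and evolves linearly in Liouville space,

\[
\frac{d}{dt}\,\mathrm{vec}(\hat\rho) = \mathcal{L}\,\mathrm{vec}(\hat\rho),
\qquad
\mathcal{L} \in \mathbb{C}^{N^2 \times N^2},
\]

so a naive gate optimization would track a full basis of $N^2$ states, which is what makes the reduction to at most three propagated states significant.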
Abstract:
The high variability of the intensity of suprathermal electron flux in the solar wind is usually ascribed to the high variability of sources on the Sun. Here we demonstrate that a substantial amount of the variability arises from peaks in stream interaction regions, where fast wind runs into slow wind and creates a pressure ridge at the interface. A superposed epoch analysis centered on stream interfaces in 26 interaction regions previously identified in Wind data reveals a twofold increase in 250 eV flux (integrated over pitch angle). Whether the peaks result from the compression there or are solar signatures of the coronal hole boundary, to which interfaces may map, is an open question. Suggestive of the latter, some cases show a displacement between the electron and magnetic field peaks at the interface. Since solar information is transmitted to 1 AU much more quickly by suprathermal electrons than by convected plasma signatures, the displacement may imply a shift in the coronal hole boundary through transport of open magnetic flux via interchange reconnection. If so, however, the fact that displacements occur in both directions and that the electron and field peaks in the superposed epoch analysis are nearly coincident indicates that any systematic transport expected from differential solar rotation is overwhelmed by a random pattern, possibly owing to transport across a ragged coronal hole boundary.
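The superposed epoch technique used here is simple to state; below is a minimal sketch in Python, with synthetic data standing in for the Wind measurements. The function name, the Gaussian test signal, and all parameter values are invented for illustration.

```python
import numpy as np

def superposed_epoch(t, series, epoch_times, half_window):
    """Average `series` in windows centered on each epoch-zero time."""
    dt = t[1] - t[0]
    n = int(round(half_window / dt))        # samples per half window
    lag = np.arange(-n, n + 1) * dt         # epoch time axis
    stacks = []
    for t0 in epoch_times:
        i = int(round((t0 - t[0]) / dt))    # index of epoch zero
        if i - n < 0 or i + n + 1 > len(series):
            continue                        # skip incomplete windows
        stacks.append(series[i - n:i + n + 1])
    return lag, np.mean(stacks, axis=0)

# Synthetic demonstration: a flux enhancement at each "interface".
t = np.arange(0.0, 1000.0, 1.0)
interface_times = np.array([150.0, 420.0, 700.0])
flux = np.ones_like(t)
for t0 in interface_times:
    flux += np.exp(-0.5 * ((t - t0) / 5.0) ** 2)   # peak at interface

lag, mean_flux = superposed_epoch(t, flux, interface_times, half_window=50.0)
print(mean_flux.max() / mean_flux[0])   # ~ twofold enhancement at lag 0
```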
Conditioning of incremental variational data assimilation, with application to the Met Office system
Abstract:
Implementations of incremental variational data assimilation require the iterative minimization of a series of linear least-squares cost functions. The accuracy and speed with which these linear minimization problems can be solved is determined by the condition number of the Hessian of the problem. In this study, we examine how different components of the assimilation system influence this condition number. Theoretical bounds on the condition number for a single parameter system are presented and used to predict how the condition number is affected by the observation distribution and accuracy and by the specified lengthscales in the background error covariance matrix. The theoretical results are verified in the Met Office variational data assimilation system, using both pseudo-observations and real data.
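For context, the quantities involved can be written down explicitly; this is the standard incremental formulation, not notation taken from the paper itself. Each inner loop minimizes a linearized cost function of the increment $\delta x$,

\[
J(\delta x) = \tfrac{1}{2}\,\delta x^{\mathrm T} B^{-1}\,\delta x
 + \tfrac{1}{2}\,(\mathbf{H}\,\delta x - d)^{\mathrm T} R^{-1} (\mathbf{H}\,\delta x - d),
\]

with background error covariance $B$, observation error covariance $R$, linearized observation operator $\mathbf{H}$ and innovation vector $d$. Its Hessian and condition number are

\[
S = B^{-1} + \mathbf{H}^{\mathrm T} R^{-1} \mathbf{H},
\qquad
\kappa(S) = \frac{\lambda_{\max}(S)}{\lambda_{\min}(S)},
\]

and it is $\kappa(S)$ that controls the convergence rate of the inner minimization.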
Abstract:
Historic geomagnetic activity observations have been used to reveal centennial variations in the open solar flux and the near-Earth heliospheric conditions (the interplanetary magnetic field and the solar wind speed). The various methods are in very good agreement for the past 135 years when there were sufficient reliable magnetic observatories in operation to eliminate problems due to site-specific errors and calibration drifts. This review underlines the physical principles that allow these reconstructions to be made, as well as the details of the various algorithms employed and the results obtained. Discussion is included of: the importance of the averaging timescale; the key differences between “range” and “interdiurnal variability” geomagnetic data; the need to distinguish source field sector structure from heliospherically-imposed field structure; the importance of ensuring that regressions used are statistically robust; and uncertainty analysis. The reconstructions are exceedingly useful as they provide calibration between the in-situ spacecraft measurements from the past five decades and the millennial records of heliospheric behaviour deduced from measured abundances of cosmogenic radionuclides found in terrestrial reservoirs. Continuity of open solar flux, using sunspot number to quantify the emergence rate, is the basis of a number of models that have been very successful in reproducing the variation derived from geomagnetic activity. These models allow us to extend the reconstructions back to before the development of the magnetometer and to cover the Maunder minimum. Allied to the radionuclide data, the models are revealing much about how the Sun and heliosphere behaved outside of grand solar maxima and are providing a means of predicting how solar activity is likely to evolve now that the recent grand maximum (that had prevailed throughout the space age) has come to an end.
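The continuity modelling referred to above can be written schematically as follows (a generic sketch of the model family, not any single paper's exact formulation): the open solar flux $F_s$ evolves as

\[
\frac{dF_s}{dt} = S(R) - \frac{F_s}{\tau},
\]

where the source $S(R)$ is the emergence rate quantified via the sunspot number $R$ and the loss is represented here by a linear decay on a timescale $\tau$; integrating such an equation forward from an assumed initial state is what allows the reconstructions to be extended back to before the magnetometer era.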
Abstract:
Results from all phases of the orbits of the Ulysses spacecraft have shown that the magnitude of the radial component of the heliospheric field is approximately independent of heliographic latitude. This result allows the use of near-Earth observations to compute the total open flux of the Sun. For example, using satellite observations of the interplanetary magnetic field, the average open solar flux was shown to have risen by 29% between 1963 and 1987, and using the aa geomagnetic index it was found to have doubled during the 20th century. It is therefore important to assess fully the accuracy of the result and to check that it applies to all phases of the solar cycle. The first perihelion pass of the Ulysses spacecraft was close to sunspot minimum, and recent data from the second perihelion pass show that the result also holds at solar maximum. The high level of correlation between the open flux derived from the various methods strongly supports the Ulysses discovery that the radial field component is independent of latitude. We show here that the errors introduced into open solar flux estimates by assuming that the heliospheric field’s radial component is independent of latitude are similar for the two passes and are of order 25% for daily values, falling to 5% for averaging timescales of 27 days or greater. We compare here the results of four methods for estimating the open solar flux with results from the first and second perihelion passes by Ulysses. We find that the errors are lowest (1–5% for averages over the entire perihelion passes lasting near 320 days) for near-Earth methods, based on either interplanetary magnetic field observations or the aa geomagnetic activity index. The corresponding errors for the Solanki et al. (2000) model are of the order of 9–15%, and for the PFSS method, based on solar magnetograms, they are of the order of 13–47%. The model of Solanki et al. is based on the continuity equation of open flux and uses the sunspot number to quantify the rate of open flux emergence. It predicts that the average open solar flux has been decreasing since 1987, as is observed in the variation of all the estimates of the open flux. This decline combines with the solar cycle variation to produce an open flux during the second (sunspot maximum) perihelion pass of Ulysses which is only slightly larger than that during the first (sunspot minimum) perihelion pass.
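The underlying computation can be stated compactly (standard notation, given here as an illustration rather than a quotation from the paper): if $|B_r|$ is independent of latitude, the unsigned magnetic flux threading a heliocentric sphere of radius $r$ is

\[
F_{\mathrm{unsigned}} = 4\pi r^2\,\langle |B_r| \rangle,
\]

so near-Earth measurements at $r = 1\,\mathrm{AU}$ suffice; the open solar flux of one polarity is half of this, $F_s = 2\pi r^2 \langle |B_r| \rangle$ (conventions for the factor of two vary between authors).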
Abstract:
In this paper the origin and evolution of the Sun’s open magnetic flux are considered for single magnetic bipoles as they are transported across the Sun. The effects of magnetic flux transport on the radial field at the surface of the Sun are modeled numerically by developing earlier work by Wang, Sheeley, and Lean (2000). The paper considers how the initial tilt of the bipole axis (α) and its latitude of emergence affect the variation and magnitude of the surface and open magnetic flux. The amount of open magnetic flux is estimated by constructing potential coronal fields. It is found that the open flux may evolve independently from the surface field for certain ranges of the tilt angle. For a given tilt angle, the lower the latitude of emergence, the higher the magnitude of the surface and open flux at the end of the simulation. In addition, three types of behavior are found for the open flux depending on the initial tilt angle of the bipole axis. When the tilt is such that α ≥ 2° the open flux is independent of the surface flux and initially increases before decaying away. In contrast, for tilt angles in the range −16° < α < 2° the open flux follows the surface flux and continually decays. Finally, for α ≤ −16° the open flux first decays and then increases in magnitude towards a second maximum before decaying away. This behavior of the open flux can be explained in terms of two competing effects produced by differential rotation. Firstly, differential rotation may increase or decrease the open flux by rotating the centers of each polarity of the bipole at different rates when the axis has tilt. Secondly, it decreases the open flux by increasing the length of the polarity inversion line where flux cancellation occurs. The results suggest that, in order to reproduce a realistic model of the Sun’s open magnetic flux over a solar cycle, it is important to have accurate input data on the latitude of emergence of bipoles along with the variation of their tilt angles as the cycle progresses.
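For orientation, surface flux transport models of the kind developed here usually take the following schematic form (standard notation from the flux transport literature, not this paper's exact statement): the radial surface field $B_r(\lambda, \phi, t)$ evolves under differential rotation $\Omega(\lambda)$, meridional flow $v(\lambda)$ and supergranular diffusion $D$ as

\[
\frac{\partial B_r}{\partial t}
= -\Omega(\lambda)\frac{\partial B_r}{\partial \phi}
- \frac{1}{R_\odot \cos\lambda}\frac{\partial}{\partial \lambda}\bigl[v(\lambda)\, B_r \cos\lambda\bigr]
+ \frac{D}{R_\odot^2}\left[\frac{1}{\cos\lambda}\frac{\partial}{\partial \lambda}\left(\cos\lambda\,\frac{\partial B_r}{\partial \lambda}\right)
+ \frac{1}{\cos^2\lambda}\frac{\partial^2 B_r}{\partial \phi^2}\right],
\]

with the open flux then obtained from a potential (current-free) extrapolation of $B_r$ into the corona.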
Abstract:
The purpose of this paper is to investigate several analytical methods of solving the first passage (FP) problem for the Rouse model, the simplest model of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers' problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations, and measure the mean first-passage time $\tau(z)$ for the free end to reach a certain distance $z$ away from the origin. The results show that the mean FP time decreases as the Rouse chain is represented by more beads. Two scaling regimes of $\tau(z)$ are observed, with the transition between them varying as a function of chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain in which each chain segment is stretched, which is not particularly realistic. A new theory based on the well-known Freidlin-Wentzell theory is proposed, in which the dynamics is projected onto the minimal action path. The new theory predicts both scaling regimes correctly, but fails to get the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One of the applications of the polymer FP problem occurs in the context of branched polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner-McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.
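The direct simulations described above are conceptually simple; here is a minimal one-dimensional sketch in Python of an overdamped, end-tethered Rouse chain with a first-passage stopping condition. The 1D reduction, all parameter values, and the units are illustrative choices, not the authors' simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def rouse_fpt(n_beads, z, k=1.0, kT=1.0, dt=1e-3, t_max=1e3):
    """First time the free end of a tethered Rouse chain reaches `z`."""
    x = np.zeros(n_beads + 1)           # x[0] is the tethered bead
    noise_amp = np.sqrt(2.0 * kT * dt)  # overdamped Langevin, friction = 1
    for step in range(1, int(t_max / dt) + 1):
        f = np.zeros_like(x)
        f[1:-1] = k * (x[2:] - 2.0 * x[1:-1] + x[:-2])  # interior springs
        f[-1] = k * (x[-2] - x[-1])                     # free-end spring
        x[1:] += f[1:] * dt + noise_amp * rng.standard_normal(n_beads)
        if x[-1] >= z:
            return step * dt
    return None                         # no passage within t_max

# Crude estimate of the mean FP time from repeated direct runs.
times = [rouse_fpt(10, 4.0) for _ in range(50)]
times = [tau for tau in times if tau is not None]
print("mean first-passage time:", np.mean(times))
```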
Abstract:
Optimal state estimation is a method that requires minimising a weighted, nonlinear, least-squares objective function in order to obtain the best estimate of the current state of a dynamical system. Often the minimisation is non-trivial due to the large scale of the problem, the relative sparsity of the observations and the nonlinearity of the objective function. To simplify the problem, the solution is often found via a sequence of linearised objective functions. The condition number of the Hessian of the linearised problem is an important indicator of the convergence rate of the minimisation and the expected accuracy of the solution. In the standard formulation the convergence is slow, indicating an ill-conditioned objective function. A transformation to different variables is often used to ameliorate the conditioning of the Hessian by changing, or preconditioning, the Hessian. There is only sparse information in the literature describing the causes of ill-conditioning of the optimal state estimation problem and explaining the effect of preconditioning on the condition number. This paper derives descriptive theoretical bounds on the condition number of both the unpreconditioned and preconditioned systems in order to better understand the conditioning of the problem. We use these bounds to explain why the standard objective function is often ill-conditioned and why a standard preconditioning reduces the condition number. We also use the bounds on the preconditioned Hessian to understand the main factors that affect the conditioning of the system. We illustrate the results with simple numerical experiments.
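The variable transformation mentioned above is usually a control variable transform; as a generic illustration in standard notation (not necessarily the paper's), writing the state increment as $\delta x = B^{1/2} v$ converts the Hessian $S = B^{-1} + \mathbf{H}^{\mathrm T} R^{-1} \mathbf{H}$ into

\[
\hat{S} = I + B^{1/2}\,\mathbf{H}^{\mathrm T} R^{-1}\,\mathbf{H}\,B^{1/2},
\]

whose eigenvalues are all at least one, so that $\kappa(\hat{S}) = \lambda_{\max}(\hat{S})$; this is typically far smaller than $\kappa(S)$, which is why the standard preconditioning reduces the condition number.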
Abstract:
A mixed integer continuous nonlinear model and a solution method for the problem of orthogonally packing identical rectangles within an arbitrary convex region are introduced in the present work. The convex region is assumed to be made of an isotropic material in such a way that arbitrary rotations of the items, preserving the orthogonality constraint, are allowed. The solution method is based on a combination of branch and bound and active-set strategies for bound-constrained minimization of smooth functions. Numerical results show the reliability of the presented approach.
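To give a flavour of why such packing models are mixed integer, here is a textbook disjunctive non-overlap condition for identical axis-aligned $w \times h$ rectangles (a generic illustration only; the paper's actual formulation differs): items $i$ and $j$ with centres $(x_i, y_i)$ and $(x_j, y_j)$ do not overlap if and only if

\[
|x_i - x_j| \ge w \quad \text{or} \quad |y_i - y_j| \ge h,
\]

and encoding this either/or choice, together with the containment of every rectangle corner in the convex region and a common rotation angle, is what combines integer decisions with smooth nonlinear constraints.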
Abstract:
A novel global optimization method based on an Augmented Lagrangian framework is introduced for continuous constrained nonlinear optimization problems. At each outer iteration $k$ the method requires the $\varepsilon_k$-global minimization of the Augmented Lagrangian with simple constraints, where $\varepsilon_k \to \varepsilon$. Global convergence to an $\varepsilon$-global minimizer of the original problem is proved. The subproblems are solved using the $\alpha$BB method. Numerical experiments are presented.
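For reference, the Powell-Hestenes-Rockafellar Augmented Lagrangian that is standard in this line of work has the form below (quoted from the general literature as an illustration; the paper's precise definition may differ in detail). For $\min f(x)$ subject to $h(x) = 0$, $g(x) \le 0$ and simple constraints $x \in \Omega$,

\[
L_\rho(x, \lambda, \mu)
= f(x) + \frac{\rho}{2}\left[\sum_i \left(h_i(x) + \frac{\lambda_i}{\rho}\right)^{\!2}
+ \sum_j \max\!\left(0,\, g_j(x) + \frac{\mu_j}{\rho}\right)^{\!2}\right],
\]

and each outer iteration $k$ requires the $\varepsilon_k$-global minimization of $L_\rho$ over $\Omega$ before the multipliers $\lambda, \mu$ and penalty $\rho$ are updated.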
Abstract:
In this work, the plate bending formulation of the boundary element method (BEM), based on Reissner's hypothesis, is extended to the analysis of plates reinforced by beams, taking into account the membrane effects. The formulation is derived by assuming a zoned body where each sub-region defines a beam or a slab, all of them represented by a chosen reference surface. Equilibrium and compatibility conditions are automatically imposed by the integral equations, which treat this composed structure as a single body. In order to reduce the number of degrees of freedom, the problem values defined on the interfaces are written in terms of their values on the beam axis. Initially, separate equations are derived for the bending and stretching problems, but in the final system of equations the two problems are coupled and cannot be treated separately. Finally, some numerical examples, whose analytical results are known, are presented to show the accuracy of the proposed model.
Abstract:
The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures: Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters with the time evolution of concentration functions, is achieved via a Monte Carlo simulation procedure.

Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: Micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 Mbytes of RAM and 100 MB of hard disk
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: The investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans, and a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Of the pathways of entry of radionuclides into the human (or animal) body, ingestion is the most important because it is closely related to life-long alimentary (or dietary) habits. Those radionuclides which are able to enter the living cells by either metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and radiological protection. The time behavior of trace concentration in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The general multiple-compartment model (GMCM) is a powerful and widely accepted method for biokinetic studies, which allows the calculation of the concentration of trace elements in organs as a function of time when the flow parameters of the model are known. However, few biokinetics data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restriction on the complexity of the problem: This version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a trace is shorter than the volume rise time. Another restriction is related to the central-flux model: the model considered in the code assumes that there exists one central compartment (e.g., blood) that connects the flow with all compartments, and flow between the other compartments is not included.
Typical running time: Depends on the choice of calculations. Using the Derivative method the time is very short (a few minutes) for any number of compartments considered. When the Gauss-Marquardt iterative method is used, the calculation time can be approximately 5-6 hours when ~15 compartments are considered.
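As a concrete illustration of the central-flux compartment topology just described, here is a minimal forward-model sketch in Python. All rate constants and the compartment count are invented for the example; STATFLUX itself is Fortran-77 and fits such parameters to measured concentrations rather than assuming them.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Central-flux compartment model: compartment 0 (e.g. blood) exchanges
# tracer with every peripheral compartment; there is no direct flow
# between peripheral compartments.  Rate constants are illustrative.
k_out = np.array([0.5, 0.3, 0.1])    # central -> peripheral 1..3
k_in = np.array([0.2, 0.1, 0.05])    # peripheral 1..3 -> central

def rhs(t, c):
    """Linear kinetics dc/dt = K c for the central-flux topology."""
    central, periph = c[0], c[1:]
    d_periph = k_out * central - k_in * periph
    d_central = np.sum(k_in * periph) - np.sum(k_out) * central
    return np.concatenate(([d_central], d_periph))

# Acute exposure: all activity initially in the central compartment.
sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 50.0, 201))
print(sol.y[:, -1])            # final compartment concentrations
print(sol.y.sum(axis=0)[-1])   # total activity is conserved (= 1)
```

Fitting the rate constants to measured concentration curves would then be the inverse problem the abstract refers to.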
Abstract:
Many variational inequality problems (VIPs) can be reduced, by a compactification procedure, to a VIP on the canonical simplex. Reformulations of this problem are studied, including smooth reformulations with simple constraints and unconstrained reformulations based on the penalized Fischer-Burmeister function. It is proved that bounded level set results hold for these reformulations under quite general assumptions on the operator. Therefore, it can be guaranteed that minimization algorithms generate bounded sequences and, under monotonicity conditions, these algorithms necessarily find solutions of the original problem. Some numerical experiments are presented.
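For reference, the penalized Fischer-Burmeister function mentioned above is standard in the complementarity literature (quoted from that literature as an illustration): with $a_+ = \max(0, a)$ and a penalty parameter $\lambda \in (0, 1)$,

\[
\varphi_{\mathrm{FB}}(a, b) = \sqrt{a^2 + b^2} - a - b,
\qquad
\varphi_\lambda(a, b) = \lambda\,\varphi_{\mathrm{FB}}(a, b) + (1 - \lambda)\, a_+\, b_+,
\]

and $\varphi_\lambda(a, b) = 0$ if and only if $a \ge 0$, $b \ge 0$ and $ab = 0$, which is what allows complementarity conditions to be recast as the unconstrained minimization of a squared residual.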