61 results for Perturbation Problem
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space R^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter epsilon > 0) and a slow behavior. Using an approach similar to the one developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as epsilon goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as epsilon goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
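The aggregation step described above can be illustrated in a minimal way. The sketch below is hypothetical (the generator Q_fast for one class of fast regimes and the vector of regime-dependent slow rates are made-up toy data, not taken from the paper): it computes the quasi-stationary distribution of the fast chain and uses it to average the slow quantities into a single aggregated regime.

```python
import numpy as np
from scipy.linalg import null_space

def stationary_distribution(Q):
    """Stationary distribution nu of a CTMC generator Q (rows sum to zero):
    solves nu Q = 0 with nu >= 0 and sum(nu) = 1."""
    ns = null_space(Q.T)          # left null space of Q
    nu = np.abs(ns[:, 0])
    return nu / nu.sum()

def aggregate_rates(Q_fast, slow_rates):
    """Replace the regimes of one fast class by a single averaged regime:
    the slow rates are weighted by the quasi-stationary distribution."""
    nu = stationary_distribution(Q_fast)
    return nu @ slow_rates        # averaged slow rate for the aggregated regime

# Toy example: a 3-regime fast class with regime-dependent slow rates.
Q_fast = np.array([[-2.0,  1.0,  1.0],
                   [ 1.0, -3.0,  2.0],
                   [ 0.5,  0.5, -1.0]])
slow_rates = np.array([0.1, 0.4, 0.2])
print(aggregate_rates(Q_fast, slow_rates))
```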
Abstract:
In this work we show that the eigenvalues of the Dirichlet problem for the biharmonic operator are generically simple in the set of Z_2-symmetric regions of R^n, n >= 2, with a suitable topology. To accomplish this, we combine Baire's lemma, a generalised version of the transversality theorem due to Henry [Perturbation of the boundary in boundary value problems of PDEs, London Mathematical Society Lecture Note Series 318 (Cambridge University Press, 2005)], and the method of rapidly oscillating functions developed in [A. L. Pereira and M. C. Pereira, Mat. Contemp. 27 (2004) 225-241].
Abstract:
This paper addresses the capacitated lot sizing problem (CLSP) with a single stage composed of multiple plants, items, and periods, with setup carry-over among the periods. The CLSP is well studied and many heuristics have been proposed to solve it. Nevertheless, few studies have explored the multi-plant capacitated lot sizing problem (MPCLSP), which means that few solution methods have been proposed to solve it. Furthermore, to our knowledge, no study of the MPCLSP with setup carry-over is found in the literature. This paper presents a mathematical model and a GRASP (Greedy Randomized Adaptive Search Procedure) with path relinking for the MPCLSP with setup carry-over. This solution method extends and adapts a methodology previously applied to the problem without setup carry-over. Computational tests showed that incorporating the setup carry-over yields a significant improvement in the solution value, with only a small increase in computational time.
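The abstract does not reproduce the algorithmic details; the following is only a generic, hypothetical sketch of the GRASP scheme it builds on (greedy randomized construction followed by local search, without the path-relinking step), assuming caller-supplied cost and local_search functions.

```python
import random

def grasp(candidates, cost, local_search, alpha=0.3, iterations=100):
    """Generic GRASP loop: greedy randomized construction of a solution,
    local-search improvement, and retention of the best solution found."""
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        solution, remaining = [], list(candidates)
        while remaining:
            remaining.sort(key=cost)                    # greedy ranking of candidates
            rcl = max(1, int(alpha * len(remaining)))   # restricted candidate list size
            solution.append(remaining.pop(random.randrange(rcl)))
        solution = local_search(solution)
        total = sum(cost(x) for x in solution)
        if total < best_cost:
            best, best_cost = solution, total
    return best, best_cost
```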
Abstract:
Introduction: Work disability is a major consequence of rheumatoid arthritis (RA), associated not only with traditional disease activity variables, but also more significantly with demographic, functional, occupational, and societal variables. Recent reports suggest that the use of biologic agents offers potential for reduced work disability rates, but the conclusions are based on surrogate disease activity measures derived from studies primarily from Western countries. Methods: The Quantitative Standard Monitoring of Patients with RA (QUEST-RA) multinational database of 8,039 patients in 86 sites in 32 countries, 16 with high gross domestic product (GDP) (>24K US dollars (USD) per capita) and 16 with low GDP (<11K USD), was analyzed for work and disability status at onset and over the course of RA, and for the clinical status of patients who continued working or had stopped working in high-GDP versus low-GDP countries, according to all RA Core Data Set measures. Associations of work disability status with RA Core Data Set variables and indices were analyzed using descriptive statistics and regression analyses. Results: At the time of first symptoms, 86% of men (range 57%-100% among countries) and 64% (19%-87%) of women <65 years were working. More than one third (37%) of these patients reported subsequent work disability because of RA. Among 1,756 patients whose symptoms had begun during the 2000s, the probabilities of continuing to work were 80% (95% confidence interval (CI) 78%-82%) at 2 years and 68% (95% CI 65%-71%) at 5 years, with similar patterns in high-GDP and low-GDP countries. Patients who continued working versus stopped working had significantly better clinical status on all clinical status measures and patient self-report scores, with similar patterns in high-GDP and low-GDP countries. However, patients who had stopped working in high-GDP countries had better clinical status than patients who continued working in low-GDP countries. The most significant identifier of work disability in all subgroups was the Health Assessment Questionnaire (HAQ) functional disability score. Conclusions: Work disability rates remain high among people with RA during this millennium. In low-GDP countries, people remain working with high levels of disability and disease activity. Cultural and economic differences between societies affect work disability as an outcome measure for RA.
Abstract:
A smooth inflaton potential is generally assumed when calculating the primordial power spectrum, implicitly assuming that a very small oscillation in the inflaton potential creates a negligible change in the predicted halo mass function. We show that this is not true. We find that a small oscillating perturbation in the inflaton potential in the slow-roll regime can significantly alter the predicted number of small halos. A class of models derived from supergravity theories gives rise to inflaton potentials with a large number of steps, and many trans-Planckian effects may generate oscillations in the primordial power spectrum. The potentials we study are the simple quadratic (chaotic inflation) potential with superimposed small oscillations for small field values. Without leaving the slow-roll regime, we find that for a wide choice of parameters the predicted number of halos changes appreciably. For oscillations beginning in the 10^7-10^8 M_⊙ range, for example, we find that only a 5% change in the amplitude of the chaotic potential causes a 50% suppression of the number of halos with masses between 10^7 and 10^8 M_⊙, and an increase in the number of halos with masses <10^6 M_⊙ by factors of ~15-50. We suggest that this might be a solution to the problem of the lack of observed dwarf galaxies in the 10^7-10^8 M_⊙ range. This might also be a solution to the reionization problem, where a very large number of Population III stars in low-mass halos are required.
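As a purely illustrative parameterization, assuming a quadratic potential with a small superimposed sinusoidal oscillation (the amplitude, frequency, and functional form below are placeholders, not the values used in the paper):

```python
import numpy as np

def inflaton_potential(phi, m=1.0e-6, amp=0.05, freq=50.0):
    """Chaotic (quadratic) inflaton potential with a small superimposed
    oscillation: V(phi) = 0.5 * m^2 * phi^2 * [1 + amp * sin(freq * phi)].
    Quantities in reduced Planck units; all parameter values are illustrative."""
    return 0.5 * m**2 * phi**2 * (1.0 + amp * np.sin(freq * phi))

phi = np.linspace(0.1, 15.0, 500)
V = inflaton_potential(phi)
```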
Abstract:
Aims. An analytical solution for the discrepancy between observed core-like profiles and predicted cusp profiles in dark matter halos is studied. Methods. We calculate the distribution function for Navarro-Frenk-White halos and extract energy from the distribution, taking into account the effects of baryonic physics processes. Results. We show with a simple argument that we can reproduce the evolution of a cusp to a flat density profile by a decrease of the initial potential energy.
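For reference, the Navarro-Frenk-White profile mentioned above has the standard cuspy form; a minimal sketch with placeholder scale parameters:

```python
import numpy as np

def nfw_density(r, rho_s=1.0, r_s=1.0):
    """Standard NFW density profile: rho(r) = rho_s / [(r/r_s) * (1 + r/r_s)^2],
    which diverges as 1/r toward the centre (the 'cusp' discussed above)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)
```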
Abstract:
We consider a binary Bose-Einstein condensate (BEC) described by a system of two-dimensional (2D) Gross-Pitaevskii equations with the harmonic-oscillator trapping potential. The intraspecies interactions are attractive, while the interaction between the species may have either sign. The same model applies to the copropagation of bimodal beams in photonic-crystal fibers. We consider a family of trapped hidden-vorticity (HV) modes in the form of bound states of two components with opposite vorticities S_1,2 = +/-1, the total angular momentum being zero. A challenging problem is the stability of the HV modes. By means of a linear-stability analysis and direct simulations, stability domains are identified in a relevant parameter plane. In direct simulations, stable HV modes feature robustness against large perturbations, while unstable ones split into fragments whose number is identical to the azimuthal index of the fastest growing perturbation eigenmode. Conditions allowing for the creation of the HV modes in the experiment are discussed too. For comparison, a similar but simpler problem is studied in an analytical form, viz., the modulational instability of an HV state in a one-dimensional (1D) system with periodic boundary conditions (this system models a counterflow in a binary BEC mixture loaded into a toroidal trap or a bimodal optical beam coupled into a cylindrical shell). We demonstrate that the stabilization of the 1D HV modes is impossible, which stresses the significance of the stabilization of the HV modes in the 2D setting.
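Schematically, and only as a sketch in scaled units (the normalization and coupling conventions are assumed here, not taken from the paper), the coupled 2D Gross-Pitaevskii equations and the hidden-vorticity ansatz read:

```latex
% Schematic coupled 2D GPEs with harmonic trap, attractive intraspecies interaction,
% and interspecies coupling g of either sign (scaled units assumed)
i\,\partial_t \psi_{1,2} = \left[-\tfrac{1}{2}\nabla_\perp^2 + \tfrac{1}{2}\,\Omega^2 r^2
   - |\psi_{1,2}|^2 + g\,|\psi_{2,1}|^2\right]\psi_{1,2},
\qquad
\psi_{1,2}^{\mathrm{HV}}(r,\theta,t) = e^{-i\mu_{1,2} t}\,\phi_{1,2}(r)\,e^{\pm i\theta}.
```

The opposite phase windings e^{±iθ} give the vorticities S_1,2 = +/-1 with zero total angular momentum, as stated in the abstract.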
Abstract:
This is a more detailed version of our recent paper where we proposed, from first principles, a direct method for evaluating the exact fermion propagator in the presence of a general background field at finite temperature. This can, in turn, be used to determine the finite temperature effective action for the system. As applications, we discuss the complete one loop finite temperature effective actions for 0+1 dimensional QED as well as for the Schwinger model in detail. These effective actions, which are derived in the real time (closed time path) formalism, generate systematically all the Feynman amplitudes calculated in thermal perturbation theory and also show that the retarded (advanced) amplitudes vanish in these theories. Various other aspects of the problem are also discussed in detail.
Abstract:
Cosmological analyses based on currently available observations are unable to rule out a sizeable coupling between dark energy and dark matter. However, the signature of the coupling is not easy to grasp, since the coupling is degenerate with other cosmological parameters, such as the dark energy equation of state and the dark matter abundance. We discuss possible ways to break such degeneracy. Based on the perturbation formalism, we carry out the global fitting by using the latest observational data and get a tight constraint on the interaction between dark sectors. We find that the appropriate interaction can alleviate the coincidence problem.
Abstract:
The energy spectrum of an electron confined in a quantum dot (QD) with a three-dimensional anisotropic parabolic potential in a tilted magnetic field was found analytically. The theory describes exactly the mixing of in-plane and out-of-plane motions of an electron caused by a tilted magnetic field, which could be seen, for example, in the level anticrossing. For charged QDs in a tilted magnetic field we predict three strong resonant lines in the far-infrared-absorption spectra.
Abstract:
We study the free fall of a quantum particle in the context of noncommutative quantum mechanics (NCQM). Assuming noncommutativity of the canonical type between the coordinates of a two-dimensional configuration space, we consider a neutral particle trapped in a gravitational well and exactly solve the energy eigenvalue problem. By resorting to experimental data from the GRANIT experiment, in which the first energy levels of freely falling quantum ultracold neutrons were determined, we impose an upper bound on the noncommutativity parameter. We also investigate the time of flight of a quantum particle moving in a uniform gravitational field in NCQM. This is related to the weak equivalence principle. As we consider stationary energy eigenstates, i.e., delocalized states, the time of flight must be measured by a quantum clock suitably coupled to the particle. By considering the clock as a small perturbation, we solve the associated (stationary) scattering problem and show that the time of flight is equal to the classical result when the measurement is made far from the turning point. This result is interpreted as an extension of the equivalence principle to the realm of NCQM. (C) 2010 American Institute of Physics. [doi:10.1063/1.3466812]
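For context, in the commutative limit the spectrum of a particle bouncing above a mirror in a uniform gravitational field (the GRANIT configuration) is set by the zeros of the Airy function. A short sketch with standard neutron constants follows; it reproduces the textbook quantum-bouncer levels, not the paper's noncommutative corrections.

```python
import numpy as np
from scipy.special import ai_zeros
from scipy.constants import hbar, g, eV

m_n = 1.674927e-27                      # neutron mass [kg]
a_n = ai_zeros(5)[0]                    # first five (negative) zeros of the Airy function Ai
E_n = (hbar**2 * m_n * g**2 / 2.0) ** (1.0 / 3.0) * np.abs(a_n)
print(E_n / eV * 1e12)                  # levels in peV: ~1.41, 2.46, 3.32, 4.08, 4.78
```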
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson-related processes.
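The paper's specific graph construction is not given here; as a generic illustration of solving a max-flow instance with an augmenting-path (Ford-Fulkerson-type) algorithm, a hypothetical sketch using networkx (the graph, capacities, and node names are made up):

```python
import networkx as nx
from networkx.algorithms.flow import edmonds_karp

# Hypothetical capacitated graph; in the paper, nodes and capacities would be
# derived from the two samples of context trees.
G = nx.DiGraph()
G.add_edge("s", "u", capacity=3.0)
G.add_edge("s", "v", capacity=1.0)
G.add_edge("u", "v", capacity=2.0)
G.add_edge("u", "t", capacity=1.0)
G.add_edge("v", "t", capacity=3.0)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t", flow_func=edmonds_karp)
print(flow_value)   # 4.0 for this toy instance
```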
Abstract:
The width of a closed convex subset of n-dimensional Euclidean space is the distance between two parallel supporting hyperplanes. The Blaschke-Lebesgue problem consists of minimizing the volume in the class of convex sets of fixed constant width and is still open in dimension n >= 3. In this paper we describe a necessary condition that the minimizer of the Blaschke-Lebesgue problem must satisfy in dimension n = 3: we prove that the smooth components of the boundary of the minimizer have their smaller principal curvature constant and are therefore either spherical caps or pieces of tubes (canal surfaces).
Abstract:
The first problem of the Seleucid mathematical cuneiform tablet BM 34568 calculates the diagonal of a rectangle from its sides without resorting to the Pythagorean rule. For this reason, it has been a source of discussion among specialists ever since its first publication, but so far no consensus about its mathematical meaning has been reached. This paper presents two new interpretations of the scribe's procedure, based on the assumption that he was able to reduce the problem to a standard Mesopotamian question about reciprocal numbers. These new interpretations are then linked to interpretations of the Old Babylonian tablet Plimpton 322 and to the presence of Pythagorean triples in the contexts of Old Babylonian and Hellenistic mathematics. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
This paper presents a new approach, the predictor-corrector modified barrier approach (PCMBA), to minimize the active losses in power system planning studies. In the PCMBA, the inequality constraints are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter and treated by the modified barrier method. The first-order necessary conditions of the Lagrangian function are solved by a predictor-corrector Newton method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, reaching the limits of the inequality constraints. The feasibility of the proposed approach is demonstrated using various IEEE test systems and a realistic 2256-bus power system corresponding to the Brazilian South-Southeastern interconnected system. The results show that combining the predictor-corrector method with the pure modified barrier approach accelerates convergence in terms of both the number of iterations and the computational time. (C) 2008 Elsevier B.V. All rights reserved.
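Schematically, and only as a hedged sketch of the general slack-variable/modified-barrier construction (not necessarily the paper's exact formulation), an inequality constraint is first converted into an equality with a positive auxiliary variable, which the barrier parameter mu then relaxes through a shifted logarithmic barrier:

```latex
% Generic slack-variable / modified-barrier sketch (illustrative form only)
g_i(x) \le 0 \;\longrightarrow\; g_i(x) + s_i = 0,\quad s_i \ge 0,
\qquad
\min_x\; f(x) \;-\; \mu \sum_i \ln\!\left(1 + \frac{s_i}{\mu}\right)
\quad \text{s.t.}\quad g_i(x) + s_i = 0 .
```

The first-order conditions of the Lagrangian of this relaxed problem are then solved with predictor-corrector Newton steps, as described in the abstract.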