531 results for invariance
Abstract:
Feedback stabilization of an ensemble of non-interacting half-spins described by the Bloch equations is considered. This system may be seen as an interesting example of an infinite-dimensional system with continuous spectrum. We propose an explicit feedback law that asymptotically stabilizes the system around a uniform state of spin +1/2 or -1/2. The proof of convergence is carried out locally around the equilibrium in the H^1 topology. This local convergence is shown to be a weak asymptotic convergence for the H^1 topology and thus a strong convergence for the C^0 topology. The proof relies on an adaptation of the LaSalle invariance principle to infinite-dimensional systems. Numerical simulations illustrate the efficiency of these feedback laws, even for initial conditions far from the equilibrium. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
On the basis of the full analytical solution of the overall unitary dynamics, the time evolution of entanglement is studied in a simple bipartite model system evolving unitarily from a pure initial state. The system consists of two particles in one spatial dimension, bound by harmonic forces, with its free center of mass initially localized in space in a minimum-uncertainty wavepacket. The existence of such initial states, in which the bound particles are not entangled, is discussed. Galilean invariance of the system ensures that the dynamics of entanglement between the two particles is independent of the wavepacket mean momentum. In fact, as shown, it is driven by the dispersive free dynamics of the center of mass, and evolves on a time scale that depends on the interparticle interaction in an essential way.
Abstract:
Scaling methods allow a single solution to Richards' equation (RE) to suffice for numerous specific cases of water flow in unsaturated soils. During the past half-century, many such methods were developed for similar soils. In this paper, a new method is proposed for scaling RE for a wide range of dissimilar soils. Exponential-power (EP) functions are used to reduce the dependence of the scaled RE on the soil hydraulic properties. To evaluate the proposed method, the scaled RE was solved numerically for two test cases: infiltration into relatively dry soils having initially uniform water content distributions, and gravity-dominant drainage from initially wet soil profiles. Although the results for four texturally different soils ranging from sand to heavy clay (adopted from the UNSODA database) showed that the scaled solutions were invariant for a wide range of flow conditions, slight deviations were observed when the soil profile was initially wet in the infiltration case or deeply wet in the drainage case. The invariance of the scaled RE makes it possible to generalize a single solution of RE to many dissimilar soils and conditions. Such a procedure reduces the numerical calculations and provides additional opportunities for solving the highly nonlinear RE for unsaturated water flow in soils.
Abstract:
We propose an alternative, nonsingular cosmic scenario based on gravitationally induced particle production. The model is an attempt to evade the coincidence and cosmological constant problems of the standard model (Lambda CDM) and also to connect the early and late time accelerating stages of the Universe. Our space-time emerges from a pure initial de Sitter stage, thereby providing a natural solution to the horizon problem. Subsequently, due to an instability provoked by the production of massless particles, the Universe evolves smoothly to the standard radiation dominated era, thereby ending the production of radiation as required by conformal invariance. Next, the radiation becomes subdominant, with the Universe entering the cold dark matter dominated era. Finally, the negative pressure associated with the creation of cold dark matter (CCDM model) particles accelerates the expansion and drives the Universe to a final de Sitter stage. The late time cosmic expansion history of the CCDM model is exactly like that of the standard Lambda CDM model; however, there is no dark energy. The model evolves between two limiting (early and late time) de Sitter regimes. All the stages are also discussed in terms of a scalar field description. This complete scenario is fully determined by two extreme energy densities, or equivalently, by the associated de Sitter Hubble scales, connected by rho_I/rho_f = (H_I/H_f)^2 ~ 10^122, a result that has no correlation with the cosmological constant problem. We also study the linear growth of matter perturbations at the final accelerating stage. It is found that the CCDM growth index can be written as a function of the Lambda growth index, gamma_Lambda ≈ 6/11. In this framework, we also compare the observed growth rate of clustering with that predicted by the current CCDM model.
Performing a chi^2 statistical test, we show that the CCDM model provides growth rates that match the observed growth rate of structure sufficiently well.
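The growth-index statement above can be made concrete with a short sketch, assuming the standard Omega_m(z)^gamma parameterization of the linear growth rate; the fiducial matter fraction Omega_m0 = 0.3 is an illustrative value, not taken from the abstract:

```python
def omega_m(z, om0=0.3):
    """Matter density parameter Omega_m(z) in a flat background with a
    Lambda-like late-time accelerating component."""
    a3 = (1.0 + z) ** 3
    return om0 * a3 / (om0 * a3 + (1.0 - om0))

def growth_rate(z, gamma=6.0 / 11.0, om0=0.3):
    """Growth rate f(z) = Omega_m(z)^gamma, with gamma_Lambda ~ 6/11 as in
    the abstract's Lambda growth index."""
    return omega_m(z, om0) ** gamma
```

At high redshift the matter fraction approaches unity, so f(z) tends to 1, while at z = 0 it drops to roughly Omega_m0^(6/11); a chi^2 comparison against measured growth rates would evaluate this curve at the survey redshifts.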
Abstract:
We construct harmonic functions on random graphs given by Delaunay triangulations of ergodic point processes as the limit of the zero-temperature harness process. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
A non-Markovian one-dimensional random walk model is studied with emphasis on the phase diagram, showing all the diffusion regimes along with the exactly determined critical lines. The model, known as the Alzheimer walk, is endowed with memory-controlled diffusion, responsible for the model's long-range correlations, and is characterized by a rich variety of diffusive regimes. The importance of this model is that superdiffusion arises not from memory per se, but rather from the loss of memory. The recently reported numerical and analytical estimates of the Hurst exponent are reviewed here. We report the finding of two previously overlooked phases, namely evanescent log-periodic diffusion and log-periodic diffusion with escape, both with Hurst exponent H = 1/2. In the former, the log-periodicity gets damped, whereas in the latter the first moment diverges. These phases further enrich the already intricate phase diagram. The results are discussed in the context of phase transitions, aging phenomena, and symmetry breaking.
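A minimal simulation sketch of a memory-loss walk of this type, assuming the commonly used formulation (at each step the walker recalls a step drawn uniformly from the first fraction f of its history and repeats it with probability p, reversing it otherwise); the function name and defaults are illustrative, not taken from the abstract:

```python
import random

def alzheimer_walk(n_steps, p, f, seed=0):
    """Simulate one trajectory: the walker only remembers the first
    fraction f of its own step history, repeats a uniformly recalled
    step with probability p, and reverses it otherwise."""
    rng = random.Random(seed)
    steps = [rng.choice([-1, 1])]           # first step is random
    pos = [steps[0]]
    for t in range(2, n_steps + 1):
        horizon = max(1, int(f * (t - 1)))  # memory limited to first f-fraction
        recalled = steps[rng.randrange(horizon)]
        step = recalled if rng.random() < p else -recalled
        steps.append(step)
        pos.append(pos[-1] + step)
    return pos
```

With p = 1 and f = 1 every step is a copy of an earlier one and hence of the very first step, giving ballistic motion; sweeping p and f and fitting the growth of the mean-square displacement is how the diffusive phases of such models are usually mapped out numerically.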
Abstract:
Using the density matrix renormalization group, we calculated the finite-size corrections of the entanglement alpha-Renyi entropy of a single interval for several critical quantum chains. We considered models with U(1) symmetry, such as the spin-1/2 XXZ and spin-1 Fateev-Zamolodchikov models, as well as models with discrete symmetries, such as the Ising, the Blume-Capel, and the three-state Potts models. These corrections contain physically relevant information. Their amplitudes, which depend on the value of alpha, are related to the dimensions of operators in the conformal field theory governing the long-distance correlations of the critical quantum chains. The obtained results, together with earlier exact and numerical ones, allow us to formulate some general conjectures about the operator responsible for the leading finite-size correction of the alpha-Renyi entropies. We conjecture that the exponent of the leading finite-size correction of the alpha-Renyi entropies is p_alpha = 2X_epsilon/alpha for alpha > 1 and p_1 = nu, where X_epsilon denotes the dimension of the energy operator of the model and nu = 2 for all the models.
Abstract:
In many applications of lifetime data analysis, it is important to perform inference about the change-point of the hazard function. The change-point could be a maximum for unimodal hazard functions or a minimum for bathtub-shaped hazard functions, and is usually of great interest in medical or industrial applications. For lifetime distributions where this change-point of the hazard function can be calculated analytically, its maximum likelihood estimator is easily obtained from the invariance property of maximum likelihood estimators. From the asymptotic normality of the maximum likelihood estimators, confidence intervals can also be obtained. Considering the exponentiated Weibull distribution for the lifetime data, we have different forms for the hazard function: constant, increasing, unimodal, decreasing, or bathtub-shaped. This model gives great flexibility of fit, but there are no analytic expressions for the change-point of the hazard function. We therefore consider the use of Markov chain Monte Carlo (MCMC) methods to obtain posterior summaries for the change-point of the hazard function under the exponentiated Weibull distribution.
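Since the change-point has no closed form here, even a fixed-parameter evaluation must locate it numerically. A minimal sketch, assuming the usual exponentiated Weibull parameterization F(t) = [1 - exp(-(t/sigma)^alpha)]^theta and a simple grid search (parameter values chosen only to produce a unimodal hazard for illustration):

```python
import math

def ew_hazard(t, alpha, theta, sigma=1.0):
    """Hazard of the exponentiated Weibull, F(t) = [1 - exp(-(t/sigma)^alpha)]^theta."""
    u = (t / sigma) ** alpha
    g = 1.0 - math.exp(-u)                  # baseline Weibull CDF
    F = g ** theta
    f = theta * (alpha / sigma) * (t / sigma) ** (alpha - 1.0) \
        * math.exp(-u) * g ** (theta - 1.0)
    return f / (1.0 - F)

def hazard_changepoint(alpha, theta, sigma=1.0, lo=1e-2, hi=10.0, n=2000):
    """Grid-search the change-point (here, the mode of a unimodal hazard)."""
    grid = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return max(grid, key=lambda t: ew_hazard(t, alpha, theta, sigma))
```

In an MCMC analysis, this kind of numerical change-point computation would be applied to each posterior draw of (alpha, theta, sigma), and the resulting sample summarized to give the posterior distribution of the change-point.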
Abstract:
An overview is given of the limitations of Luttinger liquid theory in describing the real time equilibrium dynamics of critical one-dimensional systems with nonlinear dispersion relation. After exposing the singularities of perturbation theory in band curvature effects that break the Lorentz invariance of the Tomonaga-Luttinger model, the origin of high frequency oscillations in the long time behaviour of correlation functions is discussed. The notion that correlations decay exponentially at finite temperature is challenged by the effects of diffusion in the density-density correlation due to umklapp scattering in lattice models.
Abstract:
We analytically study the input-output properties of a neuron whose active dendritic tree, modeled as a Cayley tree of excitable elements, is subjected to Poisson stimulus. Both single-site and two-site mean-field approximations incorrectly predict a nonequilibrium phase transition which is not allowed in the model. We propose an excitable-wave mean-field approximation which shows good agreement with previously published simulation results [Gollo et al., PLoS Comput. Biol. 5, e1000402 (2009)] and accounts for finite-size effects. We also discuss the relevance of our results to experiments in neuroscience, emphasizing the role of active dendrites in the enhancement of dynamic range and in gain control modulation.
Abstract:
The recently announced Higgs boson discovery marks the dawn of the direct probing of the electroweak symmetry breaking sector. Sorting out the dynamics responsible for electroweak symmetry breaking now requires probing the Higgs boson interactions and searching for additional states connected to this sector. In this work, we analyze the constraints on Higgs boson couplings to the Standard Model (SM) gauge bosons using the available data from the Tevatron and the LHC. We work in a model-independent framework, expressing the departure of the Higgs boson couplings to gauge bosons through dimension-six operators. This allows for independent modifications of its couplings to gluons, photons, and weak gauge bosons while still preserving SM gauge invariance. Our results indicate that the best overall agreement with data is obtained if the cross section of Higgs boson production via gluon fusion is suppressed with respect to its SM value and the Higgs boson branching ratio into two photons is enhanced, while keeping the production and decays associated with couplings to weak gauge bosons close to their SM predictions.
Abstract:
Planck scale physics may influence the evolution of cosmological fluctuations in the early stages of cosmological evolution. Because of the quasiexponential redshifting, which occurs during an inflationary period, the physical wavelengths of comoving scales that correspond to the present large-scale structure of the Universe were smaller than the Planck length in the early stages of the inflationary period. This trans-Planckian effect was studied before using toy models. The Horava-Lifshitz (HL) theory offers the chance to study this problem in a candidate UV complete theory of gravity. In this paper we study the evolution of cosmological perturbations according to HL gravity assuming that matter gives rise to an inflationary background. As is usually done in inflationary cosmology, we assume that the fluctuations originate in their minimum energy state. In the trans-Planckian region the fluctuations obey a nonlinear dispersion relation of Corley-Jacobson type. In the "healthy extension" of HL gravity there is an extra degree of freedom which plays an important role in the UV region but decouples in the IR, and which influences the cosmological perturbations. We find that in spite of these important changes compared to the usual description, the overall scale invariance of the power spectrum of cosmological perturbations is recovered. However, we obtain oscillations in the spectrum as a function of wave number with a relative amplitude of order unity and with an effective frequency which scales nonlinearly with wave number. Taking the usual inflationary parameters we find that the frequency of the oscillations is so large as to render the effect difficult to observe.
Abstract:
We show, in the imaginary time formalism, that the temperature dependent parts of all the retarded (advanced) amplitudes vanish in the Schwinger model. We trace this behavior to the CPT invariance of the theory and give a physical interpretation of this result in terms of forward scattering amplitudes of on-shell thermal particles.
Abstract:
The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important, and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure used. In this work we make a somewhat surprising claim: there is an invariance that the community seems to have missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by "suggesting" to the clustering algorithm that subjectively similar, but complex, objects belong in a sparser and larger-diameter cluster than is truly warranted. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms.
We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
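The idea summarized above admits a compact sketch: stretch the Euclidean distance by the ratio of the two series' complexities. A minimal version, assuming the commonly used complexity estimate based on summed squared successive differences (the epsilon guard for flat series is our addition, not part of the abstract):

```python
import math

def complexity_estimate(x):
    """Complexity estimate of a series: the length of the polyline it
    traces, via the root of summed squared successive differences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x[1:])))

def cid(q, c):
    """Complexity-invariant distance: Euclidean distance multiplied by
    the ratio of the larger to the smaller complexity estimate."""
    ed = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, c)))
    ce_q, ce_c = complexity_estimate(q), complexity_estimate(c)
    cf = max(ce_q, ce_c) / max(min(ce_q, ce_c), 1e-12)  # guard flat series
    return ed * cf
```

Because the correction factor is always at least 1, cid(q, c) >= the plain Euclidean distance, which is what makes the measure lower-boundable and hence compatible with existing indexing schemes.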
Abstract:
This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods. Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules. Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments. Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by the use of the method proposed in paper III and extended in paper IV. Paper V addresses a generic problem in classification; namely, how to measure the goodness of different data representations, so that the best classifier may be constructed. Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects of variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to direct which experiments to perform, and how to perform them (papers I and II). After experiments are performed, the variance of inferences from the measured data is affected by the method of data analysis (papers III-V).