32 results for "maximum Lyapunov exponent"

in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance: 100.00%

Abstract:

Control of a chaotic system by homogeneous nonlinear driving, when a conditional Lyapunov exponent is zero, may give rise to special and interesting synchronization-like behaviors in which the response evolves in perfect correlation with the drive. Among them are the amplification of the drive attractor and its shift to a different region of phase space. In this paper, these synchronization-like behaviors are discussed and demonstrated by computer simulation of the Lorenz model [E. N. Lorenz, J. Atmos. Sci. 20, 130 (1963)] and the double scroll [T. Matsumoto, L. O. Chua, and M. Komuro, IEEE Trans. CAS CAS-32, 798 (1985)].
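The homogeneous-driving idea can be sketched with the classic Pecora-Carroll x-driven Lorenz response (an illustrative configuration with negative conditional Lyapunov exponents, not necessarily the coupling used in the paper): the response copies the (y, z) subsystem and receives x from the drive, and locks onto it.

```python
# Illustrative sketch (not the paper's code): Pecora-Carroll homogeneous
# driving of the Lorenz model with forward-Euler integration. The (y, z)
# response subsystem has negative conditional Lyapunov exponents, so the
# synchronization error decays to zero.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, steps = 0.001, 50_000

x, y, z = 1.0, 1.0, 1.0          # drive state
yr, zr = -5.0, 20.0              # response (y, z), started far away

err0 = abs(y - yr) + abs(z - zr)
for _ in range(steps):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    # response subsystem driven by the drive's x signal
    dyr = x * (rho - zr) - yr
    dzr = x * yr - beta * zr
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    yr, zr = yr + dt * dyr, zr + dt * dzr
err1 = abs(y - yr) + abs(z - zr)
```

For the error variables e_y = y - yr, e_z = z - zr one gets d/dt (e_y² + e_z²)/2 = -e_y² - β e_z² < 0, so decay is guaranteed here; the zero-conditional-exponent cases discussed in the paper sit exactly at the boundary of this behavior.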

Relevance: 80.00%

Abstract:

The McMillan map is a one-parameter family of integrable symplectic maps of the plane, for which the origin is a hyperbolic fixed point with a homoclinic loop, with small Lyapunov exponent when the parameter is small. We consider a perturbation of the McMillan map for which we show that the loop breaks into two invariant curves which are exponentially close to each other and which intersect transversely along two primary homoclinic orbits. We compute the asymptotic expansion of several quantities related to the splitting, namely the Lazutkin invariant and the area of the lobe between two consecutive primary homoclinic points. Complex matching techniques lie at the core of this work. The coefficients involved in the expansion have a resurgent origin, as shown in [MSS08].
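The integrability of the unperturbed map can be checked directly: the standard form of the McMillan map preserves a biquadratic polynomial, which the sketch below verifies numerically (the parameter value and initial point are illustrative, and the paper's perturbation is not included).

```python
# Sketch (assumed standard form of the McMillan map, not the paper's
# perturbed version): (x, y) -> (y, -x + 2*mu*y / (1 + y**2)).
# The biquadratic polynomial below is conserved along orbits, which is
# what makes the unperturbed map integrable.
def mcmillan(x, y, mu):
    return y, -x + 2.0 * mu * y / (1.0 + y * y)

def invariant(x, y, mu):
    return x * x * y * y + x * x + y * y - 2.0 * mu * x * y

mu = 1.02                 # |mu| > 1: the origin is a hyperbolic fixed point
x, y = 0.1, 0.1           # illustrative point near the homoclinic loop
c0 = invariant(x, y, mu)
for _ in range(1000):
    x, y = mcmillan(x, y, mu)
drift = abs(invariant(x, y, mu) - c0)
```

The orbit wanders along a level set of the invariant, so `drift` stays at the level of floating-point rounding even though nearby orbits separate exponentially (Lyapunov exponent log(μ + √(μ² − 1)), small for μ close to 1, as in the abstract).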

Relevance: 80.00%

Abstract:

A peculiar type of synchronization has been found when two van der Pol-Duffing oscillators, evolving in different chaotic attractors, are coupled. As the coupling increases, the frequencies of the two oscillators remain different, while a synchronized modulation of the amplitudes of a signal of each system develops, and a null Lyapunov exponent of the uncoupled systems becomes negative and gradually larger in absolute value. This phenomenon is characterized by an appropriate correlation function between the returns of the signals, and interpreted in terms of the mutual excitation of new frequencies in the oscillators' power spectra. This form of synchronization also occurs in other systems, but it shows up mixed with, or screened by, other forms of synchronization, as illustrated in this paper by means of examples of the dynamic behavior observed for three other models of chaotic oscillators.

Relevance: 20.00%

Abstract:

When using a polynomial approximating function, the most contentious aspect of the Heat Balance Integral Method is the choice of power of the highest order term. In this paper we employ a method recently developed for thermal problems, where the exponent is determined during the solution process, to analyse Stefan problems. This is achieved by minimising an error function. The solution requires no knowledge of an exact solution and generally produces significantly better results than all previous HBI models. The method is illustrated by first applying it to standard thermal problems. A Stefan problem with an analytical solution is then discussed and results compared to the approximate solution. An ablation problem is also analysed and results compared against a numerical solution. In both examples the agreement is excellent. A Stefan problem where the boundary temperature increases exponentially is then analysed; this highlights the difficulties that can be encountered with a time-dependent boundary condition. Finally, melting with a time-dependent flux is briefly analysed, without analytical or numerical results to assess the accuracy.
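The exponent-selection idea can be sketched on the canonical fixed-temperature problem u_t = u_xx with u(0, t) = 1 (an assumed illustration; the paper applies the same idea to Stefan problems). With the profile u = (1 - x/d)^n and the heat-balance result d² = 2n(n+1)t, the exponent is chosen to minimise the integrated squared residual of the heat equation:

```python
# Sketch of exponent selection by error minimisation for the canonical
# fixed-temperature problem (assumed here for illustration). Profile
# u = (1 - x/d)^n with d^2 = 2n(n+1)t from the heat balance integral;
# the exponent n minimises E(n) = int (u_t - u_xx)^2 dx.

def residual_norm(n, m=4000):
    # E(n) evaluated at t = 1, in the similarity variable xi = x/d.
    d2 = 2.0 * n * (n + 1.0)
    d = d2 ** 0.5
    total, h = 0.0, 1.0 / m
    for i in range(m):
        xi = (i + 0.5) * h                      # midpoint rule avoids xi = 1
        ut = 0.5 * n * xi * (1.0 - xi) ** (n - 1.0)
        uxx = n * (n - 1.0) * (1.0 - xi) ** (n - 2.0) / d2
        total += (ut - uxx) ** 2 * h
    return d * total                            # dx = d * dxi

# coarse scan for the minimising exponent
ns = [1.7 + 0.02 * k for k in range(91)]        # n = 1.7 .. 3.5
errs = [residual_norm(n) for n in ns]
n_opt = ns[errs.index(min(errs))]
```

The minimiser falls between the classical guesses n = 2 and n = 3, illustrating why fixing the exponent a priori is contentious; in the paper the same minimisation is carried out within the Stefan-problem formulation.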

Relevance: 20.00%

Abstract:

Graph pebbling is a network model for studying whether a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding whether the pebbling number is at most k is Π₂^P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than in previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than those given in previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
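The pebbling move and the pebbling number can be made concrete with a small brute-force search (an illustrative helper, unrelated to the Weight Function Lemma itself and only viable on tiny graphs, consistent with the hardness results quoted above):

```python
# Brute-force sketch: decide solvability of a pebbling demand by trying
# every move (each move removes 2 pebbles from one endpoint of an edge
# and adds 1 at the other), then compute the pebbling number of a tiny
# graph by exhausting all placements.
from itertools import combinations_with_replacement

def can_reach(dist, target, edges):
    # dist: tuple of pebble counts per vertex
    if dist[target] >= 1:
        return True
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            if dist[a] >= 2:                  # a legal pebbling move
                nxt = list(dist)
                nxt[a] -= 2
                nxt[b] += 1
                if can_reach(tuple(nxt), target, edges):
                    return True
    return False

def pebbling_number(n_vertices, edges):
    t = 1
    while True:
        ok = True
        for placement in combinations_with_replacement(range(n_vertices), t):
            dist = [0] * n_vertices
            for v in placement:
                dist[v] += 1
            if not all(can_reach(tuple(dist), tgt, edges)
                       for tgt in range(n_vertices)):
                ok = False
                break
        if ok:
            return t
        t += 1

# path on 3 vertices: 0 - 1 - 2
pi_p3 = pebbling_number(3, [(0, 1), (1, 2)])
```

For the 3-vertex path the search returns 4 = 2^diameter (three pebbles on one end cannot deliver a pebble to the other), matching the known pebbling number of paths.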

Relevance: 20.00%

Abstract:

We study the effect of strong heterogeneities on the fracture of disordered materials using a fiber bundle model. The bundle is composed of two subsets of fibers: a fraction 0 ≤ α ≤ 1 of fibers is unbreakable, while the remaining 1 - α fraction is characterized by a distribution of breaking thresholds. Assuming global load sharing, we show analytically that there exists a critical fraction of components αc which separates two qualitatively different regimes of the system: below αc the burst size distribution is a power law with the usual exponent τ = 5/2, while above αc the exponent switches to a lower value τ = 9/4 and a cutoff function occurs with a diverging characteristic size. Analyzing the macroscopic response of the system, we demonstrate that the transition is conditioned on disorder distributions whose constitutive curve has a single maximum and an inflexion point, defining a novel universality class of breakdown phenomena.
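The burst statistics under global load sharing can be sketched in the α = 0 limit (no unbreakable fibers; an illustrative simplification of the model above): with sorted thresholds, the k-th weakest fiber breaks at external force x_k(N - k), and avalanches are the runs between successive record values of that force.

```python
# Simplified sketch of a global-load-sharing fiber bundle with alpha = 0
# (no unbreakable fibers, unlike the general model in the text). Under
# quasi-static loading, bursts are record-to-record avalanches of the
# breaking-force sequence f_k = x_k * (N - k).
import random

random.seed(1)
N = 10_000
thresholds = sorted(random.random() for _ in range(N))  # uniform disorder

bursts = []
record, size = -1.0, 0
for k, x in enumerate(thresholds):
    f = x * (N - k)            # load per surviving fiber times survivors
    if f > record:             # new record: previous avalanche ends here
        if size:
            bursts.append(size)
        record, size = f, 1
    else:                      # failure triggered within the avalanche
        size += 1
if size:
    bursts.append(size)
```

Every fiber is counted in exactly one burst, and the final avalanche (all fibers beyond the maximum of f_k) is macroscopic; a histogram of `bursts` over many realizations would exhibit the power-law regime with exponent τ = 5/2 quoted above.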

Relevance: 20.00%

Abstract:

An overview is given of a study which showed that, not only in chemical reactions but also in the favorable case of nontotally symmetric vibrations where the chemical and external potentials remain approximately constant, the generalized maximum hardness principle (GMHP) and generalized minimum polarizability principle (GMPP) may not be obeyed. A method that allows an accurate determination of the nontotally symmetric molecular distortions with more marked GMPP or anti-GMPP character, through diagonalization of the polarizability Hessian matrix, is introduced.

Relevance: 20.00%

Abstract:

The paper presents a new model based on the basic Maximum Capture model, MAXCAP. The new Chance Constrained Maximum Capture model introduces a stochastic threshold constraint, which recognises the fact that a facility can be open only if a minimum level of demand is captured. A metaheuristic based on the MAX-MIN ant system and a tabu search procedure is presented to solve the model. This is the first time that the MAX-MIN ant system is adapted to solve a location problem. Computational experience and an application to a 55-node network are also presented.

Relevance: 20.00%

Abstract:

Precise estimation of propagation parameters in precipitation media is of interest to improve the performance of communications systems and in remote sensing applications. In this paper, we present maximum-likelihood estimators of specific attenuation and specific differential phase in rain. The model used for obtaining the cited estimators assumes coherent propagation, reflection symmetry of the medium, and Gaussian statistics of the scattering matrix measurements. No assumptions about the microphysical properties of the medium are needed. The performance of the estimators is evaluated through simulated data. Results show negligible estimator bias and variances close to the Cramér-Rao bounds.
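The "variance close to the Cramér-Rao bound" behaviour is easy to illustrate on a toy model (a plain Gaussian mean, not the paper's scattering-matrix model): the ML estimator of the mean is unbiased and its variance attains the bound σ²/n exactly.

```python
# Generic illustration (not the paper's propagation model): the ML
# estimate of a Gaussian mean from n samples is the sample mean, whose
# variance sigma^2 / n equals the Cramer-Rao bound. A Monte Carlo
# estimate of the estimator's variance should therefore sit close to it.
import random

random.seed(0)
mu_true, sigma, n, trials = 2.0, 1.0, 50, 4000
crb = sigma ** 2 / n                            # Cramer-Rao bound

estimates = []
for _ in range(trials):
    sample = [random.gauss(mu_true, sigma) for _ in range(n)]
    estimates.append(sum(sample) / n)           # ML estimator of the mean

mean_est = sum(estimates) / trials
var_est = sum((e - mean_est) ** 2 for e in estimates) / (trials - 1)
bias = mean_est - mu_true
```

In the paper the same comparison is made for the attenuation and differential-phase estimators, with bias and variance read off from simulated scattering-matrix data.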

Relevance: 20.00%

Abstract:

The long-term mean properties of the global climate system and those of turbulent fluid systems are reviewed from a thermodynamic viewpoint. Two general expressions are derived for a rate of entropy production due to thermal and viscous dissipation (turbulent dissipation) in a fluid system. It is shown with these expressions that maximum entropy production in the Earth's climate system suggested by Paltridge, as well as maximum transport properties of heat or momentum in a turbulent system suggested by Malkus and Busse, correspond to a state in which the rate of entropy production due to the turbulent dissipation is at a maximum. Entropy production due to absorption of solar radiation in the climate system is found to be irrelevant to the maximized properties associated with turbulence. The hypothesis of maximum entropy production also seems to be applicable to the planetary atmospheres of Mars and Titan and perhaps to mantle convection. Lorenz's conjecture on maximum generation of available potential energy is shown to be akin to this hypothesis with a few minor approximations. A possible mechanism by which turbulent fluid systems adjust themselves to the states of maximum entropy production is presented as a self-feedback mechanism for the generation of available potential energy. These results tend to support the hypothesis of maximum entropy production that underlies a wide variety of nonlinear fluid systems, including our planet as well as other planets and stars.

Relevance: 20.00%

Abstract:

We investigate the hypothesis that the atmosphere is constrained to maximize its entropy production by using a one-dimensional (1-D) vertical model. We prescribe the lapse rate in the convective layer as that of the standard troposphere. The assumption that convection sustains a critical lapse rate was absent in previous studies, which focused on the vertical distribution of climatic variables, since such a convective adjustment reduces the degrees of freedom of the system and may prevent the application of the maximum entropy production (MEP) principle. This is not the case in the radiative-convective model (RCM) developed here, since we accept a discontinuity of temperatures at the surface similar to that adopted in many RCMs. For current conditions, the MEP state gives a difference between the ground temperature and the air temperature at the surface of ≈10 K. In comparison, conventional RCMs obtain a discontinuity of only ≈2 K. However, the surface boundary layer velocity in the MEP state appears reasonable (≈3 m s⁻¹). Moreover, although the convective flux at the surface in MEP states is almost uniform in optically thick atmospheres, it reaches a maximum value for an optical thickness similar to current conditions. This additional result may support the maximum convection hypothesis suggested by Paltridge (1978).

Relevance: 20.00%

Abstract:

The development and tests of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e. it is an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
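The likelihood part of such iterations can be sketched with the classical MLEM (maximum-likelihood EM) update on which FMAPE-style algorithms build (the entropy prior and the acceleration exponent are omitted in this illustration; the system matrix and data below are invented toy values):

```python
# Sketch of the classical MLEM update for emission tomography: starting
# from a uniform image (see the text's initialization result), each
# iteration multiplies the image by the sensitivity-normalized
# backprojection of measured-over-predicted counts. Toy problem only.
A = [                       # tiny system matrix: 4 detectors x 3 pixels
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
    [0.3, 0.4, 0.3],
]
lam_true = [2.0, 5.0, 3.0]
y = [sum(a * l for a, l in zip(row, lam_true)) for row in A]  # noiseless data

sens = [sum(A[i][j] for i in range(len(A))) for j in range(3)]
lam = [1.0, 1.0, 1.0]       # uniform initial image

def resid(img):
    return sum(abs(sum(a * l for a, l in zip(row, img)) - yi)
               for row, yi in zip(A, y))

r0 = resid(lam)
for _ in range(2000):
    pred = [sum(a * l for a, l in zip(row, lam)) for row in A]
    back = [sum(A[i][j] * y[i] / pred[i] for i in range(len(A)))
            for j in range(3)]
    lam = [l * b / s for l, b, s in zip(lam, back, sens)]
r1 = resid(lam)
```

The multiplicative form keeps the image nonnegative at every iteration, and on consistent data the predicted counts converge to the measurements; the entropy prior of the FMAPE algorithm modifies this update rather than replacing it.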

Relevance: 20.00%

Abstract:

A new statistical parallax method using the Maximum Likelihood principle is presented, allowing the simultaneous determination of a luminosity calibration, kinematic characteristics and spatial distribution of a given sample. This method has been developed for the exploitation of the Hipparcos data and presents several improvements over previous ones: the effects of the selection of the sample, the observational errors, the galactic rotation and the interstellar absorption are taken into account as an intrinsic part of the formulation (as opposed to external corrections). Furthermore, the method is able to identify and characterize physically distinct groups in inhomogeneous samples, thus avoiding biases due to unidentified components. Moreover, the implementation used by the authors is based on the extensive use of numerical methods, avoiding the need to simplify the equations and thus the bias such simplification could introduce. Several examples of application using simulated samples are presented, to be followed by applications to real samples in forthcoming articles.

Relevance: 20.00%

Abstract:

The work presented evaluates the statistical characteristics of regional bias and expected error in reconstructions of real positron emission tomography (PET) data of human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task of evaluating radioisotope uptake in regions of interest (ROIs) is investigated. An assessment of bias and variance in uptake measurements is carried out with simulated data. Then, by using three different transition matrices with different degrees of accuracy and a components-of-variance model for statistical analysis, it is shown that the characteristics obtained from real human FDG brain data are consistent with the results of the simulation studies.