886 results for "largest finite-time Lyapunov exponent"


Relevance: 100.00%

Abstract:

Array seismology is a useful tool for detailed investigation of the Earth's interior. By exploiting the coherence properties of the wavefield, seismic arrays can extract directivity information and increase the ratio of coherent signal amplitude to incoherent noise amplitude. The Double Beam Method (DBM), developed by Krüger et al. (1993, 1996), is one application that performs a refined seismic investigation of the crust and mantle using seismic arrays. The DBM combines source and receiver arrays, further improving the signal-to-noise ratio by reducing the error in the location of coherent phases. Previous DBM work has addressed mantle and core/mantle resolution (Krüger et al., 1993; Scherbaum et al., 1997; Krüger et al., 2001). An implementation of the DBM is presented at large 2D scale (Italian data set for the Mw = 9.3 Sumatra earthquake) and at 3D crustal scale, as proposed by Rietbrock & Scherbaum (1999), by applying a revised version of the Source Scanning Algorithm (SSA; Kao & Shan, 2004). In the 2D application, the propagation of the rupture front in time is computed. In the 3D application, the study area (20 x 20 x 33 km³), the data set, and the source-receiver configurations correspond to the KTB-1994 seismic experiment (Jost et al., 1998). We used 60 short-period seismic stations (200-Hz sampling rate, 1-Hz sensors) arranged in 9 small arrays deployed in 2 concentric rings of about 1 km (A-arrays) and 5 km (B-array) radius. The coherence values of the scattering points are computed in the crustal volume over a finite time window along all array stations, given the hypothesized origin time and source location. The resulting images can be interpreted as a (relative) joint log-likelihood that any point in the subsurface contributed to the full set of observed seismograms.
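The coherence-based stacking that underlies array methods like the DBM can be illustrated with a plain delay-and-sum beamformer. The sketch below (a minimal illustration, not the DBM itself; station geometry, wavelet, and slowness grid are all made up) shifts each trace by its plane-wave delay, stacks, and scans candidate slownesses; the beam power peaks at the true slowness of a synthetic arrival.

```python
import numpy as np

def beam_power(traces, offsets, slowness, dt):
    """Delay-and-sum beam power: shift each trace by its plane-wave delay
    (slowness * offset), stack, and return the energy of the stack."""
    beam = np.zeros(traces.shape[1])
    for trace, x in zip(traces, offsets):
        shift = int(round(slowness * x / dt))   # delay in samples
        beam += np.roll(trace, -shift)
    beam /= len(offsets)
    return float(np.sum(beam ** 2))

# Synthetic plane wave crossing a small linear array of 5 stations
dt = 0.01                                       # 100-Hz sampling
t = np.arange(0.0, 10.0, dt)
offsets = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # station positions (km)
true_slowness = 0.2                             # s/km
traces = np.array([np.exp(-((t - 5.0 - true_slowness * x) / 0.1) ** 2)
                   for x in offsets])

# Scan candidate slownesses; incoherent stacking penalizes wrong values
grid = np.arange(-0.5, 0.5001, 0.05)
best = grid[int(np.argmax([beam_power(traces, offsets, s, dt) for s in grid]))]
```

The double-beam idea applies the same stacking twice, once over a source array and once over a receiver array, which is what sharpens the location of coherent phases.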

Relevance: 100.00%

Abstract:

The synchronization of dynamic multileaf collimator (DMLC) response with respiratory motion is critical to ensure the accuracy of DMLC-based four-dimensional (4D) radiation delivery. In practice, however, a finite time delay (response time) between the acquisition of tumor position and the multileaf collimator response necessitates predictive models of respiratory tumor motion to synchronize radiation delivery. Predicting a complex process such as respiratory motion introduces geometric errors, which have been reported in several publications. However, the dosimetric effect of such errors on 4D radiation delivery has not yet been investigated. Thus, our aim in this work was to quantify the dosimetric effects of geometric error due to prediction under several different conditions. Conformal and intensity-modulated radiation therapy (IMRT) plans for a lung patient were generated for anterior-posterior/posterior-anterior (AP/PA) beam arrangements at 6 and 18 MV to provide planned dose distributions. Respiratory motion data were obtained from 60 diaphragm-motion fluoroscopy recordings from five patients. A linear adaptive filter was employed to predict the tumor position. The geometric error of prediction was defined as the absolute difference between the predicted and actual positions at each diaphragm position. Distributions of the geometric error of prediction were obtained for all of the respiratory motion data. Planned dose distributions were then convolved with the distributions of geometric error of prediction to obtain convolved dose distributions. The dosimetric effect of such geometric errors was determined as a function of several variables: response time (0-0.6 s), beam energy (6/18 MV), treatment delivery (3D/4D), treatment type (conformal/IMRT), beam direction (AP/PA), and breathing training type (free breathing/audio instruction/visual feedback). Dose-difference and distance-to-agreement analyses were employed to quantify the results.
Based on our data, the dosimetric impact of prediction (a) increased with response time, (b) was larger for 3D radiation therapy as compared with 4D radiation therapy, (c) was relatively insensitive to change in beam energy and beam direction, (d) was greater for IMRT distributions as compared with conformal distributions, (e) was smaller than the dosimetric impact of latency, and (f) was greatest for respiration motion with audio instructions, followed by visual feedback and free breathing. Geometric errors of prediction that occur during 4D radiation delivery introduce dosimetric errors that are dependent on several factors, such as response time, treatment-delivery type, and beam energy. Even for relatively small response times of 0.6 s into the future, dosimetric errors due to prediction could approach delivery errors when respiratory motion is not accounted for at all. To reduce the dosimetric impact, better predictive models and/or shorter response times are required.
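The key computational step described above, convolving a planned dose distribution with the distribution of geometric prediction error, can be sketched in one dimension. The numbers below (target width, dose level, error sigma) are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Hypothetical 1D planned dose profile: 60 Gy flat over a 40-mm target
grid = np.arange(-50, 51)                       # position (mm)
planned = np.where(np.abs(grid) <= 20, 60.0, 0.0)

# Hypothetical geometric-error distribution: zero-mean Gaussian, sigma = 3 mm
err = np.arange(-15, 16)
kernel = np.exp(-0.5 * (err / 3.0) ** 2)
kernel /= kernel.sum()                          # normalize to unit probability mass

# Convolved ("delivered") dose: the plan blurred by the prediction error
delivered = np.convolve(planned, kernel, mode="same")
```

The convolution leaves the dose deep inside the target unchanged but blurs the field edges, which is why the dosimetric impact grows with the width of the error distribution, i.e. with the response time.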

Relevance: 100.00%

Abstract:

This article provides importance sampling algorithms for computing the probabilities of various types of ruin of spectrally negative Lévy risk processes: ruin over the infinite time horizon, ruin within a finite time horizon, and ruin past a finite time horizon. For the special case of the compound Poisson process perturbed by diffusion, algorithms are provided for computing the probabilities of ruin by creeping (i.e. induced by the diffusion term) and by jumping (i.e. by a claim amount). It is shown that these algorithms have either bounded relative error or logarithmic efficiency as t, x → ∞, where t > 0 is the time horizon and x > 0 is the starting point of the risk process, with y = t/x held constant and assumed to lie either below or above a certain constant.
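The abstract's setting is general Lévy processes, but the exponential-tilting idea behind such importance sampling estimators can be shown in the classical special case of a compound Poisson risk process with Exp(1) claims, where the infinite-horizon ruin probability has a closed form to check against. This is a sketch of Siegmund's algorithm under those assumptions, not the algorithms of the paper.

```python
import math, random

def ruin_prob_is(x, lam=1.0, c=2.0, n_paths=20000, seed=1):
    """Importance sampling via exponential tilting (Siegmund's algorithm) for the
    infinite-horizon ruin probability of U(t) = x + c*t - (compound Poisson claims),
    with claim rate lam and Exp(1) claim sizes. The adjustment coefficient is
    gamma = 1 - lam/c; under the tilted measure ruin is certain, and each path
    contributes the likelihood ratio exp(-gamma * S_tau)."""
    gamma = 1.0 - lam / c
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s = 0.0                                   # claims minus premium at claim epochs
        while s <= x:
            wait = rng.expovariate(lam + c * gamma)        # tilted interarrival ~ Exp(c)
            s += rng.expovariate(1.0 - gamma) - c * wait   # tilted claim ~ Exp(lam/c)
        total += math.exp(-gamma * s)
    return total / n_paths

# Exponential claims admit a closed form: psi(x) = (lam/c) * exp(-gamma * x)
est = ruin_prob_is(5.0)
exact = 0.5 * math.exp(-0.5 * 5.0)
```

Because every estimate is bounded by exp(-gamma*x), the relative error stays controlled as x grows, which is the bounded-relative-error property the paper establishes for its more general algorithms.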

Relevance: 100.00%

Abstract:

We study the evolution of a viscous fluid drop, surrounded by another viscous fluid, rotating about a fixed axis at constant angular velocity Ω or constant angular momentum L. The problem is considered in the limit of large Ekman number and small Reynolds number. The analysis combines asymptotic analysis and full numerical simulation by means of the boundary element method. We pay special attention to the stability or instability of equilibrium shapes and to the possible formation of singularities representing a change in the topology of the fluid domain. When the evolution is at constant Ω, depending on its value, drops can take the form of a flat film whose thickness goes to zero in finite time or of an elongated filament that extends indefinitely. When the evolution takes place at constant L and axial symmetry is imposed, thin films surrounded by a toroidal rim can develop, but the film thickness does not vanish in finite time. When axial symmetry is not imposed and L is sufficiently large, drops break axial symmetry and, depending on the value of L, either reach an equilibrium configuration with 2-fold symmetry or break up into several drops with 2- or 3-fold symmetry. The mechanism of breakup is also described.

Relevance: 100.00%

Abstract:

This article reviews several recently developed Lagrangian tools and shows how their combined use succeeds in obtaining a detailed description of purely advective transport events in general aperiodic flows. In particular, because of the climate impact of ocean transport processes, we illustrate a 2D application on altimeter data sets over the area of the Kuroshio Current, although the proposed techniques are general and applicable to arbitrary time-dependent aperiodic flows. The first challenge in describing transport in aperiodic time-dependent flows is obtaining a representation of the phase portrait in which the most relevant dynamical features can be identified. This representation is accomplished by using global Lagrangian descriptors, which, when applied for instance to the altimeter data sets, retrieve over the ocean surface a phase portrait where the geometry of interconnected dynamical systems is visible. The phase portrait picture is essential because it reveals which transport routes are acting on the whole flow. Once these routes are roughly recognised, it is possible to complete a detailed description by direct computation of the finite-time stable and unstable manifolds of the special hyperbolic trajectories that act as organising centres of the flow.
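A common choice of Lagrangian descriptor is the M-function: the arc length of the trajectory through each initial condition over a finite time window, whose minima and abrupt changes line up with invariant manifolds. The sketch below evaluates it for a steady linear saddle flow (an illustrative toy, not the altimeter velocity field), where the stable manifold is the line x = 0.

```python
import math

def lagrangian_descriptor(x0, y0, tau=4.0, dt=1e-3):
    """M-function Lagrangian descriptor: arc length of the trajectory through
    (x0, y0) over [-tau, tau], computed with forward Euler steps for the
    illustrative steady saddle flow v(x, y) = (x, -y)."""
    def arc(x, y, direction):
        total = 0.0
        for _ in range(int(tau / dt)):
            vx, vy = x, -y
            total += math.hypot(vx, vy) * dt
            x += direction * vx * dt
            y += direction * vy * dt
        return total
    return arc(x0, y0, +1.0) + arc(x0, y0, -1.0)

# The descriptor is smaller on the stable manifold (x = 0) of the saddle than
# just off it, which is how manifolds show up as sharp features in M-maps.
on_manifold = lagrangian_descriptor(0.0, 1.0)
off_manifold = lagrangian_descriptor(0.5, 1.0)
```

Evaluating M on a grid of initial conditions produces the "phase portrait" picture the abstract refers to; on altimeter data the velocity field is interpolated from the gridded product rather than given analytically.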

Relevance: 100.00%

Abstract:

The type-I intermittency route to (or out of) chaos is investigated within the horizontal visibility (HV) graph theory. For that purpose, we address the trajectories generated by unimodal maps close to an inverse tangent bifurcation and construct their associated HV graphs. We show how the alternation of laminar episodes and chaotic bursts imprints a fingerprint in the resulting graph structure. Accordingly, we derive a phenomenological theory that predicts quantitative values for several network parameters. In particular, we predict that the characteristic power-law scaling of the mean length of laminar trend sizes is fully inherited by the variance of the graph degree distribution, in good agreement with the numerics. We also report numerical evidence on how the characteristic power-law scaling of the Lyapunov exponent as a function of the distance to the tangent bifurcation is inherited in the graph by an analogous scaling of block entropy functionals defined on the graph. Furthermore, we are able to recast the full set of HV graphs generated by intermittent dynamics into a renormalization-group framework, where the fixed points of its graph-theoretical renormalization-group flow account for the different types of dynamics. We also establish that the nontrivial fixed point of this flow coincides with the tangency condition and that the corresponding invariant graph exhibits extremal entropic properties.
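The HV graph construction itself is simple and standard: two data points are linked when they can "see" each other horizontally over all intermediate values. A minimal sketch (the O(n²) scan below is the definition applied directly, not the paper's implementation):

```python
def horizontal_visibility_graph(series):
    """Horizontal visibility (HV) graph of a time series: nodes i < j are linked
    iff both series[i] and series[j] exceed every value strictly between them."""
    edges = set()
    n = len(series)
    for i in range(n - 1):
        edges.add((i, i + 1))        # consecutive samples always see each other
        blocking = series[i + 1]     # running maximum between i and j
        for j in range(i + 2, n):
            if series[i] > blocking and series[j] > blocking:
                edges.add((i, j))
            blocking = max(blocking, series[j])
    return edges

g = horizontal_visibility_graph([3, 1, 2, 1, 4])
```

Applied to a trajectory of a unimodal map near tangency, the laminar episodes map to long chains of low-degree nodes and the chaotic bursts to hubs, which is the fingerprint the abstract describes.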

Relevance: 100.00%

Abstract:

In this paper, we address the problem of dynamic pricing to optimize the revenue from the sale of a limited inventory over a finite time horizon. A priori, the demand is assumed to be unknown; the seller must learn on the fly. We first deal with the simplest case, involving only one class of product for sale, and then consider the general situation with a finite number of product classes for sale. A case in point is the sale of tickets for cultural and leisure events: tickets are typically sold months before the event, so uncertainty over actual demand levels is very common. We propose a heuristic strategy of adaptive dynamic pricing, based on experience gained from the past, which takes into account, for each time period, the available inventory, the time remaining to reach the horizon, and the profit made in previous periods. In the computational simulations performed, the demand is updated dynamically based on the prices being offered, as well as on the remaining time and inventory. The simulations show a significant profit over the fixed-price strategy, confirming the practical usefulness of the proposed strategy. We also develop a tool for testing different dynamic pricing strategies designed to fit market conditions and the seller's objectives, which facilitates data analysis and decision-making when facing the dynamic pricing problem.
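The flavour of such a heuristic, reacting each period to the remaining inventory, the remaining time, and recent sales, can be sketched as follows. This is a toy illustration under invented assumptions (the exponential demand model and the 5% price nudges are made up), not the strategy proposed in the paper.

```python
import math, random

def adaptive_pricing(inventory=100, horizon=20, price0=10.0, seed=7):
    """Toy adaptive pricing heuristic: each period, compare realized sales with
    the run rate needed to sell out by the horizon, and nudge the price up when
    ahead of pace, down when behind."""
    rng = random.Random(seed)
    price, revenue, left = price0, 0.0, inventory
    for t in range(horizon):
        # Hypothetical stochastic demand, decreasing in price
        mean_demand = 50.0 * math.exp(-0.15 * price)
        sales = min(left, rng.randint(0, max(1, int(2 * mean_demand))))
        revenue += price * sales
        left -= sales
        pace = left / max(1, horizon - t - 1)   # units/period needed to clear
        price *= 1.05 if sales > pace else 0.95
    return revenue, left

revenue, unsold = adaptive_pricing()
```

A fixed-price baseline would keep `price = price0` throughout; comparing the two across many seeded runs is the kind of experiment the simulation tool described above supports.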

Relevance: 100.00%

Abstract:

We study solutions of the two-dimensional quasi-geostrophic thermal active scalar equation involving simple hyperbolic saddles. There is a naturally associated notion of simple hyperbolic saddle breakdown. It is proved that such breakdown cannot occur in finite time. At large time, these solutions may grow at most at a quadruple-exponential rate. Analogous results hold for the incompressible three-dimensional Euler equation.

Relevance: 100.00%

Abstract:

For taxonomic levels higher than species, the abundance distributions of the number of subtaxa per taxon tend to approximate power laws but often show strong deviations from such laws. Previously, these deviations were attributed to finite-time effects in a continuous-time branching process at the generic level. Instead, we describe herein a simple discrete branching process that generates the observed distributions and find that the distribution's deviation from power law form is not caused by disequilibration, but rather that it is time independent and determined by the evolutionary properties of the taxa of interest. Our model predicts—with no free parameters—the rank-frequency distribution of the number of families in fossil marine animal orders obtained from the fossil record. We find that near power law distributions are statistically almost inevitable for taxa higher than species. The branching model also sheds light on species-abundance patterns, as well as on links between evolutionary processes, self-organized criticality, and fractals.
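A simple way to see how a discrete branching process generates near power-law subtaxa-per-taxon distributions is Simon's preferential-attachment scheme, used here as an illustrative stand-in for the paper's model (the parameter values are arbitrary):

```python
import random

def simon_branching(n_subtaxa=10000, alpha=0.1, seed=3):
    """Simon-style preferential-attachment branching: each new subtaxon founds a
    new taxon with probability alpha, otherwise it is assigned to an existing
    taxon with probability proportional to that taxon's current size. The
    resulting size distribution is Yule-Simon, with a power-law tail of
    exponent 1 + 1/(1 - alpha)."""
    rng = random.Random(seed)
    sizes = [1]
    owners = [0]          # one entry per subtaxon, so a uniform pick is size-biased
    for _ in range(n_subtaxa - 1):
        if rng.random() < alpha:
            sizes.append(1)
            owners.append(len(sizes) - 1)
        else:
            taxon = rng.choice(owners)
            sizes[taxon] += 1
            owners.append(taxon)
    return sizes

sizes = simon_branching()
```

The output is strongly skewed: most taxa stay small while a few early taxa become very large, the time-independent heavy tail that the abstract contrasts with finite-time disequilibration effects.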

Relevance: 100.00%

Abstract:

We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latencies in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and we summarize the physics results obtained in four years of operation of this machine; we discuss two types of physics applications: long simulations on very large systems (which try to mimic and provide understanding about the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). On the other hand, our equilibrium simulations are unprecedented both because of the low temperatures reached and for the large systems that we have brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
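The reason floating point can be almost entirely avoided is visible in the update rule itself: with binary spins and couplings, the local field takes only a handful of integer values, so the Boltzmann factors fit in a tiny precomputed table. A minimal software sketch of one Metropolis sweep of a 2D Edwards-Anderson model (illustrative only; Janus uses 3D lattices and hardware multi-spin coding):

```python
import math, random

def metropolis_sweep(spins, Jh, Jv, L, boltz, rng):
    """One Metropolis sweep of the 2D +/-1 Edwards-Anderson spin glass with
    periodic boundaries. Spins and couplings are binary, so the local field h
    lies in {-4,...,4} and the flip cost dE in {-8,-4,0,4,8}: the few needed
    Boltzmann factors are precomputed, mirroring Janus's avoidance of
    floating-point arithmetic in the spin update."""
    for i in range(L):
        for j in range(L):
            h = (Jh[i][j] * spins[i][(j + 1) % L] + Jh[i][j - 1] * spins[i][j - 1] +
                 Jv[i][j] * spins[(i + 1) % L][j] + Jv[i - 1][j] * spins[i - 1][j])
            dE = 2 * spins[i][j] * h            # energy change of flipping s_ij
            if dE <= 0 or rng.random() < boltz[dE]:
                spins[i][j] = -spins[i][j]

L, beta = 16, 0.5
rng = random.Random(0)
rand_pm1 = lambda: [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
spins, Jh, Jv = rand_pm1(), rand_pm1(), rand_pm1()
boltz = {dE: math.exp(-beta * dE) for dE in (4, 8)}   # only positive dE needed
for _ in range(10):
    metropolis_sweep(spins, Jh, Jv, L, boltz, rng)
```

On the FPGA the table lookup, the neighbor sums, and the random-number comparison are all integer logic, which is what allows over a thousand such updates per clock cycle.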

Relevance: 100.00%

Abstract:

We numerically study the aging properties of the dynamical heterogeneities in the Ising spin glass. We find that a phase transition takes place during the aging process. Statics-dynamics correspondence implies that systems of finite size in equilibrium have static heterogeneities that obey finite-size scaling, thus signaling an analogous phase transition in the thermodynamical limit. We compute the critical exponents and the transition point in the equilibrium setting, and use them to show that aging in dynamic heterogeneities can be described by a finite-time scaling ansatz, with potential implications for experimental work.

Relevance: 100.00%

Abstract:

The Carnot cycle imposes a fundamental upper limit to the efficiency of a macroscopic motor operating between two thermal baths. However, this bound needs to be reinterpreted at microscopic scales, where molecular bio-motors and some artificial micro-engines operate. As described by stochastic thermodynamics, energy transfers in microscopic systems are random and thermal fluctuations induce transient decreases of entropy, allowing for possible violations of the Carnot limit. Here we report an experimental realization of a Carnot engine with a single optically trapped Brownian particle as the working substance. We present an exhaustive study of the energetics of the engine and analyse the fluctuations of the finite-time efficiency, showing that the Carnot bound can be surpassed for a small number of non-equilibrium cycles. Like its macroscopic counterpart, the energetics of our Carnot device exhibits basic properties that one would expect to observe in any microscopic energy transducer operating with baths at different temperatures. Our results characterize the sources of irreversibility in the engine and the statistical properties of the efficiency, an insight that could inspire new strategies in the design of efficient nano-motors.

Relevance: 100.00%

Abstract:

Fourth-order differential equations arise naturally in the modeling of oscillations of elastic structures, such as those observed in suspension bridges. Two models describing the oscillations of a bridge deck are considered. In the one-dimensional model we study finite-space blow-up of solutions of a class of fourth-order differential equations. The results presented settle a conjecture posed in [F. Gazzola and R. Pavani. Wide oscillation finite time blow up for solutions to nonlinear fourth order differential equations. Arch. Ration. Mech. Anal., 207(2):717-752, 2013] and imply the non-existence of travelling waves with low propagation speed in a beam. In the two-dimensional model we analyze a non-local equation for a long, thin plate, supported on its shorter ends, free on the others, and subject to prestressing. We prove existence and uniqueness of a weak solution and study its asymptotic behavior under viscous damping. We also study the stability of simple oscillation modes, which are classified as longitudinal or torsional.

Relevance: 100.00%

Abstract:

This paper describes a conceptual framework for the empirical analysis of farmers' labour allocation decisions. The paper presents a brief overview of previous farm household labour allocation studies. Following this, the agricultural household model, developed by Singh, Squire and Strauss (1986), which has been frequently applied to the study of labour allocation, is described in more depth. The agricultural household model, the theoretical model used in this analysis, is based on the premise that farmers behave so as to maximise utility, which is a function of consumption and leisure. Consumption is bound by a budget constraint and leisure by a time constraint. The theoretical model can then be used to explain how farmers decide to allocate their time between leisure, farm work, and off-farm work within the constraints of a finite time endowment and a budget. Work, both farm and off-farm, provides a return to labour, which in turn relaxes the budget constraint and allows the farm household to consume more. The theoretical model can also be used to explore the impact of government policies on labour allocation. It follows that subsidies that decrease commodity prices, such as reductions in intervention prices, mean that farmers have to work more (either on or off the farm) to maintain income and consumption levels. On the other hand, income support subsidies that are not linked to output or labour, such as decoupled subsidies, are a source of non-labour income and as such allow farmers to work less while maintaining consumption levels, known as the wealth effect.
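The utility-maximisation logic of the household model, and the wealth effect of decoupled subsidies, can be made concrete with a small numerical example. The Cobb-Douglas utility and all parameter values below are illustrative assumptions, not taken from the paper.

```python
def optimal_leisure(w=10.0, T=16.0, y=20.0, a=0.6, steps=200000):
    """Grid search for the household's leisure choice under Cobb-Douglas utility
    U = c^a * l^(1-a) with budget c = w*(T - l) + y, where w is the wage, T the
    time endowment, and y non-labour income (all values hypothetical)."""
    best_l, best_u = 0.0, -1.0
    for k in range(1, steps):
        l = T * k / steps                 # hours of leisure
        c = w * (T - l) + y               # consumption from labour plus y
        u = (c ** a) * (l ** (1 - a))
        if u > best_u:
            best_l, best_u = l, u
    return best_l

# Closed form for this utility: l* = (1 - a) * (w*T + y) / w.
# Raising non-labour income y (e.g. a decoupled subsidy) raises leisure:
# that is the wealth effect described above.
l_low = optimal_leisure(y=20.0)    # closed form gives 7.2 hours
l_high = optimal_leisure(y=60.0)   # closed form gives 8.8 hours
```

With the Cobb-Douglas form the split between consumption and leisure depends only on full income w*T + y, so any increase in y that is not tied to labour shifts hours from work to leisure, exactly the comparative static the abstract draws for decoupled subsidies.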

Relevance: 100.00%

Abstract:

It has been suggested that growth cones navigating through the developing nervous system might display adaptation, so that their response to gradient signals is conserved over wide variations in ligand concentration. Recently however, a new chemotaxis assay that allows the effect of gradient parameters on axonal trajectories to be finely varied has revealed a decline in gradient sensitivity on either side of an optimal concentration. We show that this behavior can be quantitatively reproduced with a computational model of axonal chemotaxis that does not employ explicit adaptation. Two crucial components of this model required to reproduce the observed sensitivity are spatial and temporal averaging. These can be interpreted as corresponding, respectively, to the spatial spread of signaling effects downstream from receptor binding, and to the finite time over which these signaling effects decay. For spatial averaging, the model predicts that an effective range of roughly one-third of the extent of the growth cone is optimal for detecting small gradient signals. For temporal decay, a timescale of about 3 minutes is required for the model to reproduce the experimentally observed sensitivity.
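The temporal-averaging component, signalling effects that decay over a finite timescale of about 3 minutes, acts as an exponentially weighted running mean of the noisy receptor signal. A minimal sketch (a leaky integrator with invented noise parameters, not the paper's full chemotaxis model):

```python
import random

def leaky_average(samples, dt=0.1, tau=3.0):
    """Temporal averaging as a leaky integrator: downstream signalling effects
    decay on a timescale tau (minutes), so the output is an exponentially
    weighted running mean of the receptor-binding signal."""
    out, state = [], 0.0
    alpha = dt / tau                 # fraction of the state replaced per step
    for s in samples:
        state += alpha * (s - state)
        out.append(state)
    return out

rng = random.Random(42)
raw = [1.0 + rng.gauss(0.0, 0.5) for _ in range(3000)]   # noisy gradient signal
smooth = leaky_average(raw)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

The smoothed signal tracks the true mean while suppressing noise, which is how temporal averaging lets the model detect small gradient signals; spatial averaging over roughly one-third of the growth cone plays the analogous role across receptor positions.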