39 results for asymptotically hyperbolic
Abstract:
Let $A$ be an infinite Toeplitz matrix with a real symbol $f$ defined on $[-\pi, \pi]$. It is well known that the sequence of spectra of finite truncations $A_N$ of $A$ converges to the convex hull of the range of $f$. Recently, Levitin and Shargorodsky, on the basis of some numerical experiments, conjectured, for symbols $f$ with two discontinuities located at rational multiples of $\pi$, that the eigenvalues of $A_N$ located in the gap of $f$ asymptotically exhibit periodicity in $N$, and suggested a formula for the period as a function of the positions of the discontinuities. In this paper, we quantify and prove the analog of this conjecture for the matrix $A^2$ in the particular case in which $f$ is a piecewise constant function taking the values $-1$ and $1$.
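As a rough numerical illustration of the setup (a sketch of my own, not taken from the paper), one can build the truncations $A_N$ from the Fourier coefficients of a piecewise constant symbol taking the values $-1$ and $1$ and list the eigenvalues falling inside the spectral gap $(-1, 1)$ as $N$ grows; the jump positions $\pm\pi/2$ below are a hypothetical choice of rational multiples of $\pi$.

```python
# Minimal sketch (not the authors' code): Toeplitz truncations A_N of a symbol
# taking values -1 and 1 with jumps at +/- pi/2 (an illustrative choice), and
# the eigenvalues of A_N lying inside the gap (-1, 1).
import numpy as np
from scipy.linalg import toeplitz

def symbol(theta):
    """f(theta) = 1 on (-pi/2, pi/2) and -1 elsewhere on [-pi, pi]."""
    return np.where(np.abs(theta) < np.pi / 2, 1.0, -1.0)

def fourier_coefficient(k, n_grid=20001):
    """a_k = (1/2pi) * integral of f(theta) e^{-ik theta} over [-pi, pi]."""
    theta = np.linspace(-np.pi, np.pi, n_grid)
    return np.trapz(symbol(theta) * np.exp(-1j * k * theta), theta).real / (2 * np.pi)

def truncation(N):
    """Symmetric N x N Toeplitz truncation A_N (f is even, so the a_k are real)."""
    a = np.array([fourier_coefficient(k) for k in range(N)])
    return toeplitz(a)

for N in range(20, 29):
    eigs = np.linalg.eigvalsh(truncation(N))
    gap_eigs = eigs[np.abs(eigs) < 0.99]    # eigenvalues well inside the gap of f
    print(N, np.round(gap_eigs, 4))         # watch how these recur as N grows
```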
Abstract:
We consider a quantity κ(Ω)—the distance to the origin from the null variety of the Fourier transform of the characteristic function of Ω. We conjecture, firstly, that κ(Ω) is maximised, among all convex balanced domains of a fixed volume, by a ball, and also that κ(Ω) is bounded above by the square root of the second Dirichlet eigenvalue of Ω. We prove some weaker versions of these conjectures in dimension two, as well as their validity for domains asymptotically close to a disk, and also discuss further links between κ(Ω) and the eigenvalues of the Laplacians.
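A concrete check of the disk case (a sketch relying on standard Bessel-function facts, not taken from the paper): for the unit disk the Fourier transform of the characteristic function is radial and proportional to $J_1(|\xi|)/|\xi|$, so $\kappa$ equals the first positive zero $j_{1,1}$ of $J_1$, while the second Dirichlet eigenvalue of the disk is $j_{1,1}^2$; the conjectured bound $\kappa(\Omega) \le \sqrt{\lambda_2(\Omega)}$ therefore holds with equality for the disk.

```python
# Sketch: kappa for the unit disk versus the square root of its second Dirichlet eigenvalue.
import numpy as np
from scipy.special import jn_zeros, j0, j1
from scipy.integrate import quad

j11 = jn_zeros(1, 1)[0]                       # first positive zero of J_1, ~3.8317

def hat_chi(r):
    """Fourier transform of the indicator of the unit disk at radial frequency r (closed form)."""
    return 2 * np.pi * j1(r) / r

# numerical check of the radial-integral representation at r = j11
integral, _ = quad(lambda s: 2 * np.pi * s * j0(j11 * s), 0.0, 1.0)

lambda2 = j11 ** 2                            # second Dirichlet eigenvalue of the unit disk

print("kappa(disk)            =", j11)
print("hat_chi(kappa), closed =", hat_chi(j11))      # ~ 0
print("hat_chi(kappa), quad   =", integral)          # ~ 0
print("sqrt(lambda_2)         =", np.sqrt(lambda2))  # equals kappa(disk)
```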
Abstract:
In the forecasting of binary events, verification measures that are “equitable” were defined by Gandin and Murphy to satisfy two requirements: 1) they award all random forecasting systems, including those that always issue the same forecast, the same expected score (typically zero), and 2) they are expressible as the linear weighted sum of the elements of the contingency table, where the weights are independent of the entries in the table, apart from the base rate. The authors demonstrate that the widely used “equitable threat score” (ETS), as well as numerous others, satisfies neither of these requirements, fulfilling the first only in the limit of an infinite sample size. Such measures are referred to as “asymptotically equitable.” In the case of ETS, the expected score of a random forecasting system is always positive and only falls below 0.01 when the number of samples is greater than around 30. Two other asymptotically equitable measures are the odds ratio skill score and the symmetric extreme dependency score, which are more strongly inequitable than ETS, particularly for rare events; for example, when the base rate is 2% and the sample size is 1000, random but unbiased forecasting systems yield an expected score of around −0.5, reducing in magnitude to 0.01 or smaller only for sample sizes exceeding 25 000. This presents a problem since these nonlinear measures have other desirable properties, in particular being reliable indicators of skill for rare events (provided that the sample size is large enough). A potential way to reconcile these properties with equitability is to recognize that Gandin and Murphy’s two requirements are independent, and the second can be safely discarded without losing the key advantages of equitability that are embodied in the first. This enables inequitable and asymptotically equitable measures to be scaled to make them equitable, while retaining their nonlinearity and other properties such as being reliable indicators of skill for rare events. It also opens up the possibility of designing new equitable verification measures.
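To make the equitability issue concrete, here is a hedged Monte Carlo sketch (my own, not the authors' experiments) that computes ETS in its usual Gilbert-skill-score form and estimates the expected score of a random, unbiased forecasting system at several sample sizes; the base rate of 0.5 and the trial count are illustrative choices.

```python
# Sketch: the equitable threat score (ETS) from a 2x2 contingency table, and its
# expected value for a random, unbiased forecasting system at small sample sizes.
import numpy as np

def ets(a, b, c, d):
    """ETS from hits a, false alarms b, misses c, correct negatives d."""
    n = a + b + c + d
    a_r = (a + b) * (a + c) / n              # hits expected by chance
    denom = a + b + c - a_r
    return np.nan if denom == 0 else (a - a_r) / denom

rng = np.random.default_rng(0)

def expected_random_ets(n_samples, base_rate=0.5, n_trials=20000):
    scores = []
    for _ in range(n_trials):
        obs = rng.random(n_samples) < base_rate      # observed events
        fcst = rng.random(n_samples) < base_rate     # independent random forecasts
        a = np.sum(fcst & obs); b = np.sum(fcst & ~obs)
        c = np.sum(~fcst & obs); d = np.sum(~fcst & ~obs)
        scores.append(ets(a, b, c, d))
    return np.nanmean(scores)

for n in (10, 30, 100, 1000):
    print(n, expected_random_ets(n))   # positive for small n, decaying towards zero
```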
Abstract:
A dynamic recurrent neural network (DRNN) is used to input/output linearize a control affine system in the globally linearizing control (GLC) structure. The network is trained as part of a closed loop that involves a PI controller; the goal is to use the network, as a dynamic feedback, to cancel the nonlinear terms of the plant. The stability of the configuration is guaranteed if the network and the plant are asymptotically stable and the linearizing input is bounded.
Abstract:
The main limitation of linearization theory that prevents its application in practical problems is the need for exact knowledge of the plant. This requirement is eliminated, and it is shown that a multilayer network can synthesise the state feedback coefficients that linearize a nonlinear control affine plant. The stability of the linearizing closed loop can be guaranteed if the autonomous plant is asymptotically stable and the state feedback is bounded.
Abstract:
In this paper we study generalised prime systems for which the integer counting function $N_P(x)$ is asymptotically well behaved, in the sense that $N_P(x) = \rho x + O(x^\beta)$, where $\rho$ is a positive constant and . For such systems, the associated zeta function $\zeta_P(s)$ is holomorphic for . We prove that for , for any $\varepsilon > 0$, and also for $\varepsilon = 0$ for all such $\sigma$ except possibly one value. The Dirichlet divisor problem for generalised integers concerns the size of the error term in $N_{kP}(x) - \operatorname{Res}_{s=1}\left(\zeta_P^k(s)\, x^s/s\right)$, which is $O(x^\theta)$ for some $\theta < 1$. Letting $\alpha_k$ denote the infimum of such $\theta$, we show that .
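As a toy illustration of the counting function $N_P(x)$ (my own sketch, unrelated to the paper's asymptotic analysis): for a generalised prime system, the generalised integers are all finite products of powers of the generalised primes, and $N_P(x)$ counts those not exceeding $x$; the finite list of ordinary primes used below is purely a hypothetical truncation.

```python
# Toy sketch: generalised integers generated by a finite list of generalised primes,
# and the counting function N_P(x).
def generalised_integers(primes, x):
    """All products of powers of `primes` that do not exceed x (1 included)."""
    ints = [1.0]
    for p in primes:
        new = []
        for n in ints:
            m = n
            while m * p <= x:
                m *= p
                new.append(m)
        ints.extend(new)
    return sorted(ints)

def N_P(primes, x):
    return len(generalised_integers(primes, x))

# the ordinary primes up to 30 as the underlying system, just for illustration
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
for x in (10, 100, 1000):
    print(x, N_P(primes, x))
```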
Abstract:
The objective of this study was to investigate a novel light backscatter sensor, with a large field of view relative to curd size, for continuous on-line monitoring of coagulation and syneresis to improve curd moisture content control. A three-level, central composite design was employed to study the effects of temperature, cutting time, and CaCl2 addition on cheese making parameters. The sensor signal was recorded and analyzed. The light backscatter ratio followed a sigmoid increase during coagulation and decreased asymptotically after gel cutting. Curd yield and curd moisture content were predicted from the time to the maximum slope of the first derivative of the light backscatter ratio during coagulation and the decrease in the sensor response during syneresis. Whey fat was affected by coagulation kinetics and cutting time, suggesting curd rheological properties at cutting are dominant factors determining fat losses. The proposed technology shows potential for on-line monitoring of coagulation and syneresis.
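For readers unfamiliar with the design mentioned above, the sketch below builds a generic face-centred central composite design in coded units for three factors (the face-centred variant, with axial distance 1, is one way to obtain three levels); the actual factor levels and the number of centre-point replicates used in the study are not given in the abstract, so those below are assumptions.

```python
# Sketch (not the study's actual design table): a face-centred central composite
# design in coded units for the three factors named in the abstract.
import itertools
import numpy as np

factors = ["temperature", "cutting time", "CaCl2 addition"]
k = len(factors)

factorial = np.array(list(itertools.product([-1, 1], repeat=k)))   # 2^k corner points
axial = np.vstack([v for i in range(k) for v in (np.eye(k)[i], -np.eye(k)[i])])  # face centres
center = np.zeros((3, k))                                           # replicated centre points (count assumed)

design = np.vstack([factorial, axial, center])
print(factors)
print(design)          # coded levels -1, 0, +1 only, i.e. a three-level design
```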
Abstract:
We consider the finite sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs) with the set of candidate models consisting of all types of model used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as the best model or as a close competitor, the parsimonious GARCH(1, 1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the properties of parameterizations of processes commonly used to model heteroscedastic data are more similar than may be imagined and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
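A small sketch of the selection mechanism described above (illustrative only; the log-likelihoods and the "close competitor" tolerance below are made up, and the paper's Monte Carlo design is not reproduced): candidate conditionally heteroscedastic models are ranked by an information criterion, and models within a tolerance of the best form the portfolio of eligible models.

```python
# Sketch: ranking candidate models by an information criterion and keeping
# "close competitors" whose criterion value lies within a tolerance of the best.
import math

def aic(loglik, k):            # Akaike information criterion
    return 2 * k - 2 * loglik

def bic(loglik, k, n):         # Schwarz/Bayesian information criterion
    return k * math.log(n) - 2 * loglik

n_obs = 1000
# hypothetical fits: (model name, maximised log-likelihood, number of parameters)
fits = [("GARCH(1,1)", -1402.3, 4), ("GJR-GARCH(1,1)", -1401.1, 5), ("EGARCH(1,1)", -1400.8, 5)]

scored = sorted((bic(ll, k, n_obs), name) for name, ll, k in fits)
best = scored[0][0]
tolerance = 2.0                # "close competitor" band (an assumed choice)
portfolio = [name for score, name in scored if score - best <= tolerance]
print("best:", scored[0][1], " portfolio of eligible models:", portfolio)
```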
Abstract:
This study proposes a utility-based framework for the determination of optimal hedge ratios (OHRs) that can allow for the impact of higher moments on hedging decisions. We examine the entire hyperbolic absolute risk aversion family of utilities, which includes quadratic, logarithmic, power, and exponential utility functions. We find that for both moderate and large spot (commodity) exposures, the performance of out-of-sample hedges constructed allowing for nonzero higher moments is better than the performance of the simpler OLS hedge ratio. The picture is, however, not uniform throughout our seven spot commodities, as there is one instance (cotton) for which the modeling of higher moments decreases welfare out-of-sample relative to the simpler OLS. We support our empirical findings by a theoretical analysis of optimal hedging decisions, and we uncover a novel link between OHRs and the minimax hedge ratio, that is, the ratio which minimizes the largest loss of the hedged position.
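For reference, here is a sketch of the OLS benchmark mentioned above (the minimum-variance hedge ratio, i.e. the slope from regressing spot changes on futures changes); the utility-based, higher-moment hedges studied in the paper are not reproduced, and the return series below are clearly synthetic.

```python
# Sketch: the OLS (minimum-variance) hedge ratio from synthetic spot/futures returns.
import numpy as np

rng = np.random.default_rng(1)

# synthetic, hypothetical spot and futures return series for illustration
futures = rng.normal(0.0, 0.02, size=500)
spot = 0.9 * futures + rng.normal(0.0, 0.01, size=500)

ols_hedge_ratio = np.cov(spot, futures)[0, 1] / np.var(futures, ddof=1)
print("OLS hedge ratio:", ols_hedge_ratio)   # units of futures sold per unit of spot exposure
```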
Abstract:
Starting from the classical Saltzman two-dimensional convection equations, we derive via a severe spectral truncation a minimal 10-ODE system which includes the thermal effect of viscous dissipation. Neglecting this process leads to a dynamical system which includes a decoupled generalized Lorenz system. The consideration of this process breaks an important symmetry and couples the dynamics of fast and slow variables, with the ensuing modifications to the structural properties of the attractor and of the spectral features. When the relevant nondimensional number (the Eckert number $Ec$) is different from zero, an additional time scale of $O(Ec^{-1})$ is introduced in the system, as shown with standard multiscale analysis and made clear by several pieces of numerical evidence. Moreover, the system is ergodic and hyperbolic, the slow variables feature long-term memory with $1/f^{3/2}$ power spectra, and the fast variables feature amplitude modulation. Increasing the strength of the thermal-viscous feedback has a stabilizing effect, as both the metric entropy and the Kaplan-Yorke attractor dimension decrease monotonically with $Ec$. The analyzed system features very rich dynamics: it overcomes some of the limitations of the Lorenz system and might have prototypical value for relevant processes in complex systems dynamics, such as the interaction between slow and fast variables, the presence of long-term memory, and the associated extreme value statistics. This analysis shows how neglecting the coupling of slow and fast variables only on the basis of scale analysis can be catastrophic. In fact, this leads to spurious invariances that affect essential dynamical properties (ergodicity, hyperbolicity) and that cause the model to lose the ability to describe intrinsically multiscale processes.
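One of the diagnostics quoted above, the Kaplan-Yorke attractor dimension, is a simple function of the Lyapunov spectrum; the sketch below (not the paper's code) implements the standard formula and checks it on approximate published exponents for the classical Lorenz-63 system.

```python
# Sketch: Kaplan-Yorke estimate of attractor dimension from a Lyapunov spectrum.
import numpy as np

def kaplan_yorke_dimension(lyapunov_exponents):
    """D_KY = j + (sum of the first j exponents) / |lambda_{j+1}|,
    where j is the largest index with a non-negative partial sum."""
    lam = np.sort(np.asarray(lyapunov_exponents, dtype=float))[::-1]
    partial = np.cumsum(lam)
    j = np.max(np.where(partial >= 0)[0]) + 1 if np.any(partial >= 0) else 0
    if j == 0:
        return 0.0
    if j == len(lam):
        return float(len(lam))          # all partial sums non-negative
    return j + partial[j - 1] / abs(lam[j])

# approximate published exponents for the classical Lorenz-63 system, as a check
print(kaplan_yorke_dimension([0.906, 0.0, -14.572]))   # ~2.06
```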
Abstract:
Dissolved organic carbon (DOC) concentrations in surface waters have increased across much of Europe and North America, with implications for the terrestrial carbon balance, aquatic ecosystem functioning, water treatment costs and human health. Over the past decade, many hypotheses have been put forward to explain this phenomenon, from changing climate and land management to eutrophication and acid deposition. Resolution of this debate has been hindered by a reliance on correlative analyses of time-series data, and a lack of robust experimental testing of proposed mechanisms. In a four-year, four-site replicated field experiment involving both acidifying and de-acidifying treatments, we tested the hypothesis that DOC leaching was previously suppressed by high levels of soil acidity in peat and organo-mineral soils, and therefore that the observed DOC increases are a consequence of decreasing soil acidity. We observed a consistent, positive relationship between DOC and acidity change at all sites. Responses were described by similar hyperbolic relationships between standardised changes in DOC and hydrogen ion concentrations at all sites, suggesting potentially general applicability. These relationships explained a substantial proportion of observed changes in peak DOC concentrations in nearby monitoring streams, and application to a UK-wide upland soil pH dataset suggests that recovery from acidification alone could have led to soil solution DOC increases of 46-126%, depending on habitat type, since 1978. Our findings raise the possibility that changing soil acidity may have wider impacts on ecosystem carbon balances. Decreasing sulphur deposition may be accelerating terrestrial carbon loss, and returning surface waters to a natural, high-DOC condition.
Abstract:
We prove essential self-adjointness of a class of Dirichlet operators in $\mathbb{R}^n$ using the hyperbolic equation approach. This method allows one to prove essential self-adjointness under minimal conditions on the logarithmic derivative of the density and a condition of Muckenhoupt type on the density itself.
Abstract:
We study the boundedness and compactness of Toeplitz operators $T_a$ on Bergman spaces, $1 < p < \infty$. The novelty is that we allow distributional symbols. It turns out that membership of the symbol in a weighted Sobolev space of negative order is sufficient for the boundedness of $T_a$. We show a natural relation between the hyperbolic geometry of the disc and the order of the distribution. A corresponding sufficient condition for compactness is also derived.
Abstract:
We study the boundedness of Toeplitz operators $T_a$ with locally integrable symbols on Bergman spaces $A^p(\mathbb{D})$, $1 < p < \infty$. Our main result gives a sufficient condition for the boundedness of $T_a$ in terms of some ``averages'' (related to hyperbolic rectangles) of its symbol. If the averages satisfy an ${o}$-type condition on the boundary of $\mathbb{D}$, we show that the corresponding Toeplitz operator is compact on $A^p$. Both conditions coincide with the known necessary conditions in the case of nonnegative symbols and $p=2$. We also show that Toeplitz operators with symbols of vanishing mean oscillation are Fredholm on $A^p$ provided that the averages are bounded away from zero, and derive an index formula for these operators.
Abstract:
A model for estimating the turbulent kinetic energy dissipation rate in the oceanic boundary layer, based on insights from rapid-distortion theory, is presented and tested. This model provides a possible explanation for the very high dissipation levels found by numerous authors near the surface. It is conceived that turbulence, injected into the water by breaking waves, is subsequently amplified due to its distortion by the mean shear of the wind-induced current and straining by the Stokes drift of surface waves. The partition of the turbulent shear stress into a shear-induced part and a wave-induced part is taken into account. In this picture, dissipation enhancement results from the same mechanism responsible for Langmuir circulations. Apart from a dimensionless depth and an eddy turn-over time, the dimensionless dissipation rate depends on the wave slope and wave age, which may be encapsulated in the turbulent Langmuir number $La_t$. For large $La_t$, or for any $La_t$ at large depth, the dissipation rate tends to the usual surface-layer scaling, whereas when $La_t$ is small, it is strongly enhanced near the surface, growing asymptotically as $\varepsilon \propto La_t^{-2}$ as $La_t \to 0$. Results from this model are compared with observations from the WAVES and SWADE data sets, assuming that this is the dominant dissipation mechanism acting in the ocean surface layer; statistical measures of the corresponding fit indicate a substantial improvement over previous theoretical models. Comparisons are also carried out against more recent measurements, showing good order-of-magnitude agreement, even when shallow-water effects are important.