847 results for asymptotically hyperbolic


Relevance: 10.00%

Publisher:

Abstract:

The family of location and scale mixtures of Gaussians can generate a number of flexible distributional forms. The family nests as particular cases several important asymmetric distributions, such as the Generalized Hyperbolic distribution, which in turn nests many other well-known distributions such as the Normal Inverse Gaussian. In a multivariate setting, the standard location and scale mixture concept is extended to a so-called multiple scaled framework, which has the advantage of allowing different tail and skewness behaviours in each dimension, with arbitrary correlation between dimensions. Parameter estimation is carried out via an EM algorithm and extended to cover mixtures of such multiple scaled distributions for application to clustering. Assessments on simulated and real data confirm the gain in degrees of freedom and flexibility in modelling data of varying tail behaviour and directional shape.
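As a minimal sketch of the location and scale mixture construction (not the paper's multiple scaled estimator), the snippet below builds a normal variance-mean mixture X = mu + beta*W + sqrt(W)*Z with inverse-Gaussian mixing weights W, which is the construction yielding the Normal Inverse Gaussian as a special case of the Generalized Hyperbolic family; the parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nig_type_sample(mu, beta, n, rng):
    """Draw n skewed, heavy-tailed variates via Gaussian scale mixing:
    X = mu + beta * W + sqrt(W) * Z, with W inverse-Gaussian."""
    w = rng.wald(1.0, 1.0, size=n)      # inverse-Gaussian mixing weights
    z = rng.standard_normal(n)
    return mu + beta * w + np.sqrt(w) * z

x = nig_type_sample(mu=0.0, beta=1.0, n=200_000, rng=rng)

# A positive beta tilts the distribution to the right: the sample
# skewness is clearly positive, unlike a plain Gaussian's.
skew = np.mean((x - x.mean())**3) / x.std()**3
print(round(skew, 2))
```

Choosing a different mixing law for W (or, in the multivariate extension, a different weight per dimension) is what produces the varying tail and skewness behaviours the abstract describes.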

Relevance: 10.00%

Publisher:

Abstract:

The motion due to an oscillatory point source in a rotating stratified fluid has been studied by Sarma & Naidu (1972) by using threefold Fourier transforms. The solution obtained by them in the hyperbolic case is wrong since they did not make use of any radiation condition, which is always necessary to get the correct solution. Whenever the motion is created by a source, the condition of radiation is that the sources must remain sources, not sinks of energy and no energy may be radiated from infinity into the prescribed singularities of the field. The purpose of the present note is to explain how Lighthill's (1960) radiation condition can be applied in the hyperbolic case to pick the correct solution. Further, the solution thus obtained is reiterated by an alternative procedure using Sommerfeld's (1964) radiation condition.

Relevance: 10.00%

Publisher:

Abstract:

The object of this dissertation is to study globally defined bounded p-harmonic functions on Cartan-Hadamard manifolds and Gromov hyperbolic metric measure spaces. Such functions are constructed by solving the so-called Dirichlet problem at infinity: find a p-harmonic function on the space that extends continuously to the boundary at infinity and attains given boundary values there. The dissertation consists of an overview and three published research articles. In the first article the Dirichlet problem at infinity is considered for more general A-harmonic functions on Cartan-Hadamard manifolds. In the special case of two dimensions the Dirichlet problem at infinity is solved by assuming only that the sectional curvature has a certain upper bound, and a sharpness result is proved for this upper bound. In the second article the Dirichlet problem at infinity is solved for p-harmonic functions on Cartan-Hadamard manifolds under the assumption that the sectional curvature is bounded outside a compact set, from above and from below, by functions that depend on the distance to a fixed point. The curvature bounds allow examples of quadratic decay and of exponential growth. In the final article a generalization of the Dirichlet problem at infinity for p-harmonic functions is considered on Gromov hyperbolic metric measure spaces. Existence and uniqueness results are proved, and Cartan-Hadamard manifolds are considered as an application.

Relevance: 10.00%

Publisher:

Abstract:

This thesis addresses the modeling of financial time series, especially stock market returns and daily price ranges. Modeling data of this kind can be approached with so-called multiplicative error models (MEM). These models nest several well-known time series models such as the GARCH, ACD and CARR models. They are able to capture many well-established features of financial time series, including volatility clustering and leptokurtosis. In contrast to these phenomena, different kinds of asymmetries have received relatively little attention in the existing literature. In this thesis, asymmetries arise from various sources. They are observed in both conditional and unconditional distributions, for variables with non-negative values and for variables that take values on the real line. In the multivariate context, asymmetries can be observed in the marginal distributions as well as in the relationships between the variables modeled. New methods for all these cases are proposed. Chapter 2 considers GARCH models and the modeling of returns of two stock market indices. The chapter introduces the so-called generalized hyperbolic (GH) GARCH model to account for asymmetries in both the conditional and unconditional distributions. In particular, two special cases of the GARCH-GH model which describe the data most accurately are proposed. They are found to improve the fit of the model when compared to symmetric GARCH models. The advantages of accounting for asymmetries are also observed through Value-at-Risk applications. Both theoretical and empirical contributions are provided in Chapter 3 of the thesis. In this chapter the so-called mixture conditional autoregressive range (MCARR) model is introduced, examined and applied to daily price ranges of the Hang Seng Index. The conditions for the strict and weak stationarity of the model, as well as an expression for the autocorrelation function, are obtained by writing the MCARR model as a first-order autoregressive process with random coefficients.
The chapter also introduces the inverse gamma (IG) distribution to CARR models. The advantages of the CARR-IG and MCARR-IG specifications over conventional CARR models are demonstrated in the empirical application both in- and out-of-sample. Chapter 4 discusses the simultaneous modeling of absolute returns and daily price ranges. In this part of the thesis a vector multiplicative error model (VMEM) with an asymmetric Gumbel copula is found to provide substantial benefits over the existing VMEM models based on elliptical copulas. The proposed specification is able to capture the highly asymmetric dependence of the modeled variables, thereby improving the performance of the model considerably. The economic significance of the results is established by examining the information content of the derived volatility forecasts.
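A minimal GARCH(1,1) simulation (with illustrative parameters of our own choosing, not the thesis's fitted GARCH-GH models) shows how persistent conditional variance alone produces the volatility clustering and leptokurtosis mentioned above, even with Gaussian shocks:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_garch(n, omega=0.05, alpha=0.10, beta=0.85, rng=rng):
    """Return n GARCH(1,1) returns: r_t = sigma_t * z_t,
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    r = np.empty(n)
    var = omega / (1.0 - alpha - beta)   # start at the stationary variance
    for t in range(n):
        r[t] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * r[t]**2 + beta * var
    return r

r = simulate_garch(100_000)
kurt = np.mean((r - r.mean())**4) / r.var()**2
print(round(kurt, 1))   # noticeably above the Gaussian value of 3
```

Asymmetric innovation distributions, such as the GH family the chapter studies, would be introduced by replacing the `standard_normal` draw.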

Relevance: 10.00%

Publisher:

Abstract:

Objectives: Decision support tools (DSTs) for invasive species management have had limited success in producing convincing results and meeting users' expectations. The problems could be linked to the functional form of the model that represents the dynamic relationship between the invasive species and crop yield loss in the DSTs. The objectives of this study were: a) to compile and review the models tested in field experiments and applied in DSTs; and b) to carry out an empirical evaluation of some popular models and alternatives. Design and methods: This study surveyed the literature and documented strengths and weaknesses of the functional forms of yield loss models. Some widely used models (the linear, relative-yield and hyperbolic models) and two potentially useful models (the double-scaled and density-scaled models) were evaluated over a wide range of weed densities, maximum potential yield losses and maximum yield losses per weed. Results: Popular functional forms include the hyperbolic, sigmoid, linear, quadratic and inverse models. Many basic models were modified to account for the effect of important factors (weather, tillage and growth stage of crop at weed emergence) influencing weed-crop interaction and to improve prediction accuracy. This limited their applicability in DSTs, as they became less general and often applicable to a much narrower range of conditions than would be encountered in the use of DSTs. These factors' effects could be better accounted for by other techniques. Among the models empirically assessed, the linear model is a very simple model which appears to work well at sparse weed densities, but it produces unrealistic behaviour at high densities. The relative-yield model exhibits the expected behaviour at high densities and high levels of maximum yield loss per weed but probably underestimates yield loss at low to intermediate densities. The hyperbolic model demonstrated reasonable behaviour at lower weed densities, but produced biologically unreasonable behaviour at low rates of loss per weed and high yield loss at the maximum weed density. The density-scaled model is not sensitive to the yield loss at maximum weed density in terms of the number of weeds that produce a given proportion of that maximum yield loss. The double-scaled model appeared to produce more robust estimates of the impact of weeds under a wide range of conditions. Conclusions: Previously tested functional forms exhibit problems for use in DSTs for crop yield loss modelling. Of the models evaluated, the double-scaled model exhibits desirable qualitative behaviour under most circumstances.
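Two of the contrasted functional forms can be sketched directly. The snippet below uses the usual Cousens-style parameterisation (the symbols i, a and d are our labels): i is the percent yield loss per weed as density tends to zero, a is the maximum percent yield loss, and d is weed density. It shows the behaviours criticised above: the linear model is unbounded at high density, while the hyperbolic model saturates at its asymptote.

```python
def linear_loss(d, i):
    """Linear yield-loss model: unbounded, unrealistic at high density."""
    return i * d

def hyperbolic_loss(d, i, a):
    """Hyperbolic yield-loss model: saturates at the asymptote a."""
    return i * d / (1.0 + i * d / a)

i, a = 2.0, 60.0                       # illustrative parameter values
low = hyperbolic_loss(1, i, a)         # ~= i at sparse density
high = hyperbolic_loss(10_000, i, a)   # approaches a, never exceeds it
print(round(low, 2), round(high, 2), linear_loss(10_000, i))
```

The double-scaled model favoured by the study rescales both density and loss, but the source does not give its closed form, so it is not reproduced here.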

Relevance: 10.00%

Publisher:

Abstract:

5,10-Methylenetetrahydrofolate reductase (EC 1.1.1.68) was purified from the cytosolic fraction of sheep liver by (NH4)2SO4 fractionation, acid precipitation, DEAE-Sephacel chromatography and Blue Sepharose affinity chromatography. The homogeneity of the enzyme was established by sodium dodecyl sulphate-polyacrylamide gel electrophoresis, ultracentrifugation and the Ouchterlony immunodiffusion test. The enzyme was a dimer of molecular weight 166,000 ± 5,000 with a subunit molecular weight of 87,000 ± 5,000. The enzyme showed a hyperbolic saturation pattern with 5-methyltetrahydrofolate. K0.5 values for 5-methyltetrahydrofolate, menadione and NADPH were determined to be 132 μM, 2.45 μM and 16 μM, respectively. The parallel set of lines in the Lineweaver-Burk plot when either NADPH or menadione was varied at different fixed concentrations of the other substrate; non-competitive inhibition when NADPH was varied at different fixed concentrations of NADP; competitive inhibition when menadione was varied at different fixed concentrations of NADP; and the absence of inhibition by NADP at saturating concentrations of menadione clearly established that the kinetic mechanism of the reaction catalyzed by this enzyme is ping-pong.
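The "hyperbolic saturation pattern" reported above is the classic Michaelis-Menten form v = Vmax·S/(K0.5 + S). A tiny sketch, taking K0.5 = 132 μM from the abstract and an arbitrary, assumed Vmax, checks the defining property that substrate at K0.5 gives half-maximal velocity:

```python
def velocity(s, vmax, k05):
    """Hyperbolic (Michaelis-Menten-type) saturation: v = vmax*s/(k05+s)."""
    return vmax * s / (k05 + s)

K05 = 132.0    # uM, reported for 5-methyltetrahydrofolate
VMAX = 1.0     # arbitrary units (illustrative assumption)

half = velocity(K05, VMAX, K05)        # s = K0.5  ->  v = Vmax / 2
nearly_full = velocity(1e9, VMAX, K05) # large s   ->  v -> Vmax
print(half, round(nearly_full, 6))
```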

Relevance: 10.00%

Publisher:

Abstract:

The nonlinear mode coupling between two co-directional quasi-harmonic Rayleigh surface waves on an isotropic solid is analysed using the method of multiple scales. This procedure yields a system of six semi-linear hyperbolic partial differential equations with the same principal part governing the slow variations in the (complex) amplitudes of the two fundamental, the two second harmonic and the two combination frequency waves at the second stage of the perturbation expansion. A numerical solution of these equations for excitation by monochromatic signals at two arbitrary frequencies, indicates that there is a continuous transfer of energy back and forth among the fundamental, second harmonic and combination frequency waves due to mode coupling. The mode coupling tends to be more pronounced as the frequencies of the interacting waves approach each other.

Relevance: 10.00%

Publisher:

Abstract:

A systematic derivation of the approximate coupled amplitude equations governing the propagation of a quasi-monochromatic Rayleigh surface wave on an isotropic solid is presented, starting from the non-linear governing differential equations and the non-linear free-surface boundary conditions, using the method of multiple scales. An explicit solution of these equations for a signalling problem is obtained in terms of hyperbolic functions. In the case of monochromatic excitation, it is shown that the second harmonic amplitude grows initially at the expense of the fundamental and that the amplitudes of the fundamental and second harmonic remain bounded for all time.
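The bounded energy exchange described above is, qualitatively, the classic sech/tanh behaviour of second-harmonic generation. With an illustrative normalisation (not the paper's coefficients), fundamental a1(t) = sech(t) and second harmonic a2(t) = tanh(t): a2 grows at the expense of a1 while a1² + a2² stays constant, so both remain bounded.

```python
import math

def a1(t):
    """Fundamental amplitude: sech(t), decays from 1 toward 0."""
    return 1.0 / math.cosh(t)

def a2(t):
    """Second harmonic amplitude: tanh(t), grows from 0 toward 1."""
    return math.tanh(t)

for t in (0.0, 1.0, 3.0):
    energy = a1(t)**2 + a2(t)**2       # sech^2 + tanh^2 = 1 identically
    print(t, round(energy, 12))
```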

Relevance: 10.00%

Publisher:

Abstract:

The dissertation consists of three essays on the misplanning of wealth and health accumulation. Conventional economics assumes that an individual's intertemporal preferences are exponential (exponential preferences, EP). Recent findings in behavioural economics have shown that people actually discount the near future more heavily than the distant future, which implies hyperbolic intertemporal preferences (HP). Essays I and II concentrate especially on the effects of delayed completion of tasks, a feature of behaviour that HP enables. Essay III uses current Finnish data to analyse the evolution of quality-adjusted life years (QALYs) and inconsistencies in measuring them. Essay I studies the effects of the existence of a lucrative retirement savings program (SP) on the retirement savings of different individual types having HP. If the individual does not know that he will have HP also in the future, i.e. he is naïve, then under certain conditions he delays enrolment in the SP until he abandons it. A very interesting finding is that the naïve individual then retires poorer in the presence than in the absence of the SP. Under the same conditions, the individual who knows that he will have HP also in the future, i.e. the sophisticate, gains from the existence of the SP and retires with greater retirement savings in the presence than in the absence of the SP. Finally, the capability to learn from past behaviour and about one's intertemporal preferences improves the possibility of gaining from the existence of the SP, but an adequate time to learn must then be guaranteed. Essay II studies delayed doctor's visits, their effects on the costs of a public health care system, and the government's attempts to control patient behaviour and fund the system. The controlling devices are a consultation fee and a deductible for it. The deductible is effective only for a patient whose diagnosis reveals a disease that would not be cured without the doctor's visit. The naïve patients delay their visits the longest, while EP patients are the quickest visitors. To control the naïve patients, the government should implement a low fee and a high deductible, while for the sophisticates the opposite is true. Finally, if all the types exist in an economy, then using the incorrect conventional assumption that all individuals have EP leads to a worse situation and requires higher tax rates than assuming, incorrectly but unconventionally, that only the naïve type exists. Essay III studies the development of QALYs in Finland in 1995/96-2004. The essay concentrates on developing a consistent measure, i.e. one independent of discounting, of the age- and gender-specific QALY changes and their incidence. For the given time interval, the use of a relative change out of an attainable change appears to be almost unaffected by discounting and reveals that the greatest gains accrue to the older age groups.
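The delay behaviour above is driven by the preference reversal that hyperbolic discounting allows. A minimal quasi-hyperbolic (beta-delta) sketch, with parameter values that are purely illustrative rather than taken from the essays, shows the reversal: a present-biased individual picks the smaller-sooner reward when it is immediate, yet ranks the same pair the other way when both options lie in the distant future.

```python
BETA, DELTA = 0.7, 0.99   # present bias and per-period discount factor

def value(reward, periods_ahead, beta=BETA, delta=DELTA):
    """Quasi-hyperbolic discounted value of a reward periods_ahead from now."""
    if periods_ahead == 0:
        return reward
    return beta * delta**periods_ahead * reward

# Choice: 10 now vs 11 one period later -> present bias picks 10.
now_small = value(10, 0)
later_big = value(11, 1)

# The same pair viewed 10 periods in advance -> the ranking flips.
distant_small = value(10, 10)
distant_big = value(11, 11)
print(now_small > later_big, distant_small < distant_big)   # True True
```

Under exponential preferences (beta = 1) the ranking is the same at every horizon, which is why the EP patients in Essay II never procrastinate.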

Relevance: 10.00%

Publisher:

Abstract:

Motivated by a problem from fluid mechanics, we consider a generalization of the standard curve shortening flow problem for a closed embedded plane curve, in which the area enclosed by the curve is forced to decrease at a prescribed rate. Using formal asymptotics and numerical techniques, we derive possible extinction shapes as the curve contracts to a point, dependent on the rate of decreasing area; we find a wider class of extinction shapes than for standard curve shortening, for which initially simple closed curves are always asymptotically circular. We also provide numerical evidence that self-intersection is possible for non-convex initial conditions, distinguishing between pinch-off and coalescence of the curve interior.
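For context on the rate being generalized: under standard curve shortening flow a circle of initial radius R0 evolves as R(t) = sqrt(R0² − 2t), so the enclosed area decreases at the constant rate 2π. The check below (with an illustrative R0) verifies this, which is the fixed rate the prescribed-rate formulation above replaces with an arbitrary one.

```python
import math

def circle_area(t, r0):
    """Area enclosed by a circle evolving under curve shortening flow."""
    return math.pi * (r0**2 - 2.0 * t)   # R(t)^2 = R0^2 - 2t

r0 = 3.0                                  # illustrative initial radius
a0 = math.pi * r0**2
for t in (0.5, 1.0, 2.0):
    rate = (a0 - circle_area(t, r0)) / t  # area lost per unit time
    print(t, round(rate, 6))              # always 2*pi ~= 6.283185
```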

Relevance: 10.00%

Publisher:

Abstract:

The coexistence curve of the binary liquid mixture n-heptane + acetic anhydride has been determined by observing the transition temperatures of 76 samples over the range of compositions. The functional form of the difference in order parameter, in terms of either the mole fraction or the volume fraction, is consistent with theoretical predictions invoking the concept of universality at critical points. The average value of the order parameter, the diameter of the coexistence curve, shows an anomaly which can be described either by an exponent 1 − α, as predicted by various theories (where α is the critical exponent of the specific heat), or by an exponent 2β (where β is the coexistence-curve exponent), as expected when the order parameter used is not the one whose diameter diverges asymptotically as 1 − α.

Relevance: 10.00%

Publisher:

Abstract:

In this article, we propose an algorithm to denoise a time series y(i) = x(i) + e(i), where {x(i)} is a time series obtained from a time-T map of a uniformly hyperbolic or Anosov flow, and {e(i)} is a uniformly bounded sequence of independent and identically distributed (i.i.d.) random variables. Making use of observations up to time n, we create an estimate of x(i) for i < n. We show that, under typical limiting behaviours of the orbit and the recurrence properties of x(i), the estimation error converges to zero as n tends to infinity with probability 1.

Relevance: 10.00%

Publisher:

Abstract:

A construction is presented for a family of sequences over the 8-ary AM-PSK constellation whose maximum nontrivial correlation magnitude is bounded as θ_max ≲ √N. The family is asymptotically optimal with respect to the Welch bound on the maximum correlation magnitude. The 8-ary AM-PSK constellation is a subset of the 16-QAM constellation. We also construct two families of sequences over 16-QAM with θ_max ≲ √2·√N. These families are constructed by interleaving sets of sequences. A construction for a family of low-correlation sequences over a QAM alphabet of size 2^(2m) is presented with maximum nontrivial normalized correlation parameter bounded above by approximately a√N, where N is the period of the sequences in the family and a ranges from 1.61 in the case of 16-QAM modulation to 2.76 for large m. When used in a CDMA setting, the family will permit each user to modulate the code sequence with 2m bits of data. Interestingly, the construction permits users on the reverse link of the CDMA channel to communicate at varying data rates by switching between sequence families associated with different values of the parameter m. Other features of the sequence families are improved Euclidean distance between different data symbols in comparison with PSK signalling and compatibility of the QAM sequence families with sequences belonging to the large quaternary sequence families {S(p)}.
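The correlation figure of merit used above can be sketched on the simplest possible example: a binary m-sequence of period N = 7 (generated by the recurrence s[n] = s[n-1] XOR s[n-3]), whose off-peak periodic autocorrelation is exactly −1, well below √N. The QAM constructions in the paper pursue the same low-correlation goal over larger alphabets; this binary example is ours, not from the paper.

```python
def m_sequence():
    """Length-7 binary m-sequence from the LFSR s[n] = s[n-1] ^ s[n-3]."""
    s = [1, 1, 1]
    while len(s) < 7:
        s.append(s[-1] ^ s[-3])
    return s

def periodic_autocorrelation(bits, shift):
    """Correlate the (+1/-1)-mapped sequence against its cyclic shift."""
    n = len(bits)
    return sum((-1) ** bits[i] * (-1) ** bits[(i + shift) % n]
               for i in range(n))

seq = m_sequence()
peaks = [periodic_autocorrelation(seq, t) for t in range(7)]
print(peaks)   # [7, -1, -1, -1, -1, -1, -1]
```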

Relevance: 10.00%

Publisher:

Abstract:

The Hadwiger number η(G) of a graph G is the largest integer n for which the complete graph K_n on n vertices is a minor of G. Hadwiger conjectured that for every graph G, η(G) ≥ χ(G), where χ(G) is the chromatic number of G. In this paper, we study the Hadwiger number of the Cartesian product G □ H of graphs. As the main result of this paper, we prove that η(G_1 □ G_2) ≥ h√l (1 − o(1)) for any two graphs G_1 and G_2 with η(G_1) = h and η(G_2) = l. We show that the above lower bound is asymptotically best possible when h ≥ l. This asymptotically settles a question of Z. Miller (1978). As consequences of our main result, we show the following: 1. Let G be a connected graph and let G = G_1 □ G_2 □ ... □ G_k be its (unique) prime factorization. Then G satisfies Hadwiger's conjecture if k ≥ 2 log log χ(G) + c', where c' is a constant. This improves the 2 log χ(G) + 3 bound in [2]. 2. Let G_1 and G_2 be two graphs such that χ(G_1) ≥ χ(G_2) ≥ c log^{1.5}(χ(G_1)), where c is a constant. Then G_1 □ G_2 satisfies Hadwiger's conjecture. 3. Hadwiger's conjecture is true for G^d (the Cartesian product of G taken d times) for every graph G and every d ≥ 2. This settles a question by Chandran and Sivadasan [2]. (They had shown that Hadwiger's conjecture is true for G^d if d ≥ 3.)
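The Cartesian product used throughout the result above can be made concrete: (g, h) is adjacent to (g', h') in G □ H iff g = g' and hh' is an edge of H, or h = h' and gg' is an edge of G. The sketch below (our own illustration, with graphs given as vertex-list/edge-set pairs) builds K3 □ K2 and checks the standard edge count |V(G)||E(H)| + |V(H)||E(G)|.

```python
from itertools import product

def cartesian_product(g_vertices, g_edges, h_vertices, h_edges):
    """Return (vertices, edges) of the Cartesian product G box H."""
    vertices = list(product(g_vertices, h_vertices))
    edges = set()
    for (g, h), (g2, h2) in product(vertices, vertices):
        same_g_edge_h = g == g2 and frozenset((h, h2)) in h_edges
        same_h_edge_g = h == h2 and frozenset((g, g2)) in g_edges
        if same_g_edge_h or same_h_edge_g:
            edges.add(frozenset(((g, h), (g2, h2))))
    return vertices, edges

# K3 box K2 is the triangular prism: 6 vertices and 3*1 + 2*3 = 9 edges.
k3_v = [0, 1, 2]
k3_e = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2)]}
k2_v = ["a", "b"]
k2_e = {frozenset(("a", "b"))}
v, e = cartesian_product(k3_v, k3_e, k2_v, k2_e)
print(len(v), len(e))   # 6 9
```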

Relevance: 10.00%

Publisher:

Abstract:

We consider a scenario in which a wireless sensor network is formed by randomly deploying n sensors to measure some spatial function over a field, with the objective of computing a function of the measurements and communicating it to an operator station. We restrict ourselves to the class of type-threshold functions (as defined in the work of Giridhar and Kumar, 2005), of which max, min, and indicator functions are important examples; our discussion is couched in terms of the max function. We view the problem as one of message-passing distributed computation over a geometric random graph. The network is assumed to be synchronous, and the sensors synchronously measure values and then collaborate to compute and deliver the function computed with these values to the operator station. Computation algorithms differ in (1) the communication topology assumed and (2) the messages that the nodes need to exchange in order to carry out the computation. The focus of our paper is to establish (in probability) scaling laws for the time and energy complexity of the distributed function computation over random wireless networks, under the assumption of centralized contention-free scheduling of packet transmissions. First, without any constraint on the computation algorithm, we establish scaling laws for the computation time and energy expenditure for one-time maximum computation. We show that for an optimal algorithm, the computation time and energy expenditure scale, respectively, as Θ(√(n/log n)) and Θ(n) asymptotically as the number of sensors n → ∞. Second, we analyze the performance of three specific computation algorithms that may be used in specific practical situations, namely, the tree algorithm, multihop transmission, and the Ripple algorithm (a type of gossip algorithm), and obtain scaling laws for the computation time and energy expenditure as n → ∞.
In particular, we show that the computation time for these algorithms scales as Θ(√(n/log n)), Θ(n), and Θ(√(n log n)), respectively, whereas the energy expended scales as Θ(n), Θ(√(n/log n)), and Θ(√(n log n)), respectively. Finally, simulation results are provided to show that our analysis indeed captures the correct scaling. The simulations also yield estimates of the constant multipliers in the scaling laws. Our analyses throughout assume a centralized optimal scheduler, and hence, our results can be viewed as providing bounds for the performance with practical distributed schedulers.
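The tree algorithm's Θ(n) energy scaling has a simple mechanism: each node reports one partial maximum up a spanning tree, so a one-shot max computation costs exactly n − 1 messages. A minimal sketch (our own, with a random recursive tree standing in for a spanning tree of the geometric random graph):

```python
import random

def tree_max(values, parent):
    """Aggregate the max up a rooted tree; return (max value, #messages)."""
    n = len(values)
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    messages = 0

    def aggregate(v):
        nonlocal messages
        best = values[v]
        for c in children[v]:
            best = max(best, aggregate(c))
            messages += 1      # child c reports one partial max to v
        return best

    return aggregate(0), messages

rng = random.Random(7)
n = 500
values = [rng.random() for _ in range(n)]
# random recursive tree rooted at node 0: each node attaches to an earlier one
parent = [None] + [rng.randrange(v) for v in range(1, n)]
best, msgs = tree_max(values, parent)
print(best == max(values), msgs == n - 1)   # True True
```

The time scaling, by contrast, is governed by the tree depth and the contention-free schedule, which this sketch does not model.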