995 results for Rate of Convergence


Relevance: 100.00%

Abstract:

This report reviews the literature on the rate of convergence of maximum likelihood estimators and establishes a central limit theorem that yields an O(1/sqrt(n)) rate of convergence of the maximum likelihood estimator under somewhat relaxed smoothness conditions. These conditions include the existence of a one-sided derivative of the pdf with respect to θ, compared with the up to three derivatives classically required. A verification through simulation is included at the end of the report.
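
As a rough illustration of the kind of simulation check mentioned above, the sketch below estimates the rate parameter of an exponential distribution by maximum likelihood and verifies that the root-mean-square error shrinks roughly like 1/sqrt(n). The exponential model and all numerical values are illustrative assumptions, not taken from the report.

```python
# Minimal sketch: empirically checking an O(1/sqrt(n)) rate for an MLE.
# The exponential model with rate theta = 2.0 is an illustrative assumption;
# its MLE is simply 1 / (sample mean).
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0
sample_sizes = [100, 400, 1600, 6400]
n_reps = 2000

for n in sample_sizes:
    # Draw n_reps independent samples of size n and compute the MLE for each.
    samples = rng.exponential(scale=1.0 / theta_true, size=(n_reps, n))
    mle = 1.0 / samples.mean(axis=1)
    rmse = np.sqrt(np.mean((mle - theta_true) ** 2))
    # If the O(1/sqrt(n)) rate holds, sqrt(n) * RMSE should be roughly constant.
    print(f"n = {n:5d}   RMSE = {rmse:.4f}   sqrt(n) * RMSE = {np.sqrt(n) * rmse:.3f}")
```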

Relevance: 100.00%

Abstract:

Solution of the generalized eigenproblem K phi = lambda M phi by the classical inverse iteration method exhibits slow convergence for some eigenproblems. In this paper, a modified inverse iteration algorithm is presented to improve the convergence rate. At every iteration, an optimal linear combination of the latest and the preceding iteration vectors is used as the input vector for the next iteration. The effectiveness of the proposed algorithm is demonstrated for three typical eigenproblems, i.e. eigenproblems with distinct, close, and repeated eigenvalues, for which it yields savings in computational time of 29%, 96%, and 23%, respectively. The algorithm is simple and easy to implement, which makes it all the more attractive.
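
The following is a minimal sketch of this kind of accelerated inverse iteration, assuming that the "optimal" combination of the two most recent iterates is chosen by a Rayleigh-Ritz step on the plane they span; that particular choice, and the use of scipy, are assumptions for illustration rather than the paper's exact scheme.

```python
# Sketch of inverse iteration for K x = lambda M x, accelerated by combining
# the two most recent iterates (K, M symmetric, M positive definite assumed).
import numpy as np
from scipy.linalg import lu_factor, lu_solve, eigh

def modified_inverse_iteration(K, M, tol=1e-10, max_iter=200):
    """Approximate the eigenpair of K x = lambda M x with eigenvalue closest to zero."""
    n = K.shape[0]
    lu = lu_factor(K)                      # factor K once and reuse it every iteration
    x = np.random.default_rng(1).standard_normal(n)
    x /= np.sqrt(x @ M @ x)                # M-normalize the starting vector
    x_prev = None
    lam = x @ K @ x
    for _ in range(max_iter):
        y = lu_solve(lu, M @ x)            # classical inverse-iteration step
        if x_prev is not None:
            # Combine the new iterate with the previous one: Rayleigh-Ritz on
            # the 2-D subspace spanned by y and x_prev (assumed "optimal" rule).
            V = np.column_stack([y, x_prev])
            w, c = eigh(V.T @ K @ V, V.T @ M @ V)   # 2x2 generalized eigenproblem
            y = V @ c[:, 0]                # combination with the smallest Ritz value
        x_prev = x
        x = y / np.sqrt(y @ M @ y)
        lam_new = x @ K @ x
        if abs(lam_new - lam) <= tol * abs(lam_new):
            break
        lam = lam_new
    return lam_new, x
```

Factoring K once and reusing the factorization keeps the per-iteration cost of the combination step negligible, which is what makes this type of acceleration attractive in practice.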

Relevance: 100.00%

Abstract:

We introduce a characterization of contraction for bounded convex sets. For discrete-time multi-agent systems we provide an explicit upper bound on the rate of convergence to a consensus under the assumptions of contractiveness and (weak) connectedness across an interval. Convergence is shown to be exponential when either the system or the function characterizing the contraction is linear. Copyright © 2007 IFAC.
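
As a small illustration of the linear case mentioned above, the sketch below runs a discrete-time averaging (consensus) iteration on a four-agent path graph and prints the disagreement at each step, which decays exponentially; the graph, weights, and initial state are made-up assumptions.

```python
# Minimal sketch: discrete-time linear consensus x(k+1) = W x(k) on a small
# connected graph, illustrating exponential decay of the disagreement
# max_i x_i - min_i x_i. The graph, weights, and initial state are assumptions.
import numpy as np

# Row-stochastic weight matrix for a 4-agent path graph (each agent averages
# itself with its neighbours); any connected choice would do for illustration.
W = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])

x = np.array([1.0, 0.0, -2.0, 3.0])
for k in range(20):
    spread = x.max() - x.min()
    print(f"k = {k:2d}   disagreement = {spread:.3e}")
    x = W @ x
```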

Relevance: 100.00%

Abstract:

In this paper we consider the problem of approximating a function belonging to some function space Φ by a linear combination of n translates of a given function G. Using a lemma by Jones (1990) and Barron (1991), we show that it is possible to define function spaces and functions G for which the rate of convergence to zero of the error is O(1/n) in any number of dimensions. The apparent avoidance of the "curse of dimensionality" is due to the fact that these function spaces become more and more constrained as the dimension increases. Examples include spaces of the Sobolev type, in which the number of weak derivatives is required to be larger than the number of dimensions. We give results both for approximation in the L2 norm and in the L∞ norm. The interesting feature of these results is that, thanks to the constructive nature of Jones' and Barron's lemma, an iterative procedure is defined that can achieve this rate.
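
A rough numerical sketch of the incremental construction behind Jones' and Barron's lemma is given below: at step n the current approximation is mixed with a single new translate of G, chosen greedily to reduce the discrete L2 residual. The Gaussian G, the target function, the coefficient bound, and the candidate grid are illustrative assumptions, not the spaces studied in the paper.

```python
# Sketch of the incremental (relaxed greedy) construction behind Jones' and
# Barron's lemma: f_n = (1 - 1/n) f_{n-1} + (1/n) * c * G(. - t), with (c, t)
# chosen to minimize the discrete L2 residual on a grid.
import numpy as np

x = np.linspace(-4.0, 4.0, 401)
target = np.sin(x) * np.exp(-0.2 * x**2)        # assumed target to approximate
G = lambda x, t: np.exp(-0.5 * (x - t) ** 2)    # translates of a fixed Gaussian
candidates = np.linspace(-4.0, 4.0, 81)         # candidate translation points
C = 4.0                                         # assumed bound on the coefficients

f_n = np.zeros_like(x)
for n in range(1, 101):
    best_err, best_term = np.inf, None
    for t in candidates:
        for c in (-C, C):                       # extreme points of the coefficient range
            trial = (1 - 1.0 / n) * f_n + (1.0 / n) * c * G(x, t)
            err = np.mean((trial - target) ** 2)
            if err < best_err:
                best_err, best_term = err, trial
    f_n = best_term
    if n in (1, 10, 50, 100):
        # Under the assumptions of the lemma the squared L2 error decays like O(1/n).
        print(f"n = {n:3d}   squared L2 error = {best_err:.4e}   n * error = {n * best_err:.3f}")
```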

Relevance: 100.00%

Abstract:

A multiyear solution of the SIRGAS-CON network was used to estimate strain rates of the Earth's surface from the changing directions of the velocity vectors of 140 geodetic points located on the South American plate. The strain rate was determined by the finite element method, using a Delaunay triangulation of the points to form sub-networks; each sub-network was treated as a solid, homogeneous body. The results showed that strain rates vary along the South American plate and are most significant in the western portion of the plate, as expected, since this region is close to the subduction zone of the Nazca plate beneath the South American plate. After using Euler vectors to infer Nazca plate movement and to orient the velocity vectors of the South American plate, it was possible to estimate the convergence and accommodation rates of the Nazca and South American plates, respectively. The strain rate estimates made it possible to delineate regions of predominant contraction and/or extension and to establish that the contraction regions coincide with the locations of most high-magnitude seismic events. Some areas with extension and contraction strains were found to the east, within the stable South American plate, which may result from different stresses associated with different geological characteristics. These results suggest that the major movements detected on the surface near the Nazca plate occur in regions with more heterogeneous geological structures and multiple rupture events. Most seismic events in the South American plate are concentrated in areas with predominantly northeast-southwest oriented contraction strain rates; significant elastic strain can accumulate on geological structures away from the plate-boundary faults; and the behavior of contractions and extensions is similar to that found in seismological studies. © 2013 Elsevier Ltd.
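
The per-triangle computation behind this kind of finite-element (Delaunay) strain analysis can be sketched as follows: fit an affine velocity field to the three vertex velocities and take the symmetric part of the velocity gradient as the strain-rate tensor. The vertex coordinates and velocities below are made-up numbers for illustration only, not SIRGAS-CON data.

```python
# Sketch of the constant-strain computation for one Delaunay triangle: fit an
# affine velocity field v(x) = L x + v0 to the three vertex velocities and take
# the symmetric part of the gradient L as the (2D) strain-rate tensor.
import numpy as np

# Vertex positions (km) and horizontal velocities (mm/yr) of one triangle
# (made-up illustrative numbers).
xy = np.array([[0.0, 0.0],
               [100.0, 10.0],
               [30.0, 90.0]])
v = np.array([[5.0, 1.0],
              [4.2, 0.6],
              [5.5, 1.8]])

# Solve v_i = L x_i + v0 for the 2x2 gradient L and offset v0 (6 equations, 6 unknowns).
A = np.hstack([xy, np.ones((3, 1))])          # rows: [x, y, 1]
coef, *_ = np.linalg.lstsq(A, v, rcond=None)  # coef[:2].T is L, coef[2] is v0
L = coef[:2].T

strain_rate = 0.5 * (L + L.T)                 # symmetric part: strain-rate tensor
rotation = 0.5 * (L - L.T)                    # antisymmetric part: rigid rotation
eigvals, eigvecs = np.linalg.eigh(strain_rate)
print("principal strain rates (per yr, up to unit scaling):", eigvals)
print("dilatation:", np.trace(strain_rate))
```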

Relevance: 100.00%

Abstract:

We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm, by exploiting the connection between fixed point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme simply means applying it twice within each cycle of the extrapolation method. Here we focus on first-order, or rank-one, extrapolation methods for two reasons: (1) simplicity and (2) computational efficiency. In particular, we study two first-order extrapolation methods, the reduced rank extrapolation (RRE1) and minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first-order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but with a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first-order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first-order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments: it combines the fast convergence of SqMPE1 with the stability of SqRRE1 while avoiding both near breakdown and stagnation. The SQUAREM methods can be incorporated very easily into an existing EM algorithm. They require only the basic EM step for their implementation and no auxiliary quantities such as the complete-data log-likelihood or its gradient or Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow, and each EM step is computationally demanding.
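
A schematic sketch of the squaring idea is given below: each cycle evaluates the fixed-point (EM) map F twice, forms first and second differences, and extrapolates with a scalar steplength. The steplength used here is an MPE-type choice that reduces to Aitken/Steffensen acceleration in one dimension, the safeguard is a crude stand-in for the hybrid scheme's remedies, and the linear map F is a toy stand-in for a real EM update; none of this is the report's exact SqMPE1/SqRRE1 formulation.

```python
# Schematic sketch of a "squared" extrapolation cycle around a fixed-point map F.
import numpy as np

def squared_step(F, x):
    x1 = F(x)                  # first application of the EM map
    x2 = F(x1)                 # second application ("squaring" the cycle)
    r = x1 - x                 # first difference
    v = x2 - 2 * x1 + x        # second difference
    denom = r @ v
    if r @ r < 1e-30 or abs(denom) < 1e-30:
        return x2              # effectively converged; skip extrapolation
    alpha = (r @ r) / denom    # MPE-type steplength (assumed form)
    x_new = x - 2 * alpha * r + alpha**2 * v
    # Simple safeguard: fall back to the plain double step if extrapolation
    # increases the fixed-point residual (the hybrid scheme in the report uses
    # a more refined remedy for stagnation and near breakdown).
    if np.linalg.norm(F(x_new) - x_new) > np.linalg.norm(x2 - x1):
        return x2
    return x_new

# Toy slowly converging fixed-point map F(x) = A x + b (fixed point x* solves
# (I - A) x* = b); it mimics the linear convergence of a slow EM algorithm.
A = np.diag([0.99, 0.95, 0.90])
b = np.array([1.0, 2.0, 3.0])
F = lambda x: A @ x + b
x_star = np.linalg.solve(np.eye(3) - A, b)

x_em = np.zeros(3)
x_sq = np.zeros(3)
for k in range(30):
    x_em = F(x_em)               # plain fixed-point (EM-like) iteration
    x_sq = squared_step(F, x_sq) # squared extrapolated iteration
print("error, plain iteration :", np.linalg.norm(x_em - x_star))
print("error, squared scheme  :", np.linalg.norm(x_sq - x_star))
```

Note that, as in the abstract, only evaluations of the basic fixed-point (EM) step are needed; no likelihood, gradient, or Hessian information enters the update.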

Relevance: 100.00%

Abstract:

Let (Φ(t))_{t ∈ R+} be a Harris ergodic continuous-time Markov process on a general state space, with invariant probability measure π. We investigate the rates of convergence of the transition function P^t(x, ·) to π; specifically, we find conditions under which r(t) ||P^t(x, ·) − π|| → 0 as t → ∞, for suitable subgeometric rate functions r(t), where || · || denotes the usual total variation norm for a signed measure. We derive sufficient conditions for the convergence to hold, in terms of the existence of suitable points on which the first hitting time moments are bounded. In particular, for stochastically ordered Markov processes, explicit bounds on subgeometric rates of convergence are obtained. These results are illustrated in several examples.
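
For concreteness, typical subgeometric rate functions of the kind referred to above include polynomial and sub-exponential rates; these standard examples are supplied here for orientation and are not spelled out in the abstract itself.

```latex
% Standard examples of subgeometric rate functions: they tend to infinity,
% but more slowly than any geometric rate e^{ct}.
\[
  r(t) = (1+t)^{\alpha} \ (\alpha > 0)
  \qquad \text{or} \qquad
  r(t) = \exp\!\bigl(c\, t^{\beta}\bigr) \ (c > 0,\ 0 < \beta < 1),
\]
\[
  \text{for which one seeks } \; r(t)\,\bigl\lVert P^{t}(x,\cdot) - \pi \bigr\rVert \longrightarrow 0
  \quad \text{as } t \to \infty .
\]
```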

Relevance: 100.00%

Abstract:

Predicting the impacts of environmental change on marine organisms, food webs, and biogeochemical cycles presently relies almost exclusively on short-term physiological studies, while the possibility of adaptive evolution is often ignored. Here, we assess adaptive evolution in the coccolithophore Emiliania huxleyi, a well-established model species in biological oceanography, in response to ocean acidification. We previously demonstrated that this globally important marine phytoplankton species adapts to elevated CO2 within 500 generations. After 750 and 1000 generations, no further fitness increase occurred, and we observed phenotypic convergence between replicate populations. We then exposed the adapted populations to two novel environments to investigate whether or not the underlying basis for adaptation to high CO2 involves functional genetic divergence, assuming that different novel mutations become apparent via divergent pleiotropic effects. The novel "high light" environment did not reveal such genetic divergence, whereas growth in a low-salinity environment revealed strong pleiotropic effects in the high-CO2-adapted populations, indicating divergent genetic bases for adaptation to high CO2. This suggests that pleiotropy plays an important role in the adaptation of natural E. huxleyi populations to ocean acidification. Our study highlights the potential mutual benefits for oceanography and evolutionary biology of using ecologically important marine phytoplankton for microbial evolution experiments.

Relevance: 100.00%

Abstract:

We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data-dependent manner. We establish three main results. First, we provide an algorithm that upper-bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose compared to previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) with sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example that shows that a sharp bound cannot be universally recovered from empirical data.
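
To fix notation for the statement above (this formalization is a standard one assumed here, not spelled out in the abstract), write F for the class over which empirical minimization is performed, P for expectation, and P_n for the empirical average; the empirical minimizer and the quantity being bounded are then:

```latex
% Standard empirical-minimization setup (assumed): F is a class of functions
% (e.g. losses), P f = E f(X), and P_n is the empirical average over the sample.
\[
  P_n f = \frac{1}{n}\sum_{i=1}^{n} f(X_i),
  \qquad
  \hat{f}_n \in \operatorname*{arg\,min}_{f \in F} P_n f ,
\]
\[
  \text{and the goal is a data-dependent upper bound on } \; P\hat{f}_n .
\]
```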

Relevance: 100.00%

Abstract:

PURPOSE: To examine the visual predictors of falls and injurious falls among older adults with glaucoma. METHODS: Prospective falls data were collected for one year, using monthly falls diaries, from 71 community-dwelling adults with primary open-angle glaucoma (mean age 73.9 ± 5.7 years). Baseline assessment of central visual function included high-contrast visual acuity and Pelli-Robson contrast sensitivity. Binocular integrated visual fields were derived from monocular Humphrey Field Analyser plots. Rate ratios (RR) for falls and injurious falls, with 95% confidence intervals (CIs), were based on negative binomial regression models. RESULTS: During the one-year follow-up, 31 (44%) participants experienced at least one fall and 22 (31%) experienced falls that resulted in an injury. Greater visual impairment was associated with an increased falls rate, independent of age and gender. In a multivariate model, more extensive field loss in the inferior region was associated with a higher rate of falls (RR 1.57, 95% CI 1.06-2.32) and of falls with injury (RR 1.80, 95% CI 1.12-2.98), adjusted for all other vision measures and potential confounding factors. Visual acuity, contrast sensitivity, and superior field loss were not associated with the rate of falls; topical beta-blocker use was also not associated with increased falls risk. CONCLUSIONS: Falls are common among older adults with glaucoma and occur more frequently in those with greater visual impairment, particularly in the inferior field region. This finding highlights the importance of the inferior visual field region in falls risk and assists in identifying older adults with glaucoma who are at risk of future falls and for whom potential interventions should be targeted. KEY WORDS: glaucoma, visual field, visual impairment, falls, injury
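
The kind of falls-rate model described in the methods can be sketched as follows: counts of falls regressed on a vision measure via a negative binomial GLM, with log follow-up time as an offset and rate ratios obtained by exponentiating the coefficients. The synthetic data, variable names, and the fixed dispersion parameter are illustrative assumptions, not the study's dataset or exact model specification.

```python
# Sketch of a negative binomial falls-rate model of the type described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 71
df = pd.DataFrame({
    "inferior_field_loss": rng.normal(0.0, 1.0, n),  # standardized field-loss score (assumed)
    "age": rng.normal(74.0, 5.7, n),
    "followup_years": np.ones(n),                    # one year of falls diaries
})
# Simulate fall counts whose rate increases with inferior field loss
# (a crude stand-in for the study data).
rate = np.exp(-0.4 + 0.45 * df["inferior_field_loss"]) * df["followup_years"]
df["falls"] = rng.poisson(rate)

X = sm.add_constant(df[["inferior_field_loss", "age"]])
model = sm.GLM(df["falls"], X,
               family=sm.families.NegativeBinomial(alpha=1.0),  # fixed dispersion for the sketch
               offset=np.log(df["followup_years"]))
result = model.fit()
rr = np.exp(result.params)                           # rate ratios
ci = np.exp(result.conf_int())                       # 95% confidence intervals
print(pd.concat([rr.rename("RR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```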

Relevance: 100.00%

Abstract:

As an international norm, the Responsibility to Protect (R2P) has gained substantial influence and institutional presence—and created no small controversy—in the ten years since its first conceptualisation. Conversely, the Protection of Civilians in Armed Conflict (PoC) has a longer pedigree and enjoys a less contested reputation. Yet UN Security Council action in Libya in 2011 has thrown into sharp relief the relationship between the two. UN Security Council Resolutions 1970 and 1973 follow exactly the process envisaged by R2P in response to imminent atrocity crimes, yet the operative paragraphs of the resolutions themselves invoke only PoC. This article argues that, while the agendas of PoC and R2P converge with respect to Security Council action in cases like Libya, outside this narrow context it is important to keep the two norms distinct. Peacekeepers, humanitarian actors, international lawyers, individual states and regional organisations are required to act differently with respect to the separate agendas and contexts covered by R2P and PoC. While overlap between the two does occur in highly visible cases like Libya, neither R2P nor PoC collapses normatively, institutionally or operationally into the other.