864 results for Variational Convergence
Abstract:
Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations in which it is impractical or impossible to draw from the transition kernel P. This is the case, for instance, with massive datasets, where it is prohibitively expensive to calculate the likelihood, and with intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations where it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
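The idea of replacing P by a perturbed kernel P̂ can be illustrated with a minimal sketch. Everything below is a hypothetical illustration, not the paper's construction: the standard normal target, the Gaussian random-walk proposal, and the choice to model the approximation error as Gaussian noise in the log acceptance ratio are all assumptions.

```python
import math
import random

def mh_step(x, log_pi, rng, noise=0.0):
    """One Metropolis step targeting pi.  A nonzero `noise` perturbs the
    log acceptance ratio, mimicking an approximate kernel P-hat (as would
    arise, e.g., from an estimated likelihood)."""
    prop = x + rng.gauss(0.0, 1.0)                    # random-walk proposal
    log_alpha = log_pi(prop) - log_pi(x) + rng.gauss(0.0, noise)
    return prop if rng.random() < math.exp(min(0.0, log_alpha)) else x

def chain_mean(steps, noise, seed):
    """Run the chain and return the empirical mean of its path."""
    rng = random.Random(seed)
    log_pi = lambda z: -0.5 * z * z                   # standard normal target
    x, total = 0.0, 0.0
    for _ in range(steps):
        x = mh_step(x, log_pi, rng, noise)
        total += x
    return total / steps

exact_mean = chain_mean(20000, noise=0.0, seed=1)     # chain driven by P
approx_mean = chain_mean(20000, noise=0.3, seed=1)    # chain driven by P-hat
```

For a small perturbation the two chains produce similar empirical averages; quantifying how the discrepancy between them grows with the size of the perturbation is exactly the kind of question the stability theory addresses.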
Abstract:
Variational data assimilation is commonly used in environmental forecasting to estimate the current state of the system from a model forecast and observational data. The assimilation problem can be written simply in the form of a nonlinear least squares optimization problem. However, the practical solution of the problem in large systems requires many careful choices to be made in the implementation. In this article we present the theory of variational data assimilation and then discuss in detail how it is implemented in practice. Current solutions and open questions are discussed.
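In its simplest (3D-Var) form, the nonlinear least squares problem is the minimisation of J(x) = (x − x_b)ᵀB⁻¹(x − x_b)/2 + (y − H(x))ᵀR⁻¹(y − H(x))/2 over the state x, given a background x_b, an observation y, and a nonlinear observation operator H. A scalar sketch of its minimisation by Gauss-Newton follows; all numbers and the choice H(x) = x² are hypothetical, and operational systems are vastly larger.

```python
def gauss_newton_3dvar(xb, B, y, R, H, Hprime, iters=20):
    """Minimise the scalar 3D-Var cost
        J(x) = (x - xb)^2 / (2B) + (y - H(x))^2 / (2R)
    by Gauss-Newton: linearise H about the current iterate and
    solve the resulting quadratic subproblem exactly."""
    x = xb
    for _ in range(iters):
        Hp = Hprime(x)                         # tangent-linear of H
        grad = (x - xb) / B - Hp * (y - H(x)) / R
        hess = 1.0 / B + Hp * Hp / R           # Gauss-Newton Hessian approx.
        x -= grad / hess
    return x

# hypothetical scalar example: background xb = 1 with variance B = 1,
# one observation y = 4 of H(x) = x^2 with variance R = 0.5
xa = gauss_newton_3dvar(xb=1.0, B=1.0, y=4.0, R=0.5,
                        H=lambda x: x * x, Hprime=lambda x: 2.0 * x)
```

The analysis xa lands between the background and the value implied by the observation alone, weighted by the two error variances; the "careful choices" the article discusses concern, among other things, how the linearisation and the inner solves are organised at scale.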
Abstract:
We consider the two-dimensional Helmholtz equation with constant coefficients on a domain with piecewise analytic boundary, modelling the scattering of acoustic waves at a sound-soft obstacle. Our discretisation relies on the Trefftz-discontinuous Galerkin approach with plane wave basis functions on meshes with very general element shapes, geometrically graded towards domain corners. We prove exponential convergence of the discrete solution in terms of number of unknowns.
Abstract:
Accurate and reliable rain rate estimates are important for various hydrometeorological applications. Consequently, rain sensors of different types have been deployed in many regions. In this work, measurements from different instruments, namely, rain gauge, weather radar, and microwave link, are combined for the first time to estimate with greater accuracy the spatial distribution and intensity of rainfall. The objective is to retrieve the rain rate that is consistent with all these measurements while incorporating the uncertainty associated with the different sources of information. Assuming the problem is not strongly nonlinear, a variational approach is implemented and the Gauss–Newton method is used to minimize the cost function containing proper error estimates from all sensors. Furthermore, the method can be flexibly adapted to additional data sources. The proposed approach is tested using data from 14 rain gauges and 14 operational microwave links located in the Zürich area (Switzerland) to correct the prior rain rate provided by the operational radar rain product from the Swiss meteorological service (MeteoSwiss). A cross-validation approach demonstrates the improvement of rain rate estimates when assimilating rain gauge and microwave link information.
Abstract:
Purpose: This study investigated whether vergence and accommodation development in pre-term infants is pre-programmed or driven by experience. Methods: 32 healthy infants, born at a mean of 34 weeks gestation (range 31.2-36 weeks), were compared with 45 healthy full-term infants (mean 40.0 weeks) over a 6-month period, starting at 4-6 weeks post-natally. Simultaneous accommodation and convergence to a detailed target were measured using a Plusoptix PowerRefII infra-red photorefractor as the target moved between 0.33 m and 2 m. Stimulus/response gains and responses at 0.33 m and 2 m were compared by both corrected (gestational) age and chronological (post-natal) age. Results: When compared by corrected age, pre-term and full-term infants showed few significant differences in vergence and accommodation responses after 6-7 weeks of age. However, when compared by chronological age, pre-term infants' responses were more variable, with significantly reduced vergence gains, reduced vergence response at 0.33 m, reduced accommodation gain, and increased accommodation at 2 m, compared to full-term infants between 8-13 weeks after birth. Conclusions: When matched by corrected age, vergence and accommodation in pre-term infants show few differences from full-term infants' responses. Maturation appears pre-programmed and is not advanced by visual experience. Longer periods of immature visual responses might leave pre-term infants more at risk of developing oculomotor deficits such as strabismus.
Abstract:
The debate associated with the qualifications of business school faculty has raged since the 1959 release of the Gordon–Howell and Pierson reports, which encouraged business schools in the USA to enhance their legitimacy by increasing their faculties’ doctoral qualifications and scholarly rigor. Today, the legitimacy of specific faculty qualifications remains one of the most discussed topics in management education, attracting the interest of administrators, faculty, and accreditation agencies. Based on new institutional theory and the institutional logics perspective, this paper examines convergence and innovation in business schools through an analysis of faculty hiring criteria. The qualifications examined are academic degree, scholarly publications, teaching experience, and professional experience. Three groups of schools are examined based on type of university, position within a media ranking system, and accreditation by the Association to Advance Collegiate Schools of Business. Data are gathered using a content analysis of 441 faculty postings from business schools based in the USA over two time periods. Contrary to claims of global convergence, we find most qualifications still vary by group, even in the mature US market. Moreover, innovative hiring is more likely to be found in non-elite schools.
Abstract:
In this work, we prove a weak Noether-type Theorem for a class of variational problems that admit broken extremals. We use this result to prove discrete Noether-type conservation laws for a conforming finite element discretisation of a model elliptic problem. In addition, we study how well the finite element scheme satisfies the continuous conservation laws arising from the application of Noether’s first theorem (1918). We summarise extensive numerical tests, illustrating the conservation of the discrete Noether law using the p-Laplacian as an example and derive a geometric-based adaptive algorithm where an appropriate Noether quantity is the goal functional.
Abstract:
Aims. Orthoptists are familiar with AC/A ratios and the concept that accommodation drives convergence, but the reverse relationship, that of the accommodation associated with convergence, is rarely considered. Methods. This article reviews published evidence from our laboratory investigating the drives to both vergence and accommodation. All studies used a method by which accommodation and vergence were measured concurrently and objectively in response to a range of visual stimuli that manipulate blur, disparity and proximal/looming cues in different combinations. Results. Results are summarised for both typical and atypical participants, and over development between birth and adulthood. Conclusions. For the majority of typical children and adults, as well as patients with most heterophorias and intermittent exotropia, disparity is the main cue to both vergence and accommodation. Thus the convergence→accommodation relationship is more influential than that of accommodative vergence. Differences in "style" of near cue use may be a more useful way to think about responses to stimuli moving in depth, and their consequences for orthoptic patients, than either AC/A or CA/C ratios. The implications of a strong role for vergence accommodation in orthoptic practice are considered.
Abstract:
Data from 58 strong-lensing events surveyed by the Sloan Lens ACS Survey are used to estimate the projected galaxy mass inside their Einstein radii by two independent methods: stellar dynamics and strong gravitational lensing. We perform a joint analysis of these two estimates inside models with up to three degrees of freedom with respect to the lens density profile, stellar velocity anisotropy, and line-of-sight (LOS) external convergence, which incorporates the effect of the large-scale structure on strong lensing. A Bayesian analysis is employed to estimate the model parameters, evaluate their significance, and compare models. We find that the data favor Jaffe's light profile over Hernquist's, but that any particular choice between these two does not change the qualitative conclusions with respect to the features of the system that we investigate. The density profile is compatible with an isothermal one, being slightly steeper and having an uncertainty in the logarithmic slope of the order of 5% in models that take into account a prior ignorance on anisotropy and external convergence. We identify a considerable degeneracy between the density profile slope and the anisotropy parameter, which largely increases the uncertainties in the estimates of these parameters, but we encounter no evidence in favor of an anisotropic velocity distribution on average for the whole sample. An LOS external convergence following a prior probability distribution given by cosmology has a small effect on the estimation of the lens density profile, but can increase the dispersion of its value by nearly 40%.
Exact penalties for variational inequalities with applications to nonlinear complementarity problems
Abstract:
In this paper, we present a new reformulation of the KKT system associated to a variational inequality as a semismooth equation. The reformulation is derived from the concept of differentiable exact penalties for nonlinear programming. The best theoretical results are presented for nonlinear complementarity problems, where simple, verifiable, conditions ensure that the penalty is exact. We close the paper with some preliminary computational tests on the use of a semismooth Newton method to solve the equation derived from the new reformulation. We also compare its performance with the Newton method applied to classical reformulations based on the Fischer-Burmeister function and on the minimum. The new reformulation combines the best features of the classical ones, being as easy to solve as the reformulation that uses the Fischer-Burmeister function while requiring as few Newton steps as the one that is based on the minimum.
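The Fischer-Burmeister function mentioned above is φ(a, b) = a + b − √(a² + b²), which vanishes precisely when a ≥ 0, b ≥ 0 and ab = 0, so a nonlinear complementarity problem (NCP) can be recast as the semismooth equation φ(x, F(x)) = 0. A minimal scalar sketch of a semismooth Newton iteration on this reformulation follows; the example F(x) = x − 1 and the starting point are hypothetical, not taken from the paper.

```python
import math

def fischer_burmeister(a, b):
    """phi(a, b) = a + b - sqrt(a^2 + b^2); zero iff a >= 0, b >= 0, a*b = 0."""
    return a + b - math.hypot(a, b)

def semismooth_newton_ncp(F, Fprime, x0, iters=30, tol=1e-10):
    """Solve the scalar NCP  x >= 0, F(x) >= 0, x*F(x) = 0
    by Newton's method on Phi(x) = phi(x, F(x)), using an element
    of the generalized derivative where phi is not differentiable."""
    x = x0
    for _ in range(iters):
        a, b = x, F(x)
        phi = fischer_burmeister(a, b)
        if abs(phi) < tol:
            break
        r = math.hypot(a, b) or 1.0          # generalized derivative at (0, 0)
        dphi = (1.0 - a / r) + (1.0 - b / r) * Fprime(x)
        x -= phi / dphi                       # semismooth Newton step
    return x

# hypothetical example: F(x) = x - 1, whose NCP solution is x = 1
sol = semismooth_newton_ncp(lambda x: x - 1.0, lambda x: 1.0, x0=5.0)
```

Near the solution the iteration converges superlinearly, which is the practical appeal of the semismooth reformulations the paper compares.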
Abstract:
Optimization methods that employ the classical Powell-Hestenes-Rockafellar augmented Lagrangian are useful tools for solving nonlinear programming problems. Their reputation decreased in the last 10 years due to the comparative success of interior-point Newtonian algorithms, which are asymptotically faster. In this research, a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its 'pure' counterparts on critical problems. Moreover, an additional hybrid algorithm is defined, in which the interior-point method is replaced by the Newtonian resolution of a Karush-Kuhn-Tucker (KKT) system identified by the augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.
Abstract:
Two Augmented Lagrangian algorithms for solving KKT systems are introduced. The algorithms differ in the way in which penalty parameters are updated. Possibly infeasible accumulation points are characterized. It is proved that feasible limit points that satisfy the Constant Positive Linear Dependence constraint qualification are KKT solutions. Boundedness of the penalty parameters is proved under suitable assumptions. Numerical experiments are presented.
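A toy sketch of a Powell-Hestenes-Rockafellar augmented Lagrangian scheme with one possible penalty-update rule: increase the penalty when the constraint violation has not shrunk enough. The test problem, the progress threshold 0.5 and the factor 10 below are illustrative assumptions; the paper's two algorithms use different, carefully analysed update rules.

```python
def augmented_lagrangian(rho=1.0, lam=0.0, iters=30):
    """PHR augmented Lagrangian sketch for   min x^2   s.t.   x - 1 = 0.
    The inner subproblem  min_x  x^2 + lam*(x - 1) + (rho/2)*(x - 1)^2
    is quadratic, so its minimiser is available in closed form."""
    h_prev = float("inf")
    for _ in range(iters):
        x = (rho - lam) / (2.0 + rho)        # exact inner minimiser
        h = x - 1.0                           # constraint violation h(x)
        lam += rho * h                        # first-order multiplier update
        if abs(h) > 0.5 * abs(h_prev):        # insufficient feasibility
            rho *= 10.0                       # progress: raise the penalty
        h_prev = h
    return x, lam

x_star, lam_star = augmented_lagrangian()
# the iterates approach the KKT pair x = 1, lambda = -2
```

Keeping the penalty parameter bounded while still driving the violation to zero is exactly the delicate point; the abstract's boundedness result holds under suitable assumptions, which this toy problem trivially satisfies.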
Abstract:
We introduce jump processes in R^k, called density-profile processes, to model biological signaling networks. Our modeling setup describes the macroscopic evolution of a finite-size spin-flip model with k types of spins, each with an arbitrary number of internal states, interacting through a non-reversible stochastic dynamics. We are mostly interested in the multi-dimensional empirical-magnetization vector in the thermodynamic limit, and prove that, within arbitrary finite time-intervals, its path converges almost surely to a deterministic trajectory determined by a first-order (non-linear) differential equation, with explicit bounds on the distance between the stochastic and deterministic trajectories. As parameters of the spin-flip dynamics change, the associated dynamical system may go through bifurcations, corresponding to phase transitions in the statistical mechanical setting. We present a simple example of a spin-flip stochastic model, associated with a synthetic biology model known as the repressilator, which leads to a dynamical system with Hopf and pitchfork bifurcations. Depending on the parameter values, the magnetization random path can either converge to a unique stable fixed point, converge to one of a pair of stable fixed points, or asymptotically evolve close to a deterministic orbit in R^k. We also discuss a simple signaling pathway related to cancer research, called the p53 module.
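The convergence of the empirical magnetization to an ODE trajectory can be illustrated on a far simpler toy model than the paper's non-reversible interacting dynamics: N independent two-state spins, where each down-spin flips up at rate a and each up-spin flips down at rate b, so the fraction m of up-spins follows dm/dt = a(1 − m) − bm in the large-N limit. All rates and sizes below are hypothetical.

```python
import math
import random

def simulate_magnetization(N, a, b, T, seed=0):
    """Gillespie simulation of N independent spins: each 0-spin flips up
    at rate a, each 1-spin flips down at rate b.  Returns the fraction of
    up-spins at time T, starting from all spins down."""
    rng = random.Random(seed)
    up, t = 0, 0.0
    while True:
        rate_up, rate_down = a * (N - up), b * up
        total = rate_up + rate_down           # total jump rate, always > 0
        t += rng.expovariate(total)           # time to the next jump
        if t > T:
            return up / N
        if rng.random() < rate_up / total:
            up += 1
        else:
            up -= 1

def ode_limit(a, b, T, m0=0.0):
    """Deterministic limit: solution of dm/dt = a(1 - m) - b*m."""
    m_inf = a / (a + b)
    return m_inf + (m0 - m_inf) * math.exp(-(a + b) * T)

m_stoch = simulate_magnetization(N=5000, a=1.0, b=1.0, T=2.0)
m_det = ode_limit(1.0, 1.0, 2.0)
# for large N the random path stays within O(1/sqrt(N)) of the ODE value
```

For interacting, non-reversible dynamics of the kind studied in the paper, the limiting ODE becomes nonlinear and can exhibit the Hopf and pitchfork bifurcations described above, but the simulation-versus-ODE comparison works the same way.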