988 results for Linear Convergence
Abstract:
Aims. Orthoptists are familiar with AC/A ratios and the concept that accommodation drives convergence, but the reverse relationship, that of the accommodation associated with convergence, is rarely considered. Methods. This article reviews published evidence from our laboratory which has investigated the drives to both vergence and accommodation. All studies involved a method by which accommodation and vergence were measured concurrently and objectively to a range of visual stimuli which manipulate blur, disparity and proximal/looming cues in different combinations. Results. Results are summarised for both typical and atypical participants, and over development between birth and adulthood. Conclusions. For the majority of typical children and adults, as well as patients with most heterophorias and intermittent exotropia, disparity is the main cue to both vergence and accommodation. Thus the convergence→accommodation relationship is more influential than that of accommodative vergence. Differences in “style” of near cue use may be a more useful way to think about responses to stimuli moving in depth, and their consequences for orthoptic patients, than either AC/A or CA/C ratios. The implications of a strong role for vergence accommodation in orthoptic practice are considered.
Abstract:
This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and to apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever performs better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
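A minimal sketch of the constrained combination step described above, assuming the M sub-model predictions over the recent window are stacked column-wise: minimising the window mean square error subject to the sum-to-one constraint yields a closed-form solution via a Lagrange multiplier. The function name, the small ridge term, and the synthetic numbers are illustrative, not the authors' implementation.

```python
import numpy as np

def combine_predictions(Y_hat, y, ridge=1e-8):
    """Y_hat: (T, M) sub-model predictions on the recent window; y: (T,) observations.
    Returns weights w (M,) minimising the window MSE subject to sum(w) == 1."""
    A = Y_hat.T @ Y_hat + ridge * np.eye(Y_hat.shape[1])  # regularised Gram matrix of predictions
    b = Y_hat.T @ y
    ones = np.ones(Y_hat.shape[1])
    A_inv_b = np.linalg.solve(A, b)
    A_inv_1 = np.linalg.solve(A, ones)
    lam = (ones @ A_inv_b - 1.0) / (ones @ A_inv_1)       # Lagrange multiplier (halved)
    return A_inv_b - lam * A_inv_1

# Synthetic example: three sub-model prediction streams and one target
rng = np.random.default_rng(0)
Y_hat = rng.normal(size=(50, 3))
y = 0.6 * Y_hat[:, 0] + 0.4 * Y_hat[:, 1] + 0.01 * rng.normal(size=50)
w = combine_predictions(Y_hat, y)
print(w, w.sum())   # combination weights sum to one
```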
Abstract:
In this work, a new theoretical mechanism is presented in which equatorial Rossby and inertio-gravity wave modes may interact with each other through resonance with the diurnal cycle of tropical deep convection. We have adopted the two-layer incompressible equatorial primitive equations forced by a parametric heating that roughly represents deep convection activity in the tropical atmosphere. The heat source was parametrized in the simplest way according to the hypothesis that it is proportional to the lower-troposphere moisture convergence, with the background moisture state function mimicking the structure of the ITCZ. In this context, we have investigated the possibility of resonant interaction between equatorially trapped Rossby and inertio-gravity modes through the diurnal cycle of the background moisture state function. The reduced dynamics of a single resonant duo shows that, when this diurnal variation is considered, a Rossby wave mode can undergo significant amplitude modulations when interacting with an inertio-gravity wave mode, which is not possible in the context of resonant triad non-linear interaction. Therefore, the results suggest that the diurnal variation of the ITCZ may be a dynamical mechanism that allows Rossby waves to be significantly affected by high-frequency modes.
Abstract:
This work is an assessment of the frequency of extreme values (EVs) of daily rainfall in the city of Sao Paulo, Brazil, over the period 1933-2005, based on the peaks-over-threshold (POT) and Generalized Pareto Distribution (GPD) approach. Usually, a GPD model is fitted to a sample of POT values selected with a constant threshold. However, in this work we use time-dependent thresholds, composed of relatively large p-quantiles (for example, p of 0.97) of daily rainfall amounts computed from all available data. Samples of POT values were extracted with several values of p. Four different GPD models (GPD-1, GPD-2, GPD-3, and GPD-4) were fitted to each one of these samples by the maximum likelihood (ML) method. The shape parameter was assumed constant for the four models, but time-varying covariates were incorporated into the scale parameter of GPD-2, GPD-3, and GPD-4, describing an annual cycle in GPD-2, a linear trend in GPD-3, and both annual cycle and linear trend in GPD-4. The GPD-1 with constant scale and shape parameters is the simplest model. For identification of the best model among the four we used the rescaled Akaike Information Criterion (AIC) with second-order bias correction. This criterion isolates GPD-3 as the best model, i.e. the one with a positive linear trend in the scale parameter. The slope of this trend is significant compared to the null hypothesis of no trend, at about the 98% confidence level. The non-parametric Mann-Kendall test also showed the presence of a positive trend in the annual frequency of excesses over high thresholds, with the p-value being virtually zero. Therefore, there is strong evidence that high quantiles of daily rainfall in the city of Sao Paulo have been increasing in magnitude and frequency over time. For example, the 0.99 quantile of daily rainfall amount has increased by about 40 mm between 1933 and 2005. Copyright (C) 2008 Royal Meteorological Society
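As a rough illustration of the peaks-over-threshold step, the sketch below fits a stationary GPD (the GPD-1 case, with a constant threshold rather than the time-dependent thresholds used in the paper) to exceedances and scores it with the bias-corrected AIC. The rainfall series is synthetic and the helper name is hypothetical.

```python
import numpy as np
from scipy.stats import genpareto

def fit_gpd_pot(rainfall, p=0.97):
    threshold = np.quantile(rainfall, p)                  # constant threshold for illustration
    excesses = rainfall[rainfall > threshold] - threshold
    shape, loc, scale = genpareto.fit(excesses, floc=0.0) # fit GPD to the exceedances
    k, n = 2, excesses.size                               # shape and scale are estimated
    loglik = genpareto.logpdf(excesses, shape, loc=0.0, scale=scale).sum()
    aicc = -2.0 * loglik + 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)  # AIC with second-order correction
    return shape, scale, threshold, aicc

rainfall = np.random.default_rng(1).gamma(shape=0.4, scale=12.0, size=26000)  # synthetic daily totals
print(fit_gpd_pot(rainfall, p=0.97))
```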
Abstract:
Data from 58 strong-lensing events surveyed by the Sloan Lens ACS Survey are used to estimate the projected galaxy mass inside their Einstein radii by two independent methods: stellar dynamics and strong gravitational lensing. We perform a joint analysis of these two estimates inside models with up to three degrees of freedom with respect to the lens density profile, stellar velocity anisotropy, and line-of-sight (LOS) external convergence, which incorporates the effect of the large-scale structure on strong lensing. A Bayesian analysis is employed to estimate the model parameters, evaluate their significance, and compare models. We find that the data favor Jaffe's light profile over Hernquist's, but that any particular choice between these two does not change the qualitative conclusions with respect to the features of the system that we investigate. The density profile is compatible with an isothermal one, being slightly steeper and having an uncertainty in the logarithmic slope of the order of 5% in models that take into account a prior ignorance on anisotropy and external convergence. We identify a considerable degeneracy between the density profile slope and the anisotropy parameter, which largely increases the uncertainties in the estimates of these parameters, but we encounter no evidence in favor of an anisotropic velocity distribution on average for the whole sample. An LOS external convergence following a prior probability distribution given by cosmology has a small effect on the estimation of the lens density profile, but can increase the dispersion of its value by nearly 40%.
Abstract:
Electromagnetic induction (EMI) results are shown for the vertical magnetic dipole (VMD) configuration using the EM38 equipment. Performance in locating metallic pipes and electrical cables is compared as a function of instrumental drift correction by linear and quadratic fitting under controlled conditions. The metallic pipes and electrical cables are buried at the IAG/USP shallow geophysical test site in Sao Paulo City, Brazil. Results show that apparent electrical conductivity and magnetic susceptibility data were affected by ambient temperature variation. In order to obtain a better contrast between background and metallic targets it was necessary to correct the drift. This correction was accomplished by using linear and quadratic relations between conductivity/susceptibility and temperature for comparative purposes. The correction of temperature drift using a quadratic relation was effective, showing that all metallic targets were located and the detection of deeper targets was also improved. (C) 2010 Elsevier B.V. All rights reserved.
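The drift-correction idea can be sketched as fitting a linear or quadratic relation between apparent conductivity and ambient temperature and subtracting the temperature-predicted component. The function and the synthetic profile below are illustrative only, not the processing applied to the EM38 data.

```python
import numpy as np

def correct_temperature_drift(conductivity, temperature, degree=2):
    """Remove the temperature-dependent drift from an apparent-conductivity profile."""
    coeffs = np.polyfit(temperature, conductivity, deg=degree)  # degree 1 = linear, 2 = quadratic
    drift = np.polyval(coeffs, temperature)
    return conductivity - drift + conductivity.mean()           # re-centre on the mean level

# Synthetic example: a constant background plus a quadratic temperature-driven drift
rng = np.random.default_rng(2)
temperature = np.linspace(18.0, 32.0, 200)
conductivity = 15.0 + 0.3 * (temperature - 25.0) ** 2 + rng.normal(0, 0.2, 200)
corrected = correct_temperature_drift(conductivity, temperature, degree=2)
print(corrected.std(), conductivity.std())   # the corrected profile has a much smaller spread
```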
Abstract:
Prestes, J, Frollini, AB, De Lima, C, Donatto, FF, Foschini, D, de Marqueti, RC, Figueira Jr, A, and Fleck, SJ. Comparison between linear and daily undulating periodized resistance training to increase strength. J Strength Cond Res 23(9): 2437-2442, 2009-To determine the most effective periodization model for strength and hypertrophy is an important step for strength and conditioning professionals. The aim of this study was to compare the effects of linear (LP) and daily undulating periodized (DUP) resistance training on body composition and maximal strength levels. Forty men aged 21.5 +/- 8.3 years and with a minimum of 1 year of strength training experience were assigned to an LP (n = 20) or DUP group (n = 20). Subjects were tested for maximal strength in bench press, leg press 45 degrees, and arm curl (1 repetition maximum [RM]) at baseline (T1), after 8 weeks (T2), and after 12 weeks of training (T3). Increases of 18.2 and 25.08% in bench press 1 RM were observed for the LP and DUP groups at T3 compared with T1, respectively (p <= 0.05). In leg press 45 degrees, the LP group exhibited an increase of 24.71% and the DUP group of 40.61% at T3 compared with T1. Additionally, DUP showed an increase of 12.23% at T2 compared with T1 and 25.48% at T3 compared with T2. For the arm curl exercise, the LP group increased 14.15% and DUP 23.53% at T3 when compared with T1. An increase of 20% was also found at T2 when compared with T1 for DUP. Although the DUP group increased strength the most in all exercises, no statistical differences were found between groups. In conclusion, undulating periodized strength training induced higher increases in maximal strength than the linear model in strength-trained men. For maximizing strength increases, daily intensity and volume variations were more effective than weekly variations.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well-known that the usual large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost-function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
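For orientation, the sketch below generates the outputs of an orthonormal Laguerre basis (the simplest OBF case, a single real pole a), the kind of linear dynamic block that feeds the static Volterra polynomial mapping; the pole optimisation, the Kautz/GOBF bases and the gradient computations of the paper are not reproduced here, and the function name is hypothetical.

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_filter_bank(u, a, n_filters):
    """Return a (len(u), n_filters) matrix whose columns are the Laguerre filter outputs."""
    outputs = np.empty((len(u), n_filters))
    # First-order section: sqrt(1 - a^2) / (1 - a z^-1)
    x = lfilter([np.sqrt(1.0 - a**2)], [1.0, -a], u)
    outputs[:, 0] = x
    for k in range(1, n_filters):
        # All-pass section (z^-1 - a) / (1 - a z^-1), applied repeatedly
        x = lfilter([-a, 1.0], [1.0, -a], x)
        outputs[:, k] = x
    return outputs

u = np.random.default_rng(3).normal(size=5000)      # white-noise excitation
Phi = laguerre_filter_bank(u, a=0.7, n_filters=4)   # regressors for a Wiener/Volterra-type model
print(np.round(Phi.T @ Phi / len(u), 2))            # close to the identity: columns are near-orthonormal
```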
Abstract:
In this article, we are interested in evaluating different parameter estimation strategies for a multiple linear regression model. To estimate the model parameters, data from a clinical trial were used, in which the interest was to verify whether the mechanical test of the maximum force property (EM-FM) is associated with femoral mass, femoral diameter, and the experimental group of ovariectomized rats of the species Rattus norvegicus albinus, Wistar variety. For the estimation of the model parameters, three methodologies are compared: the classical methodology, based on the least squares method; the Bayesian methodology, based on Bayes' theorem; and the Bootstrap method, based on resampling procedures.
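A minimal sketch of two of the three strategies compared (least squares and a case-resampling bootstrap) on synthetic data; the clinical-trial variables are not used and the Bayesian fit is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])  # intercept + 2 covariates
beta_true = np.array([2.0, 1.5, -0.8])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Classical (least-squares) estimate
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Bootstrap: resample cases with replacement and refit
B = 2000
boot = np.empty((B, X.shape[1]))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b], *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)

print("OLS:", np.round(beta_ols, 3))
print("Bootstrap mean:", np.round(boot.mean(axis=0), 3))
print("Bootstrap 95% CI:", np.round(np.percentile(boot, [2.5, 97.5], axis=0), 3))
```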
Abstract:
In this paper, we study the behavior of the solutions of nonlinear parabolic problems posed in a domain that degenerates into a line segment (a thin domain) with an oscillating boundary. We combine methods from linear homogenization theory for reticulated structures and from the theory of nonlinear dynamics of dissipative systems to obtain the limit problem for the elliptic and parabolic problems and to analyze the convergence properties of the solutions and attractors of the evolutionary equations. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these two scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability. (C) 2010 Elsevier B.V. All rights reserved.
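The coupling idea can be sketched on a deliberately tiny example with one coupling point (M = 1), so the interface unknowns reduce to a single flow rate Q and pressure P and each sub-network is replaced by a lumped linear resistance relation; the matrix-free Broyden solve below is illustrative and is not the paper's cardiovascular formulation.

```python
import numpy as np
from scipy.optimize import broyden1

P_in, P_out = 100.0, 10.0     # boundary pressures imposed on the two sub-networks
R1, R2 = 2.0, 3.0             # lumped resistances standing in for the sub-networks

def interface_residual(x):
    Q, P = x
    r1 = P - (P_in - R1 * Q)          # sub-network 1: its coupling-point pressure given Q
    r2 = Q - (P - P_out) / R2         # sub-network 2: its coupling-point flow given P
    return np.array([r1, r2])

# Matrix-free quasi-Newton solve of the 2M x 2M interface system (here 2 x 2)
x0 = np.array([1.0, 50.0])
Q, P = broyden1(interface_residual, x0, f_tol=1e-12)
print(Q, P)   # strong coupling: both sub-network relations hold simultaneously
```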
Abstract:
This paper describes a collocation method for numerically solving Cauchy-type linear singular integro-differential equations. The numerical method is based on the transformation of the integro-differential equation into an integral equation, and then applying a collocation method to solve the latter. The collocation points are chosen as the Chebyshev nodes. Uniform convergence of the resulting method is then discussed. Numerical examples are presented and solved by the numerical techniques.
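To illustrate the collocation ingredient only, the sketch below collocates a smooth Fredholm equation of the second kind at Chebyshev nodes using a Gauss-Chebyshev quadrature; the Cauchy-type singular equations and the transformation to an integral equation treated in the paper are not reproduced.

```python
import numpy as np

n = 16
k = np.arange(n)
nodes = np.cos((2 * k + 1) * np.pi / (2 * n))              # Chebyshev (first-kind) collocation nodes
weights = (np.pi / n) * np.sqrt(1 - nodes**2)              # Gauss-Chebyshev rule rewritten for plain dt

K = lambda x, t: 0.5 * np.exp(-np.abs(x - t))              # a smooth, contractive kernel
f = lambda x: np.cos(np.pi * x)

# Collocate u(x) - int_{-1}^{1} K(x,t) u(t) dt = f(x): (I - K W) u = f at the nodes
A = np.eye(n) - K(nodes[:, None], nodes[None, :]) * weights[None, :]
u = np.linalg.solve(A, f(nodes))
print(np.round(u, 4))
```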
Abstract:
We consider incompressible Stokes flow with an internal interface at which the pressure is discontinuous, as happens for example in problems involving surface tension. We assume that the mesh does not follow the interface, which causes classical interpolation spaces to yield suboptimal convergence rates (typically, the interpolation error in the L2(Omega)-norm is of order h^(1/2)). We propose a modification of the P1-conforming space that accommodates discontinuities at the interface without introducing additional degrees of freedom or modifying the sparsity pattern of the linear system. The unknowns are the pressure values at the vertices of the mesh and the basis functions are computed locally at each element, so that the implementation of the proposed space into existing codes is straightforward. With this modification, numerical tests show that the interpolation order improves to O(h^(3/2)). The new pressure space is implemented for the stable P1+/P1 mini-element discretization, and for the stabilized equal-order P1/P1 discretization. Assessment is carried out for Poiseuille flow with a forcing surface and for a static bubble. In all cases the proposed pressure space leads to improved convergence orders and to more accurate results than the standard P1 space. In addition, two Navier-Stokes simulations with moving interfaces (Rayleigh-Taylor instability and merging bubbles) are reported to show that the proposed space is robust enough to carry out realistic simulations. (c) 2009 Elsevier B.V. All rights reserved.
Abstract:
Linear mixed models were developed to handle clustered data and have been a topic of increasing interest in statistics for the past 50 years. Generally, the normality (or symmetry) of the random effects is a common assumption in linear mixed models, but it may sometimes be unrealistic, obscuring important features of among-subject variation. In this article, we utilize skew-normal/independent distributions as a tool for robust modeling of linear mixed models under a Bayesian paradigm. The skew-normal/independent distributions are an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-t, skew-slash and skew-contaminated normal distributions as special cases, providing an appealing robust alternative to the routine use of symmetric distributions in this type of model. The methods developed are illustrated using a real data set from the Framingham cholesterol study. (C) 2009 Elsevier B.V. All rights reserved.
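A small sketch of the skew-normal/independent construction itself: a skew-normal variate divided by the square root of an independent mixing variable, where Gamma mixing yields the skew-t member of the family. Parameter values are illustrative assumptions; fitting the full Bayesian linear mixed model is not attempted here.

```python
import numpy as np

def skew_normal_independent(n, sigma=1.0, lam=3.0, nu=4.0, rng=None):
    """Draw n skew-t variates via the skew-normal/independent (scale-mixture) representation."""
    rng = rng or np.random.default_rng()
    delta = lam / np.sqrt(1.0 + lam**2)
    z0, z1 = rng.normal(size=n), rng.normal(size=n)
    skew_normal = sigma * (delta * np.abs(z0) + np.sqrt(1.0 - delta**2) * z1)
    w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)   # mixing variable with mean 1
    return skew_normal / np.sqrt(w)                         # heavy-tailed, asymmetric draws

b = skew_normal_independent(10000, sigma=1.0, lam=3.0, nu=4.0, rng=np.random.default_rng(5))
print(round(float(np.mean(b)), 3), round(float(np.mean(b > 0)), 3))  # positively skewed sample
```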