15 results for linear weighting methods
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
In this work we introduce a relaxed version of the constant positive linear dependence constraint qualification (CPLD), which we call RCPLD. This development is inspired by a recent generalization of the constant rank constraint qualification by Minchenko and Stakhovski, called RCRCQ. We show that RCPLD suffices to ensure the convergence of an augmented Lagrangian algorithm and that it guarantees the validity of an error bound. We also provide proofs and counter-examples that establish the relations of RCRCQ and RCPLD to other known constraint qualifications. In particular, RCPLD is strictly weaker than CPLD and RCRCQ, while still stronger than Abadie's constraint qualification. We also verify that the second-order necessary optimality condition holds under RCRCQ.
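The relations stated above can be condensed into an implication chain; per the abstract, the first two implications are strict:

```latex
\[
\text{CPLD} \;\Longrightarrow\; \text{RCPLD}, \qquad
\text{RCRCQ} \;\Longrightarrow\; \text{RCPLD}, \qquad
\text{RCPLD} \;\Longrightarrow\; \text{Abadie's CQ}.
\]
```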
Abstract:
Estimates of evapotranspiration on a local scale are important for agricultural and hydrological practice. However, equations that estimate potential evapotranspiration from temperature data alone, although simple to use, are usually less trustworthy than the Food and Agriculture Organization (FAO) Penman-Monteith standard method. The present work describes two correction procedures for temperature-based potential evapotranspiration estimates that make the results more reliable. Initially, the standard FAO Penman-Monteith method was evaluated with a complete climatological data set for the period between 2002 and 2006. Then, temperature-based estimates obtained with the Camargo and Jensen-Haise methods were adjusted by error autocorrelation evaluated over biweekly and monthly periods. In a second adjustment, simple linear regression was applied. The adjusted equations were validated with climatic data available for the year 2001. Both proposed methodologies showed good agreement with the standard method, indicating that they can be used for local potential evapotranspiration estimates.
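As a concrete illustration of the second correction procedure (the simple linear regression adjustment), a minimal Python sketch follows; the numbers are hypothetical placeholders, not data from the study:

```python
import numpy as np

# Hypothetical paired series (mm/day): temperature-based estimates
# (e.g., Camargo) and the FAO Penman-Monteith reference for 2002-2006.
et_temp = np.array([2.9, 3.4, 4.1, 4.8, 5.2, 3.7, 3.1, 4.5])
et_fao = np.array([3.2, 3.6, 4.5, 5.1, 5.6, 4.0, 3.3, 4.9])

# Linear regression mapping the temperature-based estimate onto the
# FAO standard, as in the abstract's second adjustment.
slope, intercept = np.polyfit(et_temp, et_fao, 1)

def correct_et(et_estimate):
    """Apply the fitted linear correction to a raw temperature-based estimate."""
    return slope * et_estimate + intercept

# Validation step: apply the fitted correction to held-out data (e.g., 2001).
print(correct_et(np.array([3.0, 4.2])))
```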
Abstract:
Background: We aimed to investigate the performance of five different trend-analysis criteria for the detection of glaucomatous progression and to determine the most frequently and rapidly progressing locations of the visual field. Design: Retrospective cohort. Participants or Samples: Treated glaucoma patients with ≥8 Swedish Interactive Thresholding Algorithm (SITA)-standard 24-2 visual field tests. Methods: Progression was determined using trend analysis. Five different criteria were used: (A) ≥1 significantly progressing point; (B) ≥2 significantly progressing points; (C) ≥2 progressing points located in the same hemifield; (D) at least two adjacent progressing points located in the same hemifield; (E) ≥2 progressing points in the same Garway-Heath map sector. Main Outcome Measures: Number of progressing eyes and false-positive results. Results: We included 587 patients. The number of eyes reaching a progression endpoint under each criterion was: A = 300 (51%); B = 212 (36%); C = 194 (33%); D = 170 (29%); and E = 186 (31%) (P = 0.03). The numbers of eyes with positive slopes were: A = 13 (4.3%); B = 3 (1.4%); C = 3 (1.5%); D = 2 (1.1%); and E = 3 (1.6%) (P = 0.06). The global slopes for progressing eyes were more negative in Groups B, C and D than in Group A (P = 0.004). The visual field locations that progressed most often were those in the nasal field adjacent to the horizontal midline. Conclusions: Pointwise linear regression criteria that take the retinal nerve fibre layer anatomy into account enhance the specificity of trend analysis for the detection of glaucomatous visual field progression.
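A minimal sketch of the pointwise linear regression (trend analysis) underlying criteria A-E might look as follows; the slope and p-value cutoffs are common PLR choices assumed here, not taken from the paper:

```python
import numpy as np
from scipy import stats

def progressing_points(fields, years, slope_cut=-1.0, p_cut=0.01):
    """Pointwise linear regression over a series of visual fields.

    fields: (n_tests, n_locations) sensitivities in dB; years: (n_tests,).
    A location 'progresses' if its slope is below slope_cut with p < p_cut
    (commonly used PLR thresholds; the exact cutoffs are an assumption here).
    """
    flags = []
    for loc in range(fields.shape[1]):
        res = stats.linregress(years, fields[:, loc])
        flags.append(res.slope < slope_cut and res.pvalue < p_cut)
    return np.array(flags)

# Criterion B from the abstract: >=2 significantly progressing points.
rng = np.random.default_rng(0)
years = np.arange(8) * 0.5                                  # 8 exams over 3.5 years
fields = 30 - 0.2 * years[:, None] + rng.normal(0, 1, (8, 52))
print(progressing_points(fields, years).sum() >= 2)
```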
Abstract:
Background: Although linear growth during childhood may be affected by early-life exposures, few studies have examined whether the effects of these exposures linger into school age, particularly in low- and middle-income countries. Methods: We conducted a population-based longitudinal study of 256 children living in the Brazilian Amazon, aged 0.1 y to 5.5 y in 2003. Data regarding socioeconomic and maternal characteristics, infant feeding practices, morbidities, and birth weight and length were collected at the baseline of the study (2003). Child body length/height was measured at baseline and at follow-up visits (in 2007 and 2009). Restricted cubic splines were used to construct average height-for-age Z score (HAZ) growth curves, yielding estimated HAZ differences among exposure categories at ages 0.5 y, 1 y, 2 y, 5 y, 7 y, and 10 y. Results: At baseline, the median age was 2.6 y (interquartile range, 1.4 y-3.8 y), and the mean HAZ was -0.53 (standard deviation, 1.15); 10.2% of children were stunted. In multivariable analysis, children in households above the household wealth index median were 0.30 Z taller at age 5 y (P = 0.017), and children whose families owned land were 0.34 Z taller by age 10 y (P = 0.023), when compared with poorer children. Mothers in the highest tertile for height had children whose HAZ were significantly higher than those of children from mothers in the lowest height tertile, at all ages. Birth weight and length were positively related to linear growth throughout childhood; by age 10 y, children weighing >3500 g at birth were 0.31 Z taller than those weighing 2501 g to 3500 g (P = 0.022), and children measuring ≥51 cm at birth were 0.51 Z taller than those measuring ≤48 cm (P = 0.005). Conclusions: The results suggest that socioeconomic background is a potentially modifiable predictor of linear growth during the school-aged years. Maternal height and the child's anthropometric characteristics at birth are positively associated with HAZ up until age 10 y.
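For readers unfamiliar with restricted cubic splines, the sketch below builds the standard (Harrell-type) basis and fits an average HAZ-vs-age curve by least squares; the data and knot placement are illustrative assumptions, not the study's:

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell's parameterization).

    The spline is constrained to be linear beyond the boundary knots,
    which keeps fitted growth curves well behaved at the extreme ages.
    """
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    pos = lambda u: np.maximum(u, 0.0) ** 3
    cols = [x]
    for j in range(k - 2):
        cols.append(
            pos(x - t[j])
            - pos(x - t[k - 2]) * (t[-1] - t[j]) / (t[-1] - t[k - 2])
            + pos(x - t[-1]) * (t[k - 2] - t[j]) / (t[-1] - t[k - 2])
        )
    return np.column_stack(cols)

# Fit an average HAZ-vs-age curve by least squares (toy data; the study's
# knot placement is not reported here, so these knots are an assumption).
age = np.linspace(0.1, 10, 200)
haz = -0.5 - 0.1 * np.log(age + 0.5) + np.random.default_rng(1).normal(0, 0.2, 200)
X = np.column_stack([np.ones_like(age), rcs_basis(age, [0.5, 1, 2, 5, 7, 10])])
beta, *_ = np.linalg.lstsq(X, haz, rcond=None)
print(X @ beta)  # fitted average HAZ growth curve
```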
Abstract:
Background: Changes in heart rate during the rest-exercise transition can be characterized by the application of simple mathematical calculations, such as deltas over 0-10 and 0-30 seconds to infer parasympathetic activity, and deltas and linear regression applied to the 60-240 second range to infer sympathetic activity. The objective of this study was to test the hypothesis that young and middle-aged subjects have different heart rate responses to moderate and intense exercise, as captured by these different calculations. Methods: Seven apparently healthy middle-aged men and ten young men underwent constant-load tests (moderate and intense) on a cycle ergometer. The heart rate data were submitted to delta analysis (0-10, 0-30 and 60-240 seconds) and simple linear regression (60-240 seconds). The parameters obtained from the simple linear regression analysis were the intercept and the slope. We used the Shapiro-Wilk test to check the distribution of the data and the unpaired t-test for comparisons between groups. The level of statistical significance was 5%. Results: The intercept and the 0-10 second delta were lower in the middle-aged group at both loads, and the slope was lower in the middle-aged group in moderate exercise. Conclusion: The young subjects presented a greater magnitude of vagal withdrawal in the initial stage of the HR response during constant-load exercise and a faster adjustment of the sympathetic response in moderate exercise.
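The mathematical calculations described here are simple to reproduce; a Python sketch (with a toy mono-exponential HR response and an assumed baseline at t = 0) could be:

```python
import numpy as np
from scipy import stats

def hr_transition_indices(t, hr):
    """Deltas and 60-240 s linear regression described in the abstract.

    t: time in seconds from exercise onset; hr: heart rate in bpm.
    Baseline is taken as the value at t = 0 (a simplifying assumption;
    studies often average the final seconds of rest instead).
    """
    hr_at = lambda s: hr[np.argmin(np.abs(t - s))]
    delta_0_10 = hr_at(10) - hr_at(0)      # vagal withdrawal marker
    delta_0_30 = hr_at(30) - hr_at(0)
    delta_60_240 = hr_at(240) - hr_at(60)  # sympathetic activation marker
    mask = (t >= 60) & (t <= 240)
    fit = stats.linregress(t[mask], hr[mask])  # intercept and slope
    return delta_0_10, delta_0_30, delta_60_240, fit.intercept, fit.slope

t = np.arange(0, 241.0)
hr = 70 + 40 * (1 - np.exp(-t / 45.0))  # toy mono-exponential HR response
print(hr_transition_indices(t, hr))
```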
Abstract:
This paper presents simple, rapid, precise and accurate stability-indicating HPLC and CE methods, developed and validated for the determination of nitrendipine, nimodipine and nisoldipine. These drugs are calcium channel antagonists of the 1,4-dihydropyridine type used in the treatment of cardiovascular diseases. Experimental results showed a good linear correlation between peak area and drug concentration over a relatively large concentration range in all cases. The linearity of the analytical procedures was in the range of 2.0-120.0 μg mL⁻¹ for nitrendipine, 1.0-100.0 μg mL⁻¹ for nimodipine and 100.0-600.0 μg mL⁻¹ for nisoldipine, with the regression coefficient of determination higher than 0.99 in all cases. The proposed methods were found to have good precision and accuracy. The chemical stability of these drugs was determined under various conditions, and the methods showed adequate separation of their enantiomers and degradation products. In addition, the degradation products produced as a result of the stress studies did not interfere with the detection of the drugs' enantiomers, and the assays can thus be considered stability-indicating.
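Checking calibration linearity of this kind amounts to a least-squares fit of peak area against concentration; a minimal sketch with hypothetical nimodipine data within the reported range:

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data within the reported 1.0-100.0 ug/mL
# linear range for nimodipine: concentration vs. peak area.
conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 75.0, 100.0])   # ug/mL
area = np.array([0.9, 4.8, 10.2, 24.7, 50.5, 74.6, 99.8])    # arbitrary units

fit = stats.linregress(conc, area)
r_squared = fit.rvalue ** 2
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.3f}, r^2={r_squared:.4f}")
assert r_squared > 0.99  # linearity acceptance criterion from the abstract
```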
Abstract:
A rigorous asymptotic theory for Wald residuals in generalized linear models is not yet available. The authors provide matrix formulae of order O(n⁻¹), where n is the sample size, for the first two moments of these residuals. The formulae can be applied to many regression models widely used in practice. The authors suggest adjusted Wald residuals for these models, with approximately zero mean and unit variance. The expressions were used to analyze a real dataset. Simulation results indicate that the adjusted Wald residuals are better approximated by the standard normal distribution than the unadjusted Wald residuals.
Abstract:
The existence and stability of three-dimensional (3D) solitons in cross-combined linear and nonlinear optical lattices are investigated. In particular, starting from an optical lattice (OL) configuration that is linear in the x-direction and nonlinear in the y-direction, we consider the z-direction either unconstrained (quasi-2D OL case) or with another linear OL (full 3D case). We perform this study both analytically and numerically: analytically by a variational approach based on a Gaussian ansatz for the soliton wavefunction, and numerically by relaxation methods and direct integrations of the corresponding Gross-Pitaevskii equation. We conclude that, while 3D solitons in the quasi-2D OL case are always unstable, the addition of another linear OL in the z-direction allows us to stabilize 3D solitons for both attractive and repulsive mean-field interactions. From our results, we suggest the possible use of spatial modulations of the nonlinearity in one of the directions as a tool for the management of stable 3D solitons.
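To give the flavor of the variational approach, the sketch below minimizes the Gaussian-ansatz energy of a 1D reduction with a linear lattice V0 sin²(kx); the paper works in 3D with cross-combined lattices, so this is only an illustrative toy, with all parameter values assumed:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gaussian_ansatz_energy(a, g=-1.0, v0=2.0, k=1.0):
    """Variational energy of a 1D GPE reduction with a linear optical lattice
    V0*sin^2(kx), for a normalized Gaussian ansatz of width a.

    This is a 1D toy version of the variational approach in the abstract
    (the paper works in 3D); all parameter values here are illustrative.
    """
    kinetic = 1.0 / (4.0 * a**2)                       # <-(1/2) d^2/dx^2>
    lattice = 0.5 * v0 * (1.0 - np.exp(-(k * a) ** 2))  # <V0 sin^2(kx)>
    interaction = g / (2.0 * np.sqrt(2.0 * np.pi) * a)  # (g/2) * int |psi|^4
    return kinetic + lattice + interaction

res = minimize_scalar(gaussian_ansatz_energy, bounds=(0.05, 10.0), method="bounded")
print(f"variational width a* = {res.x:.3f}, energy = {res.fun:.3f}")
```

A local minimum of the ansatz energy signals a variationally stable soliton; the absence of a minimum predicts instability.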
Abstract:
In this work, the reduction reaction of the paraquat herbicide was used to obtain analytical signals with the electrochemical techniques of differential pulse voltammetry, square wave voltammetry and multiple square wave voltammetry. Analytes were prepared with laboratory purified water and natural water samples (from the Mogi-Guaçu River, SP). The electrochemical techniques were applied to 1.0 mol L⁻¹ Na2SO4 solutions at pH 5.5 containing different concentrations of paraquat, in the range of 1 to 10 μmol L⁻¹, using a gold ultramicroelectrode. Five replicate experiments were conducted, and in each the mean peak currents obtained at -0.70 V vs. Ag/AgCl yielded excellent linear relationships with pesticide concentration. The slope values of the calibration plots (method sensitivity) were 4.06 × 10⁻³, 1.07 × 10⁻² and 2.95 × 10⁻² A mol⁻¹ L for purified water by differential pulse voltammetry, square wave voltammetry and multiple square wave voltammetry, respectively. For river water samples, the slope values were 2.60 × 10⁻³, 1.06 × 10⁻² and 3.35 × 10⁻² A mol⁻¹ L, respectively, showing a small interference from the natural matrix components in paraquat determinations. The detection limits for paraquat were calculated by two distinct methodologies, i.e., the one proposed by IUPAC and a statistical method; the values obtained with multiple square wave voltammetry were 0.002 and 0.12 μmol L⁻¹, respectively, for pure water electrolytes. When the detection limit from the IUPAC recommendation is inserted into the calibration curve equation, the resulting analytical signal (oxidation current) is smaller than the one experimentally observed for the blank solution under the same experimental conditions. This is inconsistent with the definition of a detection limit, and thus the IUPAC methodology requires further discussion. The same conclusion can be drawn from the analyses of the detection limits obtained with the other techniques studied.
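The IUPAC-style detection limit referred to here is commonly computed as three times the standard deviation of the blank divided by the calibration slope; a sketch with hypothetical numbers (not the paper's data):

```python
import numpy as np
from scipy import stats

# Hypothetical calibration for paraquat (concentrations in umol/L, peak
# currents in uA) plus replicate blank measurements.
conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
current = np.array([0.041, 0.082, 0.161, 0.244, 0.322, 0.405])
blanks = np.array([0.0009, 0.0012, 0.0010, 0.0011, 0.0008, 0.0013, 0.0010])

fit = stats.linregress(conc, current)

# IUPAC-style detection limit: 3 * (standard deviation of the blank) / slope.
lod_iupac = 3 * blanks.std(ddof=1) / fit.slope
print(f"sensitivity = {fit.slope:.4f} uA L umol^-1, LOD = {lod_iupac:.4f} umol/L")
```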
Abstract:
This paper addresses the numerical solution of random crack propagation problems by coupling the boundary element method (BEM) with reliability algorithms. The crack propagation phenomenon is efficiently modelled using the BEM, due to its mesh-reduction features. The BEM model is based on the dual BEM formulation, in which singular and hyper-singular integral equations are adopted to construct the system of algebraic equations. Two reliability algorithms are coupled with the BEM model. The first is the well-known response surface method, in which local, adaptive polynomial approximations of the mechanical response are constructed in the search for the design point. Different experiment designs and adaptive schemes are considered. The alternative approach, direct coupling, in which the limit state function remains implicit and its gradients are calculated directly from the numerical mechanical response, is also considered. The performance of both coupling methods is compared in application to some crack propagation problems. The investigation shows that the direct coupling scheme converged for all problems studied, irrespective of the problem's nonlinearity. The computational cost of direct coupling was shown to be a fraction of the cost of the response surface solutions, regardless of the experiment design or adaptive scheme considered.
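The direct-coupling idea, an implicit limit state whose gradients come straight from the numerical response, can be sketched with the classic HL-RF first-order reliability iteration; here a toy analytic limit state stands in for the BEM solver, and the step size and tolerance are assumptions:

```python
import numpy as np

def form_hlrf(g, n_dim, tol=1e-6, h=1e-5, max_iter=50):
    """HL-RF first-order reliability iteration for an implicit limit state g(u)
    in standard normal space, with gradients by finite differences -- the idea
    behind the direct coupling scheme, where g would call the BEM solver.
    """
    u = np.zeros(n_dim)
    for _ in range(max_iter):
        gu = g(u)
        grad = np.array([(g(u + h * e) - gu) / h for e in np.eye(n_dim)])
        u_new = grad * (grad @ u - gu) / (grad @ grad)  # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u)  # reliability index beta

# Toy limit state standing in for the implicit BEM mechanical response.
beta = form_hlrf(lambda u: 3.0 - u[0] - 0.5 * u[1] ** 2, n_dim=2)
print(f"beta = {beta:.3f}")
```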
Abstract:
We consider modifications of the nonlinear Schrödinger (NLS) model in order to study the recently introduced concept of quasi-integrability. We show that such models possess an infinite number of quasi-conserved charges which present intriguing properties in relation to very specific space-time parity transformations. For two-soliton solutions in which the fields are eigenstates of this parity, those charges are asymptotically conserved in the scattering process of the solitons: even though the charges vary in time, their values in the far past and the far future are the same. These results are obtained through analytical and numerical methods, and employ adaptations of algebraic techniques used in integrable field theories. Our findings may have important consequences for the applications of these models in several areas of nonlinear science. We make a detailed numerical study of the modified NLS potential of the form V ∼ (|ψ|²)^(2+ε), with ε being a perturbation parameter. We perform numerical simulations of soliton scattering for this model and find good agreement with the results predicted by the analytical considerations. Our paper shows that the quasi-integrability concepts recently proposed in the context of modifications of the sine-Gordon model remain valid for perturbations of the NLS model.
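A simulation of this kind can be sketched with a split-step Fourier integrator for the modified NLS; the sign and normalization conventions below are one common choice, assumed rather than taken from the paper:

```python
import numpy as np

def split_step_nls(psi0, x, dt, n_steps, eps=0.06):
    """Strang split-step Fourier integration of a modified focusing NLS,
    i psi_t = -(1/2) psi_xx - (2+eps) |psi|^(2(1+eps)) psi,
    i.e. the nonlinearity derived from V ~ (|psi|^2)^(2+eps).
    The signs and scaling are one common convention, assumed here.
    """
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    psi = psi0.astype(complex)
    for _ in range(n_steps):
        # half nonlinear step, full linear step in Fourier space, half nonlinear
        psi *= np.exp(0.5j * dt * (2 + eps) * np.abs(psi) ** (2 * (1 + eps)))
        psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
        psi *= np.exp(0.5j * dt * (2 + eps) * np.abs(psi) ** (2 * (1 + eps)))
    return psi

# Two counter-propagating sech pulses as a toy two-soliton scattering setup.
x = np.linspace(-40, 40, 1024, endpoint=False)
psi0 = 1 / np.cosh(x + 10) * np.exp(2j * x) + 1 / np.cosh(x - 10) * np.exp(-2j * x)
psi = split_step_nls(psi0, x, dt=0.005, n_steps=2000)
print(np.trapz(np.abs(psi) ** 2, x))  # norm check after the collision
```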
Abstract:
Background: Decreased heart rate variability (HRV) is related to higher morbidity and mortality. In this study we evaluated linear and nonlinear indices of HRV in stable angina patients submitted to coronary angiography. Methods: We studied 77 unselected patients referred for elective coronary angiography, who were divided into two groups: coronary artery disease (CAD) and non-CAD groups. For the analysis of the HRV indices, HRV was recorded beat by beat with the volunteers in the supine position for 40 minutes. We analyzed the linear indices in the time domain (SDNN [standard deviation of normal-to-normal intervals], NN50 [total number of adjacent RR intervals differing by more than 50 ms] and RMSSD [root-mean square of successive differences]) and in the frequency domain: ultra-low frequency (ULF) ≤ 0.003 Hz, very low frequency (VLF) 0.003-0.04 Hz, low frequency (LF) 0.04-0.15 Hz and high frequency (HF) 0.15-0.40 Hz, as well as the ratio between the LF and HF components (LF/HF). Among the nonlinear indices we evaluated SD1, SD2, SD1/SD2, approximate entropy (−ApEn), α1, α2, the Lyapunov exponent, the Hurst exponent, autocorrelation and the correlation dimension. The cutoff points of the variables for the predictive tests were obtained from the Receiver Operating Characteristic (ROC) curve. The area under the ROC curve was calculated by the extended trapezoidal rule, with areas under the curve ≥ 0.650 considered relevant. Results: Coronary artery disease patients presented reduced values of SDNN, RMSSD, NN50, HF, SD1, SD2 and −ApEn. HF ≤ 66 ms², RMSSD ≤ 23.9 ms, ApEn ≤ −0.296 and NN50 ≤ 16 presented the best discriminatory power for the presence of significant coronary obstruction. Conclusion: We suggest the use of heart rate variability analysis in the linear and nonlinear domains for prognostic purposes in patients with stable angina pectoris, in view of their overall impairment.
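The time-domain indices and the trapezoidal ROC area mentioned here are straightforward to compute; a minimal sketch on toy data (all values illustrative):

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Time-domain HRV indices from the abstract: SDNN, RMSSD, NN50.
    rr_ms: normal-to-normal RR intervals in milliseconds."""
    diff = np.diff(rr_ms)
    sdnn = rr_ms.std(ddof=1)
    rmssd = np.sqrt(np.mean(diff**2))
    nn50 = np.sum(np.abs(diff) > 50)
    return sdnn, rmssd, nn50

def roc_auc(scores, labels):
    """Area under the ROC curve by the trapezoidal rule, as in the abstract.
    Higher scores are taken to indicate disease; negate scores otherwise."""
    thresholds = np.sort(np.unique(scores))[::-1]
    tpr = [np.mean(scores[labels == 1] >= t) for t in thresholds]
    fpr = [np.mean(scores[labels == 0] >= t) for t in thresholds]
    return np.trapz([0] + tpr + [1], [0] + fpr + [1])

rng = np.random.default_rng(2)
rr = 800 + np.cumsum(rng.normal(0, 15, 300))  # toy RR-interval series, ms
print(time_domain_hrv(rr))

labels = rng.integers(0, 2, 40)
scores = rng.normal(0, 1, 40) + labels        # toy index, higher in CAD group
print(roc_auc(scores, labels))
```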
Abstract:
INTRODUCTION: Changes in heart rate (HR) during the rest-exercise transition can be characterized through the application of simple mathematical calculations, such as deltas over 0-10 and 0-30 s to infer parasympathetic activity, and delta and linear regression applied over the 60-240 s interval to infer sympathetic activity. The objective of this study was thus to test the hypothesis that young and middle-aged individuals present different HR responses to moderate and intense exercise, as captured by these different calculations. METHODS: Apparently healthy men, seven middle-aged and 10 young, underwent constant-load tests of moderate and intense intensity. HR deltas were calculated over the 0-10 s, 0-30 s and 60-240 s periods, and simple linear regression was applied over the 60-240 s period. The parameters obtained from the simple linear regression analysis were the intercept and the slope. The Shapiro-Wilk test was used to verify the distribution of the data, and the unpaired t-test was used for comparisons between groups. The level of statistical significance was set at 5%. RESULTS: The intercept and the 0-10 s delta were lower in the middle-aged group at both loads, and the slope was lower in the middle-aged group in moderate exercise. CONCLUSION: Young individuals present vagal withdrawal of greater magnitude in the initial stage of the HR response during constant-load dynamic exercise at the intensities analyzed, and a faster adjustment of the sympathetic response in moderate exercise.
Abstract:
Modern GPUs are well suited to intensive computational tasks and massive parallel computation. Sparse matrix-vector multiplication and the linear triangular solver are among the most important and heavily used kernels in scientific computation, and several challenges in developing a high-performance kernel combining these two modules are investigated. The main interest is to solve linear systems derived from elliptic equations discretized with triangular elements. The resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format. A CUDA algorithm is proposed to execute the matrix-vector multiplication directly on the CSR format. A dependence-tree algorithm is used to determine which variables the linear triangular solver can compute in parallel. To increase the number of parallel threads, a graph coloring algorithm is implemented to reorder the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method improves the computational cost of the matrix-vector multiplication, and the pre-processing associated with the triangular solver needs to be executed just once. The conjugate gradient method was implemented and showed similar convergence rates for all the compared methods, with the proposed method showing significantly smaller execution times.
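The row-wise independence that the CUDA kernel exploits is already visible in a serial sketch of matrix-vector multiplication performed directly on the CSR arrays (shown here in Python for brevity):

```python
import numpy as np

def csr_spmv(data, indices, indptr, x):
    """Matrix-vector product directly on CSR storage. Each outer iteration
    is independent of the others, which is what a CUDA kernel exploits by
    assigning rows (or groups of rows) to threads."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# Small symmetric positive definite example in CSR format.
# Dense form: [[4, 1, 0], [1, 4, 1], [0, 1, 4]]
data = np.array([4.0, 1.0, 1.0, 4.0, 1.0, 1.0, 4.0])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr = np.array([0, 2, 5, 7])
x = np.array([1.0, 2.0, 3.0])
print(csr_spmv(data, indices, indptr, x))  # [ 6. 12. 14.]
```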
Abstract:
Hermite interpolation is increasingly proving to be a powerful numerical solution tool, as applied to different kinds of second-order boundary value problems. In this work we present two Hermite finite element methods to solve viscous incompressible flow problems, in both two and three space dimensions. In the two-dimensional case we use the Zienkiewicz triangle to represent the velocity field, and in the three-dimensional case an extension of this element to tetrahedra, still called a Zienkiewicz element. Taking the Stokes system as a model, the pressure is approximated with continuous functions, either piecewise linear or piecewise quadratic, according to the version of the Zienkiewicz element in use, that is, with either incomplete or complete cubics. The methods employ either the standard Galerkin formulation or the Petrov-Galerkin formulation first proposed in Hughes et al. (1986) [18], based on the addition of a balance-of-force term. A priori error analyses point to optimal convergence rates for the Petrov-Galerkin approach, and for the Galerkin formulation too, at least in some particular cases. From the point of view of both accuracy and the global number of degrees of freedom, the new methods are shown to have a favorable cost-benefit ratio compared to velocity Lagrange finite elements of the same order, especially if the Galerkin approach is employed.
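Hermite interpolation in its simplest, one-dimensional form matches function values and first derivatives at the element endpoints; the sketch below shows this 1D prototype (the paper's Zienkiewicz triangles and tetrahedra are its 2D/3D analogues):

```python
import numpy as np

def hermite_interp(x, xa, xb, fa, fb, dfa, dfb):
    """Cubic Hermite interpolation on [xa, xb], matching the function value
    and first derivative at both endpoints -- the 1D prototype of the
    Hermite elements used for the velocity field in the paper."""
    h = xb - xa
    t = (x - xa) / h
    h00 = 1 - 3 * t**2 + 2 * t**3   # value at left node
    h10 = t - 2 * t**2 + t**3       # derivative at left node
    h01 = 3 * t**2 - 2 * t**3       # value at right node
    h11 = -(t**2) + t**3            # derivative at right node
    return fa * h00 + dfa * h * h10 + fb * h01 + dfb * h * h11

# Interpolate sin on [0, 1] from endpoint values and slopes, then compare.
x = np.linspace(0.0, 1.0, 5)
print(hermite_interp(x, 0.0, 1.0, np.sin(0.0), np.sin(1.0), np.cos(0.0), np.cos(1.0)))
print(np.sin(x))
```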