926 results for LINEAR-ANALYSIS
Abstract:
Objective: The aim of the present study was to evaluate the effect of pursed-lip breathing (PLB) on cardiac autonomic modulation in individuals with chronic obstructive pulmonary disease (COPD) at rest. Methods: Thirty-two individuals were allocated to one of two groups: COPD (n = 17; 67.29 ± 6.87 years of age) and control (n = 15; 63.2 ± 7.96 years of age). The groups were submitted to a two-stage experimental protocol. The first stage consisted of the characterization of the sample and spirometry. The second stage comprised the analysis of cardiac autonomic modulation through the recording of R-R intervals. This analysis was performed using both nonlinear and linear heart rate variability (HRV) indices. In the statistical analysis, the level of significance was set at 5% (p < 0.05). Results: PLB promoted significant increases in the SD1, SD2, RMSSD and LF (ms²) indices, as well as an increase in α1 and a reduction in α2, in the COPD group. A greater dispersion of points on the Poincaré plots was also observed. The magnitude of the changes produced by PLB differed between groups. Conclusion: PLB led to a loss of fractal correlation properties of heart rate in the direction of linearity in patients with COPD, as well as an increase in vagal activity and an impact on the spectral indices. The difference between groups in the magnitude of the changes produced by PLB may be related to the presence of the disease and to alterations in respiratory rate.
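For readers unfamiliar with the indices named above, the sketch below shows how RMSSD and the Poincaré descriptors SD1/SD2 are conventionally computed from a series of R-R intervals. The function name and the synthetic series are illustrative only; this is not the study's actual processing pipeline.

```python
import numpy as np

def hrv_indices(rr_ms):
    """RMSSD and Poincare SD1/SD2 from R-R intervals in milliseconds.

    Standard definitions: RMSSD is the root mean square of successive
    differences; SD1/SD2 are the dispersions perpendicular to and along
    the identity line of the Poincare plot (RR_n vs RR_{n+1}).
    """
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)                      # successive differences
    rmssd = np.sqrt(np.mean(diff ** 2))     # time-domain vagal index
    # Poincare descriptors follow from variances of rotated coordinates
    sd1 = np.sqrt(np.var(diff, ddof=1) / 2.0)
    sd2 = np.sqrt(2.0 * np.var(rr, ddof=1) - np.var(diff, ddof=1) / 2.0)
    return rmssd, sd1, sd2

# Illustrative use on a toy synthetic R-R series (ms)
rng = np.random.default_rng(0)
rr = 800 + np.cumsum(rng.normal(0, 5, 300))
print(hrv_indices(rr))
```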
Linear Versus Geometric Morphometric Approaches for the Analysis of Head Shape Dimorphism in Lizards
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The purpose of this paper is to present the application of a three-phase, time-domain harmonic propagation analysis tool that uses the Norton model to represent non-linear loads, making the harmonic current flows more representative for operational analysis and for analyzing the influence of mitigation elements. The software makes it possible to obtain results closer to those of the real distribution network, taking into account voltage and current unbalances and the application of mitigation elements for harmonic distortion. In this scenario, a real case study with network data and equipment connected to the network is presented, along with the modeling of non-linear loads based on real data obtained from several points of common coupling (PCCs) of interest to a distribution company.
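As a rough illustration of the Norton approach mentioned above, a non-linear load can be represented, at each harmonic order h, by a current source in parallel with an admittance, so that the current it injects depends on the (possibly unbalanced) bus voltage. The sketch below uses placeholder phasor values, not data from the paper's case study.

```python
import numpy as np

# Norton representation of a non-linear load at harmonic order h:
# a fixed current source I_h in parallel with an admittance Y_h, so the
# current injected into the network depends on the bus voltage V_h:
#     I_inj(h) = I_h - Y_h * V_h
# All values below are illustrative placeholders, not field data.

def norton_injection(i_source, y_norton, v_bus):
    """Per-harmonic, per-phase injected current of a Norton load model."""
    return i_source - y_norton * v_bus

# Example: 5th harmonic, three phases (complex phasors, A and V)
i5 = 1.2 * np.exp(1j * np.array([0.0, -2.0 * np.pi / 3, 2.0 * np.pi / 3]))
y5 = 0.05 + 0.2j                               # admittance identified at a PCC
v5 = np.array([0.030, 0.028, 0.031]) * 230.0   # unbalanced 5th-harmonic voltages
print(norton_injection(i5, y5, v5))
```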
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range Z of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best-known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖∞-minimization problem (the fact that ‖F_P‖∞ = lim_{q→∞} ‖F_P‖_q is not by itself enough to deduce this). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. It concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
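The w ↦ w^q reduction and the maximin character of the GC_max output can both be illustrated compactly. The sketch below computes bottleneck (maximin) connectivity from seeds with a Dijkstra-style sweep, which is the connectivity notion underlying IRFC/GC_max, and shows the reweighting that turns an ‖F_P‖_1 solver into an ‖F_P‖_q solver. It is a schematic illustration, not the paper's linear-time bucket implementation.

```python
import heapq

def maximin_connectivity(n_nodes, edges, seeds):
    """Dijkstra-style computation of maximin (bottleneck) path strength,
    the connectivity measure behind fuzzy-connectedness segmentations.
    edges: dict mapping node -> list of (neighbor, weight); higher
    weight = stronger affinity.
    """
    strength = [0.0] * n_nodes
    heap = []
    for s in seeds:
        strength[s] = float("inf")
        heapq.heappush(heap, (-strength[s], s))
    while heap:
        neg, u = heapq.heappop(heap)
        if -neg < strength[u]:           # stale heap entry
            continue
        for v, w in edges.get(u, ()):
            cand = min(strength[u], w)   # bottleneck along the path
            if cand > strength[v]:
                strength[v] = cand
                heapq.heappush(heap, (-cand, v))
    return strength

# The q-norm reduction noted above: a solver for the ||F_P||_1 problem
# also solves the ||F_P||_q problem once each weight w is replaced by w**q.
def reweight_for_q(edges, q):
    return {u: [(v, w ** q) for v, w in nbrs] for u, nbrs in edges.items()}

# Tiny 4-node example
edges = {
    0: [(1, 0.9), (2, 0.2)],
    1: [(0, 0.9), (3, 0.5)],
    2: [(0, 0.2), (3, 0.8)],
    3: [(1, 0.5), (2, 0.8)],
}
print(maximin_connectivity(4, edges, seeds=[0]))  # [inf, 0.9, 0.5, 0.5]
```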
Abstract:
Background: Changes in heart rate during the rest-exercise transition can be characterized by mathematical calculations such as deltas over 0-10 and 0-30 seconds, used to infer parasympathetic nervous system activity, and linear regression and a delta applied to data in the 60-240 second range, used to infer sympathetic nervous system activity. The objective of this study was to test the hypothesis that young and middle-aged subjects have different heart rate responses to moderate- and high-intensity exercise, as assessed by these different mathematical calculations. Methods: Seven middle-aged men and ten young men, all apparently healthy, underwent constant-load tests (intense and moderate) on a cycle ergometer. The heart rate data were submitted to delta analysis (0-10, 0-30 and 60-240 seconds) and simple linear regression (60-240 seconds). The parameters obtained from the simple linear regression analysis were the intercept and the slope. We used the Shapiro-Wilk test to check the distribution of the data and the unpaired t-test for comparisons between groups. The level of statistical significance was 5%. Results: The intercept and the 0-10 second delta were lower in the middle-aged group at both loads tested, and the slope was lower in the middle-aged group during moderate exercise. Conclusion: The young subjects showed a greater magnitude of vagal withdrawal in the initial stage of the HR response during constant-load exercise and a faster adjustment of the sympathetic response during moderate exercise.
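A minimal sketch of the calculations named in the abstract, assuming the deltas are simple end-minus-start differences over the stated windows and that the 60-240 s fit is an ordinary least-squares line; the study's exact preprocessing may differ.

```python
import numpy as np

def hr_transition_indices(t, hr):
    """Deltas (0-10 s, 0-30 s, 60-240 s) and a 60-240 s linear fit for the
    heart-rate response at exercise onset (t in seconds, hr in bpm)."""
    t, hr = np.asarray(t, float), np.asarray(hr, float)
    hr_at = lambda s: hr[np.argmin(np.abs(t - s))]   # nearest-sample lookup
    d10 = hr_at(10) - hr_at(0)        # fast, vagal-withdrawal phase
    d30 = hr_at(30) - hr_at(0)
    d240 = hr_at(240) - hr_at(60)     # slower, sympathetic phase
    win = (t >= 60) & (t <= 240)
    slope, intercept = np.polyfit(t[win], hr[win], 1)
    return d10, d30, d240, intercept, slope

# Synthetic mono-exponential HR response sampled once per second
t = np.arange(0, 241)
hr = 70 + 50 * (1 - np.exp(-t / 40.0))
print(hr_transition_indices(t, hr))
```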
Abstract:
In my PhD thesis I propose a Bayesian nonparametric estimation method for structural econometric models in which the functional parameter of interest describes the economic agent's behavior. The structural parameter is characterized as the solution of a functional equation or, in more technical terms, as the solution of an inverse problem that can be either ill-posed or well-posed. From a Bayesian point of view, the parameter of interest is a random function and the solution to the inference problem is the posterior distribution of this parameter. A regular version of the posterior distribution on functional spaces is characterized. However, the infinite dimension of the spaces considered causes a problem of non-continuity of the solution and hence a problem of inconsistency of the posterior distribution from a frequentist point of view (i.e., a problem of ill-posedness). The contribution of this essay is to propose new methods to deal with this ill-posedness. The first consists in adopting a Tikhonov regularization scheme in the construction of the posterior distribution, yielding a new object that I call the regularized posterior distribution and that I propose as a solution of the inverse problem. The second approach consists in specifying a prior distribution of the g-prior type on the parameter of interest. I then identify a class of models for which this prior distribution is able to correct for the ill-posedness, even in infinite-dimensional problems. I study the asymptotic properties of these proposed solutions and prove that, under some regularity conditions satisfied by the true value of the parameter of interest, they are consistent in a frequentist sense. Once the general theory is set out, I apply this Bayesian nonparametric methodology to different estimation problems. First, I apply the estimator to deconvolution and to hazard rate, density and regression estimation. Then, I consider the estimation of an instrumental regression, which is useful in microeconometrics when dealing with problems of endogeneity. Finally, I develop an application in finance: I obtain the Bayesian estimator for the equilibrium asset pricing functional by using the Euler equation defined in Lucas' (1978) tree-type models.
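A finite-dimensional caricature may help fix ideas: for a discretized linear inverse problem y = Kx + noise with a smoothing operator K, naive inversion is unstable, and Tikhonov regularization, the same device the thesis builds into the posterior distribution, restores stability. All quantities below are illustrative; the thesis works in infinite-dimensional function spaces.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
s = np.linspace(0, 1, n)
# Smoothing (hence ill-conditioned) kernel operator on a grid
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.01) / n
x_true = np.sin(2 * np.pi * s)              # "true" functional parameter
y = K @ x_true + rng.normal(0, 1e-3, n)     # noisy indirect observation

# Tikhonov-regularized solution: (alpha*I + K'K)^{-1} K'y; in the thesis
# this stabilization is carried out on the posterior distribution itself.
alpha = 1e-4
x_reg = np.linalg.solve(alpha * np.eye(n) + K.T @ K, K.T @ y)
print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```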
Abstract:
Shell structures are widely used in engineering. The purpose of this dissertation is to describe the behavior of a thin shell under external load, especially a long cylindrical shell under compressive load. I analyzed both the linear elastic problem and the buckling problem, and finite element analysis shows that imperfections in a cylinder affect its critical load, i.e., its buckling capacity. For the linear elastic problem, I compared the theoretical results with those obtained from Straus7 and Abaqus, and the results agree closely. For the buckling problem I did the same, comparing the theoretical and Abaqus results; the error is less than 1%. In reality, however, the theoretical buckling capacity cannot be reached because of the imperfections of the cylinder, so I imposed different levels of imperfection on the cylinder in Abaqus and found that the buckling capacity decreases as the imperfection percentage increases: for example, a 10% imperfection reduces the buckling capacity by about 40%. This outcome matches the buckling behavior observed in reality.
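For context, the classical critical stress of an axially compressed thin cylinder is σ_cr = Eh / (R√(3(1−ν²))), roughly 0.605·Eh/R for ν = 0.3. The sketch below evaluates it for illustrative steel-cylinder numbers (not the dissertation's model) and applies the ~40% knockdown at 10% imperfection reported above.

```python
import math

def classical_buckling_stress(E, h, R, nu=0.3):
    """Classical critical axial-compression stress of a thin cylinder:
    sigma_cr = E*h / (R * sqrt(3*(1 - nu**2)))."""
    return E * h / (R * math.sqrt(3.0 * (1.0 - nu ** 2)))

# Illustrative steel cylinder: E in Pa, wall thickness h and radius R in m
E, h, R = 210e9, 0.005, 1.0
sigma_cr = classical_buckling_stress(E, h, R)
print(f"theoretical sigma_cr ~ {sigma_cr / 1e6:.1f} MPa")

# The abstract reports ~40% capacity loss at 10% imperfection; only that
# single reported point is used here, as a stand-in for the FE analyses.
print(f"with 10% imperfection ~ {0.60 * sigma_cr / 1e6:.1f} MPa")
```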
Abstract:
In this thesis, we consider Bayesian inference on the detection of variance change-points in models with scale mixtures of normal (SMN) distributions. This class of distributions is symmetric and heavy-tailed and includes the Gaussian, Student-t, contaminated normal and slash distributions as special cases. The proposed models provide greater flexibility for analyzing practical data, which often exhibit heavy tails and may not satisfy the normality assumption. For the Bayesian analysis, we specify prior distributions for the unknown parameters of the variance change-point models with SMN distributions. Owing to the complexity of the joint posterior distribution, we propose an efficient Gibbs-type sampling algorithm with Metropolis-Hastings steps for posterior Bayesian inference. Thereafter, following the idea of [1], we consider the problems of single and multiple change-point detection. The performance of the proposed procedures is illustrated and analyzed through simulation studies. A real application to closing price data from the U.S. stock market is analyzed for illustrative purposes.
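To make the sampling scheme concrete, the sketch below implements the Gaussian special case of the SMN family for a single variance change-point, where all full conditionals are available in closed form and plain Gibbs updates suffice. The priors, the zero-mean assumption, and all numbers are illustrative; the heavier-tailed SMN members require the additional Metropolis-Hastings steps described above.

```python
import numpy as np

def gibbs_variance_cp(y, n_iter=2000, a0=2.0, b0=1.0, seed=0):
    """Gibbs sampler for a single variance change-point in zero-mean
    Gaussian data:  y_t ~ N(0, s1) for t <= k,  y_t ~ N(0, s2) for t > k,
    with inverse-gamma(a0, b0) priors on s1, s2 and a uniform prior on k.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n = len(y)
    c = np.concatenate(([0.0], np.cumsum(y ** 2)))  # prefix sums of y_t^2
    j = np.arange(n - 1)                            # candidate change-points
    k, draws = n // 2, []
    for _ in range(n_iter):
        # conjugate inverse-gamma updates for the two segment variances
        s1 = 1.0 / rng.gamma(a0 + (k + 1) / 2.0,
                             1.0 / (b0 + 0.5 * c[k + 1]))
        s2 = 1.0 / rng.gamma(a0 + (n - k - 1) / 2.0,
                             1.0 / (b0 + 0.5 * (c[n] - c[k + 1])))
        # discrete full conditional of the change-point location
        logp = (-0.5 * (c[j + 1] / s1 + (c[n] - c[j + 1]) / s2)
                - 0.5 * ((j + 1) * np.log(s1) + (n - j - 1) * np.log(s2)))
        p = np.exp(logp - logp.max())
        k = int(rng.choice(n - 1, p=p / p.sum()))
        draws.append((k, s1, s2))
    return draws

# Toy series whose variance jumps from 1 to 9 at t = 120
rng = np.random.default_rng(42)
y = np.concatenate([rng.normal(0, 1, 120), rng.normal(0, 3, 80)])
ks = [d[0] for d in gibbs_variance_cp(y)[500:]]   # drop burn-in
print("posterior mode of change-point:", max(set(ks), key=ks.count))
```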