990 results for linear tests
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
For the development of this graduate work on fractal fracture behavior, it is necessary to establish references for fractal analysis of fracture surfaces, evaluating, from fracture toughness tests in modes I, II and mixed I/II, the behavior of fractures in brittle materials in the linear elastic regime. Fractures in the linear elastic regime have been described in terms of their fractal behavior by several researchers, notably J. J. Mecholsky. The motivation for the present proposal stems from work done by the group and accepted for publication in the journal Materials Science and Engineering A (Horovistiz et al., 2010), in which Mecholsky's model could not be confirmed for fractures in grooved specimens of titania subjected to diametral compression tests in mode I. The general objective of this proposal is to quantify and distinguish surface regions formed by different fracture propagation mechanisms in the linear elastic regime in polymeric specimens (phenolic resin), relating toughness, specimen thickness and fractal dimension. The analyzed fractures were obtained from SCB tests under mode I loading; images were acquired with an optical reflection microscope, the surface topographies were reconstructed by the extended-focus method, and the fractal dimension values were calculated from the elevation maps. The fractal dimension was classified as the monofractal dimension (Df), when the fracture is described by a single value, or as the textural dimension (Dt), a macroscopic analysis of the fracture, combined with the structural dimension (Ds), a microscopic analysis. The results showed no clear relationship between toughness, thickness and fractal values for the material investigated. On the other hand, it is clear that the fractal values change with the evolution of cracks during the fracture process ... (Complete abstract: click electronic access below)
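The monofractal dimension Df mentioned above is often estimated by box counting over a fracture profile or elevation map. The sketch below is a generic illustration of the technique under assumed inputs (a 2-D point set in the unit square and arbitrary box scales), not the authors' actual pipeline:

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting (monofractal) dimension of a 2-D point set.

    points: (n, 2) array of coordinates in [0, 1)^2, e.g. pixels extracted
    from a fracture profile; scales: iterable of box side lengths.
    """
    counts = []
    for s in scales:
        # Count the boxes of side `s` that contain at least one point.
        boxes = set(map(tuple, np.floor(points / s).astype(int)))
        counts.append(len(boxes))
    # The dimension is the negative slope of log(count) versus log(scale).
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope

# Sanity check: a straight segment should have dimension close to 1.
line = np.column_stack([np.linspace(0, 1, 2000), np.linspace(0, 1, 2000)])
print(box_counting_dimension(line, [1/4, 1/8, 1/16, 1/32]))
```

A rough surface profile would yield a value between 1 and 2, which is what distinguishes the fracture regions discussed above.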
Abstract:
Graduate Program in Mechanical Engineering - FEIS
Abstract:
Background: We aimed to investigate the performance of five different trend analysis criteria for the detection of glaucomatous progression and to determine the most frequently and rapidly progressing locations of the visual field. Design: Retrospective cohort. Participants or Samples: Treated glaucoma patients with ≥8 Swedish Interactive Thresholding Algorithm (SITA)-standard 24-2 visual field tests. Methods: Progression was determined using trend analysis. Five different criteria were used: (A) ≥1 significantly progressing point; (B) ≥2 significantly progressing points; (C) ≥2 progressing points located in the same hemifield; (D) at least two adjacent progressing points located in the same hemifield; (E) ≥2 progressing points in the same Garway-Heath map sector. Main Outcome Measures: Number of progressing eyes and false-positive results. Results: We included 587 patients. The number of eyes reaching a progression endpoint using each criterion was: A = 300 (51%); B = 212 (36%); C = 194 (33%); D = 170 (29%); and E = 186 (31%) (P = 0.03). The numbers of eyes with positive slopes were: A = 13 (4.3%); B = 3 (1.4%); C = 3 (1.5%); D = 2 (1.1%); and E = 3 (1.6%) (P = 0.06). The global slopes for progressing eyes were more negative in Groups B, C and D than in Group A (P = 0.004). The visual field locations that progressed most often were those in the nasal field adjacent to the horizontal midline. Conclusions: Pointwise linear regression criteria that take into account the retinal nerve fibre layer anatomy enhance the specificity of trend analysis for the detection of glaucomatous visual field progression.
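Criteria A-D above reduce to simple set operations over the flagged visual field locations. The sketch below is illustrative only: the grid layout, the `upper` hemifield predicate and the adjacency rule are assumptions, not the 24-2 test pattern or the study's exact definitions (criterion E would additionally require the Garway-Heath sector map):

```python
def evaluate_criteria(progressing, upper):
    """Evaluate progression criteria A-D on a set of flagged locations.

    progressing: set of (row, col) locations flagged as significantly
    progressing by pointwise linear regression; upper: predicate that is
    True for locations in the upper hemifield (illustrative convention).
    """
    pts = list(progressing)
    n = len(pts)
    hemis = {}
    for p in pts:
        hemis.setdefault(upper(p), []).append(p)
    same_hemi = any(len(v) >= 2 for v in hemis.values())
    # Two points are "adjacent" if they differ by at most one grid step.
    adjacent = any(
        upper(a) == upper(b) and max(abs(a[0] - b[0]), abs(a[1] - b[1])) == 1
        for i, a in enumerate(pts) for b in pts[i + 1:]
    )
    return {"A": n >= 1, "B": n >= 2, "C": same_hemi, "D": adjacent}

flags = {(1, 2), (1, 3), (4, 2)}              # hypothetical flagged points
print(evaluate_criteria(flags, upper=lambda p: p[0] < 3))
```

The nesting of the criteria (A loosest, D strictest within a hemifield) mirrors the decreasing endpoint counts reported above.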
Abstract:
This work presents the application of Linear Matrix Inequalities (LMIs) to the robust and optimal adjustment of Power System Stabilizers with a pre-defined structure. Results of several tests show that gain and zero adjustments are sufficient to guarantee robust stability and performance with respect to various operating points. Making use of the flexible structure of LMIs, we propose an algorithm that minimizes the norm of the controller gain matrix while guaranteeing the damping factor specified for the closed-loop system, always using a controller with a flexible structure. The technique used here is pole placement, whose objective is to place the poles of the closed-loop system in a specific region of the complex plane. Results of tests with a nine-machine system are presented and discussed in order to validate the proposed algorithm. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Background: Changes in heart rate during the rest-exercise transition can be characterized by mathematical calculations, such as deltas over 0-10 and 0-30 seconds, to infer about the parasympathetic nervous system, and linear regression and a delta applied to the 60-240 second data range, to infer about the sympathetic nervous system. The objective of this study was to test the hypothesis that young and middle-aged subjects have different heart rate responses during moderate- and high-intensity exercise, as assessed by these different calculations. Methods: Seven middle-aged men and ten young men, all apparently healthy, were subjected to constant-load tests (intense and moderate) on a cycle ergometer. The heart rate data were submitted to delta analysis (0-10, 0-30 and 60-240 seconds) and simple linear regression (60-240 seconds). The parameters obtained from the simple linear regression analysis were the intercept and the slope angle. We used the Shapiro-Wilk test to check the distribution of the data and the unpaired "t" test for comparisons between groups. The level of statistical significance was 5%. Results: The intercept and the 0-10 second delta were lower in the middle-aged group for both loads tested, and the slope angle was lower in moderate exercise in the middle-aged group. Conclusion: The young subjects present a greater magnitude of vagal withdrawal in the initial stage of the HR response during constant-load exercise and a higher speed of adjustment of the sympathetic response in moderate exercise.
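The deltas and regression parameters described above can be sketched as follows. The operational definitions used here (deltas as HR(t) − HR(0), the slope angle as the arctangent of the regression slope) are plausible assumptions and may differ from the study's exact choices:

```python
import numpy as np

def hr_transition_indices(t, hr):
    """Sketch of rest-exercise transition indices from heart-rate data.

    t: seconds from exercise onset; hr: heart rate in bpm.
    """
    hr0 = np.interp(0, t, hr)
    delta_0_10 = np.interp(10, t, hr) - hr0        # vagal-withdrawal phase
    delta_0_30 = np.interp(30, t, hr) - hr0
    delta_60_240 = np.interp(240, t, hr) - np.interp(60, t, hr)
    # Simple linear regression on the 60-240 s window (sympathetic phase).
    mask = (t >= 60) & (t <= 240)
    slope, intercept = np.polyfit(t[mask], hr[mask], 1)
    return {"d0_10": delta_0_10, "d0_30": delta_0_30,
            "d60_240": delta_60_240, "intercept": intercept,
            "slope_angle_deg": np.degrees(np.arctan(slope))}

t = np.arange(0, 241, 10.0)
hr = 70 + 0.1 * t                                  # synthetic linear rise
print(hr_transition_indices(t, hr))
```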
Abstract:
Abstract Background Decreased heart rate variability (HRV) is related to higher morbidity and mortality. In this study we evaluated the linear and nonlinear indices of HRV in stable angina patients submitted to coronary angiography. Methods We studied 77 unselected patients scheduled for elective coronary angiography, who were divided into two groups: coronary artery disease (CAD) and non-CAD groups. For analysis of the HRV indices, HRV was recorded beat by beat with the volunteers in the supine position for 40 minutes. We analyzed the linear indices in the time domain (SDNN [standard deviation of normal-to-normal intervals], NN50 [total number of adjacent RR intervals with a difference of duration greater than 50 ms] and RMSSD [root-mean square of successive differences]) and in the frequency domain: ultra-low frequency (ULF) ≤ 0.003 Hz, very low frequency (VLF) 0.003-0.04 Hz, low frequency (LF) 0.04-0.15 Hz, and high frequency (HF) 0.15-0.40 Hz, as well as the ratio between the LF and HF components (LF/HF). As nonlinear indices we evaluated SD1, SD2, SD1/SD2, approximate entropy (ApEn), α1, α2, the Lyapunov exponent, the Hurst exponent, autocorrelation and the correlation dimension. The cutoff points of the variables for predictive tests were obtained from the Receiver Operating Characteristic (ROC) curve. The area under the ROC curve was calculated by the extended trapezoidal rule, considering areas under the curve ≥ 0.650 as relevant. Results Coronary artery disease patients presented reduced values of SDNN, RMSSD, NN50, HF, SD1, SD2 and ApEn. HF ≤ 66 ms², RMSSD ≤ 23.9 ms, ApEn ≤ −0.296 and NN50 ≤ 16 presented the best discriminatory power for the presence of significant coronary obstruction. Conclusion We suggest the use of heart rate variability analysis in the linear and nonlinear domains for prognostic purposes in patients with stable angina pectoris, in view of their overall impairment.
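The time-domain indices named above (SDNN, RMSSD, NN50) have standard definitions and can be computed directly from a series of RR intervals; a minimal sketch with synthetic values:

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Time-domain HRV indices from RR intervals in milliseconds:
    SDNN, RMSSD and NN50, as defined in the abstract above."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    sdnn = rr.std(ddof=1)                   # SD of all normal-to-normal intervals
    rmssd = np.sqrt(np.mean(diffs ** 2))    # root-mean square of successive diffs
    nn50 = int(np.sum(np.abs(diffs) > 50))  # successive differences > 50 ms
    return sdnn, rmssd, nn50

rr = [800, 810, 790, 860, 805]              # synthetic RR series (ms)
print(time_domain_hrv(rr))
```

The frequency-domain and nonlinear indices (ApEn, SD1/SD2, exponents) require spectral estimation and phase-space methods beyond this sketch.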
Abstract:
With increasing attention to the safety assessment of existing Dutch bridges and viaducts, the aim of the present thesis is to study, through Finite Element modeling and continuous comparison with experimental results, the in-service response of elements that compose these infrastructures, i.e. reinforced concrete slabs subjected to concentrated loads. These elements are characterized by shear behavior and shear failure, whose modeling is computationally challenging because of their brittle behavior combined with various 3D effects. The thesis focuses on Sequentially Linear Analysis (SLA), an alternative Finite Element solution technique to the classical incremental and iterative approaches of nonlinear analyses. The advantage of SLA is that it avoids the well-known convergence problems of nonlinear analyses by directly specifying a damage increment, in terms of a reduction of stiffness and strength in a particular finite element, instead of a load or displacement increment. The comparison between the results of two laboratory tests on reinforced concrete slabs and those obtained by SLA demonstrated in both cases the robustness of the method, in terms of the accuracy of the load-displacement diagrams, the distribution of stresses and strains, and the representation of the cracking pattern and of the shear failure mechanisms. Variations of the most important model parameters were performed, highlighting the strong influence on the solutions of the fracture energy and of the chosen shear retention model.
Finally, a comparison between SLA and the nonlinear Newton-Raphson method was carried out, showing the better reliability of SLA in the evaluation of ultimate loads and displacements, together with a significant reduction in computational time.
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models in which all observations are correlated (e.g., a single time series).
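The rotation step can be sketched as follows: with marginal variance V and residual vector r, a factor C with C Cᵀ = V⁻¹ makes Cᵀ r have identity covariance, so under a correct model the rotated residuals behave like i.i.d. standard normals. This is an illustration of the idea, not the authors' implementation:

```python
import numpy as np

def rotated_residuals(resid, V):
    """Rotate a marginal residual vector by the Cholesky factor of the
    inverse estimated marginal variance matrix."""
    # C satisfies C @ C.T = inv(V); then cov(C.T @ resid) = I.
    C = np.linalg.cholesky(np.linalg.inv(V))
    return C.T @ resid

def ecdf(x):
    """Empirical CDF evaluated at the sorted sample points."""
    xs = np.sort(x)
    return xs, np.arange(1, len(xs) + 1) / len(xs)

rng = np.random.default_rng(0)
V = np.array([[2.0, 0.8], [0.8, 1.5]])           # assumed marginal variance
e = np.linalg.cholesky(V) @ rng.standard_normal(2)  # correlated errors
print(rotated_residuals(e, V))
```

The ECDF of many such rotated residuals would then be compared against the standard normal CDF, e.g. via a Kolmogorov-Smirnov-type functional, as the global tests described above do.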
Abstract:
In evaluating the accuracy of diagnostic tests, it is common to apply two imperfect tests jointly or sequentially to a study population. In a recent meta-analysis of the accuracy of microsatellite instability testing (MSI) and traditional mutation analysis (MUT) in predicting germline mutations of the mismatch repair (MMR) genes, a Bayesian approach (Chen, Watson, and Parmigiani 2005) was proposed to handle missing data resulting from partial testing and the lack of a gold standard. In this paper, we demonstrate an improved estimation of the sensitivities and specificities of MSI and MUT by using a nonlinear mixed model and a Bayesian hierarchical model, both of which account for the heterogeneity across studies through study-specific random effects. The methods can be used to estimate the accuracy of two imperfect diagnostic tests in other meta-analyses when the prevalence of disease, the sensitivities and/or the specificities of the diagnostic tests are heterogeneous among studies. Furthermore, simulation studies have demonstrated the importance of carefully selecting appropriate random effects for the estimation of diagnostic accuracy measurements in this scenario.
Abstract:
INTRODUCTION The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models generated from dental impressions (Digimodels; Orthoproof, Nieuwegein, The Netherlands). The Digimodels were used as the reference standard. METHODS The 3 types of digital models were made from 10 subjects. Four examiners repeated 37 linear tooth and arch measurements 10 times. Paired t tests and the intraclass correlation coefficient were performed to determine the reproducibility and accuracy of the measurements. RESULTS The CBCT images showed significantly smaller intraclass correlation coefficient values and larger duplicate measurement errors compared with the corresponding values for Digimodels and Anatomodels. The average difference between measurements on CBCT images and Digimodels ranged from -0.4 to 1.65 mm, with limits of agreement values up to 1.3 mm for crown-width measurements. The average difference between Anatomodels and Digimodels ranged from -0.42 to 0.84 mm with limits of agreement values up to 1.65 mm. CONCLUSIONS Statistically significant differences between measurements on Digimodels and Anatomodels, and between Digimodels and CBCT images, were found. Although the mean differences might be clinically acceptable, the random errors were relatively large compared with corresponding measurements reported in the literature for both Anatomodels and CBCT images, and might be clinically important. Therefore, with the CBCT settings used in this study, measurements made directly on CBCT images and Anatomodels are not as accurate as measurements on Digimodels.
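The duplicate measurement errors and limits of agreement reported above are commonly computed with Dahlberg's formula and Bland-Altman limits; the abstract does not state its exact formulas, so the sketch below is an assumption with hypothetical measurement values:

```python
import numpy as np

def duplicate_measurement_error(m1, m2):
    """Dahlberg's duplicate measurement error for two repeated series of
    the same linear measurements: sqrt(sum(d^2) / (2n))."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two methods."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

# Hypothetical crown-width measurements (mm), repeated by one examiner.
print(duplicate_measurement_error([30.0, 25.5], [30.2, 25.1]))
print(limits_of_agreement([30.0, 25.5, 41.2], [30.2, 25.1, 41.0]))
```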
Abstract:
Let Y_i = f(x_i) + E_i (1 ≤ i ≤ n) with given covariates x_1 < x_2 < ⋯ < x_n, an unknown regression function f and independent random errors E_i with median zero. It is shown how to apply several linear rank test statistics simultaneously in order to test monotonicity of f in various regions and to identify its local extrema.
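As an illustration of applying rank tests simultaneously over regions, the sketch below uses a Spearman-type linear rank statistic on sliding windows with a common critical value; the authors' exact statistics and multiplicity adjustment may differ:

```python
import numpy as np

def spearman_monotonicity_z(y):
    """Approximate z-statistic of a Spearman-type linear rank test for a
    monotone trend in y over an ordered region (no ties assumed)."""
    n = len(y)
    ranks = np.argsort(np.argsort(y)) + 1      # ranks of the observations
    idx = np.arange(1, n + 1)
    rho = np.corrcoef(idx, ranks)[0, 1]        # Spearman's rho vs. position
    return rho * np.sqrt(n - 1)                # approx. N(0, 1) under H0

def scan_regions(y, window, z_crit):
    """Apply the test on all sliding windows; a conservative common
    threshold z_crit controls the overall level across regions."""
    hits = []
    for start in range(0, len(y) - window + 1):
        z = spearman_monotonicity_z(y[start:start + window])
        if abs(z) > z_crit:
            hits.append((start, z))            # region start and statistic
    return hits

print(scan_regions(np.arange(20.0), window=10, z_crit=2.9))
```

Regions where the statistic changes sign between adjacent windows would point at local extrema of f, in the spirit of the abstract.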
Abstract:
It is important to check the fundamental assumption of most popular Item Response Theory models: unidimensionality. However, it is hard for educational and psychological tests to be strictly unidimensional. The tests studied in this paper are from a standardized high-stakes testing program. They feature potential multidimensionality by presenting various item types and item sets. Confirmatory factor analyses with one-factor and bifactor models, based on both the linear structural equation modeling approach and the nonlinear IRT approach, were conducted. The competing models were compared and the implications of the bifactor model for checking essential unidimensionality were discussed.
Abstract:
We present a novel approach for detecting severe obstructive sleep apnea (OSA) cases by introducing non-linear analysis into sustained speech characterization. The proposed scheme was designed to provide additional information to our baseline system, built on top of state-of-the-art cepstral domain modeling techniques, aiming to improve accuracy rates. This new information is only weakly correlated with our previous MFCC modeling of sustained speech and uncorrelated with the information in our continuous speech modeling scheme. Tests have been performed to evaluate the improvement in our detection task, based on sustained speech alone as well as combined with a continuous speech classifier, resulting in a 10% relative reduction in classification error for the former and a 33% relative reduction for the fused scheme. The results encourage us to consider the existence of non-linear effects in OSA patients' voices, and to think about tools which could be used to improve short-time analysis.
Abstract:
The decision to select the most suitable type of energy storage system for an electric vehicle is always difficult, since many constraints must be taken into account. Sometimes this study is made by means of complex mathematical models which represent the behavior of a battery, ultracapacitor or some other device. However, these models are usually too dependent on parameters that are not easily available, which often leads to unrealistic results. Besides, the more accurate the model, the more specific it needs to be, which becomes an issue when comparing systems of different natures. This paper proposes a practical methodology to compare different energy storage technologies. This is done by means of a linear approximation of an equivalent circuit based on laboratory tests. Through these tests, the internal resistance and the self-discharge rate are evaluated, making it possible to compare different energy storage systems regardless of their technology. Rather simple testing equipment is sufficient to give a comparative idea of the differences between systems, concerning issues such as efficiency, heating and self-discharge, when operating under a given scenario. The proposed methodology is applied to four energy storage systems of different natures for the sake of illustration.
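The two quantities extracted from the laboratory tests above reduce, in a linear equivalent-circuit model, to elementary calculations; the bench readings in the example are hypothetical, for illustration only:

```python
def internal_resistance(v_open, v_load, i_load):
    """Internal resistance from a current-step test: the voltage drop
    under a load current, divided by that current (linear model)."""
    return (v_open - v_load) / i_load

def self_discharge_rate(v_start, v_end, hours):
    """Average self-discharge rate (volts per hour) from an
    open-circuit rest test."""
    return (v_start - v_end) / hours

# Hypothetical bench readings for a 12 V-class storage module.
r = internal_resistance(v_open=13.0, v_load=12.4, i_load=20.0)   # ohms
sd = self_discharge_rate(v_start=13.0, v_end=12.9, hours=48.0)   # V/h
print(r, sd)
```

Because both figures come from terminal measurements only, the comparison works the same way for batteries, ultracapacitors or any other storage technology, which is the point of the methodology.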