953 results for Error Analysis
Abstract:
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
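As a rough illustration of the kind of analysis this abstract describes, the sketch below (not the authors' code; the toy network, data and snapshot interval are assumptions) records the full weight vector at each training epoch and projects the learning trajectory onto its first two principal components:

```python
import warnings
import numpy as np
from sklearn.decomposition import PCA
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPRegressor

warnings.simplefilter("ignore", ConvergenceWarning)   # max_iter=1 warns on every call

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# small toy regressor trained one epoch at a time via warm_start
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=1, warm_start=True,
                   solver="sgd", learning_rate_init=0.05, random_state=0)

snapshots = []
for _ in range(300):
    net.fit(X, y)                                  # one more pass of back-propagation
    w = np.concatenate([c.ravel() for c in net.coefs_] +
                       [b.ravel() for b in net.intercepts_])
    snapshots.append(w)                            # full weight vector at this epoch

W = np.vstack(snapshots)                           # (epochs, n_weights): high-dimensional
traj2d = PCA(n_components=2).fit_transform(W)      # trajectory in the top-2 PC plane
print(traj2d[:5])                                  # plot column 0 vs column 1 to inspect it
```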
Abstract:
This paper examines the hysteresis hypothesis in Brazilian industrialized exports using time series analysis. This hypothesis finds empirical representation in the nonlinear adjustment of exported quantities to relative price changes. Thus, the threshold cointegration analysis proposed by Balke and Fomby [Balke, N.S. and Fomby, T.B. Threshold Cointegration. International Economic Review, 1997; 38; 627-645.] was used to estimate models with asymmetric adjustment of the error correction term. Among the sixteen industrial sectors selected, nine showed evidence of nonlinearities in the residuals of long-run export supply or demand relationships. These nonlinearities represent asymmetric and/or discontinuous responses of exports to different representative measures of real exchange rates, in addition to other components of long-run demand or supply equations. (C) 2007 Elsevier B.V. All rights reserved.
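A simplified sketch of asymmetric (threshold) error-correction estimation in the spirit of Balke and Fomby (1997) is given below; it is not the paper's specification, and the series names and threshold are placeholders:

```python
import numpy as np
import statsmodels.api as sm

def threshold_ecm(exports, rel_price, threshold=0.0):
    # Step 1: long-run relationship by OLS; its residuals are the error-correction term
    X = sm.add_constant(rel_price)
    longrun = sm.OLS(exports, X).fit()
    ect = longrun.resid

    # Step 2: let the speed of adjustment differ above and below the threshold
    d_exports = np.diff(exports)
    ect_lag = ect[:-1]
    above = (ect_lag >= threshold).astype(float)
    Z = np.column_stack([above * ect_lag,          # adjustment when ECT >= threshold
                         (1 - above) * ect_lag])   # adjustment when ECT <  threshold
    ecm = sm.OLS(d_exports, sm.add_constant(Z)).fit()
    return longrun, ecm                            # compare the two adjustment coefficients
```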
Abstract:
Background Meta-analysis is increasingly being employed as a screening procedure in large-scale association studies to select promising variants for follow-up studies. However, standard methods for meta-analysis require the assumption of an underlying genetic model, which is typically unknown a priori. This drawback can introduce model misspecifications, causing power to be suboptimal, or the evaluation of multiple genetic models, which augments the number of false-positive associations, ultimately leading to waste of resources with fruitless replication studies. We used simulated meta-analyses of large genetic association studies to investigate naive strategies of genetic model specification to optimize screenings of genome-wide meta-analysis signals for further replication. Methods Different methods, meta-analytical models and strategies were compared in terms of power and type-I error. Simulations were carried out for a binary trait in a wide range of true genetic models, genome-wide thresholds, minor allele frequencies (MAFs), odds ratios and between-study heterogeneity (τ²). Results Among the investigated strategies, a simple Bonferroni-corrected approach that fits both multiplicative and recessive models was found to be optimal in most examined scenarios, reducing the likelihood of false discoveries and enhancing power in scenarios with small MAFs, either in the presence or absence of heterogeneity. Nonetheless, this strategy is sensitive to τ² whenever the susceptibility allele is common (MAF ≥ 30%), resulting in an increased number of false-positive associations compared with an analysis that considers only the multiplicative model. Conclusion Invoking a simple Bonferroni adjustment and testing for both multiplicative and recessive models is fast and an optimal strategy in large meta-analysis-based screenings. However, care must be taken when examined variants are common, where specification of a multiplicative model alone may be preferable.
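The screening rule described in the conclusion can be sketched as follows; the logistic-regression codings, the simulated genotypes and the significance threshold are illustrative assumptions, not the study's procedure:

```python
# Each variant is tested under a multiplicative (allele-dosage) coding and a
# recessive coding, and the smaller p-value is Bonferroni-corrected for the
# two models. Genotypes are 0/1/2 copies of the minor allele; data are simulated.
import numpy as np
import statsmodels.api as sm

def two_model_screen(genotype, case_status, alpha=5e-8):
    pvalues = {}
    for label, coding in [("multiplicative", genotype.astype(float)),
                          ("recessive", (genotype == 2).astype(float))]:
        X = sm.add_constant(coding)
        fit = sm.Logit(case_status, X).fit(disp=0)
        pvalues[label] = fit.pvalues[1]
    significant = min(pvalues.values()) < alpha / 2   # Bonferroni for two models
    return pvalues, significant

rng = np.random.default_rng(1)
g = rng.binomial(2, 0.2, size=5000)                    # MAF = 20%, Hardy-Weinberg
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 0.3 * g))))
print(two_model_screen(g, y, alpha=0.05))
```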
Abstract:
Analysis of a major multi-site epidemiologic study of heart disease has required estimation of the pairwise correlation of several measurements across sub-populations. Because the measurements from each sub-population were subject to sampling variability, the Pearson product moment estimator of these correlations produces biased estimates. This paper proposes a model that takes into account within and between sub-population variation, provides algorithms for obtaining maximum likelihood estimates of these correlations and discusses several approaches for obtaining interval estimates. (C) 1997 by John Wiley & Sons, Ltd.
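A small simulation (not the authors' model) illustrating why within-sub-population sampling variability biases the naive Pearson estimator toward zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pop, n_per_pop, true_rho = 50, 5, 0.8

# true sub-population means of two measurements, correlated at true_rho
mu = rng.multivariate_normal([0.0, 0.0], [[1.0, true_rho], [true_rho, 1.0]], size=n_pop)

# observed means carry within-sub-population sampling error (variance ~ 1/n per mean)
obs = mu + rng.normal(scale=np.sqrt(1.0 / n_per_pop), size=mu.shape)

print("true correlation:", true_rho)
print("naive Pearson on observed means:", round(np.corrcoef(obs[:, 0], obs[:, 1])[0, 1], 3))
```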
Abstract:
Background: Biochemical analysis of fluid is the primary laboratory approach in pleural effusion diagnosis. Standardization of the steps between collection and laboratory analysis is fundamental to maintaining the quality of the results. We evaluated the influence of temperature and storage time on sample stability. Methods: Pleural fluid from 30 patients was submitted to analyses of proteins, albumin, lactic dehydrogenase (LDH), cholesterol, triglycerides, and glucose. Aliquots were stored at 21 °C, 4 °C, and -20 °C, and concentrations were determined after 1, 2, 3, 4, 7, and 14 days. LDH isoenzymes were quantified in 7 random samples. Results: Due to the instability of isoenzymes 4 and 5, a decrease in LDH was observed within the first 24 h in samples maintained at -20 °C and after 2 days when maintained at 4 °C. Aside from glucose, all parameters were stable up to at least day 4 when stored at room temperature or 4 °C. Conclusions: Temperature and storage time are potential sources of preanalytical error in pleural fluid analyses, particularly considering the instability of glucose and LDH. The ideal procedure is to perform all tests immediately after collection. However, most of the tests can be done on refrigerated samples, except for LDH analysis. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Objectives: The aim of this study was to determine the precision of measurements of 2 craniometric anatomic points, the glabella and the anterior nasal spine, in order to verify their potential as locations for placing implants for nasal prosthesis retention. Methods: Twenty-six dry human skulls were scanned by high-resolution spiral tomography with 1-mm axial slice thickness and 1-mm reconstruction interval using a bone tissue filter. The images obtained were stored and transferred to an independent workstation running e-film imaging software. The measurements (in the glabella and anterior nasal fossa) were made independently by 2 observers, twice for each measurement. Data were submitted to statistical analysis (parametric t test). Results: The results demonstrated no statistically significant difference between interobserver and intraobserver measurements (P > .05). The standard error was found to be between 0.49 mm and 0.84 mm for measurements in bone protocol, indicating a high level of precision. Conclusions: The measurements obtained at the anterior nasal spine and glabella were considered precise and reproducible. Mean values of these measurements pointed to the possibility of implant placement in these regions, particularly in the anterior nasal spine.
Abstract:
Background: It remains unclear whether or not dental bleaching affects the bond strength of dentin/resin restorations. Purpose: To evaluate the bond strength of adhesive systems to dentin submitted to bleaching with 38% hydrogen peroxide (HP) activated by LED-laser and to assess the adhesive/dentin interfaces by means of SEM. Study design: Sixty fragments of dentin (25 mm²) were included and divided into two groups: bleached and unbleached. HP was applied for 20 s and photoactivated for 45 s. Groups were subdivided according to the adhesive systems (n = 10): (1) two-step conventional system (Adper Single Bond), (2) two-step self-etching system (Clearfil SE Bond), and (3) one-step self-etching system (Prompt L-Pop). The specimens received the Z250 resin and, after 24 h, were submitted to the bond strength test. An additional 30 dentin fragments (n = 5) received the same surface treatments and were prepared for SEM. Data were analyzed by ANOVA and Tukey's test (alpha = 0.05). Results: There was a significant strength reduction in the bleached group when compared to the unbleached group (P < 0.05). Higher bond strength was observed for Prompt. Single Bond and Clearfil presented the smallest values when used on bleached dentin. SEM analysis of the unbleached specimens revealed long tags and a uniform hybrid layer for all adhesives. In bleached dentin, Single Bond produced open tubules with few tags, Clearfil showed an absence of tags and hybrid layer, and Prompt promoted a regular hybrid layer with some tags. Conclusions: Prompt promoted higher shear bond strength, regardless of the bleaching treatment, and allowed the formation of a regular and fine hybrid layer with less deep tags when compared to Single Bond and Clearfil. Microsc. Res. Tech. 74:244-250, 2011. (C) 2010 Wiley-Liss, Inc.
Abstract:
Combinatorial optimization problems share an interesting property with spin glass systems in that their state spaces can exhibit ultrametric structure. We use sampling methods to analyse the error surfaces of feedforward multi-layer perceptron neural networks learning encoder problems. The third order statistics of these points of attraction are examined and found to be arranged in a highly ultrametric way. This is a unique result for a finite, continuous parameter space. The implications of this result are discussed.
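One simple way to probe ultrametricity on sampled solutions is to check, for every triangle of pairwise distances, whether its two largest sides coincide; the sketch below does this on placeholder points (real trained encoder minima would replace them), and the Euclidean metric and tolerance are assumptions rather than the paper's procedure:

```python
# A distance d is ultrametric when d(a, c) <= max(d(a, b), d(b, c)), i.e. the two
# largest sides of every triangle coincide. The helper measures how often that
# fails for a set of points; random placeholders are not expected to look ultrametric.
import numpy as np
from itertools import combinations

def ultrametric_violation(points, tol):
    """Fraction of triangles whose two largest sides differ by more than tol."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    violations = total = 0
    for i, j, k in combinations(range(len(points)), 3):
        sides = sorted((d[i, j], d[j, k], d[i, k]))
        total += 1
        if sides[2] - sides[1] > tol:      # largest two sides should (nearly) match
            violations += 1
    return violations / total

minima = np.random.default_rng(3).normal(size=(30, 50))   # placeholder "minima" in 50-D
print(ultrametric_violation(minima, tol=0.5))
```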
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelateds in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease haplotype frequency and reconstruction accuracy, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles), unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
Abstract:
Background and Purpose. This study evaluated an electromyographic technique for the measurement of muscle activity of the deep cervical flexor (DCF) muscles. Electromyographic signals were detected from the DCF, sternocleidomastoid (SCM), and anterior scalene (AS) muscles during performance of the craniocervical flexion (CCF) test, which involves performing 5 stages of increasing craniocervical flexion range of motion, the anatomical action of the DCF muscles. Subjects. Ten volunteers without known pathology or impairment participated in this study. Methods. Root-mean-square (RMS) values were calculated for the DCF, SCM, and AS muscles during performance of the CCF test. Myoelectric signals were recorded from the DCF muscles using bipolar electrodes placed over the posterior oropharyngeal wall. Reliability estimates of normalized RMS values were obtained by evaluating intraclass correlation coefficients and the normalized standard error of the mean (SEM). Results. A linear relationship was evident between the amplitude of DCF muscle activity and the incremental stages of the CCF test (F=239.04, df=36, P<.0001). Normalized SEMs in the range 6.7% to 10.3% were obtained for the normalized RMS values for the DCF muscles, providing evidence of reliability for these variables. Discussion and Conclusion. This approach for obtaining a direct measure of the DCF muscles, which differs from those previously used, may be useful for the examination of these muscles in future electromyographic applications.
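Under common definitions (which may differ from the paper's exact formulas), the RMS amplitude of an epoch and a normalized standard error of measurement derived from repeated trials and an ICC can be sketched as:

```python
# RMS amplitude of an EMG epoch, and a standard error of measurement taken as
# SD * sqrt(1 - ICC) over repeated trial scores, expressed as a percentage of the mean.
import numpy as np

def rms(signal):
    return np.sqrt(np.mean(np.square(signal)))

def normalized_sem(trials, icc):
    """trials: (subjects, repeats) array of normalized RMS values; icc in [0, 1]."""
    sd = np.std(trials, ddof=1)             # spread of the trial scores
    sem = sd * np.sqrt(1.0 - icc)           # standard error of measurement
    return 100.0 * sem / np.mean(trials)    # as a percentage of the grand mean

epoch = np.random.default_rng(4).normal(size=1000)                # placeholder EMG epoch
trials = np.array([[0.32, 0.35], [0.48, 0.44], [0.27, 0.30]])     # placeholder scores
print("RMS:", round(rms(epoch), 3))
print("normalized SEM (%):", round(normalized_sem(trials, icc=0.9), 1))
```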
Abstract:
For zygosity diagnosis in the absence of genotypic data, or in the recruitment phase of a twin study where only single twins from same-sex pairs are being screened, or to provide a test for sample duplication leading to the false identification of a dizygotic pair as monozygotic, the appropriate analysis of respondents' answers to questions about zygosity is critical. Using data from a young adult Australian twin cohort (N = 2094 complete pairs and 519 singleton twins from same-sex pairs with complete responses to all zygosity items), we show that application of latent class analysis (LCA), fitting a 2-class model, yields results in good concordance with traditional methods of zygosity diagnosis, but with certain important advantages. These include the ability, in many cases, to assign zygosity with specified probability on the basis of the responses of a single informant (advantageous when one zygosity type is being oversampled); and the ability to quantify the probability of misassignment of zygosity, allowing prioritization of cases for genotyping as well as identification of cases of probable laboratory error. Out of 242 twins (from 121 like-sex pairs) for whom genotypic data were available for zygosity confirmation, only a single case of incorrect zygosity assignment by the latent class algorithm was identified. Zygosity assignment for that single case was identified by the LCA as uncertain (probability of being a monozygotic twin only 76%), and the co-twin's responses clearly identified the pair as dizygotic (probability of being dizygotic 100%). In the absence of genotypic data, or as a safeguard against sample duplication, application of LCA for zygosity assignment or confirmation is strongly recommended.
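A minimal sketch of a 2-class latent class model for binary questionnaire items, fitted by EM, is shown below; the items, data and smoothing are placeholders rather than the study's implementation:

```python
import numpy as np

def lca_two_class(X, n_iter=200, seed=0):
    """X: (n_respondents, n_items) 0/1 array. Returns class prevalences,
    per-class item endorsement probabilities, and posterior memberships."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.array([0.5, 0.5])                      # class prevalences
    theta = rng.uniform(0.25, 0.75, size=(2, m))   # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: posterior class probabilities given each response pattern
        log_lik = (X[:, None, :] * np.log(theta) +
                   (1 - X[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        log_post = np.log(pi) + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update prevalences and item probabilities
        pi = post.mean(axis=0)
        theta = np.clip((post.T @ X) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, post

rng = np.random.default_rng(5)
truth = rng.integers(0, 2, size=300)                        # placeholder latent classes
item_p = np.where(truth[:, None] == 1, 0.9, 0.2)            # endorsement prob per class
responses = rng.binomial(1, np.repeat(item_p, 4, axis=1))   # 4 hypothetical yes/no items
pi, theta, post = lca_two_class(responses)
print(pi, theta.round(2))
```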
Abstract:
Background: Although early in life there is little discernible difference in bone mass between boys and girls, at puberty sex differences are observed. It is uncertain whether these differences represent differences in bone mass or just differences in anthropometric dimensions. Aim: The study aimed to identify whether sex independently affects bone mineral content (BMC) accrual in growing boys and girls. Three sites were investigated: total body (TB), femoral neck (FN) and lumbar spine (LS). Subjects and methods: 85 boys and 67 girls were assessed annually for seven consecutive years. BMC was assessed by dual energy X-ray absorptiometry (DXA). Biological age was defined as years from age at peak height velocity (PHV). Data were analysed using a hierarchical (random effects) modelling approach. Results: When biological age, body size and body composition were controlled for, boys had statistically significantly higher TB and FN BMC at all maturity levels (p < 0.05). No independent sex differences were found at the LS (p > 0.05). Conclusion: Although a statistically significant sex effect is observed, it is smaller than the measurement error, and thus sex differences are debatable. In general, sex differences are explained by anthropometric differences.
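A hedged sketch of the kind of hierarchical (random effects) model described, using a random intercept per subject; all column names are placeholders, not the study's variables:

```python
# Placeholder columns ('bmc', 'bio_age', 'height', 'lean_mass', 'sex', 'subject');
# repeated annual measures are nested within subjects via a random intercept.
# This sketches the model class, not the study's exact specification.
import pandas as pd
import statsmodels.formula.api as smf

def fit_bmc_model(df: pd.DataFrame):
    model = smf.mixedlm("bmc ~ bio_age + height + lean_mass + C(sex)",
                        data=df, groups=df["subject"])
    result = model.fit()
    return result      # inspect the C(sex) coefficient for an independent sex effect
```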
Abstract:
Time motion analysis is extensively used to assess the demands of team sports. At present there is only limited information on the reliability of measurements using this analysis tool. The aim of this study was to establish the reliability of an individual observer's time motion analysis of rugby union. Ten elite level rugby players were individually tracked in Southern Hemisphere Super 12 matches using a digital video camera. The video footage was subsequently analysed by a single researcher on two occasions one month apart. The test-retest reliability was quantified as the typical error of measurement (TEM) and rated as either good (<5% TEM), moderate (5-10% TEM) or poor (>10% TEM). The total time spent in the individual movements of walking, jogging, striding, sprinting, static exertion and being stationary had moderate to poor reliability (5.8-11.1% TEM). The frequency of individual movements had good to poor reliability (4.3-13.6% TEM), while the mean duration of individual movements had moderate reliability (7.1-9.3% TEM). For the individual observer in the present investigation, time motion analysis was shown to be moderately reliable as an evaluation tool for examining the movement patterns of players in competitive rugby. These reliability values should be considered when assessing the movement patterns of rugby players within competition.
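Under the common definition TEM = SD(test-retest differences)/sqrt(2), expressed as a percentage of the mean so it can be rated against the bands above, a sketch (with placeholder values, not the study's data) is:

```python
import numpy as np

def typical_error_pct(trial1, trial2):
    diffs = np.asarray(trial1) - np.asarray(trial2)
    tem = np.std(diffs, ddof=1) / np.sqrt(2)                # typical error of measurement
    grand_mean = np.mean(np.concatenate([trial1, trial2]))
    return 100.0 * tem / grand_mean                         # as a percentage of the mean

walk_time_1 = np.array([312.0, 298.0, 305.0, 321.0, 287.0])   # seconds, first analysis
walk_time_2 = np.array([305.0, 310.0, 299.0, 315.0, 296.0])   # seconds, re-analysis
print(f"TEM = {typical_error_pct(walk_time_1, walk_time_2):.1f}%")
```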
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors that are deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
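A simplified reading of the subspace comparison (not the paper's exact classification rules) can be sketched with rank tests: a feature matrix whose columns add directions absent from the residual subspace is ruled out, and a candidate set isolates the errors when its combined feature matrices span the residual subspace:

```python
# Matrices below are placeholders; columns of 'residuals' span the residual
# subspace, and each 'feature' matrix describes one candidate coding error.
import numpy as np

def subspace_rank(A, tol=1e-8):
    return np.linalg.matrix_rank(A, tol=tol)

def classify_feature(residuals, feature, tol=1e-8):
    """'possible' if col(feature) lies inside col(residuals), else 'impossible'."""
    r_res = subspace_rank(residuals, tol)
    r_joint = subspace_rank(np.hstack([residuals, feature]), tol)
    return "possible" if r_joint == r_res else "impossible"

def isolates(residuals, features, tol=1e-8):
    """True if the combined feature matrices span the residual subspace."""
    combined = np.hstack(features)
    return (subspace_rank(combined, tol) == subspace_rank(residuals, tol)
            and classify_feature(residuals, combined, tol) == "possible")
```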
Abstract:
A hierarchical matrix is an efficient data-sparse representation of a matrix, especially useful for large dimensional problems. It consists of low-rank subblocks, leading to low memory requirements as well as inexpensive computational costs. In this work, we discuss the use of the hierarchical matrix technique in the numerical solution of a large scale eigenvalue problem arising from a finite rank discretization of an integral operator. The operator is of convolution type; it is defined through the first exponential-integral function and, hence, is weakly singular. We develop analytical expressions for the approximate degenerate kernels and deduce error upper bounds for these approximations. Some computational results illustrating the efficiency and robustness of the approach are presented.
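The data-sparsity that hierarchical matrices exploit can be illustrated (outside the paper's construction) by low-rank compression of a well-separated block of a collocation matrix for the kernel E1(|x - y|); the grid, block and rank below are placeholder assumptions:

```python
import numpy as np
from scipy.special import exp1

n = 400
x = np.linspace(0.0, 1.0, n)
# off-diagonal block: sources in [0, 0.25), targets in (0.75, 1], well separated
src, tgt = x[x < 0.25], x[x > 0.75]
block = exp1(np.abs(tgt[:, None] - src[None, :]))   # weakly singular kernel, but
                                                    # smooth on this separated block
U, s, Vt = np.linalg.svd(block, full_matrices=False)
k = 6                                               # truncation rank
low_rank = (U[:, :k] * s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(block - low_rank) / np.linalg.norm(block)
print(f"rank-{k} relative error: {rel_err:.2e}")    # small: the block is data-sparse
```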