11 results for error analysis
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
The determination of hydrodynamic coefficients of full-scale underwater vehicles using system identification (SI) is an extremely powerful technique. The procedure is based on experimental runs and on the analysis of the signals from on-board sensors and thrusters. The technique is cost-effective and highly repeatable; however, for open-frame underwater vehicles, it lacks accuracy due to sensor noise and the poor modeling of thruster-hull and thruster-thruster interaction effects. In this work, forced oscillation tests were undertaken with a full-scale open-frame underwater vehicle. These tests are unique in the sense that there are few examples in the literature taking advantage of a planar motion mechanism (PMM) installation for testing a prototype, thereby allowing comparison between the experimental results and those estimated by parameter identification. The inertia and drag coefficients of Morison's equation were estimated with two parameter identification methods, namely the weighted and the ordinary least-squares procedures. It was verified that the in-line force estimated from Morison's equation agrees well with the measured one, except in the region around the motion inversion points. The error analysis showed that ordinary least-squares provided better accuracy and was therefore used to evaluate the ratio between inertia and drag forces over a range of Keulegan-Carpenter and Reynolds numbers. It was concluded that both the experimental and estimation techniques are powerful tools for evaluating an open-frame underwater vehicle's hydrodynamic coefficients, and the research provides a rich body of reference data for comparison with reduced models as well as for dynamic motion simulation of ROVs. [DOI: 10.1115/1.4004952]
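Morison's equation expresses the in-line force on an oscillating body as an inertia term, proportional to acceleration, plus a drag term, proportional to velocity times its magnitude, so it is linear in the coefficients (Cm, Cd) and amenable to least squares. The sketch below shows how ordinary and weighted least squares can recover the two coefficients from forced-oscillation records; all numerical values, the single-degree-of-freedom form, and the weighting scheme are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Synthetic forced-oscillation data; all values are illustrative, not the paper's.
rho = 1025.0            # seawater density [kg/m^3]
V, A = 0.8, 1.2         # reference volume [m^3] and area [m^2] (assumed)
omega, amp = 0.8, 0.5   # oscillation frequency [rad/s] and amplitude [m]
t = np.linspace(0.0, 60.0, 3000)
u = amp * omega * np.cos(omega * t)        # velocity of the forced motion
du = -amp * omega**2 * np.sin(omega * t)   # acceleration

Cm_true, Cd_true = 2.1, 1.4
F = rho * V * Cm_true * du + 0.5 * rho * A * Cd_true * u * np.abs(u)
F += np.random.default_rng(0).normal(0.0, 50.0, t.size)  # sensor noise

# Morison's equation is linear in (Cm, Cd): F = X @ [Cm, Cd]
X = np.column_stack([rho * V * du, 0.5 * rho * A * u * np.abs(u)])

# Ordinary least squares
Cm_ols, Cd_ols = np.linalg.lstsq(X, F, rcond=None)[0]

# Weighted least squares: down-weight samples near the motion inversion
# points, where |u| is small and the model fits worst (assumed weighting)
w = 0.1 + 0.9 * np.abs(u) / np.abs(u).max()
sw = np.sqrt(w)
Cm_wls, Cd_wls = np.linalg.lstsq(X * sw[:, None], F * sw, rcond=None)[0]

print(f"OLS: Cm={Cm_ols:.3f}, Cd={Cd_ols:.3f}")
print(f"WLS: Cm={Cm_wls:.3f}, Cd={Cd_wls:.3f}")
```

With uniform weights the two estimators coincide; the point of WLS is to discount regions where the model is known to fit poorly, such as the neighborhood of the motion inversion points mentioned in the abstract.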
Abstract:
Purpose: To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. Methods: One hundred twenty examinations were conducted, 30 with each of four corneal specular microscopes: Bio-Optics, CSO, Konan, and Topcon. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software, using a method developed in our lab (US patent). A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparison with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Results: Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of examinations had a sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. Conclusions: A very high proportion of CSM examinations had sampling errors according to the Cells Analyzer software. Endothelial samples need to include more cells for examinations to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
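The Cells Analyzer method itself is patented and not described in the abstract, but the kind of statistic it reports can be mimicked with the textbook relation between sample size and the relative error of a mean: RE = z·CV/√n at reliability z. The sketch below uses that generic formula only; the coefficient-of-variation value and the formula's applicability to this instrument are assumptions.

```python
import math

def required_sample_size(cv: float, rel_error: float = 0.05, z: float = 1.96) -> int:
    """Cells needed to estimate mean endothelial density within `rel_error`
    at ~95% reliability, given a coefficient of variation `cv` of the cell
    measurements (generic formula, not the patented Cells Analyzer method)."""
    return math.ceil((z * cv / rel_error) ** 2)

def achieved_relative_error(cv: float, n: int, z: float = 1.96) -> float:
    """Relative error achieved by a sample of n cells."""
    return z * cv / math.sqrt(n)

# Example with an assumed CV of 0.30 and a 97-cell sample (the mean
# Bio-Optics sample size reported above):
print(achieved_relative_error(0.30, 97))   # ~0.060, above the 0.05 cut-off
print(required_sample_size(0.30))          # ~139 cells would be needed
```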
Abstract:
Background: Infant mortality is an important measure of human development, related to the level of welfare of a society. In order to inform public policy, various studies have tried to identify the factors that influence, at an aggregated level, infant mortality. The objective of this paper is to analyze the regional pattern of infant mortality in Brazil, evaluating the effect of infrastructure, socio-economic, and demographic variables to understand its distribution across the country. Methods: Regressions including socio-economic and living-conditions variables are conducted in a panel data structure. More specifically, a spatial panel data model with fixed effects and a spatial error autocorrelation structure is used to address spatial dependence problems. The spatial modeling approach takes into account the potential presence of spillovers between neighboring spatial units. The spatial units considered are Minimum Comparable Areas, defined to provide a consistent definition across Census years. Data are drawn from the 1980, 1991, and 2000 Censuses of Brazil and from data collected by the Ministry of Health (DATASUS). In order to identify the influence of health care infrastructure, variables related to the number of public and private hospitals are included. Results: The results indicate that the panel model with spatial effects provides the best fit to the data. The analysis confirms that the provision of health care infrastructure and social policy measures (e.g., improving educational attainment) are linked to reduced rates of infant mortality. An original finding concerns the role of spatial effects in the analysis of infant mortality rates (IMR). Spillover effects associated with health infrastructure and water and sanitation facilities imply that there are regional benefits beyond the unit of analysis. Conclusions: A spatial modeling approach is important to produce reliable estimates in the analysis of panel IMR data. Substantively, this paper contributes to our understanding of the physical and social factors that influence IMR in the case of a developing country.
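The abstract does not reproduce the model's specification, but a standard fixed-effects panel with a spatial error autocorrelation structure, as in the spatial econometrics literature, takes the form below, where y_it is the IMR of Minimum Comparable Area i in Census year t, mu_i is the area fixed effect, w_ij are spatial weights, and lambda measures spatial error dependence:

```latex
y_{it} = \mathbf{x}_{it}^{\top}\boldsymbol{\beta} + \mu_i + \phi_{it},
\qquad
\phi_{it} = \lambda \sum_{j \neq i} w_{ij}\,\phi_{jt} + \varepsilon_{it},
\qquad
\varepsilon_{it} \sim \text{i.i.d.}(0, \sigma^2)
```

A significant lambda indicates that unobserved shocks to infant mortality spill over between neighboring areas, which is what motivates the spillover interpretation in the Results.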
Abstract:
Gastric cancer is the second leading cause of cancer-related death worldwide. The identification of new cancer biomarkers is necessary to reduce mortality rates through the development of new screening assays and early diagnosis, as well as new targeted therapies. In this study, we performed a proteomic analysis of noncardia gastric neoplasias of individuals from Northern Brazil. The proteins were analyzed by two-dimensional electrophoresis and mass spectrometry. For the identification of differentially expressed proteins, we used statistical tests with bootstrap resampling to control the type I error in the multiple comparison analyses. We identified 111 proteins involved in gastric carcinogenesis. The computational analysis revealed several proteins involved in energy production processes and reinforced the Warburg effect in gastric cancer. ENO1 and HSPB1 expression were further evaluated. ENO1 was selected due to its role in aerobic glycolysis, which may contribute to the Warburg effect. Although we observed two up-regulated spots of ENO1 in the proteomic analysis, the mean expression of ENO1 was reduced in gastric tumors by Western blot. However, mean ENO1 expression seems to increase in more invasive tumors. This lack of correlation between the proteomic and Western blot analyses may be due to the presence of other ENO1 spots with slightly reduced expression but a high impact on the mean protein expression. In neoplasias, HSPB1 is induced by cellular stress to protect cells against apoptosis. In the present study, HSPB1 presented elevated protein and mRNA expression in a subset of gastric cancer samples. However, no association was observed between HSPB1 expression and clinicopathological characteristics. Here, we identified several possible biomarkers of gastric cancer in individuals from Northern Brazil. These biomarkers may be useful for the assessment of prognosis and stratification for therapy if validated in larger clinical study sets.
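Bootstrap resampling controls the type I error by building the test's null distribution from the data itself rather than from parametric assumptions, which suits the small, skewed spot-volume samples typical of two-dimensional gels. A minimal two-sample sketch follows; the resampling scheme and all values are generic illustrations, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_pvalue(x, y, n_boot=10_000):
    """Two-sample bootstrap test for a difference in mean spot volume.
    Resampling is done under the null (both groups recentered to the
    pooled mean); a generic sketch, not the authors' exact procedure."""
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y]).mean()
    x0 = x - x.mean() + pooled           # impose the null hypothesis
    y0 = y - y.mean() + pooled
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x0, size=x.size, replace=True)
        yb = rng.choice(y0, size=y.size, replace=True)
        diffs[b] = xb.mean() - yb.mean()
    return float((np.abs(diffs) >= abs(observed)).mean())

# Illustrative spot volumes for tumor vs. non-tumor gels
tumor = rng.normal(1.8, 0.5, 12)
normal = rng.normal(1.2, 0.5, 12)
print(bootstrap_pvalue(tumor, normal))
```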
Abstract:
Workplace accidents involving machines are relevant both for their magnitude and for their impact on worker health. Despite well-established criticism of this view, explanations centered on operator error remain predominant among industry professionals, hampering preventive measures and the improvement of production-system reliability. Several initiatives have been adopted by enforcement agencies in partnership with universities to stimulate the production and diffusion of analysis methodologies with a systemic approach. Starting from an accident involving a worker operating a brake-clutch mechanical press, the article explores cognitive aspects and the existence of traps in the operation of this machine. The case concerns a large press that, despite being equipped with a light curtain at the access points to the pressing zone, did not meet legal requirements. The safety devices gave rise to an illusion of safety, permitting activation of the machine while a worker was still inside the operational zone. Preventive interventions should encourage the tailoring of systems to the characteristics of workers, minimizing the creation of traps, and should promote safety policies and practices that replace judgment of the behaviors involved in accidents with analysis of the reasons that lead workers to act as they do.
Abstract:
L. Antonangelo, F. S. Vargas, M. M. P. Acencio, A. P. Cora, L. R. Teixeira, E. H. Genofre and R. K. B. Sales. Effect of temperature and storage time on cellular analysis of fresh pleural fluid samples. Objective: Despite the methodological variability in preparation techniques for pleural fluid cytology, it is fundamental that the cells be preserved, permitting adequate morphological classification. We evaluated numerical and morphological changes in pleural fluid specimens processed after storage at room temperature or under refrigeration. Methods: Aliquots of pleural fluid from 30 patients, collected in ethylenediaminetetraacetic acid (EDTA)-coated tubes and maintained at room temperature (21 °C) or under refrigeration (4 °C), were evaluated after 2 and 6 hours and 1, 2, 3, 4, 7 and 14 days. The evaluation included cytomorphology and total and percentage counts of leucocytes, macrophages, and mesothelial cells. Results: The samples showed quantitative cellular variations from day 3 or 4 onwards, depending on the storage conditions. Morphological alterations occurred earlier in samples maintained at room temperature (day 2) than in those under refrigeration (day 4). Conclusions: This study confirms that storage time and temperature are potential pre-analytical causes of error in pleural fluid cytology.
Abstract:
It is well known that constant-modulus-based algorithms present a large mean-square error for high-order quadrature amplitude modulation (QAM) signals, which may impair the switching to decision-directed-based algorithms. In this paper, we introduce a regional multimodulus algorithm for blind equalization of QAM signals that performs similarly to the supervised normalized least-mean-squares (NLMS) algorithm, independently of the QAM order. We find a theoretical relation between the coefficient vector of the proposed algorithm and the Wiener solution, and also provide theoretical models for the steady-state excess mean-square error in a nonstationary environment. The proposed algorithm, in conjunction with strategies to speed up its convergence and to avoid divergence, can bypass the switching mechanism between the blind mode and the decision-directed mode.
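The abstract does not give the regional multimodulus update itself, but the baseline it improves on is the classic constant modulus algorithm (CMA), whose cost penalizes deviations of the equalizer output's squared modulus from a single dispersion constant R2. For high-order QAM, symbols lie on several moduli, which is the source of the large steady-state error the paper addresses. A minimal CMA sketch, with an assumed channel and a 16-QAM source:

```python
import numpy as np

rng = np.random.default_rng(1)

# 16-QAM source and a simple FIR channel (illustrative setup)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
sym = (rng.choice(levels, 5000) + 1j * rng.choice(levels, 5000)) / np.sqrt(10)
h = np.array([1.0, 0.4 + 0.2j, 0.1])                 # channel impulse response
x = np.convolve(sym, h)[: sym.size]
x += rng.normal(0, 0.02, x.size) + 1j * rng.normal(0, 0.02, x.size)

# Classic CMA: minimize E[(R2 - |y|^2)^2] by stochastic gradient descent
L, mu = 11, 1e-3
w = np.zeros(L, dtype=complex)
w[L // 2] = 1.0                                      # center-spike initialization
R2 = np.mean(np.abs(sym) ** 4) / np.mean(np.abs(sym) ** 2)  # dispersion constant

mod_err = []
for n in range(L, x.size):
    u = x[n - L:n][::-1]                 # regressor, most recent sample first
    y = np.vdot(w, u)                    # equalizer output y = w^H u
    e = y * (R2 - np.abs(y) ** 2)        # constant-modulus error signal
    w += mu * np.conj(e) * u             # coefficient update
    mod_err.append(abs(R2 - np.abs(y) ** 2))

print(f"mean |R2 - |y|^2| over last 500 samples: {np.mean(mod_err[-500:]):.4f}")
```

Broadly, multimodulus variants replace the single constant R2 with constants that depend on where the output falls in the constellation, which is the direction this paper develops.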
Abstract:
Background: Acute respiratory distress syndrome (ARDS) is associated with high in-hospital mortality. Alveolar recruitment followed by ventilation at optimally titrated PEEP may reduce ventilator-induced lung injury and improve oxygenation in patients with ARDS, but the effects on mortality and other clinical outcomes remain unknown. This article reports the rationale, study design, and analysis plan of the Alveolar Recruitment for ARDS Trial (ART). Methods/Design: ART is a pragmatic, multicenter, randomized (concealed), controlled trial, which aims to determine whether maximum stepwise alveolar recruitment associated with PEEP titration is able to increase 28-day survival in patients with ARDS compared to conventional treatment (the ARDSNet strategy). We will enroll adult patients with ARDS of less than 72 h duration. The intervention group will receive an alveolar recruitment maneuver, with stepwise increases of PEEP up to 45 cmH2O and a peak pressure of 60 cmH2O, followed by ventilation with optimal PEEP titrated according to the static compliance of the respiratory system. In the control group, mechanical ventilation will follow a conventional protocol (ARDSNet). In both groups, we will use volume-controlled mode with low tidal volumes (4 to 6 mL/kg of predicted body weight), targeting a plateau pressure <= 30 cmH2O. The primary outcome is 28-day survival, and the secondary outcomes are: length of ICU stay; length of hospital stay; pneumothorax requiring chest tube during the first 7 days; barotrauma during the first 7 days; mechanical ventilation-free days from day 1 to 28; and ICU, in-hospital, and 6-month survival. ART is an event-guided trial planned to last until 520 events (deaths within 28 days) are observed. This number of events allows detection of a hazard ratio of 0.75 with 90% power and a two-tailed type I error of 5%. All analyses will follow the intention-to-treat principle. Discussion: If the ART strategy with maximum recruitment and PEEP titration improves 28-day survival, this will represent a notable advance in the care of ARDS patients. Conversely, if the ART strategy is similar or inferior to the current evidence-based strategy (ARDSNet), this should also change current practice, as many institutions routinely employ recruitment maneuvers and set PEEP levels according to some titration method.
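The event target can be sanity-checked against Schoenfeld's approximation for survival trials, under which the required number of events is (z_{1-alpha/2} + z_{1-beta})^2 / (p(1-p) (ln HR)^2) for allocation fraction p. A short, stdlib-only computation under 1:1 allocation:

```python
import math
from statistics import NormalDist

def required_events(hr: float, power: float = 0.90, alpha: float = 0.05,
                    alloc: float = 0.5) -> int:
    """Schoenfeld's approximation for the number of events needed to detect
    hazard ratio `hr` in a two-arm survival trial with allocation fraction
    `alloc` and a two-tailed test."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    return math.ceil((za + zb) ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2))

print(required_events(0.75))   # -> 508
```

This yields about 508 events for HR 0.75, 90% power, and a two-tailed alpha of 5%, consistent with the planned 520 once rounding and design margins are allowed for.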
Abstract:
Background: An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern", or Massively Parallel Signature Sequencing (MPSS) is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. Results: We introduce a Bayesian model that accounts for the within-class variability by means of a mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Conclusion: Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression into a more reliable one. Our method is freely available, under the GPL/GNU copyleft, through a user-friendly web-based online tool or as R language scripts at a supplemental website.
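The abstract notes that the Beta-Binomial model is a special case of the proposed mixture. In its standard form, the count X_i of a given tag in library i (out of n_i total tags) is binomial given a library-specific abundance p_i, which itself varies across individuals of the same class:

```latex
X_i \mid p_i \sim \mathrm{Binomial}(n_i, p_i), \qquad
p_i \sim \mathrm{Beta}(\alpha, \beta)
\;\;\Longrightarrow\;\;
\operatorname{Var}(X_i) = n_i \mu (1-\mu)\bigl[1 + (n_i - 1)\rho\bigr]
```

with mean abundance mu = alpha/(alpha+beta) and overdispersion rho = 1/(alpha+beta+1). The inflation factor 1 + (n_i - 1)rho is exactly the extra, within-class variability that a pure binomial sampling-error model ignores.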
Abstract:
Background: The generalized odds ratio (GOR) was recently suggested as a genetic model-free measure for association studies. However, its properties have not been extensively investigated. We used Monte Carlo simulations to investigate type I error rates, power, and bias in both effect size and between-study variance estimates of meta-analyses using the GOR as a summary effect, and compared these results to those obtained by the usual approaches of model specification. We further applied the GOR in a real meta-analysis of three genome-wide association studies in Alzheimer's disease. Findings: For bi-allelic polymorphisms, the GOR performs virtually identically to a standard multiplicative model of analysis (e.g., per-allele odds ratio) for variants acting multiplicatively, but slightly augments the power to detect variants with a dominant mode of action, while reducing the probability of detecting recessive variants. Although there were differences among the GOR and the usual approaches in terms of bias and type I error rates, both the simulation- and real-data-based results provided little indication that these differences will be substantial in practice for meta-analyses involving bi-allelic polymorphisms. However, the use of the GOR may be slightly more powerful for the synthesis of data from tri-allelic variants, particularly when susceptibility alleles are less common in the populations (≤10%). This gain in power may depend on knowledge of the direction of the effects. Conclusions: For the synthesis of data from bi-allelic variants, the GOR may be regarded as a multiplicative-like model of analysis. The use of the GOR may be slightly more powerful in the tri-allelic case, particularly when susceptibility alleles are less common in the populations.
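As defined by its proponents, the GOR compares the genotypes of a randomly drawn case and a randomly drawn control under an ordering of genotypes by risk-allele dose: it is the probability that the case carries the higher-risk genotype divided by the probability of the reverse. A small sketch of that computation from a 2-by-3 genotype table, with illustrative counts:

```python
import numpy as np

def generalized_odds_ratio(cases, controls):
    """GOR from genotype counts ordered by risk-allele dose, e.g. (aa, Aa, AA):
    the probability that a random case carries a higher-risk genotype than a
    random control, divided by the probability of the reverse (a sketch of
    the published model-free measure)."""
    p = np.asarray(cases, dtype=float) / sum(cases)
    q = np.asarray(controls, dtype=float) / sum(controls)
    higher = sum(p[i] * q[j] for i in range(len(p)) for j in range(len(q)) if i > j)
    lower = sum(p[i] * q[j] for i in range(len(p)) for j in range(len(q)) if i < j)
    return higher / lower

# Illustrative genotype counts (aa, Aa, AA) for cases and controls
print(generalized_odds_ratio([120, 240, 140], [180, 240, 80]))  # ~1.77
```

Because the definition never commits to a dominant, recessive, or multiplicative penetrance model, the same formula also applies to tri-allelic variants by extending the ordering, which is the case where the abstract reports a power gain.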
Abstract:
Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims to develop an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools, and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines to prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data. Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus assuring accurate data exchange and information interpretation from exchanged data.
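In this setting, a software connector wraps access to a heterogeneous data source and applies transformation rules so that exchanged records carry the shared meaning fixed by the reference ontology. The sketch below is purely illustrative: the record formats, mapping table, and rule are hypothetical, not taken from the paper's ontology or integration scenarios.

```python
# Python 3.10+; every name below is hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class MicroarrayRecord:          # format exposed by one analysis tool
    probe_id: str
    log2_ratio: float

@dataclass
class OntologyRecord:            # format fixed by the reference ontology
    gene_symbol: str
    expression_level: float

class Connector:
    """A software connector: mediates access to a data source and applies
    transformation rules so exchanged data keeps a shared meaning."""

    def __init__(self, probe_to_gene: dict[str, str],
                 rule: Callable[[float], float]):
        self.probe_to_gene = probe_to_gene   # semantic mapping
        self.rule = rule                     # transformation rule

    def convert(self, rec: MicroarrayRecord) -> OntologyRecord | None:
        gene = self.probe_to_gene.get(rec.probe_id)
        if gene is None:
            return None                      # unmapped probes are dropped
        return OntologyRecord(gene, self.rule(rec.log2_ratio))

connector = Connector({"1007_s_at": "DDR1"}, rule=lambda r: 2.0 ** r)
print(connector.convert(MicroarrayRecord("1007_s_at", 1.5)))
```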