27 results for Error analysis (Mathematics)


Relevance: 30.00%

Abstract:

L. Antonangelo, F. S. Vargas, M. M. P. Acencio, A. P. Cora, L. R. Teixeira, E. H. Genofre and R. K. B. Sales, "Effect of temperature and storage time on cellular analysis of fresh pleural fluid samples". Objective: Despite the methodological variability in preparation techniques for pleural fluid cytology, it is fundamental that the cells be preserved, permitting adequate morphological classification. We evaluated numerical and morphological changes in pleural fluid specimens processed after storage at room temperature or under refrigeration. Methods: Aliquots of pleural fluid from 30 patients, collected in ethylenediaminetetraacetic acid-coated tubes and maintained at room temperature (21 degrees C) or under refrigeration (4 degrees C), were evaluated after 2 and 6 hours and 1, 2, 3, 4, 7 and 14 days. The evaluation included cytomorphology and global and percentage counts of leucocytes, macrophages and mesothelial cells. Results: The samples showed quantitative cellular variations from day 3 or 4 onwards, depending on the storage conditions. Morphological alterations occurred earlier in samples maintained at room temperature (day 2) than in those under refrigeration (day 4). Conclusions: This study confirms that storage time and temperature are potential pre-analytical causes of error in pleural fluid cytology.

Relevance: 30.00%

Abstract:

We present two new constraint qualifications (CQs) that are weaker than the recently introduced relaxed constant positive linear dependence (RCPLD) CQ. RCPLD is based on the assumption that many subsets of the gradients of the active constraints preserve positive linear dependence locally. A major open question was to identify the exact set of gradients whose properties had to be preserved locally and that would still work as a CQ. This is done in the first new CQ, which we call the constant rank of the subspace component (CRSC) CQ. This new CQ also preserves many of the good properties of RCPLD, such as local stability and the validity of an error bound. We also introduce an even weaker CQ, called the constant positive generator (CPG), which can replace RCPLD in the analysis of the global convergence of algorithms. We close this work by extending convergence results of algorithms belonging to all the main classes of nonlinear optimization methods: sequential quadratic programming, augmented Lagrangians, interior point algorithms, and inexact restoration.
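The motivation for CQs weaker than linear independence can be seen in a tiny numerical sketch. The example below (invented for illustration, not taken from the paper) uses two tangent circles: at the contact point both constraints are active and their gradients are linearly dependent, so LICQ fails, yet the rank of the gradient pair is locally constant, which is exactly the kind of structure that constant-rank-type conditions such as CRSC exploit.

```python
# Two inequality constraints whose gradients are linearly dependent at the
# point x* = (1, 0), where both are active: LICQ fails there, but the rank
# of the gradient pair stays 1 at nearby points, the locally constant
# structure that constant-rank-type CQs exploit. (Illustrative example.)

def gradients(x1, x2):
    # g1(x) = x1^2 + x2^2 - 1 <= 0,  g2(x) = (x1 - 2)^2 + x2^2 - 1 <= 0
    return (2 * x1, 2 * x2), (2 * (x1 - 2), 2 * x2)

def rank2x2(v, w, tol=1e-12):
    # rank of the 2x2 matrix with rows v and w
    det = v[0] * w[1] - v[1] * w[0]
    if abs(det) > tol:
        return 2
    return 1 if any(abs(c) > tol for c in v + w) else 0

g1, g2 = gradients(1.0, 0.0)   # both constraints active at (1, 0)
assert rank2x2(g1, g2) == 1    # gradients dependent: LICQ fails

# ...but the rank does not change at nearby points along the axis
for dx in (-0.05, 0.0, 0.05):
    a, b = gradients(1.0 + dx, 0.0)
    print(rank2x2(a, b))       # 1 each time: constant rank
```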

Relevance: 30.00%

Abstract:

Solution of structural reliability problems by the First-Order Reliability Method requires optimization algorithms to find the smallest distance between a limit state function and the origin of standard Gaussian space. The Hasofer-Lind-Rackwitz-Fiessler (HLRF) algorithm, developed specifically for this purpose, has been shown to be efficient but not robust, as it fails to converge for a significant number of problems. On the other hand, recent developments in general (augmented Lagrangian) optimization techniques have not been tested in application to structural reliability problems. In the present article, three new optimization algorithms for structural reliability analysis are presented. One algorithm is based on HLRF, but uses a new differentiable merit function with Wolfe conditions to select the step length in line search. It is shown in the article that, under certain assumptions, the proposed algorithm generates a sequence that converges to the local minimizer of the problem. Two new augmented Lagrangian methods are also presented, which use quadratic penalties to solve nonlinear problems with equality constraints. The performance and robustness of the new algorithms are compared to those of the classic augmented Lagrangian method, HLRF and the improved HLRF (iHLRF) algorithms in the solution of 25 benchmark problems from the literature. The new proposed HLRF algorithm is shown to be more robust than HLRF or iHLRF, and as efficient as the iHLRF algorithm. The two augmented Lagrangian methods proposed herein are shown to be more robust and more efficient than the classical augmented Lagrangian method.
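The classic HLRF iteration the abstract refers to can be sketched in a few lines: it repeatedly projects the current point onto the linearized limit state surface, and the reliability index is the norm of the converged design point. The limit state below is a made-up smooth example, not one of the paper's 25 benchmarks, and an analytic gradient is assumed.

```python
# Minimal HLRF iteration in standard Gaussian space, assuming an analytic
# gradient. The limit state g(u) = 2 - u2 - 0.1*u1^2 is an illustrative
# example, not one of the paper's benchmark problems.

def hlrf(g, grad, u0, tol=1e-10, max_iter=100):
    u = list(u0)
    for _ in range(max_iter):
        gv, dg = g(u), grad(u)
        norm2 = sum(d * d for d in dg)
        # classic HLRF step: project onto the linearized limit state
        scale = (sum(d * x for d, x in zip(dg, u)) - gv) / norm2
        u_new = [scale * d for d in dg]
        if max(abs(a - b) for a, b in zip(u_new, u)) < tol:
            return u_new
        u = u_new
    return u

g = lambda u: 2.0 - u[1] - 0.1 * u[0] ** 2
grad = lambda u: [-0.2 * u[0], -1.0]

u_star = hlrf(g, grad, [0.0, 0.0])
beta = sum(x * x for x in u_star) ** 0.5   # reliability index = distance to origin
print(beta)                                # 2.0 for this limit state
```

The improved variants discussed in the abstract (iHLRF, the merit-function version) differ precisely in how they damp this raw step, since the bare iteration above can oscillate or diverge for strongly nonlinear limit states.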

Relevance: 30.00%

Abstract:

This work presents major results from a novel dynamic model intended to deterministically represent the complex relation between HIV-1 and the human immune system. The novel structure of the model extends previous work by representing different host anatomic compartments under a more in-depth cellular and molecular immunological phenomenology. Recently identified mechanisms related to HIV-1 infection, as well as other well-known relevant mechanisms typically ignored in mathematical models of HIV-1 pathogenesis and immunology, such as cell-cell transmission, are also addressed. (C) 2011 Elsevier Ltd. All rights reserved.
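The paper's compartmental model is not reproduced here, but the textbook core that such models extend is the three-equation target-cell model (target cells T, infected cells I, free virus V). A forward-Euler sketch of it, with all parameter values invented for illustration rather than fitted to data:

```python
# Classic basic model of within-host HIV dynamics, integrated with forward
# Euler. This is the textbook core that richer compartmental models (like
# the paper's) extend; all parameter values are illustrative, not fitted.

lam, d = 10.0, 0.1             # target-cell production and death rates
beta = 0.01                    # infection rate
delta, p, c = 0.5, 10.0, 3.0   # infected-cell death, virion production, clearance

R0 = beta * lam * p / (d * delta * c)   # basic reproductive ratio
T, I, V = lam / d, 0.0, 1.0             # uninfected steady state + one virion
dt, steps = 0.001, 10_000               # 10 days
Ts, Vs = [T], [V]
for _ in range(steps):
    dT = lam - d * T - beta * T * V     # production - death - infection
    dI = beta * T * V - delta * I       # new infections - infected-cell death
    dV = p * I - c * V                  # virion production - clearance
    T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    Ts.append(T); Vs.append(V)

print(f"R0 = {R0:.2f}")   # > 1, so the infection takes off
print(f"peak viral load = {max(Vs):.1f}, final T = {T:.1f}")
```

With these parameters R0 ≈ 6.7, so the virus grows until target-cell depletion limits it, the qualitative behavior any deterministic HIV-1 model must capture before adding anatomic compartments or cell-cell transmission.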

Relevance: 30.00%

Abstract:

We consider a recently proposed finite-element space that consists of piecewise affine functions with discontinuities across a smooth given interface Γ (a curve in two dimensions, a surface in three dimensions). Contrary to existing extended finite element methodologies, the space is a variant of the standard conforming [formula] space that can be implemented element by element. Further, it neither introduces new unknowns nor deteriorates the sparsity structure. It is proved that, for u arbitrary in [formula], the interpolant [formula] defined by this new space satisfies [formula], where h is the mesh size, [formula] is the domain, and standard notation has been adopted for the function spaces. This result proves the good approximation properties of the finite-element space as compared to any space consisting of functions that are continuous across Γ, which would yield an error in the [formula]-norm of order [formula]. These properties make this space especially attractive for approximating the pressure in problems with surface tension or other immersed interfaces that lead to discontinuities in the pressure field. Furthermore, the result still holds for interfaces that end within the domain, as happens for example in cracked domains.
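The loss of approximation order caused by forcing continuity across an interface can be seen in a one-dimensional toy computation (my own construction, not the paper's space): interpolating a unit jump with continuous piecewise-affine nodal interpolation gives an L2 error decaying like h^(1/2), i.e., slower than first order, whereas an interpolant allowed to jump at the interface reproduces this step function exactly.

```python
import math

# 1-D toy: continuous piecewise-affine interpolation of a unit jump at gamma
# (not a mesh node) has L2 error of order h^(1/2). Halving h should shrink
# the error by roughly sqrt(2), well below the factor 2 of first order.

gamma = 1 / math.sqrt(2)            # interface location, never a mesh node
f = lambda x: 0.0 if x < gamma else 1.0

def l2_error_continuous(n):
    h = 1.0 / n
    nodes = [f(i * h) for i in range(n + 1)]   # nodal values of interpolant
    err2, m = 0.0, 2000                        # midpoint rule per element
    for k in range(n):
        for j in range(m):
            x = k * h + (j + 0.5) * h / m
            s = (x - k * h) / h
            ih = (1 - s) * nodes[k] + s * nodes[k + 1]
            err2 += (f(x) - ih) ** 2 * (h / m)
    return math.sqrt(err2)

e1, e2 = l2_error_continuous(10), l2_error_continuous(20)
print(e1, e2, e1 / e2)   # ratio well below 2: slower than O(h)
```

Only the element containing γ contributes to the error, which is why continuity across the interface caps the global L2 rate at h^(1/2), and why a space that admits the jump (as the abstract describes) restores full order.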

Relevance: 30.00%

Abstract:

It is well known that constant-modulus-based algorithms present a large mean-square error for high-order quadrature amplitude modulation (QAM) signals, which may impair the switching to decision-directed-based algorithms. In this paper, we introduce a regional multimodulus algorithm for blind equalization of QAM signals that performs similarly to the supervised normalized least-mean-squares (NLMS) algorithm, independently of the QAM order. We find a theoretical relation between the coefficient vector of the proposed algorithm and the Wiener solution, and also provide theoretical models for the steady-state excess mean-square error in a nonstationary environment. The proposed algorithm, in conjunction with strategies to speed up its convergence and to avoid divergence, can bypass the switching mechanism between the blind mode and the decision-directed mode. (c) 2012 Elsevier B.V. All rights reserved.
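For context, the classical multimodulus algorithm (MMA) that the paper's regional variant builds on penalizes the real and imaginary parts of the equalizer output toward separate dispersion constants. The sketch below runs MMA on a toy noise-free QPSK signal through a mild two-tap channel; it is the baseline algorithm, not the paper's regional one, and the channel and step size are invented for illustration.

```python
import math, random

# Classical multimodulus algorithm (MMA) on unit-power QPSK through the
# mild ISI channel x[n] = a[n] + 0.3 a[n-1]. R = E[a_R^4]/E[a_R^2] = 0.5
# per dimension for these symbols. Toy baseline, not the paper's variant.

random.seed(1)
N, L, mu, R = 20000, 5, 0.005, 0.5
syms = [complex(random.choice((-1, 1)), random.choice((-1, 1))) / math.sqrt(2)
        for _ in range(N)]
x = [syms[n] + 0.3 * syms[n - 1] if n else syms[0] for n in range(N)]

w = [0j] * L
w[L // 2] = 1 + 0j                       # center-tap initialization
disp = []
for n in range(L - 1, N):
    u = x[n - L + 1: n + 1][::-1]        # regressor, most recent sample first
    y = sum(wi * ui for wi, ui in zip(w, u))
    # MMA error: each dimension pushed toward its own dispersion constant
    e = complex(y.real * (y.real ** 2 - R), y.imag * (y.imag ** 2 - R))
    w = [wi - mu * e * ui.conjugate() for wi, ui in zip(w, u)]
    disp.append((y.real ** 2 - R) ** 2 + (y.imag ** 2 - R) ** 2)

print(sum(disp[:500]) / 500, sum(disp[-500:]) / 500)  # dispersion shrinks
```

For QPSK the modulus is constant, so MMA behaves well; the abstract's point is that for high-order QAM the dispersion cost stays large at the optimum, which is what the regional multimodulus construction addresses.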

Relevance: 30.00%

Abstract:

Background: Acute respiratory distress syndrome (ARDS) is associated with high in-hospital mortality. Alveolar recruitment followed by ventilation at optimal titrated PEEP may reduce ventilator-induced lung injury and improve oxygenation in patients with ARDS, but the effects on mortality and other clinical outcomes remain unknown. This article reports the rationale, study design, and analysis plan of the Alveolar Recruitment for ARDS Trial (ART). Methods/Design: ART is a pragmatic, multicenter, randomized (concealed), controlled trial, which aims to determine if maximum stepwise alveolar recruitment associated with PEEP titration is able to increase 28-day survival in patients with ARDS compared to conventional treatment (ARDSNet strategy). We will enroll adult patients with ARDS of less than 72 h duration. The intervention group will receive an alveolar recruitment maneuver, with stepwise increases of PEEP up to 45 cmH2O and peak pressure of 60 cmH2O, followed by ventilation with optimal PEEP titrated according to the static compliance of the respiratory system. In the control group, mechanical ventilation will follow a conventional protocol (ARDSNet). In both groups, we will use controlled volume mode with low tidal volumes (4 to 6 mL/kg of predicted body weight) and targeting plateau pressure <= 30 cmH2O. The primary outcome is 28-day survival, and the secondary outcomes are: length of ICU stay; length of hospital stay; pneumothorax requiring chest tube during first 7 days; barotrauma during first 7 days; mechanical ventilation-free days from days 1 to 28; ICU, in-hospital, and 6-month survival. ART is an event-guided trial planned to last until 520 events (deaths within 28 days) are observed. These events allow detection of a hazard ratio of 0.75, with 90% power and two-tailed type I error of 5%. All analyses will follow the intention-to-treat principle.
Discussion: If the ART strategy with maximum recruitment and PEEP titration improves 28-day survival, this will represent a notable advance to the care of ARDS patients. Conversely, if the ART strategy is similar or inferior to the current evidence-based strategy (ARDSNet), this should also change current practice as many institutions routinely employ recruitment maneuvers and set PEEP levels according to some titration method.
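The stated design parameters (hazard ratio 0.75, 90% power, two-tailed α = 5%) can be checked against Schoenfeld's standard approximation for the required number of events in a two-arm survival trial. This is a back-of-the-envelope check, not the trial's own computation, which may inflate the count for interim analyses or deviations from assumptions.

```python
import math
from statistics import NormalDist

# Schoenfeld's approximation: D = (z_{1-a/2} + z_power)^2 / (p(1-p) ln(HR)^2),
# where p is the allocation fraction (0.5 for 1:1 randomization).

def schoenfeld_events(hr, power=0.90, alpha=0.05, p=0.5):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-tailed critical value
    z_b = NormalDist().inv_cdf(power)
    return (z_a + z_b) ** 2 / (p * (1 - p) * math.log(hr) ** 2)

d = schoenfeld_events(0.75)
print(math.ceil(d))   # 508, in line with the 520 events planned by ART
```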

Relevance: 30.00%

Abstract:

Abstract Background An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern" or Massively Parallel Signature Sequencing (MPSS) is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. Results We introduce a Bayesian model that accounts for the within-class variability by means of a mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases. We show examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Conclusion Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression into a more reliable one. Our method is freely available, under GPL/GNU copyleft, through a user-friendly web-based online tool or as R language scripts at a supplemental website.
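The effect the abstract describes can be quantified with the closed-form beta-binomial variance: with the same mean tag proportion, counts whose underlying proportion varies between biological replicates are far more dispersed than the binomial model a naive analysis assumes, so binomial p-values overstate significance. The numbers below are illustrative, not taken from the paper.

```python
# Same mean proportion, very different variances: binomial (technical
# sampling only) vs beta-binomial (proportion varies across replicates).
# Illustrative parameter values, not from the paper.

n = 1000                  # tags sampled per library
a, b = 2.0, 198.0         # Beta(a, b) distribution of the tag proportion
p_bar = a / (a + b)       # mean proportion = 0.01

var_binomial = n * p_bar * (1 - p_bar)
# closed-form beta-binomial variance: binomial variance times an
# overdispersion factor that grows with n
var_betabin = var_binomial * (1 + (n - 1) / (a + b + 1))

print(var_binomial, var_betabin)   # 9.9 vs ~59.1
```

The overdispersion factor 1 + (n-1)/(a+b+1) is why pooling libraries or assuming pure binomial sampling, the special cases the paper subsumes, can make ordinary biological variability look like differential expression.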

Relevance: 30.00%

Abstract:

Abstract Background Several mathematical and statistical methods have been proposed in the last few years to analyze microarray data. Most of those methods involve complicated formulas and software implementations that require advanced computer programming skills. Researchers from other areas may experience difficulties when attempting to use those methods in their research. Here we present a user-friendly toolbox which allows large-scale gene expression analysis to be carried out by biomedical researchers with limited programming skills. Results We introduce a user-friendly toolbox called GEDI (Gene Expression Data Interpreter), an extensible, open-source, and freely available tool that we believe will be useful to a wide range of laboratories, and to researchers with no background in Mathematics and Computer Science, allowing them to analyze their own data by applying both classical and advanced approaches developed and recently published by Fujita et al. Conclusion GEDI is an integrated user-friendly viewer that combines the state-of-the-art SVR, DVAR and SVAR algorithms, previously developed by us. It facilitates the application of SVR, DVAR and SVAR beyond the mathematical formulas presented in the corresponding publications, and allows one to better understand the results by means of the available visualizations. Both running the statistical methods and visualizing the results are carried out within the graphical user interface, rendering these algorithms accessible to the broad community of researchers in Molecular Biology.

Relevance: 30.00%

Abstract:

Abstract Background The generalized odds ratio (GOR) was recently suggested as a genetic model-free measure for association studies. However, its properties were not extensively investigated. We used Monte Carlo simulations to investigate type-I error rates, power and bias in both effect size and between-study variance estimates of meta-analyses using the GOR as a summary effect, and compared these results to those obtained by usual approaches of model specification. We further applied the GOR in a real meta-analysis of three genome-wide association studies in Alzheimer's disease. Findings For bi-allelic polymorphisms, the GOR performs virtually identically to a standard multiplicative model of analysis (e.g. per-allele odds ratio) for variants acting multiplicatively, but slightly augments the power to detect variants with a dominant mode of action, while reducing the probability to detect recessive variants. Although there were differences among the GOR and usual approaches in terms of bias and type-I error rates, both simulation-based and real-data-based results provided little indication that these differences will be substantial in practice for meta-analyses involving bi-allelic polymorphisms. However, the use of the GOR may be slightly more powerful for the synthesis of data from tri-allelic variants, particularly when susceptibility alleles are less common in the populations (≤10%). This gain in power may depend on knowledge of the direction of the effects. Conclusions For the synthesis of data from bi-allelic variants, the GOR may be regarded as a multiplicative-like model of analysis. The use of the GOR may be slightly more powerful in the tri-allelic case, particularly when susceptibility alleles are less common in the populations.
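The GOR itself is simple to compute from genotype counts: it is the probability that a randomly drawn case carries a higher mutational load than a randomly drawn control, divided by the probability of the reverse. A sketch with made-up counts (genotypes ordered by load):

```python
# Generalized odds ratio from genotype counts ordered by mutational load:
# GOR = P(case load > control load) / P(case load < control load).
# Counts below are invented for illustration.

def gor(case_counts, control_counts):
    nc, nk = sum(case_counts), sum(control_counts)
    higher = lower = 0.0
    for i, xi in enumerate(case_counts):
        for j, yj in enumerate(control_counts):
            prob = (xi / nc) * (yj / nk)   # P(case genotype i, control genotype j)
            if i > j:
                higher += prob
            elif i < j:
                lower += prob
    return higher / lower

# identical genotype distributions -> no association, GOR = 1
assert abs(gor([50, 30, 20], [50, 30, 20]) - 1.0) < 1e-12
# cases enriched for high-load genotypes -> GOR > 1
print(gor([30, 40, 30], [50, 30, 20]))
```

Note that no dominant/recessive/multiplicative model enters the computation, which is the "genetic model-free" property the abstract refers to, and the same formula applies unchanged to tri-allelic (or longer) ordered genotype lists.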

Relevance: 30.00%

Abstract:

Abstract Background The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data is available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only the access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. Then, we have defined a number of activities and associated guidelines to prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data.
Conclusions The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus assuring accurate data exchange and information interpretation from exchanged data.
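A software connector in the sense used above mediates between two tools' data formats and applies a transformation rule on the exchanged data. A minimal sketch, in which the field names (`gene`, `ratio`) and the log2 transformation rule are invented for illustration and not taken from the paper:

```python
import math

# Minimal connector: adapts a record from a hypothetical source tool's
# format to a hypothetical target tool's format, applying a transformation
# rule (log2 of the expression ratio) on the exchanged data.

def connector(source_record):
    """Adapt a {gene, ratio} record to a {id, log2_ratio} record."""
    return {
        "id": source_record["gene"],                       # field renaming
        "log2_ratio": math.log2(source_record["ratio"]),   # transformation rule
    }

out = connector({"gene": "TP53", "ratio": 4.0})
print(out)   # {'id': 'TP53', 'log2_ratio': 2.0}
```

In the ontology-based methodology described, both field mappings and transformation rules would be derived from the reference ontology rather than hard-coded as here; the sketch only shows the connector's mediating role.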

Relevance: 30.00%

Abstract:

This is a research paper in which we discuss "active learning" in the light of Cultural-Historical Activity Theory (CHAT), a powerful framework for analyzing human activity, including teaching and learning processes and the relations between education and wider human dimensions such as politics, development, and emancipation. This framework has its origins in Vygotsky's work in psychology, supported by a Marxist perspective, but is nowadays an interdisciplinary field encompassing History, Anthropology, Psychology, and Education, for example.