974 results for Test procedures
Abstract:
BACKGROUND: Measurement of CD4+ T-lymphocytes (CD4) is a crucial parameter in the management of HIV patients, particularly in determining eligibility to initiate antiretroviral treatment (ART). A number of technologies exist for CD4 enumeration, with considerable variation in cost, complexity, and operational requirements. We conducted a systematic review of the performance of technologies for CD4 enumeration. METHODS AND FINDINGS: Studies were identified by searching the electronic databases MEDLINE and EMBASE using a pre-defined search strategy. Data on test accuracy and precision included bias and limits of agreement with a reference standard, and misclassification probabilities around CD4 thresholds of 200 and 350 cells/μl over a clinically relevant range. The secondary outcome measure was test imprecision, expressed as % coefficient of variation. Thirty-two studies evaluating 15 CD4 technologies were included, of which fewer than half presented data on bias and misclassification compared to the same reference technology. At CD4 counts <350 cells/μl, bias ranged from -35.2 to +13.1 cells/μl, while at counts >350 cells/μl, bias ranged from -70.7 to +47 cells/μl, compared to the BD FACSCount as a reference technology. Misclassification around the threshold of 350 cells/μl ranged from 1% to 29% for upward classification, resulting in under-treatment, and from 7% to 68% for downward classification, resulting in over-treatment. Fewer than half of these studies reported within-laboratory precision or reproducibility of the CD4 values obtained. CONCLUSIONS: A wide range of bias and percent misclassification around treatment thresholds was reported for the CD4 enumeration technologies included in this review, with few studies reporting assay precision. The lack of standardised methodology for test evaluation, including the use of different reference standards, is a barrier to assessing relative assay performance and could hinder the introduction of new point-of-care assays in countries where they are most needed.
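The review's headline statistics are standard method-comparison quantities. As a rough illustration only, the sketch below computes Bland–Altman bias and limits of agreement, misclassification rates around a 350 cells/μl threshold, and percent coefficient of variation on synthetic paired CD4 counts; all values and the simulated device behaviour are made up, not the review's data.

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.uniform(50, 800, size=200)          # hypothetical reference (e.g. BD FACSCount) values
test = reference + rng.normal(-10, 40, size=200)    # hypothetical test-device values

diff = test - reference
bias = diff.mean()                                   # mean difference (bias)
loa = (bias - 1.96 * diff.std(ddof=1),               # 95% limits of agreement
       bias + 1.96 * diff.std(ddof=1))

threshold = 350.0
eligible = reference < threshold                     # "true" treatment eligibility by reference
upward = np.mean(test[eligible] >= threshold)        # misclassified upward -> under-treatment
downward = np.mean(test[~eligible] < threshold)      # misclassified downward -> over-treatment

replicates = test[:10] + rng.normal(0, 15, size=(5, 10))  # 5 repeat runs of 10 samples
cv_percent = 100 * replicates.std(axis=0, ddof=1) / replicates.mean(axis=0)

print(f"bias={bias:.1f} cells/ul, LoA=({loa[0]:.1f}, {loa[1]:.1f})")
print(f"upward misclassification={upward:.1%}, downward={downward:.1%}")
print(f"mean %CV={cv_percent.mean():.1f}")
```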
Abstract:
BACKGROUND: Diagnostic imaging represents the fastest growing segment of costs in the US health system. This study investigated the cost-effectiveness of alternative diagnostic approaches to meniscus tears of the knee, a highly prevalent disease that traditionally relies on MRI as part of the diagnostic strategy. PURPOSE: To identify the most efficient strategy for the diagnosis of meniscus tears. STUDY DESIGN: Economic and decision analysis; Level of evidence, 1. METHODS: A simple decision model, run as a cost-utility analysis, was constructed to assess the value added by MRI in various combinations with patient history and physical examination (H&P). The model examined traumatic and degenerative tears in 2 distinct settings: primary care and orthopaedic sports medicine clinic. Strategies were compared using the incremental cost-effectiveness ratio (ICER). RESULTS: In both practice settings, H&P alone was widely preferred for degenerative meniscus tears. Performing MRI to confirm a positive H&P was preferred for traumatic tears in both practice settings, with an incremental cost-effectiveness ratio below US$50,000 per quality-adjusted life-year. Performing an MRI for all patients was not preferred in any reasonable clinical scenario. The prevalence of a meniscus tear in a clinician's patient population was influential. For traumatic tears, MRI to confirm a positive H&P was preferred when prevalence was less than 46.7%, with H&P preferred above that. For degenerative tears, H&P was preferred until the prevalence reached 74.2%, after which MRI to confirm a negative H&P was the preferred strategy. In both settings, MRI to confirm a positive physical examination led to more than a 10-fold lower rate of unnecessary surgeries than did any other strategy, while MRI to confirm a negative physical examination led to 2.08- and 2.26-fold higher rates than H&P alone in primary care and orthopaedic clinics, respectively. CONCLUSION: For all practitioners, H&P is the preferred strategy for the suspected degenerative meniscus tear. An MRI to confirm a positive H&P is preferred for traumatic tears for all practitioners. Consideration should be given to implementing alternative diagnostic strategies as well as enhancing provider education in physical examination skills to improve the reliability of H&P as a diagnostic test. CLINICAL RELEVANCE: Alternative diagnostic strategies that do not include the use of MRI may result in decreased health care costs without harm to the patient and could possibly reduce unnecessary procedures.
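The comparison hinges on the incremental cost-effectiveness ratio. A minimal sketch of that calculation, with made-up costs and QALY values rather than the paper's model inputs:

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """ICER of strategy B relative to strategy A: extra cost per extra QALY gained."""
    return (cost_b - cost_a) / (qaly_b - qaly_a)

# Hypothetical example: H&P alone vs. MRI to confirm a positive H&P.
hp_cost, hp_qaly = 900.0, 0.825
mri_confirm_cost, mri_confirm_qaly = 1400.0, 0.840

ratio = icer(hp_cost, hp_qaly, mri_confirm_cost, mri_confirm_qaly)
willingness_to_pay = 50_000.0  # $/QALY threshold, as in the abstract

print(f"ICER = ${ratio:,.0f} per QALY")
print("preferred at this threshold" if ratio < willingness_to_pay else "not preferred")
```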
Abstract:
Association studies of quantitative traits have often relied on methods in which a normal distribution of the trait is assumed. However, quantitative phenotypes from complex human diseases are often censored, highly skewed, or contaminated with outlying values. We recently developed a rank-based association method that takes into account censoring and makes no distributional assumptions about the trait. In this study, we applied our new method to age-at-onset data on ALDX1 and ALDX2. Both traits are highly skewed (skewness > 1.9) and often censored. We performed a whole genome association study of age at onset of the ALDX1 trait using Illumina single-nucleotide polymorphisms. Only slightly more than 5% of markers were significant. However, we identified two regions on chromosomes 14 and 15, which each have at least four significant markers clustering together. These two regions may harbor genes that regulate age at onset of ALDX1 and ALDX2. Future fine mapping of these two regions with densely spaced markers is warranted.
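As a hedged illustration of a distribution-free association test of the kind the abstract motivates (the authors' actual rank-based method additionally handles censoring, which this sketch omits), one can rank-test a skewed age-at-onset trait across SNP genotypes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 500
genotype = rng.integers(0, 3, size=n)            # 0/1/2 copies of the minor allele (hypothetical SNP)
age_at_onset = rng.lognormal(3.2, 0.6, size=n)   # strongly right-skewed trait, like ALDX1 age at onset

print(f"skewness = {stats.skew(age_at_onset):.2f}")   # well above zero

groups = [age_at_onset[genotype == g] for g in (0, 1, 2)]
stat, p = stats.kruskal(*groups)                 # rank-based: no normality assumption on the trait
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
```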
Abstract:
OBJECTIVE: The diagnosis of Alzheimer's disease (AD) remains difficult. Lack of diagnostic certainty or possible distress related to a positive result from diagnostic testing could limit the application of new testing technologies. The objective of this paper is to quantify respondents' preferences for obtaining AD diagnostic tests and to estimate the perceived value of AD test information. METHODS: Discrete-choice experiment and contingent-valuation questions were administered to respondents in Germany and the United Kingdom. Choice data were analyzed by using random-parameters logit. A probit model characterized respondents who were not willing to take a test. RESULTS: Most respondents indicated a positive value for AD diagnostic test information. Respondents who indicated an interest in testing preferred brain imaging without the use of radioactive markers. German respondents had relatively lower money-equivalent values for test features compared with respondents in the United Kingdom. CONCLUSIONS: Respondents preferred less invasive diagnostic procedures and tests with higher accuracy and expressed a willingness to pay up to €700 to receive a less invasive test with the highest accuracy.
Abstract:
A regularized algorithm for the recovery of band-limited signals from noisy data is described. The regularization is characterized by a single parameter. Iterative and non-iterative implementations of the algorithm are shown to have useful properties, the former offering the advantage of flexibility and the latter a potential for rapid data processing. Comparative results, using experimental data obtained in laser anemometry studies with a photon correlator, are presented both with and without regularization. © 1983 Taylor & Francis Ltd.
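A minimal sketch of how the two implementations relate, under the assumption that the single-parameter regularization is of Tikhonov type (the paper's exact scheme may differ): the closed-form (non-iterative) solution and a Landweber-style iteration converge to the same regularized estimate of a band-limited signal.

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, tau = 128, 0.05, 0.5                     # signal length, regularization parameter, step size

# Band-limiting projection A: keep only Fourier modes with |k| <= 7.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)          # unitary DFT matrix
mask = np.zeros(n)
mask[:8] = 1.0
mask[-7:] = 1.0
A = (F.conj().T @ np.diag(mask) @ F).real

x_true = A @ rng.normal(size=n)                 # a band-limited signal
y = x_true + 0.1 * rng.normal(size=n)           # noisy data

# Non-iterative implementation: closed-form Tikhonov solution.
x_direct = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ y)

# Iterative implementation: Landweber iteration for the same functional
# ||A x - y||^2 + mu ||x||^2; flexible (can stop early) but slower.
x = np.zeros(n)
for _ in range(500):
    x = x + tau * (A.T @ (y - A @ x) - mu * x)

print("max |iterative - direct| =", np.abs(x - x_direct).max())
```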
Abstract:
An analysis is carried out, using the prolate spheroidal wave functions, of certain regularized iterative and noniterative methods previously proposed for the achievement of object restoration (or, equivalently, spectral extrapolation) from noisy image data. The ill-posedness inherent in the problem is treated by means of a regularization parameter, and the analysis shows explicitly how the deleterious effects of the noise are then contained. The error in the object estimate is also assessed, and it is shown that the optimal choice for the regularization parameter depends on the signal-to-noise ratio. Numerical examples are used to demonstrate the performance of both unregularized and regularized procedures and also to show how, in the unregularized case, artefacts can be generated from pure noise. Finally, the relative error in the estimate is calculated as a function of the degree of superresolution demanded for reconstruction problems characterized by low space–bandwidth products.
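To make the role of the regularization parameter concrete, here is a rough sketch using an SVD of a blurring operator in place of an explicit prolate spheroidal expansion (the singular vectors play an analogous role); the SNR-based choice of the parameter is a common heuristic, not the paper's prescription.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
t = np.linspace(-1, 1, n)

# Severely band-limiting (ill-posed) imaging operator: a wide Gaussian blur.
H = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.15**2))
H /= H.sum(axis=1, keepdims=True)

obj = (np.abs(t) < 0.3).astype(float)           # object to be restored
noise_sigma = 1e-3
img = H @ obj + noise_sigma * rng.normal(size=n)  # noisy image data

U, s, Vt = np.linalg.svd(H)
coeffs = U.T @ img                              # data in the singular basis

snr = (H @ obj).std() / noise_sigma
mu = 1.0 / snr**2                               # heuristic: more noise -> stronger regularization

# Tikhonov filtering: noise-dominated small singular components are damped,
# which is how the deleterious effects of the noise are contained.
restored = Vt.T @ (s * coeffs / (s**2 + mu))

print("relative restoration error:", np.linalg.norm(restored - obj) / np.linalg.norm(obj))
```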
Abstract:
The screening and treatment of latent tuberculosis (TB) infection reduces the risk of progression to active disease and is currently recommended for HIV-infected patients. The aim of this study is to evaluate, in a low TB incidence setting, the potential contribution of an interferon-gamma release assay in response to the mycobacterial latency antigen Heparin-Binding Haemagglutinin (HBHA-IGRA), to the detection of Mycobacterium tuberculosis infection in HIV-infected patients.
Abstract:
In this paper we present a statistical analysis of the Test de Conocimientos Previos de Matemáticas (TCPM, a prior-knowledge test in mathematics) designed to measure the initial level of basic mathematical skills and knowledge of students entering the science and technology degree programmes of the Facultad de Ciencias Físico, Matemáticas y Naturales of the Universidad Nacional de San Luis. The aim of the research is to assess the diagnostic instrument in use, with a view to its possible later application. To determine the soundness of the test, we analysed the quality, discrimination, and difficulty index of the items, as well as the validity and reliability of the diagnostic; for this statistical analysis we used the TestGraf and SPSS software packages. The test was administered to 698 students entering the University in the 2002 academic year. From this investigation we could infer that the diagnostic test proved difficult for the population to which it was applied, of acceptable reliability, and of good item quality, with varied difficulty and acceptable discrimination.
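A hedged sketch of the classical item statistics the study reports (difficulty, discrimination, and reliability), computed here with NumPy on a synthetic 0/1 score matrix rather than with the authors' TestGraf/SPSS workflow:

```python
import numpy as np

rng = np.random.default_rng(4)
n_students, n_items = 698, 30
ability = rng.normal(size=(n_students, 1))
item_hardness = rng.normal(0.5, 0.8, size=(1, n_items))   # hypothetical hard test overall
scores = (ability - item_hardness + rng.normal(size=(n_students, n_items)) > 0).astype(int)

difficulty = scores.mean(axis=0)                 # proportion correct per item (lower = harder)

# Discrimination: upper-minus-lower 27% groups on total score.
total = scores.sum(axis=1)
order = np.argsort(total)
k = int(0.27 * n_students)
lower, upper = order[:k], order[-k:]
discrimination = scores[upper].mean(axis=0) - scores[lower].mean(axis=0)

# Cronbach's alpha for internal-consistency reliability.
item_var = scores.var(axis=0, ddof=1).sum()
total_var = total.var(ddof=1)
alpha = n_items / (n_items - 1) * (1 - item_var / total_var)

print(f"mean difficulty = {difficulty.mean():.2f}")
print(f"mean discrimination = {discrimination.mean():.2f}")
print(f"Cronbach's alpha = {alpha:.2f}")
```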
Abstract:
Realizing scalable performance on high performance computing systems is not straightforward for single-phenomenon codes (such as computational fluid dynamics [CFD]). This task is magnified considerably when the target software involves the interactions of a range of phenomena that have distinctive solution procedures involving different discretization methods. The problems of addressing the key issues of retaining data integrity and the ordering of the calculation procedures are significant. A strategy for parallelizing this multiphysics family of codes is described for software exploiting finite-volume discretization methods on unstructured meshes using iterative solution procedures. A mesh-partitioning-based SPMD approach is used. However, because different variables use distinct discretization schemes, distinct partitions are required; techniques for addressing this issue are described using the mesh-partitioning tool JOSTLE. In this contribution, the strategy is tested for a variety of test cases under a wide range of conditions (e.g., problem size, number of processors, asynchronous/synchronous communications) using a variety of strategies for mapping the mesh partition onto the processor topology.
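A toy stand-in for the partitioning data structures involved (a naive 1D split here replaces JOSTLE, and serial loops replace message passing): each partition owns a block of cells and keeps a halo of neighbouring cells owned by other partitions, which is what must be exchanged between iterations.

```python
import numpy as np

n_cells, n_parts = 24, 3
# 1D chain mesh: cell i neighbours i-1 and i+1.
neighbours = {i: [j for j in (i - 1, i + 1) if 0 <= j < n_cells] for i in range(n_cells)}
part = np.repeat(np.arange(n_parts), n_cells // n_parts)   # naive contiguous partition

owned = {p: np.where(part == p)[0] for p in range(n_parts)}
halo = {p: sorted({j for i in owned[p] for j in neighbours[i] if part[j] != p})
        for p in range(n_parts)}

# One Jacobi-style iteration per partition: each partition updates the cells it
# owns; in the real SPMD code, halo values would then be exchanged by messages.
field = np.arange(n_cells, dtype=float)
new_field = field.copy()
for p in range(n_parts):
    for i in owned[p]:
        nbrs = neighbours[i]
        new_field[i] = (field[i] + sum(field[j] for j in nbrs)) / (1 + len(nbrs))
field = new_field

for p in range(n_parts):
    print(f"partition {p}: owns {owned[p].tolist()}, halo {halo[p]}")
```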
Abstract:
Procedures are described for solving the equations governing a multi-physics process. Finite volume techniques are used to discretise, using the same unstructured mesh, the equations of fluid flow, heat transfer with solidification, and solid deformation. These discretised equations are then solved in an integrated manner. The computational mechanics environment, PHYSICA, which facilitates the building of multi-physics models, is described. Comparisons between model predictions and experimental data are presented for the casting of metal components.
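To make the finite-volume idea concrete, a minimal 1D heat-conduction sketch follows; PHYSICA itself couples flow, solidification, and deformation on 3D unstructured meshes, so this toy shows only the basic FV pattern of balancing fluxes across cell faces.

```python
import numpy as np

n, L, k = 50, 1.0, 1.0                  # number of cells, domain length, conductivity
dx = L / n
T_left, T_right = 100.0, 0.0            # fixed boundary temperatures

# Assemble the finite-volume diffusion system A T = b from face fluxes.
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    for j in (i - 1, i + 1):            # fluxes across the two interior faces
        if 0 <= j < n:
            A[i, i] += k / dx
            A[i, j] -= k / dx
# Boundary faces: half-cell distance between cell centre and boundary value.
A[0, 0] += 2 * k / dx
b[0] += 2 * k / dx * T_left
A[-1, -1] += 2 * k / dx
b[-1] += 2 * k / dx * T_right

T = np.linalg.solve(A, b)
print(T[:3], T[-3:])                    # linear profile from ~100 down to ~0
```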
Abstract:
Numerical predictions produced by the SMARTFIRE fire field model are compared with experimental data. The predictions consist of gas temperatures at several locations within the compartment over a 60 min period. The test fire, produced by a burning wood crib, attained a maximum heat release rate of approximately 11 MW. The fire is intended to represent a nonspreading fire (i.e. a single fuel source) in a moderately sized ventilated room. The experimental data formed part of the CIB Round Robin test series. Two simulations were produced, one involving a relatively coarse mesh and the other a finer mesh. While the SMARTFIRE simulations made use of a simple volumetric heat release rate model, both simulations were found capable of reproducing the overall qualitative results. Both simulations tended to overpredict the measured temperatures. However, the finer mesh simulation was better able to reproduce the qualitative features of the experimental data. The maximum recorded experimental temperature (1214°C after 39 min) was over-predicted in the fine mesh simulation by 12%. © 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
A three-dimensional finite volume, unstructured mesh (FV-UM) method for dynamic fluid–structure interaction (DFSI) is described. Fluid structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, wind response of buildings, flows in elastic pipes and blood vessels. It involves the coupling of fluid flow and structural mechanics, two fields that are conventionally modelled using two dissimilar methods, thus a single comprehensive computational model of both phenomena is a considerable challenge. Until recently work in this area focused on one phenomenon and represented the behaviour of the other more simply. More recently, strategies for solving the full coupling between the fluid and solid mechanics behaviour have been developed. A key contribution has been made by Farhat et al. [Int. J. Numer. Meth. Fluids 21 (1995) 807] employing FV-UM methods for solving the Euler flow equations and a conventional finite element method for the elastic solid mechanics and the spring based mesh procedure of Batina [AIAA paper 0115, 1989] for mesh movement. In this paper, we describe an approach which broadly exploits the three field strategy described by Farhat for fluid flow, structural dynamics and mesh movement but, in the context of DFSI, contains a number of novel features:
• a single mesh covering the entire domain,
• a Navier–Stokes flow,
• a single FV-UM discretisation approach for both the flow and solid mechanics procedures,
• an implicit predictor–corrector version of the Newmark algorithm,
• a single code embedding the whole strategy.
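Of these features, the Newmark predictor–corrector is the easiest to isolate. A compact sketch of a Newmark-β step applied to a single-degree-of-freedom spring–mass–damper follows; the paper applies the scheme to the coupled FV-UM solid mechanics equations, so this is only the time-integration skeleton.

```python
m, c, k = 1.0, 0.1, 40.0                # mass, damping, stiffness (made-up values)
beta, gamma = 0.25, 0.5                 # standard (average-acceleration) Newmark parameters
dt, n_steps = 0.01, 500

u, v = 1.0, 0.0                         # initial displacement and velocity
a = (-c * v - k * u) / m                # consistent initial acceleration

for _ in range(n_steps):
    # Predictor: advance displacement/velocity using the old acceleration.
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1 - gamma) * a
    # Corrector: solve the equation of motion implicitly for the new
    # acceleration (a linear solve here; nonlinear in the coupled problem).
    a_new = -(c * v_pred + k * u_pred) / (m + gamma * dt * c + beta * dt**2 * k)
    u = u_pred + beta * dt**2 * a_new
    v = v_pred + gamma * dt * a_new
    a = a_new

print(f"displacement after {n_steps * dt:.1f}s: {u:.4f}")
```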
Abstract:
A three-dimensional finite volume, unstructured mesh (FV-UM) method for dynamic fluid–structure interaction is described. The broad approach is conventional in that the fluid and structure are solved sequentially. The pressure and viscous stresses from the flow algorithm provide load conditions for the solid algorithm, whilst at the fluid–structure interface the deformed structure provides boundary conditions from the structure to the fluid. The structure algorithm also provides the necessary mesh adaptation for the flow field, the effect of which is accounted for in the flow algorithm. The procedures described in this work have several novel features, namely:
• a single mesh covering the entire domain,
• a Navier–Stokes flow,
• a single FV-UM discretisation approach for both the flow and solid mechanics procedures,
• an implicit predictor–corrector version of the Newmark algorithm,
• a single code embedding the whole strategy.
The procedure is illustrated for a three-dimensional loaded cantilever in fluid flow.