13 results for evaluation algorithm
Abstract:
Waveform tomographic imaging of crosshole georadar data is a powerful method to investigate the shallow subsurface because of its ability to provide images of pertinent petrophysical parameters with extremely high spatial resolution. All current crosshole georadar waveform inversion strategies are based on the assumption of frequency-independent electromagnetic constitutive parameters. However, in reality, these parameters are known to be frequency-dependent and complex and thus recorded georadar data may show significant dispersive behavior. In this paper, we evaluate synthetically the reconstruction limits of a recently published crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. Our results indicate that, when combined with a source wavelet estimation procedure that provides a means of partially accounting for the frequency-dependent effects through an "effective" wavelet, the inversion algorithm performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
Abstract:
INTRODUCTION: Many clinical practice guidelines (CPG) have been published in response to the development of the concept of "evidence-based medicine" (EBM) and as a solution to the difficulty of synthesizing and selecting from the relevant medical literature. Given the proliferation of new CPG, the question of choice arises: which CPG should be followed in a given clinical situation? It is of primary importance to evaluate the quality of CPG, but until recently there was no standardized tool for evaluating or comparing their quality. An instrument for evaluating the quality of CPG, called "AGREE" (appraisal of guidelines for research and evaluation), was validated in 2002. AIM OF THE STUDY: The six principal CPG concerning the treatment of schizophrenia were compared with the help of the "AGREE" instrument: (1) "the Agence nationale pour le développement de l'évaluation médicale (ANDEM) recommendations"; (2) "The American Psychiatric Association (APA) practice guideline for the treatment of patients with schizophrenia"; (3) "The quick reference guide of the APA practice guideline for the treatment of patients with schizophrenia"; (4) "The schizophrenia patient outcomes research team (PORT) treatment recommendations"; (5) "The Texas medication algorithm project (T-MAP)"; and (6) "The expert consensus guideline for the treatment of schizophrenia". RESULTS: The results of our study were then compared with those of a similar investigation published in 2005, covering 24 CPG on the treatment of schizophrenia, in which the "AGREE" tool was also used by two investigators. In general, the scores of the two studies differed little and the two global evaluations of the CPG converged; however, each of the six CPG is perfectible. DISCUSSION: The rigour of development of the six CPG was in general average.
Consideration of the opinions of potential users was incomplete, and greater effort in the presentation of the recommendations would facilitate their clinical use. Moreover, the authors gave little consideration to the applicability of the recommendations. CONCLUSION: Globally, two CPG are considered strongly recommended: "the quick reference guide of the APA practice guideline for the treatment of patients with schizophrenia" and "the T-MAP".
Abstract:
BACKGROUND: Tests for recent infections (TRIs) are important for HIV surveillance. We have shown that a patient's antibody pattern in a confirmatory line immunoassay (Inno-Lia) also yields information on time since infection. We have published algorithms which, with a certain sensitivity and specificity, distinguish between incident (≤12 months) and older infection. In order to use these algorithms like other TRIs, i.e., based on their windows, we now determined their window periods. METHODS: We classified Inno-Lia results of 527 treatment-naïve patients with HIV-1 infection of ≤12 months' duration according to incidence by 25 algorithms. The time after which all infections were ruled older, i.e. the algorithm's window, was determined by linear regression of the proportion ruled incident as a function of time since infection. Window-based incident infection rates (IIR) were determined using the relationship 'Prevalence = Incidence x Duration' in four annual cohorts of HIV-1 notifications. Results were compared to performance-based IIR, also derived from Inno-Lia results but using the relationship 'incident = true incident + false incident', and to the IIR derived from the BED incidence assay. RESULTS: Window periods varied between 45.8 and 130.1 days and correlated well with the algorithms' diagnostic sensitivity (R² = 0.962; P < 0.0001). Among the 25 algorithms, the mean window-based IIR among the 748 notifications of 2005/06 was 0.457, compared to 0.453 for the performance-based IIR with a model not correcting for selection bias. Evaluation of BED results using a window of 153 days yielded an IIR of 0.669. Window-based and performance-based IIR increased by 22.4% and 30.6%, respectively, in 2008, while 2009 and 2010 showed a return to baseline for both methods.
CONCLUSIONS: IIR estimations by window- and performance-based evaluations of Inno-Lia algorithm results were similar and can be used together to assess IIR changes between annual HIV notification cohorts.
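The two relationships above can be sketched in a few lines: the window is the x-intercept of a linear fit of the proportion classified incident versus time since infection, and the window-based IIR follows by rearranging 'Prevalence = Incidence x Duration'. The numbers below are illustrative only, not the study's data.

```python
import numpy as np

def estimate_window(days, prop_incident):
    """The window is where a linear fit of the proportion still classified
    'incident' versus time since infection crosses zero (its x-intercept)."""
    slope, intercept = np.polyfit(days, prop_incident, 1)
    return -intercept / slope  # in days

def window_based_iir(n_incident, n_total, window_days):
    """Rearranging 'Prevalence = Incidence x Duration', with the window
    period serving as the duration of the 'incident' state."""
    prevalence = n_incident / n_total
    return prevalence / (window_days / 365.25)  # per person-year

# Illustrative numbers only (not the study's data)
days = np.array([15, 45, 75, 105, 135])
prop_incident = np.array([0.90, 0.65, 0.40, 0.15, 0.00])
window = estimate_window(days, prop_incident)  # ~130 days here
iir = window_based_iir(n_incident=120, n_total=748, window_days=window)
```

With a longer window, the same prevalence of incident-classified results translates into a lower incidence estimate, which is why the window period must be known before algorithms can be compared.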
Abstract:
Detecting local differences between groups of connectomes is a great challenge in neuroimaging because of the large number of tests that must be performed and the resulting burden of multiplicity correction. Any available information should be exploited to increase the power of detecting true between-group effects. We present an adaptive strategy that exploits the data structure and prior information concerning positive dependence between nodes and connections, without relying on strong assumptions. As a first step, we decompose the brain network, i.e., the connectome, into subnetworks and apply a screening at the subnetwork level. The subnetworks are defined either according to prior knowledge or by applying a data-driven algorithm. Given the results of the screening step, a filtering is performed to seek real differences at the node/connection level. The proposed strategy can be used to strongly control either the family-wise error rate or the false discovery rate. We show by means of different simulations the benefit of the proposed strategy, and we present a real application comparing the connectomes of preschool children and adolescents.
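A minimal sketch of the screen-then-filter idea, using Fisher's combination test for the screening step and Bonferroni corrections at both levels; these are simplifying assumptions for illustration, not the paper's exact procedures:

```python
import math
import numpy as np

def chi2_sf_even_df(x, df):
    """Survival function of a chi-square variable with even df
    (closed form, so no external dependency is needed)."""
    m = df // 2
    term, total = 1.0, 1.0
    for k in range(1, m):
        term *= (x / 2.0) / k
        total += term
    return math.exp(-x / 2.0) * total

def screen_then_filter(pvals, subnetworks, alpha=0.05):
    """Step 1 (screening): test each subnetwork with Fisher's combination
    of its connection p-values, Bonferroni-corrected over subnetworks.
    Step 2 (filtering): test connections only inside surviving
    subnetworks, correcting only over the connections kept."""
    selected = []
    for name, idx in subnetworks.items():
        fisher = -2.0 * np.log(pvals[idx]).sum()
        if chi2_sf_even_df(fisher, 2 * len(idx)) < alpha / len(subnetworks):
            selected.append(name)
    n_kept = sum(len(subnetworks[s]) for s in selected)
    discoveries = [i for s in selected for i in subnetworks[s]
                   if pvals[i] < alpha / max(n_kept, 1)]
    return selected, discoveries

# Toy example: subnetwork "A" carries signal, "B" is null
pvals = np.array([1e-5, 1e-4, 0.002, 0.40, 0.70, 0.90])
subnetworks = {"A": [0, 1, 2], "B": [3, 4, 5]}
selected, discoveries = screen_then_filter(pvals, subnetworks)
```

The power gain comes from the second step: once the null subnetwork is screened out, the per-connection correction is over 3 tests instead of 6.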
Abstract:
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. METHODS: A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. RESULTS: Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I x S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. CONCLUSION: The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. 
However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
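The practical recommendation above reduces to a small constraint over the reconstruction grid: keep the EM-equivalent iterations (the I x S product) between 80 and 160 and the Gaussian FWHM cutoff at or below 10 mm. A sketch (the exact iteration values in the grid are an assumption based on the range reported):

```python
from itertools import product

# Parameter grid from the study: iterations (I), subsets (S), and the
# Gaussian FWHM cutoff in mm (iteration values assumed from "10-20 I")
iterations = [10, 20]
subsets = [4, 8, 16]
fwhm_mm = range(8, 16)  # 8-15 mm

def acceptable(i, s, fwhm):
    """The clinically equivalent region reported: 80-160 EM-equivalent
    iterations (I x S) and an FWHM cutoff of at most 10 mm."""
    return 80 <= i * s <= 160 and fwhm <= 10

combos = [(i, s, f) for i, s, f in product(iterations, subsets, fwhm_mm)
          if acceptable(i, s, f)]
# The study's optimum, 10 I x 8 S with a 10-mm cutoff, lies in this region.
```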
Abstract:
Fingerprint practitioners rely on level 3 features to make decisions in relation to the source of an unknown friction ridge skin impression. This research proposes to assess the strength of evidence associated with pores shown in (dis)agreement between a mark and a reference print. Based upon an algorithm designed to automatically detect pores, a metric is defined for comparing different impressions. From this metric, the weight of the findings is quantified using a likelihood ratio. The results obtained on four configurations and 54 donors show the significant contribution of the pore features and translate into statistical terms what latent fingerprint examiners have developed holistically through experience. The system provides LRs that are indicative of the true state under both the prosecution and the defense propositions. Such a system not only brings transparency to the weight assigned to these features, but also forces a discussion of the risk that such a model could mislead.
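The likelihood-ratio construction can be sketched as follows: score the mark-print comparison with the pore metric, then evaluate the density of that score under the same-source (prosecution) and different-source (defense) populations. The score distributions below are synthetic stand-ins, not the paper's data or its actual metric:

```python
import numpy as np

def likelihood_ratio(score, same_source_scores, diff_source_scores, bins):
    """LR = P(score | same source) / P(score | different source), with
    both densities estimated from scored pairs via histograms."""
    p_num, _ = np.histogram(same_source_scores, bins=bins, density=True)
    p_den, _ = np.histogram(diff_source_scores, bins=bins, density=True)
    k = int(np.clip(np.digitize(score, bins) - 1, 0, len(bins) - 2))
    eps = 1e-9  # guard against empty histogram bins
    return (p_num[k] + eps) / (p_den[k] + eps)

rng = np.random.default_rng(1)
# Hypothetical pore-similarity scores: higher when mark and print agree
same = rng.normal(0.8, 0.1, 1000)
diff = rng.normal(0.3, 0.1, 1000)
bins = np.linspace(0.0, 1.2, 25)
lr_high = likelihood_ratio(0.85, same, diff, bins)  # supports prosecution
lr_low = likelihood_ratio(0.25, same, diff, bins)   # supports defense
```

An LR above 1 supports the same-source proposition and an LR below 1 the different-source proposition; a system is well calibrated when its LRs point in the right direction under both ground truths, as reported in the abstract.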
Abstract:
Voxel-based morphometry from conventional T1-weighted images has proved effective for quantifying Alzheimer's disease (AD)-related brain atrophy and enables fairly accurate automated classification of AD patients, patients with mild cognitive impairment (MCI) and elderly controls. Little is known, however, about the classification power of volume-based morphometry, where the features of interest consist of a few brain structure volumes (e.g. hippocampi, lobes, ventricles) as opposed to hundreds of thousands of voxel-wise gray matter concentrations. In this work, we experimentally evaluate two distinct volume-based morphometry algorithms (FreeSurfer and an in-house algorithm called MorphoBox) for automatic disease classification on a standardized data set from the Alzheimer's Disease Neuroimaging Initiative. Results indicate that both algorithms achieve classification accuracy comparable to the conventional whole-brain voxel-based morphometry pipeline using SPM for AD vs elderly controls and for MCI vs controls, and higher accuracy for AD vs MCI and for early vs late AD converters, thereby demonstrating the potential of volume-based morphometry to assist in the diagnosis of mild cognitive impairment and Alzheimer's disease.
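The appeal of volume-based morphometry is that the classifier operates on only a handful of features. A toy sketch with simulated volumes and a nearest-centroid rule; the feature names, values, and classifier are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical structure volumes (mL): hippocampus, ventricles, temporal lobe
controls = rng.normal([3.5, 25.0, 110.0], [0.3, 4.0, 8.0], size=(n, 3))
patients = rng.normal([2.8, 35.0, 100.0], [0.4, 6.0, 9.0], size=(n, 3))
X = np.vstack([controls, patients])
y = np.repeat([0, 1], n)

# Standardize the three features, split, and classify by nearest centroid
X = (X - X.mean(0)) / X.std(0)
idx = rng.permutation(2 * n)
train, test = idx[:300], idx[300:]
c0 = X[train][y[train] == 0].mean(0)  # control centroid
c1 = X[train][y[train] == 1].mean(0)  # patient centroid
pred = (np.linalg.norm(X[test] - c1, axis=1)
        < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
acc = (pred == y[test]).mean()
```

With three features instead of hundreds of thousands of voxels, even this simple rule separates the simulated groups well, which is the intuition behind comparing volume-based and voxel-based pipelines.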
Abstract:
BACKGROUND: Fever upon return from tropical or subtropical regions can be caused by diseases that are rapidly fatal if left untreated. The differential diagnosis is wide. Physicians often lack the necessary knowledge to care appropriately for such patients. OBJECTIVE: To develop practice guidelines for the initial evaluation of patients presenting with fever upon return from a tropical or subtropical country, in order to reduce delays and potentially fatal outcomes and to improve physicians' knowledge. TARGET AUDIENCE: Medical personnel, usually physicians, who see returning patients, primarily in an ambulatory setting or in a hospital emergency department, and specialists in internal medicine, infectious diseases, and travel medicine. METHOD: A systematic review of the literature, extracted mainly from the National Library of Medicine database, was performed between May 2000 and April 2001, using the keywords fever and/or travel and/or migrant and/or guidelines. Eventually, 250 articles were reviewed. The relevant elements of evidence were combined with expert knowledge to construct a branching algorithm flagging the level of specialization required to deal with each situation. The proposed diagnoses and treatment plans are restricted to tropical or subtropical (nonautochthonous) diseases. The decision chart is accompanied by a detailed document that provides, for each level of the tree, the degree of evidence and the grade of recommendation, as well as the key points of debate. PARTICIPANTS AND CONSENSUS PROCESS: Besides the 4 authors (2 specialists in travel/tropical medicine, 1 clinical epidemiologist, and 1 resident physician), a panel of 11 European physicians with different levels of expertise in travel medicine reviewed the guidelines. Thereafter, each point of the proposed recommendations was discussed with 15 experts in travel/tropical medicine from various continents.
A final version was produced and submitted for evaluation to all participants. CONCLUSION: Although the quality of evidence was limited by the paucity of clinical studies, these guidelines, established with the support of a large and highly experienced panel, should help physicians deal with patients returning from the tropics with fever.
Abstract:
The atomic force microscope is not only a very convenient tool for studying the topography of different samples, but it can also be used to measure specific binding forces between molecules. For this purpose, one type of molecule is attached to the tip and the other to the substrate. Approaching the tip to the substrate allows the molecules to bind together; retracting the tip breaks the newly formed bond. The rupture of a specific bond appears in the force-distance curve as a spike, from which the binding force can be deduced. In this article we present an algorithm to automatically process force-distance curves in order to obtain bond-strength histograms. The algorithm is based on a fuzzy logic approach that assigns a "quality" to every event and makes the detection procedure much faster than manual selection. The software has been applied to measure the binding strength between tubulin and microtubule-associated proteins.
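A minimal sketch of the idea: detect rupture spikes in a retraction curve and grade each with a fuzzy quality score, here the minimum (fuzzy AND) of two simple memberships. The membership functions are assumptions for illustration; the paper's fuzzy rules are more elaborate:

```python
import numpy as np

def detect_ruptures(distance, force, min_quality=0.5):
    """Find rupture spikes in a retraction force-distance curve and grade
    each with a fuzzy 'quality' in [0, 1], combining two memberships:
    the jump is tall relative to noise, and it is sharply localized."""
    jumps = np.diff(force)
    noise = np.median(np.abs(jumps)) + 1e-12
    events = []
    for i, j in enumerate(jumps):
        if j <= 0:
            continue  # only upward jumps (the bond releases the cantilever)
        height = min(j / (6 * noise), 1.0)           # membership: tall
        local = np.abs(jumps[max(0, i - 3):i + 4])
        sharp = min(j / (local.sum() + 1e-12), 1.0)  # membership: localized
        quality = min(height, sharp)                 # fuzzy AND (minimum)
        if quality >= min_quality:
            events.append((distance[i], j, quality))
    return events

# Synthetic retraction curve: smooth adhesion ramp plus one ~0.5 nN rupture
d = np.linspace(0, 100, 500)
f = -0.005 * d
f[250:] += 0.5  # the bond breaks: force jumps back toward the baseline
events = detect_ruptures(d, f)
```

Each accepted event carries its position, jump height (the binding force), and quality, so a force histogram can be built from many curves without manual inspection.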
Abstract:
Context: Ovarian tumor (OT) typing is a competency expected of pathologists, with significant clinical implications. OT, however, come in numerous different types, some rather rare, with the consequence that some departments have few opportunities for practice. Aim: Our aim was to design a tool for pathologists to train in typing less common OT. Method and Results: Representative slides of 20 less common OT were scanned (Nano Zoomer Digital Hamamatsu®) and the diagnostic algorithm proposed by Young and Scully was applied to each case (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235), to include: recognition of morphological pattern(s); shortlisting of differential diagnoses; and proposition of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several post-graduate training centers in Europe and Québec; improvement of its design based on the evaluation results; and diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as essential for a novice to become an expert. This project relies on virtual slide technology to provide pathologists with a learning tool aimed at increasing their skills in OT typing. After due evaluation, this model might be extended to other uncommon tumors.
Abstract:
Many clinical practice guidelines (CPG) have been published in response to the development of the concept of evidence-based medicine and as a solution to the difficulty of synthesizing and sorting the abundant medical literature. To choose among the profusion of new CPG, it is essential to evaluate their quality. Recently, the first standardized instrument for evaluating the quality of CPG, called "AGREE" (appraisal of guidelines for research and evaluation), was validated. Using the "AGREE" instrument, we compared the six main CPG published over the past decade on the treatment of schizophrenia: (1) the recommendations of the Agence nationale pour le développement de l'évaluation médicale (ANDEM); (2) The American Psychiatric Association (APA) practice guideline for the treatment of patients with schizophrenia; (3) The quick reference guide of the APA practice guideline for the treatment of patients with schizophrenia; (4) The schizophrenia patient outcomes research team (PORT) treatment recommendations; (5) The Texas medication algorithm project (T-MAP); and (6) The expert consensus guideline for the treatment of schizophrenia. The results of our study were then compared with those of a similar study published in 2005 by Gæbel et al., covering 24 CPG on the treatment of schizophrenia and likewise carried out with the "AGREE" instrument and two raters [Br J Psychiatry 187 (2005) 248-255]. Overall, the scores of the two studies were fairly close and the two global evaluations of the CPG converged: each of the six CPG is perfectible, with differing strengths and weaknesses.
The rigour of development of the six CPG is on the whole rather average, consideration of the opinions of potential users is incomplete, and greater effort in the presentation of the recommendations would facilitate their clinical use. The applicability of the recommendations is also given little consideration by the authors. Overall, two CPG stand out and can be strongly recommended according to the criteria of the "AGREE" instrument: the "APA quick reference guide" and the "T-MAP".
Abstract:
Trabecular bone score (TBS) is a recently developed analytical tool that performs novel grey-level texture measurements on lumbar spine dual X-ray absorptiometry (DXA) images, thereby capturing information relating to trabecular microarchitecture. For TBS to usefully add to bone mineral density (BMD) and clinical risk factors in osteoporosis risk stratification, it must be independently associated with fracture risk, be readily obtainable, and, ideally, reflect a risk that is amenable to osteoporosis treatment. This paper summarizes a review of the scientific literature performed by a Working Group of the European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis. Low TBS is consistently associated with an increase in both prevalent and incident fractures that is partly independent of both clinical risk factors and areal BMD (aBMD) at the lumbar spine and proximal femur. More recently, TBS has been shown to have predictive value for fracture independent of fracture probabilities derived using the FRAX® algorithm. Although TBS changes with osteoporosis treatment, the magnitude of change is less than that of aBMD of the spine, and it is not clear how change in TBS relates to fracture risk reduction. TBS may also have a role in the assessment of fracture risk in some causes of secondary osteoporosis (e.g., diabetes, hyperparathyroidism and glucocorticoid-induced osteoporosis). In conclusion, there is a role for TBS in fracture risk assessment in combination with both aBMD and FRAX.
Abstract:
Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. SR is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has been drawn to Total Variation (TV) energies because of their edge-preserving ability, but only standard explicit steepest-gradient techniques have been applied for optimization. In preliminary work, it was shown that novel fast convex optimization techniques can be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms to residual registration errors, and we present a novel strategy for automatically selecting the weight of the regularization term relative to the data fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
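A 1-D toy version of TV-regularized super-resolution makes the setup concrete: minimize ||Dx - y||² + λ·TV(x), where D averages pairs of high-resolution samples. Plain gradient descent on a smoothed TV term is used here for simplicity, not the fast convex schemes the paper advocates, and the operator and signal are assumptions for illustration:

```python
import numpy as np

def downsample(x, f=2):
    """Averaging operator D: each LR sample is the mean of f HR samples."""
    return x.reshape(-1, f).mean(axis=1)

def downsample_adjoint(r, f=2):
    """Adjoint D^T of the averaging operator."""
    return np.repeat(r, f) / f

def tv_super_resolve(y, f=2, lam=0.05, step=0.2, iters=2000, eps=1e-3):
    """Minimize ||D x - y||^2 + lam * TV(x) by gradient descent, with the
    TV term smoothed: |g| is replaced by sqrt(g^2 + eps)."""
    x = np.repeat(y, f)  # initial guess: nearest-neighbour upsampling
    for _ in range(iters):
        data_grad = 2.0 * downsample_adjoint(downsample(x, f) - y, f)
        g = np.diff(x)
        w = g / np.sqrt(g * g + eps)  # derivative of sqrt(g^2 + eps)
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= w
        tv_grad[1:] += w
        x -= step * (data_grad + lam * tv_grad)
    return x

# Piecewise-constant ground truth: TV regularization should keep the edge
hr = np.concatenate([np.zeros(16), np.ones(16)])
y = downsample(hr)
x = tv_super_resolve(y)
```

Because TV penalizes the sum of absolute differences rather than squared ones, the reconstruction keeps the sharp step rather than blurring it, which is the edge-preserving behavior that motivates TV in fetal MRI reconstruction.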