68 results for "Vision-based row tracking algorithm"


Relevance: 30.00%

Abstract:

MALDI-TOF MS can be used for the identification of microorganism species. We have extended its application to a novel assay of Candida albicans susceptibility to fluconazole, based on monitoring modifications of the proteome of yeast cells grown in the presence of varying drug concentrations. The method was accurate and reliable, and showed full agreement with the Clinical and Laboratory Standards Institute's reference method. This proof-of-concept demonstration highlights the potential for this approach to test other pathogens.

Relevance: 30.00%

Abstract:

PURPOSE: To assess how different diagnostic decision aids perform in terms of sensitivity, specificity, and harm. METHODS: Four diagnostic decision aids were compared, as applied to a simulated patient population: a findings-based algorithm following a linear or a branched pathway, a serial threshold-based strategy, and a parallel threshold-based strategy. Headache in immunocompromised HIV patients in a developing country was used as an example. Diagnoses included cryptococcal meningitis, cerebral toxoplasmosis, tuberculous meningitis, bacterial meningitis, and malaria. Data were derived from the literature and expert opinion. The validity of the diagnostic strategies was assessed in terms of sensitivity, specificity, and harm related to mortality and morbidity. Sensitivity analyses and Monte Carlo simulation were performed. RESULTS: The parallel threshold-based approach led to a sensitivity of 92% and a specificity of 65%. Sensitivities of the serial threshold-based approach and the branched and linear algorithms were 47%, 47%, and 74%, respectively, and the specificities were 85%, 95%, and 96%. The parallel threshold-based approach resulted in the least harm, with the serial threshold-based approach, the branched algorithm, and the linear algorithm being associated with 1.56-, 1.44-, and 1.17-times higher harm, respectively. Findings were corroborated by the sensitivity and Monte Carlo analyses. CONCLUSION: A threshold-based diagnostic approach is designed to find the optimal trade-off that minimizes expected harm, enhancing sensitivity and lowering specificity when appropriate, as in the given example of a symptom pointing to several life-threatening diseases. Findings-based algorithms, in contrast, solely consider clinical observations. A parallel workup, as opposed to a serial workup, additionally allows all potential diseases to be reviewed, further reducing false negatives. The parallel threshold-based approach may, however, perform less well in other disease settings.
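As a rough illustration of the strategies compared above, the following sketch implements a generic parallel threshold-based workup: every candidate diagnosis is checked against its own test and treatment probability thresholds at once, rather than one disease at a time. The disease names match the example, but all probabilities and thresholds here are invented for illustration and are not taken from the study.

```python
def parallel_workup(pretest, test_threshold, treat_threshold):
    """Return an action per disease: 'dismiss', 'test' or 'treat'.

    A disease whose pretest probability exceeds its treatment threshold is
    treated; one between the two thresholds is worked up further; one below
    the test threshold is dismissed. All diseases are assessed in parallel.
    """
    actions = {}
    for disease, p in pretest.items():
        if p >= treat_threshold[disease]:
            actions[disease] = "treat"
        elif p >= test_threshold[disease]:
            actions[disease] = "test"
        else:
            actions[disease] = "dismiss"
    return actions

# Illustrative numbers only (not from the paper):
pretest = {"cryptococcal meningitis": 0.30, "cerebral toxoplasmosis": 0.25,
           "bacterial meningitis": 0.05, "malaria": 0.10}
test_thr = {d: 0.08 for d in pretest}
treat_thr = {d: 0.60 for d in pretest}
print(parallel_workup(pretest, test_thr, treat_thr))
```

A serial workup would instead stop at the first diagnosis whose probability crosses a threshold; the parallel variant reviews every candidate, which is why it can reduce false negatives at the cost of more testing.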

Relevance: 30.00%

Abstract:

One of the key challenges in the field of nanoparticle (NP) analysis is producing reliable and reproducible characterisation data for nanomaterials. This study examines reproducibility using a relatively new, but rapidly adopted, technique, Nanoparticle Tracking Analysis (NTA), on a range of particle sizes and materials in several different media. It describes the protocol development and presents both the data and the analysis of results obtained from 12 laboratories, mostly based in Europe, which are primarily QualityNano members. QualityNano is an EU FP7-funded Research Infrastructure that integrates 28 European analytical and experimental facilities in nanotechnology, medicine and the natural sciences, with the goal of developing and implementing best practice and quality in all aspects of nanosafety assessment. The study examines both the development of the protocol and how it leads to highly reproducible results amongst participants; the parameter measured is the modal particle size.

Relevance: 30.00%

Abstract:

In this paper we present a new method to track bone movements in stereoscopic X-ray image series of the knee joint. The method is based on two different X-ray image sets: a rotational series of acquisitions of the still subject knee that will allow the tomographic reconstruction of the three-dimensional volume (model), and a stereoscopic image series of orthogonal projections as the subject performs movements. Tracking the movements of bones throughout the stereoscopic image series means to determine, for each frame, the best pose of every moving element (bone) previously identified in the 3D reconstructed model. The quality of a pose is reflected in the similarity between its simulated projections and the actual radiographs. We use direct Fourier reconstruction to approximate the three-dimensional volume of the knee joint. Then, to avoid the expensive computation of digitally rendered radiographs (DRR) for pose recovery, we reformulate the tracking problem in the Fourier domain. Under the hypothesis of parallel X-ray beams, we use the central-slice-projection theorem to replace the heavy 2D-to-3D registration of projections in the signal domain by efficient slice-to-volume registration in the Fourier domain. Focusing on rotational movements, the translation-relevant phase information can be discarded and we only consider scalar Fourier amplitudes. The core of our motion tracking algorithm can be implemented as a classical frame-wise slice-to-volume registration task. Preliminary results on both synthetic and real images confirm the validity of our approach.
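The Fourier-domain shortcut described above rests on the central-slice (Fourier-slice) theorem, which can be checked numerically in a few lines: under parallel-beam geometry, the 2D FFT of a projection of a volume equals the central slice of the volume's 3D FFT. This is only a toy verification on random data, not the authors' tracking pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))        # toy volume, stand-in for the knee model

projection = vol.sum(axis=2)          # parallel rays along the z axis
slice_2d = np.fft.fftn(vol)[:, :, 0]  # central slice k_z = 0 of the 3D FFT

# The theorem holds exactly in the discrete setting:
print(np.allclose(np.fft.fft2(projection), slice_2d))
```

Comparing a measured projection's 2D spectrum against slices of the model's 3D spectrum is what turns pose recovery into slice-to-volume registration in the Fourier domain.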

Relevance: 30.00%

Abstract:

MOTIVATION: Analysis of millions of pyro-sequences is currently playing a crucial role in the advance of environmental microbiology. Taxonomy-independent, i.e. unsupervised, clustering of these sequences is essential for the definition of Operational Taxonomic Units. For this application, reproducibility and robustness should be the most sought after qualities, but have thus far largely been overlooked. RESULTS: More than 1 million hyper-variable internal transcribed spacer 1 (ITS1) sequences of fungal origin have been analyzed. The ITS1 sequences were first properly extracted from 454 reads using generalized profiles. Then, otupipe, cd-hit-454, ESPRIT-Tree and DBC454, a new algorithm presented here, were used to analyze the sequences. A numerical assay was developed to measure the reproducibility and robustness of these algorithms. DBC454 was the most robust, closely followed by ESPRIT-Tree. DBC454 features density-based hierarchical clustering, which complements the other methods by providing insights into the structure of the data. AVAILABILITY: An executable is freely available for non-commercial users at ftp://ftp.vital-it.ch/tools/dbc454. It is designed to run under MPI on a cluster of 64-bit Linux machines running Red Hat 4.x, or on a multi-core OSX system. CONTACT: dbc454@vital-it.ch or nicolas.guex@isb-sib.ch.
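To make the idea of density-based clustering concrete, here is a minimal DBSCAN-style sketch in one dimension. It is not the DBC454 algorithm (whose density-based hierarchical, MPI-parallel details are in the paper), only an illustration of grouping points by local density rather than by a fixed similarity cutoff.

```python
def dbscan(points, eps, min_pts):
    """Label 1D points with cluster ids; -1 marks low-density noise."""
    labels = [None] * len(points)          # None = unvisited
    def neighbors(i):
        return [j for j, q in enumerate(points) if abs(points[i] - q) <= eps]
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:            # not a core point
            labels[i] = -1
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                       # grow the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster        # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:
                seeds.extend(jn)
        cluster += 1
    return labels

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 20.0]
print(dbscan(pts, eps=0.5, min_pts=2))
```

The two dense groups form clusters while the isolated point is flagged as noise; density-based methods owe much of their robustness to this refusal to force every point into a cluster.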

Relevance: 30.00%

Abstract:

The genotyping of human papillomaviruses (HPV) is essential for the surveillance of HPV vaccines. We describe and validate a low-cost PGMY-based PCR assay (PGMY-CHUV) for the genotyping of 31 HPV by reverse blotting hybridization (RBH). Genotype-specific detection limits were 50 to 500 genome equivalents per reaction. RBH was 100% specific and 98.61% sensitive using DNA sequencing as the gold standard (n = 1,024 samples). PGMY-CHUV was compared to the validated and commercially available linear array (Roche) on 200 samples. Both assays identified the same positive (n = 182) and negative samples (n = 18). Seventy-six percent of the positives were fully concordant after restricting the comparison to the 28 genotypes shared by both assays. At the genotypic level, agreement was 83% (285/344 genotype-sample combinations; κ of 0.987 for single infections and 0.853 for multiple infections). Fifty-seven of the 59 discordant cases were associated with multiple infections and with the weakest genotypes within each sample (P < 0.0001). PGMY-CHUV was significantly more sensitive for HPV56 (P = 0.0026) and could unambiguously identify HPV52 in mixed infections. PGMY-CHUV was reproducible on repeat testing (n = 275 samples; 392 genotype-sample combinations; κ of 0.933) involving different reagent lots and different technicians. Discordant results (n = 47) were significantly associated with the weakest genotypes in samples with multiple infections (P < 0.0001). Successful participation in proficiency testing also supported the robustness of this assay. The PGMY-CHUV reagent costs were estimated at $2.40 per sample using the least expensive yet proficient genotyping algorithm that also included quality control. This assay may be used in low-resource laboratories that have sufficient manpower and PCR expertise.

Relevance: 30.00%

Abstract:

A magnetic resonance imaging (MRI) pulse sequence and a corresponding image processing algorithm to localize prostate brachytherapy seeds during or after therapy are presented. Inversion-Recovery with ON-resonant water suppression (IRON) is an MRI methodology that generates positive contrast in regions of magnetic field susceptibility, such as those created by prostate brachytherapy seeds. Phantoms comprising several materials found in brachytherapy seeds were created to assess the usability of the IRON pulse sequence for imaging seeds. The resulting images show that seed materials are clearly visible with high contrast using IRON, in agreement with theoretical predictions. A seed localization algorithm to process IRON images demonstrates the potential of this imaging technique for seed localization and dosimetry.

Relevance: 30.00%

Abstract:

Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires having access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step-and-shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can be expressed through a linear matrix equation of the MU and dose per MU of every beamlet. Due to the positivity of the absorbed dose and MU values, this equation is solved for the MU values using a non-negative least-squares fit optimization algorithm (NNLS). The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU with the Monte Carlo/NNLS MU. Treatment plans for several localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to the ones calculated by the TPS and lead to a treatment dose distribution which is clinically equivalent to the one calculated by the TPS. This procedure can be used for IMRT QA, and further development could allow this technique to be used for other radiotherapy techniques such as tomotherapy or volumetric modulated arc therapy.
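The NNLS step described above can be sketched as follows: each column of a matrix D holds one beamlet's dose-per-MU distribution over the voxels, and non-negative MU values are fitted so that D times the MUs matches a target dose. The matrix and target below are synthetic stand-ins for the Monte Carlo data, not clinical values.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
D = rng.random((50, 4))           # 50 voxels x 4 beamlets: dose to water per MU
mu_true = np.array([10.0, 0.0, 5.0, 2.0])
d = D @ mu_true                   # "prescribed" dose built from known MUs

# Solve min ||D @ mu - d|| subject to mu >= 0:
mu_fit, residual = nnls(D, d)
print(mu_fit)                     # recovers mu_true; all components >= 0
```

Because absorbed dose and MUs are physically non-negative, NNLS is the natural solver here; an unconstrained least-squares fit could return meaningless negative monitor units for weakly contributing beamlets.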

Relevance: 30.00%

Abstract:

Voxel-based morphometry from conventional T1-weighted images has proved effective in quantifying Alzheimer's disease (AD)-related brain atrophy and in enabling fairly accurate automated classification of AD patients, patients with mild cognitive impairment (MCI), and elderly controls. Little is known, however, about the classification power of volume-based morphometry, where the features of interest consist of a few brain structure volumes (e.g. hippocampi, lobes, ventricles) as opposed to hundreds of thousands of voxel-wise gray matter concentrations. In this work, we experimentally evaluate two distinct volume-based morphometry algorithms (FreeSurfer and an in-house algorithm called MorphoBox) for automatic disease classification on a standardized data set from the Alzheimer's Disease Neuroimaging Initiative. Results indicate that both algorithms achieve classification accuracy comparable to the conventional whole-brain voxel-based morphometry pipeline using SPM for AD vs elderly controls and MCI vs controls, and higher accuracy for classification of AD vs MCI and of early vs late AD converters, thereby demonstrating the potential of volume-based morphometry to assist diagnosis of mild cognitive impairment and Alzheimer's disease.

Relevance: 30.00%

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulation through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow for obtaining remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
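The core statistical ingredient, estimating hydraulic conductivity at unsampled locations from the collocated electrical conductivity through a kernel density description of their relationship, can be caricatured with a one-dimensional Nadaraya-Watson kernel regression. The K-sigma relation, noise level and bandwidth below are all invented for illustration; the actual method uses a multivariate kernel density within a Bayesian sequential simulation.

```python
import numpy as np

def kde_regress(s_query, s_train, y_train, h=0.01):
    """Nadaraya-Watson estimate of y at s_query using a Gaussian kernel."""
    w = np.exp(-0.5 * ((s_query - s_train) / h) ** 2)  # kernel weights
    return float((w * y_train).sum() / w.sum())

# Invented collocated training data: log10(K) rises linearly with sigma.
rng = np.random.default_rng(3)
sigma = rng.uniform(0.01, 0.1, 200)                     # electrical conductivity
logK = -3.0 + 20.0 * sigma + rng.normal(0.0, 0.1, 200)  # log10 hydraulic cond.

print(kde_regress(0.05, sigma, logK))  # near -3 + 20 * 0.05 = -2
```

The kernel-density formulation avoids committing to a parametric K-sigma law, which is what makes it usable when the in situ relationship is non-linear and site-specific.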

Relevance: 30.00%

Abstract:

In this paper, we present the segmentation of the head and neck lymph node regions using a new active contour-based atlas registration model. We propose to segment the lymph node regions without directly including them in the atlas registration process; instead, they are segmented using the dense deformation field computed from the registration of the atlas structures with distinct boundaries. This approach results in robust and accurate segmentation of the lymph node regions even in the presence of significant anatomical variations between the atlas image and the patient's image to be segmented. We also present a quantitative evaluation of lymph node region segmentation using various statistical as well as geometrical metrics: sensitivity, specificity, Dice similarity coefficient and Hausdorff distance. A comparison of the proposed method with two other state-of-the-art methods is presented. The robustness of the proposed method to the atlas selection, in segmenting the lymph node regions, is also evaluated.
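The two geometrical metrics named above are easy to state in code. The following sketch computes the Dice similarity coefficient and the (symmetric) Hausdorff distance for toy 2D binary masks; the paper applies them to 3D lymph node segmentations, but the definitions are the same.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foregrounds of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping 3x3 squares, offset by one pixel in each direction:
a = np.zeros((8, 8), dtype=bool); a[2:5, 2:5] = True
b = np.zeros((8, 8), dtype=bool); b[3:6, 3:6] = True
print(round(float(dice(a, b)), 3), round(float(hausdorff(a, b)), 3))
```

Dice rewards volume overlap while the Hausdorff distance penalises the worst boundary disagreement, which is why segmentation studies usually report both.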

Relevance: 30.00%

Abstract:

For radiotherapy treatment planning of retinoblastoma in childhood, computed tomography (CT) represents the standard method for tumor volume delineation, despite some inherent limitations. The CT scan is very useful in providing information on physical density for dose calculation and morphological volumetric information, but presents a low sensitivity in assessing tumor viability. On the other hand, 3D ultrasound (US) allows a highly accurate definition of the tumor volume thanks to its high spatial resolution, but it is not currently integrated in treatment planning and is used only for diagnosis and follow-up. Our ultimate goal is an automatic segmentation of the gross tumor volume (GTV) in the 3D US, the segmentation of the organs at risk (OAR) in the CT, and the registration of both. In this paper, we present some preliminary results in this direction. We present 3D active contour-based segmentation of the eyeball and the lens in CT images; the presented approach incorporates prior knowledge of the anatomy by using a 3D geometrical eye model. The automated segmentation results are validated by comparison with manual segmentations. Then, for the fusion of 3D CT and US images, we present two approaches: (i) landmark-based transformation, and (ii) object-based transformation that makes use of eyeball contour information on CT and US images.
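Option (i), a landmark-based transformation, is commonly estimated with the Kabsch algorithm: given corresponding landmark pairs, recover the rigid rotation and translation that best align them in the least-squares sense. The sketch below uses synthetic 3D points, not real CT/US eye landmarks, and is not necessarily the exact formulation the authors use.

```python
import numpy as np

def rigid_from_landmarks(src, dst):
    """Kabsch: least-squares rigid transform (R, t) with dst ≈ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Synthetic landmarks: rotate by 0.3 rad about z, then translate.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
dst = src @ R_true.T + np.array([2.0, -1.0, 0.5])

R, t = rigid_from_landmarks(src, dst)
print(np.allclose(R, R_true), np.allclose(R @ src.T + t[:, None], dst.T))
```

With noisy landmarks the same formula still gives the least-squares optimal rigid fit, which is what makes it a standard first step for multimodal fusion.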

Relevance: 30.00%

Abstract:

Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models.
A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales. (C) 2012 Elsevier B.V. All rights reserved.
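The fixed-lid planar water-surface approximation mentioned above amounts to a one-line depth estimate: depth is a planar (linearly sloping) water-surface elevation minus the bed elevation, clipped at zero for dry cells. The elevations, slope and spacing below are invented for illustration, not Rio Parana data.

```python
def depth_from_planar_surface(bed, surface_upstream, slope, dx):
    """Flow depths along a reach under a fixed planar water surface.

    bed: bed elevations (m) at cells spaced dx metres apart; the water
    surface falls linearly downstream at `slope` (m/m) from its upstream
    elevation. Depths are clipped at zero where the bed sits above water.
    """
    return [max(surface_upstream - slope * i * dx - z, 0.0)
            for i, z in enumerate(bed)]

bed = [1.0, 0.5, 0.8, 1.2, 3.0]          # last cell lies above the water
depths = depth_from_planar_surface(bed, surface_upstream=2.0,
                                   slope=1e-5, dx=100.0)
print([round(d, 3) for d in depths])
```

Because water-surface slopes in large sand-bed rivers are tiny (of order 1e-5), the planar lid introduces little depth error while sparing the RC model any free-surface computation.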

Relevance: 30.00%

Abstract:

Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem where the regularization term plays a central role in the reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy, Total Variation (TV)-based energies and, more recently, non-local means. Although TV energies are quite attractive because of their ability to preserve edges, only standard explicit steepest-gradient techniques have been applied to optimize fetal-based TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal with respect to the asymptotic and iterative convergence speeds, O(1/n^2) and O(1/√ε), whereas existing techniques are in O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
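The O(1/n^2) iterative rate quoted above is that of Nesterov/FISTA-type acceleration. As a hedged illustration (on a plain least-squares surrogate rather than the actual TV energy, whose proximal step is more involved), the scheme alternates a gradient step with a momentum extrapolation:

```python
import numpy as np

# Synthetic, consistent least-squares problem standing in for the data term.
rng = np.random.default_rng(2)
A = rng.random((20, 5))
b = A @ rng.random(5)
L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of the gradient

x = y = np.zeros(5)
t = 1.0
for _ in range(500):
    x_new = y - (A.T @ (A @ y - b)) / L          # gradient (prox = identity here)
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2     # Nesterov step-size sequence
    y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
    x, t = x_new, t_new

print(float(np.linalg.norm(A @ x - b)))  # residual driven essentially to zero
```

For a TV energy, the gradient step is replaced by a proximal step on the TV term; the momentum sequence is what lifts the worst-case rate from O(1/n) for plain gradient descent to O(1/n^2).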