998 results for Diagnostic Algorithms
Abstract:
Two different types of immunological reaction are of assistance in the diagnosis of cancer. The first is the detection of a weak immunological response of the patient toward his or her own tumor cells. Unfortunately, the currently available techniques for demonstrating a humoral or cellular immunological reaction against autologous tumor cells are not reproducible enough to be recommended as routine clinical tests. The second is the use of antisera, obtained by immunizing animals with human tumor extracts, to detect substances released into the blood by tumor cells. The two major antigens associated with human cancer that can be measured in the blood by very sensitive immunological methods are alpha-fetoprotein (AFP) and carcinoembryonic antigen (CEA). It is very important for the physician to be fully aware of the usefulness and limitations of such tests in order to interpret them correctly. Clinical situations in which the measurement of AFP and CEA can provide useful information are reviewed.
Abstract:
This work aimed at assessing the doses delivered to paediatric patients in Switzerland during computed tomography (CT) examinations of the brain, chest and abdomen, and at establishing diagnostic reference levels (DRLs) for various age groups. Forms were sent to the ten centres performing CT on children, addressing patient demographics, the indication, and the scanning parameters: number of series, kilovoltage, tube current, rotation time, reconstruction slice thickness, pitch, volume CT dose index (CTDI(vol)) and dose-length product (DLP). Per age group (youngest to oldest), the proposed DRLs in terms of CTDI(vol) are: brain 20, 30, 40 and 60 mGy; chest 5, 8, 10 and 12 mGy; abdomen 7, 9, 13 and 16 mGy. In terms of DLP they are: brain 270, 420, 560 and 1,000 mGy cm; chest 110, 200, 220 and 460 mGy cm; abdomen 130, 300, 380 and 500 mGy cm. An optimisation process should be initiated to reduce the spread in dose recorded in this study. A major element of this process should be the use of DRLs.
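Such DRLs lend themselves to a simple lookup-and-compare check. The sketch below is a minimal illustration, not part of the study: the age-group indices (0-3, youngest to oldest) and the function name are hypothetical; the numbers are those proposed above.

```python
# DRLs proposed in the abstract, ordered youngest to oldest age group.
DRL_CTDI_VOL = {"brain": (20, 30, 40, 60),   # mGy
                "chest": (5, 8, 10, 12),
                "abdomen": (7, 9, 13, 16)}
DRL_DLP = {"brain": (270, 420, 560, 1000),   # mGy*cm
           "chest": (110, 200, 220, 460),
           "abdomen": (130, 300, 380, 500)}

def exceeds_drl(region, age_group, ctdi_vol, dlp):
    """Return which dose quantities exceed the proposed DRL for this exam."""
    flags = []
    if ctdi_vol > DRL_CTDI_VOL[region][age_group]:
        flags.append("CTDIvol")
    if dlp > DRL_DLP[region][age_group]:
        flags.append("DLP")
    return flags

# Example: a chest CT in the second age group delivering 9 mGy / 250 mGy*cm
print(exceeds_drl("chest", 1, ctdi_vol=9.0, dlp=250.0))  # ['CTDIvol', 'DLP']
```

Note that DRLs are investigation levels for typical practice, not dose limits for individual patients; exceeding one should trigger a review of the protocol rather than of a single examination.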
Abstract:
The noise power spectrum (NPS) is the reference metric for understanding the noise content of computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes, with an identical CT dose index on both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and 2D and 3D NPSs were then computed. In axial acquisition mode, the 2D axial NPS showed a marked variation in magnitude along the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed, while the magnitude of the NPS remained constant. The reconstruction filter, pitch and reconstruction algorithm all had a strong effect on the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak towards the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by interpolation when compared with the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using a local 3D NPS metric; however, the impact of noise non-stationarity may need further investigation.
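For reference, the 2D NPS of a set of noise-only regions of interest (ROIs) is conventionally estimated as the ensemble-averaged squared magnitude of the Fourier transform of the mean-subtracted ROIs, scaled by the pixel area. A minimal numpy sketch under that convention (the function name and the simple mean-subtraction detrending are illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def nps_2d(rois, pixel_size):
    """Ensemble 2D noise power spectrum from noise-only ROIs (e.g. a water phantom).

    rois       : array of shape (n_rois, ny, nx), HU values
    pixel_size : pixel pitch in mm (square pixels assumed)
    """
    rois = np.asarray(rois, dtype=float)
    _, ny, nx = rois.shape
    spectra = []
    for roi in rois:
        detrended = roi - roi.mean()                 # remove the mean (DC) level
        f = np.fft.fftshift(np.fft.fft2(detrended))  # center zero frequency
        spectra.append(np.abs(f) ** 2)
    # pixel area over number of samples: standard NPS normalisation (mm^2 HU^2)
    return (pixel_size ** 2 / (nx * ny)) * np.mean(spectra, axis=0)

# Frequency axes: np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size)), in mm^-1.
```

The local 3D NPS is the direct extension of this estimator to small volumes, using np.fft.fftn and a voxel-volume normalisation.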
Abstract:
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done using a model observer that yields a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of this figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although computing the MTF poses no problem with the traditional filtered back-projection (FBP) algorithm, this is not the case with iterative reconstruction (IR) algorithms such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express the system resolution even with non-linear algorithms, we tuned the NPW model observer by replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed to obtain the TTF. As expected, the first results showed that the TTF depends on image contrast and noise levels for both ASIR and MBIR; FBP also proved to be contrast- and noise-dependent when the lung kernel was used. These results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
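In one common frequency-domain form (radially symmetric task), the NPW figure of merit combines the task function, the transfer function and the NPS; substituting the TTF for the MTF as proposed here gives the sketch below. This is an illustrative formulation under standard assumptions (ICRU-style NPW observer, radial integration), not the authors' exact implementation, and all names are hypothetical.

```python
import numpy as np

def npw_snr2(f, w_task, ttf, nps):
    """Squared NPW SNR on a radial frequency grid, with the TTF in place of the MTF.

    f      : radial spatial frequencies (mm^-1)
    w_task : task function W(f), the Fourier transform of the object contrast
    ttf    : target transfer function sampled on f
    nps    : noise power spectrum sampled on f
    """
    signal2 = (w_task * ttf) ** 2                     # expected in-image signal power
    num = np.trapz(signal2 * 2.0 * np.pi * f, f) ** 2  # matched-filter response
    den = np.trapz(signal2 * nps * 2.0 * np.pi * f, f)  # noise through the template
    return num / den
```

Because the TTF of IR algorithms depends on contrast and dose, it has to be measured per condition (per insert and dose level) before being fed into such a computation.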
Abstract:
Our goal was to evaluate the diagnostic utility of C-reactive protein (CRP), alone or combined with clinical probability assessment, in patients with suspected pulmonary embolism (PE), and to compare its performance with a D-dimer assay. We conducted a prospective study in which we performed a common immuno-turbidimetric CRP test and a rapid enzyme-linked immunosorbent assay (ELISA) D-dimer test in 259 consecutive outpatients with suspected PE at the emergency department of a teaching hospital. We assessed the clinical probability of PE by a validated prediction rule overridden by clinical judgment. Patients with D-dimer levels >= 500 µg/l underwent a work-up consisting of lower-limb venous ultrasound, spiral computerized tomography, ventilation-perfusion scan, or pulmonary angiography. Patients were followed up for three months. Seventy-seven (30%) of the patients had PE. CRP alone had a sensitivity of 84% (95% confidence interval [CI]: 74 to 92%) and a negative predictive value (NPV) of 87% (95% CI: 78 to 93%) at a cutpoint of 5 mg/l. Overall, 61 (24%) patients with a low clinical probability of PE had a CRP < 5 mg/l. Due to the low prevalence of PE (9%) in this subgroup, the NPV increased to 97% (95% CI: 89 to 100%). The D-dimer (cutpoint 500 µg/l) showed a sensitivity of 100% (95% CI: 95 to 100%) and an NPV of 100% (95% CI: 94 to 100%) irrespective of clinical probability, and accurately ruled out PE in 56 (22%) patients. Standard CRP tests, alone or combined with clinical probability assessment, cannot safely exclude PE.
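The jump in NPV from 87% overall to 97% in the low-probability subgroup is a direct consequence of Bayes' rule: for a fixed sensitivity and specificity, NPV rises as prevalence falls. A small sketch makes this explicit (the specificity value is an assumed placeholder for illustration; the abstract does not report it):

```python
def npv(sensitivity, specificity, prevalence):
    """Negative predictive value via Bayes' rule: TN / (TN + FN)."""
    tn = specificity * (1.0 - prevalence)   # true-negative fraction of patients
    fn = (1.0 - sensitivity) * prevalence   # false-negative fraction of patients
    return tn / (tn + fn)

# CRP sensitivity of 0.84 from the abstract; specificity of 0.40 is assumed.
print(round(npv(0.84, 0.40, 0.30), 2))  # ~0.85 at the overall 30% prevalence
print(round(npv(0.84, 0.40, 0.09), 2))  # ~0.96 at the subgroup's 9% prevalence
```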
Abstract:
In young people, the most frequent cause of isolated monocular visual loss due to an optic neuropathy is optic neuritis. We present the case of a 27-year-old woman who presented with monocular visual loss, excruciating orbital pain and an unusual temporal headache. The initial diagnosis of optic neuritis later proved to be a posterior ischemic optic neuropathy (PION). In this case, PION was the sole presenting sign of a non-traumatic carotid dissection and was followed 24 h later by an ischemic stroke. Sudden monocular visual loss associated with a new-onset headache is a clinical presentation that should immediately raise suspicion of a carotid dissection.
Abstract:
Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some are based on randomization techniques, others on k-anonymity concepts; both can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on the two techniques for producing an anonymized graph with a desired k-anonymity value, analyzing both the complexity of these methods in generating anonymized graphs and the quality of the resulting graphs.
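On graphs, k-anonymity is most often instantiated as k-degree anonymity: every degree value in the graph must be shared by at least k vertices, so that no vertex can be re-identified by its degree alone. The abstract does not fix a single definition, so the following check is only a minimal illustration of that common variant:

```python
from collections import Counter

def degree_anonymity(adjacency):
    """Largest k for which the graph is k-degree anonymous.

    adjacency : dict mapping each vertex to the set of its neighbours
    """
    degree_counts = Counter(len(neigh) for neigh in adjacency.values())
    return min(degree_counts.values())

# A 4-cycle: every vertex has degree 2, so the graph is 4-degree anonymous.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(degree_anonymity(g))  # 4
```

Roughly, randomization-based methods perturb edges and measure the achieved anonymity afterwards, while k-anonymity-based methods modify the graph until a target k is reached; both trade the anonymity level against the structural utility of the resulting graph.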
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features (geo-features). They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modeling, prediction and automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations, which helps to detect spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic: the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed during the last 15 years and used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
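The GRNN behind the automatic-mapping results is, at its core, Nadaraya-Watson kernel regression: each prediction is a Gaussian-kernel weighted average of the training observations, controlled by a single bandwidth parameter. A minimal sketch for spatial data follows (an illustrative implementation, not the Machine Learning Office code; names are hypothetical):

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """GRNN prediction, i.e. Nadaraya-Watson kernel regression.

    x_train : (n, d) training coordinates, e.g. geographic (x, y)
    y_train : (n,) observed values at the training locations
    x_query : (m, d) locations where predictions are wanted
    sigma   : Gaussian kernel bandwidth, usually tuned by cross-validation
    """
    # squared Euclidean distances between every query and training point
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights, shape (m, n)
    return (w @ y_train) / w.sum(axis=1)   # kernel-weighted average per query
```

Having sigma as the only hyperparameter is what makes the GRNN attractive for automatic mapping: it can be selected by cross-validation without user interaction, which matters in emergency conditions.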
Abstract:
BACKGROUND: The Advisa MRI system is designed to safely undergo magnetic resonance imaging (MRI). Its influence on image quality is not well known. OBJECTIVE: To evaluate cardiac magnetic resonance (CMR) image quality and to characterize myocardial contraction patterns by using the Advisa MRI system. METHODS: In this international trial with 35 participating centers, an Advisa MRI system was implanted in 263 patients. Of those, 177 were randomized to the MRI group and 150 underwent MRI scans at the 9-12-week visit. Left ventricular (LV) and right ventricular (RV) cine long-axis steady-state free precession MR images were graded for quality. Signal loss along the implantable pulse generator and leads was measured. The tagging CMR data quality was assessed as the percentage of trackable tagging points on complementary spatial modulation of magnetization acquisitions (n = 16) and segmental circumferential fiber shortening was quantified. RESULTS: Of all cine long-axis steady-state free precession acquisitions, 95% of LV and 98% of RV acquisitions were of diagnostic quality, with 84% and 93%, respectively, being of good or excellent quality. Tagging points were trackable from systole into early diastole (360-648 ms after the R-wave) in all segments. During RV pacing, tagging demonstrated a dyssynchronous contraction pattern, which was not observed in nonpaced (n = 4) and right atrial-paced (n = 8) patients. CONCLUSIONS: In the Advisa MRI study, high-quality CMR images for the assessment of cardiac anatomy and function were obtained in most patients with an implantable pacing system. In addition, this study demonstrated the feasibility of acquiring tagging data to study the LV function during pacing.
Abstract:
Adverse weather conditions dramatically affect the nation’s surface transportation system. The development of a prototype winter Maintenance Decision Support System (MDSS) is part of the Federal Highway Administration’s effort to give winter road maintenance managers a decision-support tool that helps make highways safer for the traveling public. The MDSS is based on leading diagnostic and prognostic weather research capabilities and road condition algorithms being developed at national research centers. In 2003, the Iowa Department of Transportation was chosen as a field test bed for the continuing development of this research program. The Center for Transportation Research and Education assisted the Iowa Department of Transportation by collecting and analyzing surface condition data. The Federal Highway Administration also selected five national research centers to participate in the development of the prototype MDSS. It is anticipated that components of the prototype MDSS developed by this project will ultimately be deployed by road operating agencies, including state departments of transportation, and will generally be supplied by private vendors.
Abstract:
Common variable immunodeficiency (CVID) is, after selective IgA deficiency, the most frequent primary immunodeficiency syndrome. Because of its variable clinical manifestations and age of onset, CVID can mimic various other pathologies and is therefore frequently diagnosed at a late stage of the disease. As a consequence of late diagnosis, irreversible organ damage may already have occurred that early treatment could have prevented. Early diagnosis of CVID by the general practitioner in patients with recurrent infections or other typical clinical manifestations is therefore of great importance.