998 results for Applied statistics
Abstract:
The noise power spectrum (NPS) is the reference metric for characterizing the noise content of computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64- and a 128-MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformatting process. Both 2D and 3D NPSs were then computed. In axial acquisition mode, the 2D axial NPS showed a marked magnitude variation along the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed, while the NPS magnitude remained constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak toward the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by interpolation compared with the 2D coronal NPS obtained from the 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using a local 3D NPS metric; however, the impact of noise non-stationarity may need further investigation.
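As an illustration of how a local 2D NPS of the kind described above can be estimated, here is a minimal sketch using periodogram averaging over mean-subtracted ROIs from a uniform water phantom. The function name, square-pixel assumption and normalization choices are ours, not the authors':

```python
import numpy as np

def nps_2d(rois, pixel_size):
    """Estimate a local 2D noise power spectrum from uniform-phantom ROIs.

    rois: iterable of equally sized 2D arrays (noise patches)
    pixel_size: pixel spacing in mm (square pixels assumed)
    """
    rois = [np.asarray(r, dtype=float) for r in rois]
    ny, nx = rois[0].shape
    acc = np.zeros((ny, nx))
    for roi in rois:
        noise = roi - roi.mean()      # remove the mean (DC) level
        f = np.fft.fft2(noise)        # 2D discrete Fourier transform
        acc += np.abs(f) ** 2         # accumulate the periodogram
    # normalize by pixel area over sample count, average over ROIs
    return (pixel_size ** 2) / (nx * ny) * acc / len(rois)
```

With this normalization, integrating the NPS over spatial frequency recovers the noise variance, which is a convenient sanity check.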
Abstract:
The aim of this work was the identification of new metabolites and transformation products (TPs) in chicken muscle of Enrofloxacin (ENR), Ciprofloxacin (CIP), Difloxacin (DIF) and Sarafloxacin (SAR), antibiotics belonging to the fluoroquinolone family. The stability of ENR, CIP, DIF and SAR standard solutions against a pH degradation process (from pH 1.5 to 8.0, simulating the pH from the administration of the drug until its excretion) and freeze-thaw (F/T) cycles was tested. In addition, chicken muscle samples from animals medicated with ENR were analyzed in order to identify new metabolites and TPs. The identification of the different metabolites and TPs was accomplished by comparing mass spectral data from samples and blanks, using liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (LC-QqToF) with the Multiple Mass Defect Filter (MMDF) technique as a pre-filter to remove most of the background noise and endogenous components. Confirmation and structure elucidation were performed by liquid chromatography coupled to a linear ion trap quadrupole Orbitrap (LC-LTQ-Orbitrap), owing to its mass accuracy and its MS/MS capability for elemental composition determination. As a result, 21 TPs from ENR, 6 from CIP, 14 from DIF and 12 from SAR were identified, arising from the pH shock and F/T cycles. In addition, 14 metabolites were identified in the medicated chicken muscle samples. The formation of CIP and SAR from ENR and DIF, respectively, and the formation of desethylene-quinolone were the most remarkable compounds identified.
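The MMDF pre-filter mentioned above exploits the fact that metabolites and TPs usually retain a mass defect (the fractional part of the mass) close to that of the parent drug. A simplified single-parent sketch; the function name, window width and m/z values are illustrative, not taken from the study:

```python
def mass_defect_filter(mz_values, parent_mz, window=0.05):
    """Keep m/z values whose mass defect lies within `window` Da of the
    parent compound's mass defect, discarding most background and
    endogenous ions before the metabolite search (simplified MMDF)."""
    parent_defect = parent_mz - int(parent_mz)
    kept = []
    for mz in mz_values:
        defect = mz - int(mz)
        # compare defects on a circular scale so 0.99 and 0.01 are close
        diff = abs(defect - parent_defect)
        if min(diff, 1.0 - diff) <= window:
            kept.append(mz)
    return kept
```

A real MMDF uses several such windows (for phase I and phase II transformations); this shows only the core filtering idea.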
Abstract:
Context: Ovarian tumor (OT) typing is a competency expected of pathologists, with significant clinical implications. OTs, however, come in numerous different types, some rather rare, with the consequence that some departments have few opportunities for practice. Aim: Our aim was to design a tool for pathologists to train in the typing of less common OTs. Method and Results: Representative slides of 20 less common OTs were scanned (NanoZoomer Digital, Hamamatsu®) and the diagnostic algorithm proposed by Young and Scully was applied to each case (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235), to include: recognition of morphological pattern(s); shortlisting of differential diagnoses; and proposition of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several post-graduate training centers in Europe and Québec; improvement of its design based on the evaluation results; and diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as being of utmost importance for a novice to become an expert. This project relies on virtual slide technology to provide pathologists with a learning tool aimed at increasing their skills in OT typing. After due evaluation, this model might be extended to other uncommon tumors.
Abstract:
This publication is a historical record of the most requested statistics on vital events and a source of information that can be used in further analysis.
Abstract:
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of the different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity-modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underscoring the large uncertainties in absolute risk calculation. The ratio of risks between two techniques has often been proposed as a more robust assessment than the absolute risk. We found that this ratio could also vary substantially depending on the approach to risk estimation. In some cases the ratio ranged between values smaller and larger than one, which translates into inconsistent conclusions about the potentially higher risk of one technique compared to another. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared with the other techniques investigated, even though the magnitude of this reduction varied substantially across the approaches investigated.
Based on the available epidemiological data, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (the lungs and contralateral breast in the case of breast radiotherapy), as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
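As a schematic of how organ-specific linear risk models of the ICRP/BEIR VII type combine mean organ doses into an absolute risk, and how a risk ratio between two techniques follows from it, here is a minimal sketch. The organ list and risk-per-Gy coefficients are placeholders for illustration only, not actual ICRP or BEIR VII values:

```python
# Hypothetical coefficients: lifetime attributable risk per Gy of mean
# organ dose. NOT real ICRP/BEIR VII numbers -- illustration only.
RISK_PER_GY = {"lung": 0.010, "contralateral_breast": 0.008, "thyroid": 0.002}

def linear_risk(mean_doses_gy):
    """Absolute risk as the sum over organs of coefficient times dose."""
    return sum(RISK_PER_GY[organ] * dose for organ, dose in mean_doses_gy.items())

def risk_ratio(doses_a, doses_b):
    """Ratio of absolute risks of two techniques under the same model."""
    return linear_risk(doses_a) / linear_risk(doses_b)
```

The instability described in the abstract arises because different choices of model (linear vs. non-linear) and organ set change both numerator and denominator of such a ratio.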
Abstract:
This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence that is particularly concerned with developing techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to be applied to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographic coordinates. Moreover, they are ideally suited for implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, by way of automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographic coordinates, but they are indispensable for high-dimensional data including geo-features. The most important and most popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences.
The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organizing maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach, using experimental variography, and according to the principles of machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool of the geostatistical analysis of anisotropic spatial correlations that detects the presence of spatial patterns describable by a statistic. The machine learning approach to ESDA is presented through the application of the k-nearest neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools and guided examples. An important part of the work consists of a collection of software tools, Machine Learning Office. This software collection was developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a vast spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to create a user-friendly, easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence. It mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for environmental data mining purposes including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The machine learning algorithms and models most important and popular for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those described by two-point statistics.
A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, soil type and hydrogeological unit classification, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
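The GRNN proposed for automatic mapping is mathematically equivalent to Nadaraya-Watson kernel regression: each prediction is a Gaussian-kernel weighted average of the training targets. A minimal numpy sketch with a single isotropic bandwidth sigma; the function name is ours:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """General Regression Neural Network (Nadaraya-Watson kernel regression).

    x_train: (n, d) training coordinates; y_train: (n,) training targets
    x_query: (m, d) prediction locations; sigma: Gaussian bandwidth
    """
    x_train = np.asarray(x_train, float)
    y_train = np.asarray(y_train, float)
    x_query = np.asarray(x_query, float)
    # squared Euclidean distances between every query and training point
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)   # weighted average of targets
```

The single bandwidth sigma is the model's only free parameter, which is one reason the GRNN lends itself to automatic mapping: it can be tuned by cross-validation without user intervention.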
Abstract:
Survival statistics for the incident cases of the Vaud Cancer Registry over the period 1974-1980 were computed on the basis of an active follow-up with verification of vital status as of December 31, 1984. Product-moment crude and relative 5- to 10-year rates are presented in separate strata of sex, age and area of residence (urban or rural). Most of the rates are comparable with those in other published series from North America or Europe, but survival from gastric cancer (24% 5-year relative rate) tended to be higher, and that from bladder cancer (about 30%) lower, than in most other datasets. No significant difference in survival emerged according to residence in urban Lausanne vs. the surrounding (rural) areas. Interesting indications according to subsite (higher survival for the pyloric region vs. the gastric fundus, but no substantial differences among colon subsites), histology (higher rates for squamous carcinomas of the lung, seminomas of the testis and chronic lymphatic leukemias as compared with other histotypes) and site of origin (higher survival for lower-limb melanomas) require further quantitative assessment in other population-based series. A Cox proportional hazards model applied to melanomatous skin cancers showed an independent favorable effect on long-term prognosis of female gender, and adverse implications of advanced age, stage at diagnosis and a tumor site other than the lower limb.
Abstract:
Background: HAART has contributed to decreasing HIV-related mortality and morbidity. However, the prevalence of HIV-associated neurocognitive disorders (HAND) seems to have increased. The aim of this study was to determine the prevalence of cognitive complaints and of HAND in a cohort of aviremic HIV+ patients in the south-western part of Switzerland. Design/Methods: Two hundred HIV+ patients who had (1) undetectable HIV RNA concentrations in the plasma for ≥3 months, (2) no history of major opportunistic infection of the CNS in the past three years, (3) no current use of IV drugs and (4) no signs of major depression according to DSM-IV criteria answered a questionnaire designed to elicit cognitive complaints. The cognitive functions of a subset of HIV+ patients with or without cognitive complaints were assessed using the HIV Dementia Scale (HDS) and a battery of neuropsychological tests evaluating subcortical functions. Cognitive impairment was defined according to the revised diagnostic criteria for HAND. Non-parametric tests were used for statistics, and a Bonferroni-corrected standard p level of p<0.002 was applied for multiple comparisons. Results: The prevalence of cognitive complaints was 27% (54 patients) among the 200 questioned patients. At the time of writing this abstract, the cognitive functions of 50 complaining and 28 non-complaining aviremic patients had been assessed with the HDS and the full neuropsychological battery. The prevalence of HAND producing at least mild interference in daily functioning (mild neurocognitive disorder [MND] or HIV-associated dementia [HAD]) was 44% (34/78 patients) in the group who underwent neuropsychological testing. Objective evidence of HAND was more frequent in complaining than in non-complaining patients (p<0.001). Using a ROC curve, a cut-off of 13 on the HDS was found to have a sensitivity of 74% and a specificity of 71% (p<0.001) for the diagnosis of HAND.
A trend toward lower CNS penetration-effectiveness scores for HAART in patients with MND or HAD compared with the others was present (1.5±0.6 vs. 1.9±0.6; p=0.006 [Bonferroni correction]). Conclusions/Relevance: So far, our results suggest that (1) the prevalence of HAND is high in HIV+ patients with long-term suppression of viremia, and (2) cognitive complaints expressed by aviremic HIV+ patients should be carefully investigated, as they correlate with objective evidence of cognitive decline on neuropsychological testing. HAART with high CNS penetration-effectiveness may contribute to preventing HAND. Funding: Swiss HIV Cohort Study.
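The sensitivity and specificity reported for the HDS cut-off can be illustrated with a small sketch. The data, the function name, and the convention that scores at or below the cut-off screen positive (lower HDS scores indicate more impairment) are our assumptions for illustration:

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of a screening score at a given cutoff.

    labels: 1 = condition present (here, HAND), 0 = absent.
    A score at or below the cutoff is treated as a positive screen.
    """
    tp = sum(1 for s, l in zip(scores, labels) if l == 1 and s <= cutoff)
    fn = sum(1 for s, l in zip(scores, labels) if l == 1 and s > cutoff)
    tn = sum(1 for s, l in zip(scores, labels) if l == 0 and s > cutoff)
    fp = sum(1 for s, l in zip(scores, labels) if l == 0 and s <= cutoff)
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the cutoff over all observed scores and plotting sensitivity against (1 - specificity) yields the ROC curve from which such an optimal cut-off is chosen.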
Abstract:
The aim of this study was to develop an ambulatory system for three-dimensional (3D) knee kinematics evaluation that can be used outside a laboratory during long-term monitoring. To show the efficacy of this ambulatory system, knee function was analysed with it after an anterior cruciate ligament (ACL) lesion and after reconstructive surgery. The proposed system was composed of two 3D gyroscopes, fixed on the shank and on the thigh, and a portable data logger for signal recording. The measured parameters were the 3D mean ranges of motion (ROM), with the healthy knee used as control. The precision of the system was first assessed against an ultrasound reference system, and its repeatability was also estimated. A clinical study was then performed on five men with unilateral ACL deficiency (age range: 19-36 years) prior to, and one year after, surgery. The patients were evaluated with the IKDC score, and the kinematic measurements were carried out on a 30 m walking trial. The precision in comparison with the reference system was 4.4 degrees, 2.7 degrees and 4.2 degrees for flexion-extension, internal-external rotation and abduction-adduction, respectively. The repeatability of the results in the three directions was 0.8 degrees, 0.7 degrees and 1.8 degrees. The average ROM of the five patients' healthy knees was 70.1 degrees (standard deviation (SD) 5.8 degrees), 24.0 degrees (SD 3.0 degrees) and 12.0 degrees (SD 6.3 degrees) for flexion-extension, internal-external rotation and abduction-adduction before surgery, and 76.5 degrees (SD 4.1 degrees), 21.7 degrees (SD 4.9 degrees) and 10.2 degrees (SD 4.6 degrees) one year after the reconstruction. The results for the pathologic knee were 64.5 degrees (SD 6.9 degrees), 20.6 degrees (SD 4.0 degrees) and 19.7 degrees (SD 8.2 degrees) during the first evaluation, and 72.3 degrees (SD 2.4 degrees), 25.8 degrees (SD 6.4 degrees) and 12.4 degrees (SD 2.3 degrees) during the second one.
The performance of the system enabled us to detect modifications of knee function in the sagittal and transverse planes. Prior to the reconstruction, the ROM of the injured knee was lower in flexion-extension and internal-external rotation than that of the contralateral knee. One year after surgery, four patients were classified as normal (A) and one as almost normal (B) according to the IKDC score, yet changes in the kinematics of the five patients remained: lower flexion-extension ROM and higher internal-external rotation ROM in comparison with the contralateral knee. The 3D kinematics were thus changed after an ACL lesion and remained altered one year after the surgery.
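The quantity reported above, the ROM, can be sketched from gyroscope signals: integrate the relative angular velocity of thigh and shank about a joint axis to get a joint angle, then take its peak-to-peak excursion. A simplified single-axis illustration that ignores the sensor-to-segment alignment and drift handling a real ambulatory system needs; the function name is ours:

```python
import numpy as np

def range_of_motion(gyro_thigh, gyro_shank, dt):
    """Knee range of motion (deg) from segment angular-velocity signals.

    gyro_thigh, gyro_shank: angular velocity about one joint axis (deg/s)
    dt: sampling interval (s). The joint angle is the time integral of
    the relative angular velocity; ROM is its peak-to-peak excursion.
    """
    relative = np.asarray(gyro_thigh, float) - np.asarray(gyro_shank, float)
    angle = np.cumsum(relative) * dt       # simple rectangular integration
    return float(angle.max() - angle.min())
```

Over a short 30 m walking trial the drift of the integrated angle stays small, which is why such rate-gyroscope systems are practical for ambulatory monitoring.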
Abstract:
This is the statistical portion of the annual survey results of the State Library of Iowa for 1974.