124 results for Sample algorithms
Abstract:
Purpose: (1) To identify work-related stressors that are associated with psychiatric symptoms in a Swiss sample of policemen and (2) to develop a model for identifying officers at risk of developing mental health problems. Method: The study design is cross-sectional. A total of 354 male police officers answered a questionnaire assessing a wide spectrum of work-related stressors. Psychiatric symptoms were assessed using the "TST questionnaire" (Langner in J Health Hum Behav 4, 269-276, 1962). Logistic regression with a backward procedure was used to identify a set of variables collectively associated with high scores for psychiatric symptoms. Results: A total of 42 (11.9%) officers had a high score for psychiatric symptoms. Nearly all potential stressors considered were significantly associated (at P < 0.05) with a high score for psychiatric symptoms. A significant model including six independent variables was identified: lack of support from superior and organization OR = 3.58 (1.58-8.13), self-perception of poor-quality work OR = 2.99 (1.35-6.59), inadequate work schedule OR = 2.84 (1.22-6.62), high mental/intellectual demand OR = 2.56 (1.12-5.86), age (in decades) OR = 1.82 (1.21-2.73), and score for physical-environment complaints OR = 1.30 (1.03-1.64). Conclusions: Most of the work stressors considered are associated with psychiatric symptoms. Prevention should target the most frequent stressors with the strongest associations with symptoms. Complaints by police officers about stressors should receive proper consideration from the management of the public administration; such complaints might be the expression of psychiatric caseness requiring medical assistance. Particular attention should be given to police officers complaining about several of the stressors identified in this study's multiple model. [Authors]
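To make the modelling step concrete, here is a minimal sketch of a backward-elimination logistic regression yielding odds ratios, of the kind described in this abstract. It is not the authors' code: the data are synthetic and all variable names (high_score, support_lack, etc.) are hypothetical stand-ins for the questionnaire items.

```python
# Sketch only: backward-elimination logistic regression producing odds ratios.
# `df` is synthetic; column names are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 354
cols = ["support_lack", "bad_quality_work", "work_schedule",
        "mental_demand", "age_decades", "phys_env_score"]
df = pd.DataFrame({c: rng.normal(size=n) for c in cols})
lin = 0.9 * df["support_lack"] + 0.6 * df["age_decades"] - 2.0
df["high_score"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

def backward_select(data, outcome, candidates, p_remove=0.05):
    """Refit, dropping the least significant predictor, until all p < p_remove."""
    kept = list(candidates)
    while kept:
        X = sm.add_constant(data[kept])
        fit = sm.Logit(data[outcome], X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        if pvals.max() < p_remove:
            return fit
        kept.remove(pvals.idxmax())
    return None

fit = backward_select(df, "high_score", cols)
if fit is not None:
    # OR = exp(coefficient), with 95% CI, as reported in the abstract
    print(np.exp(fit.params), np.exp(fit.conf_int()))
```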
Abstract:
Because data on rare species are usually sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. Newly sampled data are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations, the model-based approach provided a significant improvement (by a factor of 1.8 to 4, depending on the measure) over simple random sampling. In terms of cost, this approach may save up to 70% of the time spent in the field.
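A minimal sketch of one iteration of the adaptive, model-based sampling loop described above. A random forest classifier stands in for the niche-based distribution model (the abstract does not specify the model family), and all sites, covariates and occurrences are synthetic.

```python
# Sketch of model-based adaptive sampling: fit a habitat-suitability model
# on surveyed sites, then direct the next survey to the unsampled sites
# with the highest predicted suitability. Everything here is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_sites, n_env = 5000, 6                 # candidate sites x environmental covariates
env = rng.normal(size=(n_sites, n_env))  # toy environmental layers
surveyed = rng.choice(n_sites, 200, replace=False)   # sites visited so far
present = (env[surveyed, 0] > 1.2).astype(int)       # toy occurrence signal

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(env[surveyed], present)

unsampled = np.setdiff1d(np.arange(n_sites), surveyed)
suitability = model.predict_proba(env[unsampled])[:, 1]
next_survey = unsampled[np.argsort(suitability)[::-1][:50]]  # top stratum
# Occurrences found at `next_survey` would be added to the training set and
# the model refitted -- the adaptive loop described in the abstract.
print(next_survey[:10])
```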
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are discussed, and real case studies based on environmental and pollution data are carried out. The book provides a CD-ROM with the Machine Learning Office software, including sample data sets, that will allow both students and researchers to put the concepts into practice rapidly.
Abstract:
Objective: To assess the associations between obesity markers (BMI, waist circumference and %body fat) and inflammatory markers (interleukin-1β (IL-1β); interleukin-6 (IL-6); tumor necrosis factor-α (TNF-α) and high-sensitivity C-reactive protein (hs-CRP)). Methods: Population sample of 2,884 men and 3,201 women aged 35-75 years. Associations were assessed using ridge regression adjusting for age, leisure-time physical activity, and smoking. Results: No differences were found in IL-1β levels between participants with increased obesity markers and healthy counterparts; multivariate regression showed %body fat to be negatively associated with IL-1β. Participants with high %body fat or abdominal obesity had higher IL-6 levels, but no independent association between IL-6 levels and obesity markers was found on multivariate regression. Participants with abdominal obesity had higher TNF-α levels, and positive associations were found between TNF-α levels and waist circumference in men and between TNF-α levels and BMI in women. Obese participants had higher hs-CRP levels, and these differences persisted after multivariate adjustment; similarly, positive associations were found between hs-CRP levels and all obesity markers studied. Conclusion: Obesity markers are differentially associated with cytokine levels. %Body fat is negatively associated with IL-1β; BMI (in women) and waist circumference (in men) are associated with TNF-α; all obesity markers are positively associated with hs-CRP. Copyright © 2012 S. Karger GmbH, Freiburg.
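As an illustration of the analysis named above, a minimal sketch of covariate-adjusted ridge regression: an inflammatory marker is regressed on an obesity marker plus age, physical activity and smoking, with an L2 penalty to stabilise correlated predictors. Data and variable names are illustrative, not the study's.

```python
# Sketch: ridge regression of a (log-transformed) inflammatory marker on an
# obesity marker with adjustment covariates. All data are synthetic.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 6085                                   # ~2,884 men + 3,201 women
bmi = rng.normal(26, 4, n)
age = rng.uniform(35, 75, n)
activity = rng.normal(size=n)              # leisure-time physical activity score
smoking = rng.integers(0, 2, n)
log_crp = 0.08 * bmi + 0.01 * age + rng.normal(size=n)   # toy outcome

X = StandardScaler().fit_transform(np.column_stack([bmi, age, activity, smoking]))
model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, log_crp)
print(dict(zip(["bmi", "age", "activity", "smoking"], model.coef_)))
# A positive standardised coefficient for `bmi` mirrors the reported
# positive association between obesity markers and hs-CRP.
```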
Abstract:
BACKGROUND: Active screening by mobile teams is considered the best method for detecting human African trypanosomiasis (HAT) caused by Trypanosoma brucei gambiense, but the current funding context in many post-conflict countries limits this approach. As an alternative, non-specialist health care workers (HCWs) in peripheral health facilities could be trained to identify potential cases who need testing based on their symptoms. We explored the predictive value of syndromic referral algorithms to identify symptomatic cases of HAT among a treatment-seeking population in Nimule, South Sudan. METHODOLOGY/PRINCIPAL FINDINGS: Symptom data from 462 patients (27 cases) presenting for a HAT test via passive screening over a 7-month period were collected to construct and evaluate over 14,000 four-item syndromic algorithms considered simple enough to be used by peripheral HCWs. For comparison, algorithms developed in other settings were also tested on our data, and a panel of expert HAT clinicians was asked to make referral decisions based on the symptom dataset. The best-performing algorithms consisted of three core symptoms (sleep problems, neurological problems and weight loss), with or without a history of oedema, cervical adenopathy or proximity to livestock. They had a sensitivity of 88.9-92.6%, a negative predictive value of up to 98.8% and a positive predictive value in this context of 8.4-8.7%. In terms of sensitivity, these outperformed the more complex algorithms identified in other studies, as well as the expert panel. The best-performing algorithm is predicted to identify about 9/10 treatment-seeking HAT cases, though only 1/10 patients referred would test positive. CONCLUSIONS/SIGNIFICANCE: In the absence of regular active screening, improving referrals of HAT patients through other means is essential. Systematic use of syndromic algorithms by peripheral HCWs has the potential to increase case detection and would increase their participation in HAT programmes. The algorithms proposed here, though promising, should be validated elsewhere.
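A minimal sketch of the exhaustive evaluation described above: enumerate candidate four-item referral rules and score each by sensitivity, PPV and NPV against confirmed case status. The symptom list, referral threshold and data are illustrative stand-ins, not the Nimule dataset; the study's full search also varied rule structure to reach over 14,000 algorithms.

```python
# Sketch: brute-force search over four-item referral rules, scored by
# sensitivity, PPV and NPV. Symptom names and prevalences are illustrative.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
symptoms = ["sleep_problems", "neuro_problems", "weight_loss", "oedema",
            "adenopathy", "livestock", "fever", "headache"]
n = 462
X = {s: rng.random(n) < 0.3 for s in symptoms}   # toy symptom flags
hat = rng.random(n) < 27 / 462                   # toy confirmed-case status

def evaluate(items, min_count=2):
    """Refer when at least `min_count` of the rule's items are present."""
    referred = sum(X[s] for s in items) >= min_count
    tp, fp = (referred & hat).sum(), (referred & ~hat).sum()
    fn, tn = (~referred & hat).sum(), (~referred & ~hat).sum()
    return {"sens": tp / (tp + fn), "ppv": tp / max(tp + fp, 1),
            "npv": tn / max(tn + fn, 1)}

scores = {rule: evaluate(rule) for rule in combinations(symptoms, 4)}
best = max(scores, key=lambda r: scores[r]["sens"])   # 70 candidate rules here
print(best, scores[best])
```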
Abstract:
The noise power spectrum (NPS) is the reference metric for understanding the noise content of computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and 2D and 3D NPSs were then computed. In axial acquisition mode, the 2D axial NPS showed an important variation in magnitude along the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS remained constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak towards the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared with the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using a local 3D NPS metric; however, the impact of noise non-stationarity may need further investigation.
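For reference, a minimal sketch of a standard 2D NPS estimate of the kind used above: mean-subtract an ensemble of noise-only ROIs from repeated phantom scans, average the squared DFT magnitudes, and scale by pixel area over ROI size. The example data are synthetic white noise, not scanner measurements.

```python
# Sketch of a 2D NPS estimate from an ensemble of noise-only ROIs.
import numpy as np

def nps_2d(rois, pixel_mm):
    """rois: (n_roi, ny, nx) array of noise ROIs; pixel_mm: pixel spacing in mm."""
    n_roi, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove ROI means
    dft2 = np.abs(np.fft.fftshift(np.fft.fft2(detrended), axes=(1, 2))) ** 2
    return dft2.mean(axis=0) * (pixel_mm ** 2) / (nx * ny)     # NPS in HU^2 mm^2

# White noise gives a flat NPS whose mean is sigma^2 * pixel_mm^2 = 100 * 0.16.
rois = np.random.default_rng(4).normal(0, 10, size=(64, 128, 128))
nps = nps_2d(rois, pixel_mm=0.4)
print(nps.shape, nps.mean())
```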
Abstract:
Orientation: Research that considers the effects of individual characteristics and job characteristics jointly in burnout is necessary, especially when one considers the possibility of curvilinear relationships between job characteristics and burnout. Research purpose: This study examines the contribution of sense of coherence (SOC) and job characteristics to predicting burnout by considering direct and moderating effects. Motivation for this study: Understanding the relationships of individual and job characteristics with burnout is necessary for preventing burnout. It also informs the design of interventions. Research design, approach and method: The participants were 632 working adults (57% female) in South Africa. The measures included the Job Content Questionnaire, the Sense of Coherence Questionnaire and the Maslach Burnout Inventory. The authors analysed the data using hierarchical multiple regression with the enter method. Main findings: Job characteristics and SOC show the expected direct effects on burnout. SOC has a direct negative effect on burnout. Job demands and supervisor social support show nonlinear relationships with burnout. SOC moderates the effect of demands on burnout and has a protective function so that the demands-burnout relationship differs for those with high and low SOC. Practical/managerial implications: The types of effects, the shape of the stressor-strain relationship and the different contributions of individual and job characteristics have implications for designing interventions. Contribution/value add: SOC functions differently when combined with demands, control and support. These different effects suggest that it is not merely the presence or absence of a job characteristic that is important for well-being outcomes but how people respond to its presence or absence.
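A minimal sketch of the hierarchical (stepwise-entry) moderated regression strategy described above: main effects are entered first, then a squared demands term for curvilinearity and a SOC x demands interaction; the R² increment tests the added terms. Variable names and data are illustrative, not the study's.

```python
# Sketch: hierarchical regression with a curvilinear term and a moderation
# (interaction) term. All data and effect sizes are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 632
d = pd.DataFrame({"demands": rng.normal(size=n), "soc": rng.normal(size=n)})
d["burnout"] = (0.5 * d.demands - 0.4 * d.soc
                - 0.2 * d.demands * d.soc + rng.normal(size=n))

step1 = smf.ols("burnout ~ demands + soc", data=d).fit()
step2 = smf.ols("burnout ~ demands + I(demands**2) + soc + demands:soc",
                data=d).fit()
# The R^2 increment from step 1 to step 2 tests the curvilinear and
# moderating (SOC x demands) effects reported in the abstract.
print(step1.rsquared, step2.rsquared, step2.params["demands:soc"])
```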
Abstract:
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer, leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case for iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to express the system resolution accurately even with non-linear algorithms, we decided to tune the NPW model observer by replacing the standard MTF with the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed leading to the TTF. As expected, the first results showed a dependency of the TTF on the image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be dependent on contrast and noise when using the lung kernel. These results were then introduced into the NPW model observer. We observed an enhancement of the SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
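For reference, the NPW figure of merit combines the task function W, the system transfer (here the TTF, as proposed above) and the NPS as SNR²_NPW = [∫|W·TTF|² df]² / ∫|W·TTF|²·NPS df. The sketch below evaluates this on a discrete frequency grid; the task spectrum, TTF and NPS are illustrative shapes, not measured data.

```python
# Sketch of the NPW SNR with the TTF substituted for the MTF.
import numpy as np

def npw_snr(task_spectrum, ttf, nps, df):
    """All inputs are 2D arrays on the same frequency grid; df is the bin area."""
    filtered = (np.abs(task_spectrum) * ttf) ** 2
    num = (filtered.sum() * df) ** 2
    den = (filtered * nps).sum() * df
    return np.sqrt(num / den)

# Toy inputs: crude disc-like task spectrum, Gaussian TTF, ramp-like NPS.
f = np.fft.fftshift(np.fft.fftfreq(256, d=0.5))   # cycles/mm
fx, fy = np.meshgrid(f, f)
fr = np.hypot(fx, fy)
task = 100 * np.abs(np.sinc(10 * fr))
ttf = np.exp(-(fr / 0.5) ** 2)
nps = 50 * fr * np.exp(-fr / 0.4) + 1e-3
print(npw_snr(task, ttf, nps, df=(f[1] - f[0]) ** 2))
```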
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems for the purposes of environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. General regression neural networks (GRNN) are proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; soil type and hydrogeological unit classification; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well; the software is user-friendly and easy to use.
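Since the GRNN is central to the automatic-mapping results above, a minimal sketch may help: a GRNN is essentially Nadaraya-Watson kernel regression, a Gaussian-kernel weighted average of training targets with a single bandwidth sigma, typically tuned by cross-validation. The coordinates and field values below are synthetic.

```python
# Sketch of a general regression neural network (GRNN), i.e. Gaussian-kernel
# (Nadaraya-Watson) regression, applied to a toy 2-D mapping problem.
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """Gaussian-kernel weighted average of training targets."""
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(6)
xy = rng.uniform(0, 10, size=(200, 2))                 # (easting, northing)
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=200)      # noisy field values
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), -1).reshape(-1, 2)
z_hat = grnn_predict(xy, z, grid, sigma=0.8)
print(z_hat.shape)   # (2500,) -- an automatically mapped surface
```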
Abstract:
Impressive developments in X-ray imaging are associated with X-ray phase-contrast computed tomography based on grating interferometry, a technique that provides increased contrast compared with conventional absorption-based imaging. A new "single-step" method capable of separating phase information from other contributions has recently been proposed. This approach not only simplifies data-acquisition procedures but, compared with the existing phase-stepping approach, significantly reduces the dose delivered to a sample. However, the image-reconstruction procedure is more demanding than for traditional methods, and new algorithms have to be developed to take advantage of the "single-step" method. In the work discussed in this paper, a fast iterative image-reconstruction method, OSEM (ordered subsets expectation maximization), was applied to experimental data to evaluate its performance and range of applicability. The OSEM algorithm with different numbers of subsets was also characterized by comparing reconstructed image quality and convergence speed. Computer simulations and experimental results confirm the reliability of this new algorithm for phase-contrast computed tomography applications. Compared with the traditional filtered back-projection algorithm, particularly in the presence of a noisy acquisition, it furnishes better images with higher spatial resolution and lower noise. We emphasize that the method is highly compatible with future clinical applications of X-ray phase-contrast imaging.
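A minimal sketch of the OSEM update described above, written for a toy dense system matrix; a real phase-contrast CT implementation would use matched projector/backprojector operators instead, and everything here is illustrative.

```python
# Sketch of OSEM: one multiplicative EM update per ordered subset of the
# measurements. A (rows = measurements, columns = pixels) is a toy matrix.
import numpy as np

def osem(A, y, n_subsets=8, n_iter=20, seed=0):
    m, n = A.shape
    x = np.ones(n)                                   # positive initial image
    order = np.random.default_rng(seed).permutation(m)
    for _ in range(n_iter):                          # each pass ~ one EM iteration
        for s in np.array_split(order, n_subsets):   # one update per subset
            ratio = y[s] / np.maximum(A[s] @ x, 1e-12)
            x *= (A[s].T @ ratio) / np.maximum(A[s].sum(axis=0), 1e-12)
    return x

rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(400, 100)))              # toy nonnegative system matrix
x_true = np.abs(rng.normal(size=100))
x_rec = osem(A, A @ x_true)                          # noiseless projections
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # relative error
```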
Abstract:
BACKGROUND: Dried blood spot (DBS) sampling has gained popularity in the bioanalytical community as an alternative to conventional plasma sampling, as it provides numerous benefits in terms of sample collection and logistics. The aim of this work was to show that these advantages can be coupled with a simple and cost-effective sample pretreatment, with subsequent rapid LC-MS/MS analysis, for the quantitation of 15 benzodiazepines, six metabolites and three Z-drugs. For this purpose, a simplified offline procedure was developed that consisted of letting a 5-µl DBS infuse directly into 100 µl of MeOH in a conventional LC vial. RESULTS: The parameters related to the DBS pretreatment, such as extraction time and internal standard addition, were investigated and optimized, demonstrating that passive infusion in a regular LC vial was sufficient to quantitatively extract the analytes of interest. The method was validated according to international criteria over the therapeutic concentration ranges of the selected compounds. CONCLUSION: The presented strategy proved to be efficient for the rapid analysis of the selected drugs. Indeed, the offline sample preparation was reduced to a minimum, using a small amount of organic solvent and consumables, without affecting the accuracy of the method. This approach thus enables simple and rapid DBS analysis, even with a non-DBS-dedicated autosampler, while lowering costs and environmental impact.
Abstract:
In the last two decades, the third dimension has become a focus of attention in electron microscopy to better understand the interactions within subcellular compartments. Initially, transmission electron tomography (TEM tomography) was introduced to image the cell volume in semi-thin sections (∼500 nm). With the introduction of the focused ion beam scanning electron microscope, a new tool, FIB-SEM tomography, became available to image much larger volumes. During TEM tomography and FIB-SEM tomography, the resin section is exposed to a high electron/ion dose, such that the stability of the resin-embedded biological sample becomes an important issue. The shrinkage of a resin section in each dimension, especially in depth, is a well-known phenomenon. To ensure the dimensional integrity of the final cell volume, it is important to assess the properties of the different resins and determine the formulation with the best stability under the electron/ion beam. Here, eight different resin formulations were examined. The effects of radiation damage were evaluated after different durations of TEM irradiation. To obtain additional information on mass loss and the physical properties of the resins (stiffness and adhesion), the topography of the irradiated areas was analysed with atomic force microscopy (AFM). Further, the behaviour of the resins was analysed after ion milling of the sample surface with different ion currents. In conclusion, two resin formulations, Hard Plus and the Durcupan/Epon mixture, emerged as considerably less affected and reasonably stable under the electron/ion beam, and thus suitable for the 3-D investigation of biological samples.
Abstract:
The potential consequences of early and late puberty for the psychological and behavioural development of adolescents are not well known. This paper presents focused analyses from the Swiss SMASH study, a self-administered questionnaire survey conducted among a representative sample of 7488 adolescents aged 16 to 20 years. Data from participants reporting early or late timing of puberty were compared with those reporting average timing of maturation. Early-maturing girls reported a higher rate of dissatisfaction with body image (OR=1.32) and functional symptoms (OR=1.52) and reported engaging in sexual activity more often (OR=1.93). Early-maturing boys reported engaging in exploratory behaviours (sexual intercourse, legal and illegal substance use) at a significantly higher rate (OR varying between 1.4 and 1.99). Both early- and late-maturing boys reported higher rates of dysfunctional eating patterns (OR=1.59 and 1.38, respectively), victimisation (OR=1.61 and 1.37, respectively) and depressive symptoms (OR=2.11 and 1.53, respectively). Clinicians should take into account the pubertal stage of their patients and provide them, as well as their parents, with appropriate counselling in the fields of mental health and health behaviour.