195 results for Naive Bayes classifier
Abstract:
Background Depressive disorders were a leading cause of burden in the Global Burden of Disease (GBD) 1990 and 2000 studies. Here, we analyze the burden of depressive disorders in GBD 2010 and present severity proportions, burden by country, region, age, sex, and year, as well as burden of depressive disorders as a risk factor for suicide and ischemic heart disease. Methods and Findings Burden was calculated for major depressive disorder (MDD) and dysthymia. A systematic review of epidemiological data was conducted, and the data were pooled using a Bayesian meta-regression. Disability weights from population survey data quantified the severity of health loss from depressive disorders. These weights were used to calculate years lived with disability (YLDs) and disability-adjusted life years (DALYs). Separate DALYs were estimated for suicide and ischemic heart disease attributable to depressive disorders. Depressive disorders were the second leading cause of YLDs in 2010. MDD accounted for 8.2% (5.9%-10.8%) of global YLDs and dysthymia for 1.4% (0.9%-2.0%). Depressive disorders were a leading cause of DALYs even though no mortality was attributed to them as the underlying cause. MDD accounted for 2.5% (1.9%-3.2%) of global DALYs and dysthymia for 0.5% (0.3%-0.6%). There was more regional variation in burden for MDD than for dysthymia, with higher estimates in females and in adults of working age. Although burden increased by 37.5% between 1990 and 2010, this increase was due to population growth and ageing. MDD explained 16 million suicide DALYs and almost 4 million ischemic heart disease DALYs. This attributable burden would increase the overall burden of depressive disorders from 3.0% (2.2%-3.8%) to 3.8% (3.0%-4.7%) of global DALYs. Conclusions GBD 2010 identified depressive disorders as a leading cause of burden. MDD was also a contributor to the burden allocated to suicide and ischemic heart disease.
These findings emphasize the importance of including depressive disorders as a public-health priority and of implementing cost-effective interventions to reduce their burden. Please see later in the article for the Editors' Summary.
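The burden accounting used in the abstract above follows the standard GBD identities; a minimal sketch in Python, where the case count and disability weight are illustrative placeholders, not GBD 2010 values:

```python
def yld(prevalent_case_years, disability_weight):
    # Years lived with disability: prevalent case-years weighted by
    # the severity of health loss (the disability weight).
    return prevalent_case_years * disability_weight

def daly(yld_years, yll_years=0.0):
    # DALY = YLD + YLL (years of life lost to premature mortality).
    # GBD 2010 attributed no deaths to depressive disorders as the
    # underlying cause, so for them YLL = 0 and DALYs equal YLDs.
    return yld_years + yll_years

# Hypothetical example: 1000 prevalent case-years at weight 0.3.
burden = daly(yld(1000, 0.3))
```
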
Abstract:
We describe an investigation into how Massey University’s Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University’s pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real-world case study where we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder’s native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
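A classifier comparison of the kind described above boils down to fitting several models on the same feature vectors and measuring held-out accuracy. A minimal sketch with a nearest-centroid stand-in on synthetic two-"species" data (the Classifynder feature set and the specific classifiers named in the abstract are not reproduced here); any model exposing fit/predict could be dropped into the same harness:

```python
import random

def train_test_split(X, y, test_frac=0.25, seed=0):
    # Deterministically shuffle indices, then split into train/test.
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    return ([X[i] for i in idx[:cut]], [y[i] for i in idx[:cut]],
            [X[i] for i in idx[cut:]], [y[i] for i in idx[cut:]])

class NearestCentroid:
    # Toy stand-in classifier with the usual fit/predict interface.
    def fit(self, X, y):
        sums, counts = {}, {}
        for x, label in zip(X, y):
            s = sums.setdefault(label, [0.0] * len(x))
            for j, v in enumerate(x):
                s[j] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {c: [v / counts[c] for v in s]
                          for c, s in sums.items()}
        return self

    def predict(self, X):
        d2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
        return [min(self.centroids, key=lambda c: d2(x, self.centroids[c]))
                for x in X]

def holdout_accuracy(model, X, y):
    Xtr, ytr, Xte, yte = train_test_split(X, y)
    preds = model.fit(Xtr, ytr).predict(Xte)
    return sum(p == t for p, t in zip(preds, yte)) / len(yte)

# Two well-separated synthetic "species" stand in for pollen feature vectors.
rng = random.Random(1)
X = ([[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(50)] +
     [[rng.gauss(4, 1), rng.gauss(4, 1)] for _ in range(50)])
y = [0] * 50 + [1] * 50
acc = holdout_accuracy(NearestCentroid(), X, y)
```
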
Abstract:
We present a systematic, practical approach to developing risk prediction systems, suitable for use with large databases of medical information. An important part of this approach is a novel feature selection algorithm which uses the area under the receiver operating characteristic (ROC) curve to measure the expected discriminative power of different sets of predictor variables. We describe this algorithm and use it to select variables to predict risk of a specific adverse pregnancy outcome: failure to progress in labour. Neural network, logistic regression and hierarchical Bayesian risk prediction models are constructed, all of which achieve close to the limit of performance attainable on this prediction task. We show that better prediction performance requires more discriminative clinical information rather than improved modelling techniques. It is also shown that better diagnostic criteria in clinical records would greatly assist the development of systems to predict risk in pregnancy.
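AUC-driven feature selection can be sketched as a greedy forward search that scores each candidate variable set by the area under the ROC curve of a simple additive score. This is an illustrative reconstruction, not the authors' algorithm; the toy data below has one informative feature (index 0) and one noise feature:

```python
def auc(scores, labels):
    # Probability that a random positive outranks a random negative
    # (the Mann-Whitney U formulation of the area under the ROC curve).
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def greedy_auc_selection(X, y, k):
    # Greedily add the feature whose inclusion maximizes the AUC of
    # an additive score over the currently selected set.
    selected = []
    while len(selected) < k:
        best_f, best_auc = None, -1.0
        for f in range(len(X[0])):
            if f in selected:
                continue
            scores = [sum(row[j] for j in selected + [f]) for row in X]
            a = auc(scores, y)
            if a > best_auc:
                best_f, best_auc = f, a
        selected.append(best_f)
    return selected

# Feature 0 perfectly separates the classes; feature 1 is noise.
X = [[1, 0.3], [2, 0.1], [3, 0.9], [0, 0.5], [0.5, 0.8], [0.2, 0.2]]
y = [1, 1, 1, 0, 0, 0]
selected = greedy_auc_selection(X, y, 1)
```
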
Abstract:
We describe an investigation into how Massey University's Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University's pollen reference collection (2890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. In addition to the Classifynder's native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples. © 2013 AIP Publishing LLC.
Abstract:
We describe a sequence of experiments investigating the strengths and limitations of Fukushima's neocognitron as a handwritten digit classifier. Using the results of these experiments as a foundation, we propose and evaluate improvements to Fukushima's original network in an effort to obtain higher recognition performance. The neocognitron's performance is shown to be strongly dependent on the choice of selectivity parameters and we present two methods to adjust these variables. Performance of the network under the more effective of the two new selectivity adjustment techniques suggests that the network fails to exploit the features that distinguish different classes of input data. To avoid this shortcoming, the network's final layer cells were replaced by a nonlinear classifier (a multilayer perceptron) to create a hybrid architecture. Tests of Fukushima's original system and the novel systems proposed in this paper suggest that it may be difficult for the neocognitron to achieve the performance of existing digit classifiers due to its reliance upon the supervisor's choice of selectivity parameters and training data. These findings pertain to Fukushima's implementation of the system and should not be seen as diminishing the practical significance of the concept of hierarchical feature extraction embodied in the neocognitron. © 1997 IEEE.
Abstract:
The commercialization of aerial image processing is highly dependent on platforms such as UAVs (Unmanned Aerial Vehicles). However, the lack of an automated UAV forced landing site detection system has been identified as one of the main impediments to allowing UAV flight over populated areas in civilian airspace. This article proposes a UAV forced landing site detection system based on machine learning approaches, including the Gaussian Mixture Model and the Support Vector Machine. A range of learning parameters is analysed, including the number of Gaussian mixtures, the support vector kernel (linear, radial basis function (RBF) and polynomial (poly)), and the order of the RBF and polynomial kernels. Moreover, a modified footprint operator is employed during feature extraction to better describe the geometric characteristics of the local area surrounding a pixel. The performance of the presented system is compared to a baseline UAV forced landing site detection system which uses edge features and an Artificial Neural Network (ANN) region type classifier. Experiments conducted on aerial image datasets captured over typical urban environments reveal that improved landing site detection can be achieved with an SVM classifier with an RBF kernel using a combination of colour and texture features. Compared to the baseline system, the proposed system provides a significant improvement in terms of the chance of detecting a safe landing area, and its performance is more stable than the baseline's in the presence of changes to the UAV altitude.
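The two SVM kernels compared above have standard closed forms; a minimal sketch, where the gamma, degree and coef0 values are illustrative defaults rather than the tuned parameters from the study:

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    # Radial basis function kernel: K(x, z) = exp(-gamma * ||x - z||^2).
    # Similarity decays with squared Euclidean distance.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def poly_kernel(x, z, degree=3, coef0=1.0):
    # Polynomial kernel: K(x, z) = (<x, z> + coef0) ** degree.
    # The degree is the "order" of the kernel varied in the study.
    return (sum(a * b for a, b in zip(x, z)) + coef0) ** degree
```

Either function can be plugged into a kernelized SVM as the similarity measure between feature vectors (here, the colour and texture features of candidate landing regions).
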
Abstract:
Background The requirement for dual screening of titles and abstracts to select papers to examine in full text can create a huge workload, not least when the topic is complex and a broad search strategy is required, resulting in a large number of results. An automated system to reduce this burden, while still assuring high accuracy, has the potential to provide huge efficiency savings within the review process. Objectives To undertake a direct comparison of manual screening with a semi-automated process (priority screening) using a machine classifier. The research is being carried out as part of the current update of a population-level public health review. Methods Authors have hand selected studies for the review update, in duplicate, using the standard Cochrane Handbook methodology. A retrospective analysis, simulating a quasi-'active learning' process (whereby a classifier is repeatedly trained based on 'manually' labelled data) will be completed, using different starting parameters. Tests will be carried out to see how far different training sets, and the size of the training set, affect the classification performance; i.e. what percentage of papers would need to be manually screened to locate 100% of those papers included as a result of the traditional manual method. Results From a search retrieval set of 9555 papers, authors excluded 9494 papers at title/abstract and 52 at full text, leaving 9 papers for inclusion in the review update. The ability of the machine classifier to reduce the percentage of papers that need to be manually screened to identify all the included studies, under different training conditions, will be reported. Conclusions The findings of this study will be presented along with an estimate of any efficiency gains for the author team if the screening process can be semi-automated using text mining methodology, along with a discussion of the implications for text mining in screening papers within complex health reviews.
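Priority screening of the kind described can be sketched as a loop that ranks the unscreened pool by predicted relevance, screens the top batch, and stops once every included paper has been found. For brevity this sketch ranks with a fixed relevance score rather than retraining a classifier each round, so it is only a schematic of the quasi-active-learning process:

```python
def priority_screen(scores, is_include, batch=2):
    # Rank papers by predicted relevance (descending), screen them in
    # batches, and report the fraction of the pool that had to be
    # manually screened to locate every included paper.
    pool = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    screened, found, total = 0, 0, sum(is_include)
    while found < total:
        for i in pool[screened:screened + batch]:
            found += is_include[i]
        screened += batch
    return min(screened, len(scores)) / len(scores)

# Toy pool of six papers: relevance scores and true inclusion labels.
# The two included papers happen to score highest, so priority
# screening finds both after screening only the first batch.
frac = priority_screen([5, 4, 3, 2, 1, 0], [1, 1, 0, 0, 0, 0])
```
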
Abstract:
Essentialism is an ontological belief that there exists an underlying essence to a category. This article advances and tests in three studies the hypothesis that communication about a social category, and expected or actual mutual validation, promotes essentialism about a social category. In Study 1, people who wrote communications about a social category to their ingroup audiences essentialized it more strongly than those who simply memorized it. In Study 2, communicators whose messages about a novel social category were more elaborately discussed with a confederate showed a stronger tendency to essentialize it. In Study 3, communicators who elaborately talked about a social category with a naive conversant also essentialized the social category. A meta-analysis of the results supported the hypothesis that communication promotes essentialism. Although essentialism has been discussed primarily in perceptual and cognitive domains, the role of social processes as its antecedent deserves greater attention.
Abstract:
This study aimed to investigate the spatial clustering and dynamic dispersion of dengue incidence in Queensland, Australia. We used Moran's I statistic to assess the spatial autocorrelation of reported dengue cases. Spatial empirical Bayes smoothing estimates were used to display the spatial distribution of dengue in postal areas throughout Queensland. Local indicators of spatial association (LISA) maps and logistic regression models were used to identify spatial clusters and examine the spatio-temporal patterns of the spread of dengue. The results indicate that the spatial distribution of dengue was clustered during each of the three periods of 1993–1996, 1997–2000 and 2001–2004. The high-incidence clusters of dengue were primarily concentrated in the north of Queensland and low-incidence clusters occurred in the south-east of Queensland. The study concludes that the geographical range of notified dengue cases has significantly expanded in Queensland over recent years.
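Moran's I, the spatial autocorrelation statistic used above, has a simple direct formula: I = (n / W) · Σᵢⱼ wᵢⱼ(xᵢ - x̄)(xⱼ - x̄) / Σᵢ(xᵢ - x̄)². A minimal sketch with a toy four-area chain, whose adjacency weights and incidence values are illustrative only:

```python
def morans_i(values, weights):
    # Global Moran's I: positive for spatial clustering, negative for
    # dispersion, near -1/(n-1) under spatial randomness.
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    w_sum = sum(weights[i][j] for i in range(n) for j in range(n))
    return (n / w_sum) * (num / den)

# Toy example: four areas in a chain, binary adjacency weights.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
i_clustered = morans_i([10, 10, 0, 0], W)  # high values adjacent: clustered
i_dispersed = morans_i([10, 0, 10, 0], W)  # alternating values: dispersed
```
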
Abstract:
Traditional text classification technology based on machine learning and data mining techniques has made great progress. However, drawing an exact decision boundary between relevant and irrelevant objects in binary classification remains a significant problem, owing to the uncertainty produced in the process of the traditional algorithms. The proposed model, CTTC (Centroid Training for Text Classification), aims to build an uncertainty boundary that absorbs as many indeterminate objects as possible, so as to elevate the certainty of the relevant and irrelevant groups through a centroid clustering and training process. The clustering starts from two training subsets, labelled relevant and irrelevant respectively, to create two principal centroid vectors by which all the training samples are further separated into three groups: POS, NEG and BND, with all the indeterminate objects absorbed into the uncertain decision boundary BND. Two pairs of centroid vectors are then trained and optimized through a subsequent iterative multi-learning process, and together they predict the polarities of incoming objects thereafter. For the assessment of the proposed model, F1 and Accuracy were chosen as the key evaluation measures. We stress the F1 measure because it displays the overall performance improvement of the final classifier better than Accuracy. A large number of experiments have been completed using the proposed model on the Reuters Corpus Volume 1 (RCV1), an important standard dataset in the field. The experiment results show that the proposed model significantly improves binary text classification performance in both F1 and Accuracy compared with three other influential baseline models.
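The three-way POS/NEG/BND separation around centroid vectors might be sketched as follows; the fixed distance margin below is an illustrative stand-in for CTTC's actual boundary construction and iterative retraining:

```python
def centroid(vectors):
    # Mean vector of a group of training samples.
    n = len(vectors)
    return [sum(v[j] for v in vectors) / n for j in range(len(vectors[0]))]

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def three_way_label(x, c_pos, c_neg, margin):
    # Objects nearly equidistant from both centroids fall into the
    # uncertainty boundary BND rather than being forced into POS or NEG.
    d_pos, d_neg = dist(x, c_pos), dist(x, c_neg)
    if abs(d_pos - d_neg) < margin:
        return "BND"
    return "POS" if d_pos < d_neg else "NEG"
```

In CTTC the BND group is then revisited by further centroid training rounds; here it simply marks the objects a single pass cannot classify with confidence.
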
Abstract:
OBJECTIVE Impaired regulation of the hypothalamic-pituitary-adrenal (HPA) axis and hyper-activity of this system have been described in patients with psychosis. Conversely, some psychiatric disorders such as post-traumatic stress disorder (PTSD) are characterised by HPA hypo-activity, which could be related to prior exposure to trauma. This study examined the cortisol response to the administration of low-dose dexamethasone in first-episode psychosis (FEP) patients and its relationship to childhood trauma. METHOD The low-dose (0.25 mg) Dexamethasone Suppression Test (DST) was performed in 21 neuroleptic-naive or minimally treated FEP patients and 20 healthy control participants. Childhood traumatic events were assessed in all participants using the Childhood Trauma Questionnaire (CTQ) and psychiatric symptoms were assessed in patients using standard rating scales. RESULTS FEP patients reported significantly higher rates of childhood trauma compared to controls (p = 0.001) and exhibited lower basal (a.m.) cortisol (p = 0.04) and an increased rate of cortisol hyper-suppression following dexamethasone administration compared to controls (33% (7/21) vs 5% (1/20), respectively; p = 0.04). There were no significant group differences in mean cortisol decline or percent cortisol suppression following the 0.25 mg DST. This study shows for the first time that a subset of patients experiencing their first episode of psychosis display enhanced cortisol suppression. CONCLUSIONS These findings suggest there may be distinct profiles of HPA axis dysfunction in psychosis which should be further explored.
Abstract:
BACKGROUND: Pituitary volume is currently measured as a marker of hypothalamic-pituitary-adrenal (HPA) hyperactivity in patients with psychosis despite suggestions of susceptibility to antipsychotics. Qualifying and quantifying the effect of atypical antipsychotics on the volume of the pituitary gland will determine whether this measure is valid as a future estimate of HPA-axis activation in psychotic populations. AIMS: To determine the qualitative and quantitative effect of atypical antipsychotic medications on pituitary gland volume in a first-episode psychosis population. METHOD: Pituitary volume was measured from T1-weighted magnetic resonance images in a group of 43 first-episode psychosis patients, the majority of whom were neuroleptic-naive, at baseline and after 3 months of treatment, to determine whether change in pituitary volume was correlated with cumulative dose of atypical antipsychotic medication. RESULTS: There was no significant baseline difference in pituitary volume between subjects and controls, or between neuroleptic-naive and neuroleptic-treated subjects. Over the follow-up period there was a negative correlation between percentage change in pituitary volume and cumulative 3-month dose of atypical antipsychotic (r=-0.37), i.e. volume increases were associated with lower doses and volume decreases with higher doses. CONCLUSIONS: Atypical antipsychotic medications may reduce pituitary gland volume in a dose-dependent manner, suggesting that atypical antipsychotic medication may support affected individuals to cope with stress associated with emerging psychotic disorders.
Abstract:
Increased permeability of blood vessels is an indicator for various injuries and diseases, including multiple sclerosis (MS), of the central nervous system. Nanoparticles have the potential to deliver drugs locally to sites of tissue damage, reducing the drug administered and limiting associated side effects, but efficient accumulation still remains a challenge. We developed peptide-functionalized polymeric nanoparticles to target blood clots and the extracellular matrix molecule nidogen, which are associated with areas of tissue damage. Using the induction of experimental autoimmune encephalomyelitis in rats to provide a model of MS associated with tissue damage and blood vessel lesions, all targeted nanoparticles were delivered systemically. In vivo data demonstrate enhanced accumulation of peptide-functionalized nanoparticles at the injury site compared to scrambled and naive controls, particularly for nanoparticles functionalized to target fibrin clots. This suggests that further investigations with drug-laden, peptide-functionalized nanoparticles might be of particular interest in the development of treatment strategies for MS.
Abstract:
This paper presents a novel vision-based underwater robotic system for the identification and control of Crown-Of-Thorns starfish (COTS) in coral reef environments. COTS have been identified as one of the most significant threats to Australia's Great Barrier Reef. These starfish literally eat coral, impacting large areas of reef and the marine ecosystem that depends on it. Evidence has suggested that land-based nutrient runoff has accelerated recent outbreaks of COTS requiring extensive use of divers to manually inject biological agents into the starfish in an attempt to control population numbers. Facilitating this control program using robotics is the goal of our research. In this paper we introduce a vision-based COTS detection and tracking system based on a Random Forest Classifier (RFC) trained on images from underwater footage. To track COTS with a moving camera, we embed the RFC in a particle filter detector and tracker where the predicted class probability of the RFC is used as an observation probability to weight the particles, and we use a sparse optical flow estimation for the prediction step of the filter. The system is experimentally evaluated in a realistic laboratory setup using a robotic arm that moves a camera at different speeds and heights over a range of real-size images of COTS in a reef environment.
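The detector-in-a-particle-filter idea described above can be sketched as a predict/weight/resample loop in which a classifier's predicted class probability serves as the observation weight for each particle. The Gaussian "detector" and zero-motion model below are toy stand-ins for the paper's Random Forest Classifier and sparse-optical-flow prediction:

```python
import math
import random

def particle_filter_step(particles, motion, classifier_prob, rng):
    # Predict: shift each particle by the inter-frame motion estimate
    # (sparse optical flow in the paper) plus diffusion noise.
    moved = [(x + motion[0] + rng.gauss(0, 1),
              y + motion[1] + rng.gauss(0, 1)) for x, y in particles]
    # Update: weight each particle by the classifier's predicted class
    # probability at its location, then resample in proportion.
    w = [classifier_prob(p) for p in moved]
    total = sum(w) or 1.0
    return rng.choices(moved, weights=[v / total for v in w], k=len(moved))

# Toy detector: class probability peaks at the (hypothetical) COTS location.
target = (10.0, 10.0)
def detector(p):
    return math.exp(-0.05 * ((p[0] - target[0]) ** 2 +
                             (p[1] - target[1]) ** 2))

rng = random.Random(0)
particles = [(rng.uniform(0, 12), rng.uniform(0, 12)) for _ in range(200)]
for _ in range(15):
    particles = particle_filter_step(particles, (0.0, 0.0), detector, rng)
mean_x = sum(p[0] for p in particles) / len(particles)
mean_y = sum(p[1] for p in particles) / len(particles)
```

After a few iterations the particle cloud concentrates where the detector assigns high class probability, which is exactly the role the RFC's output plays in the tracker.
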
Abstract:
Automated digital recordings are useful for large-scale temporal and spatial environmental monitoring. An important research effort has been the automated classification of calling bird species. In this paper we examine a related task, retrieval of birdcalls from a database of audio recordings, similar to a user supplied query call. Such a retrieval task can sometimes be more useful than an automated classifier. We compare three approaches to similarity-based birdcall retrieval using spectral ridge features and two kinds of gradient features, structure tensor and the histogram of oriented gradients. The retrieval accuracy of our spectral ridge method is 94% compared to 82% for the structure tensor method and 90% for the histogram of gradients method. Additionally, this approach potentially offers a more compact representation and is more computationally efficient.
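A whole-image histogram-of-oriented-gradients descriptor with cosine-similarity ranking gives the flavour of the retrieval comparison above; this coarse global histogram is a simplification of a real HOG (no cells, blocks, or bin interpolation), and the step-edge "images" are illustrative stand-ins for birdcall spectrograms:

```python
import math

def orientation_histogram(img, bins=8):
    # Coarse HOG-style descriptor: an L2-normalized histogram of
    # gradient orientations, weighted by gradient magnitude.
    hist = [0.0] * bins
    for i in range(1, len(img) - 1):
        for j in range(1, len(img[0]) - 1):
            gx = img[i][j + 1] - img[i][j - 1]
            gy = img[i + 1][j] - img[i - 1][j]
            mag = math.hypot(gx, gy)
            angle = math.atan2(gy, gx) % math.pi   # unsigned orientation
            hist[min(int(angle / math.pi * bins), bins - 1)] += mag
    norm = math.sqrt(sum(h * h for h in hist)) or 1.0
    return [h / norm for h in hist]

def retrieve(query_img, database):
    # Rank database entries by cosine similarity to the query descriptor.
    q = orientation_histogram(query_img)
    sim = lambda item: sum(a * b for a, b in
                           zip(q, orientation_histogram(item[1])))
    return [name for name, _ in sorted(database, key=sim, reverse=True)]

def edge_image(n, vertical):
    # n x n step edge: vertical=True creates horizontal gradients.
    return [[1.0 if (j if vertical else i) >= n // 2 else 0.0
             for j in range(n)] for i in range(n)]

db = [("vertical_edge", edge_image(8, True)),
      ("horizontal_edge", edge_image(8, False))]
ranking = retrieve(edge_image(8, True), db)
```
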