14 results for classification and equivalence classes
in Aston University Research Archive
Abstract:
Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks, the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature. It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made about the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of the memory and data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to the sensitivity decrement.
Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
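The sensitivity/criterion distinction drawn above rests on standard signal detection (decision theory) indices. The sketch below, using hypothetical response counts rather than any data from the thesis, shows how perceptual efficiency (d') and the response criterion (c) are computed from hit and false-alarm rates, so that a stable d' alongside a rising c corresponds to the "stricter criterion without loss of efficiency" pattern described.

```python
# Standard signal detection indices: sensitivity d' and criterion c from raw
# response counts. The example counts are hypothetical, not thesis data.
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion_c) from raw response counts."""
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                # perceptual efficiency (sensitivity)
    criterion = -0.5 * (z_hit + z_fa)     # strictness of the response criterion
    return d_prime, criterion

# Hypothetical first vs. last watch period: sensitivity stays roughly level
# while the criterion becomes stricter (fewer hits *and* fewer false alarms).
print(sdt_indices(hits=40, misses=10, false_alarms=20, correct_rejections=80))
print(sdt_indices(hits=32, misses=18, false_alarms=8, correct_rejections=92))
```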
Abstract:
The aims of the project were twofold: 1) to investigate classification procedures for remotely sensed digital data, in order to develop modifications to existing algorithms and propose novel classification procedures; and 2) to investigate and develop algorithms for contextual enhancement of classified imagery in order to increase classification accuracy. The following classifiers were examined: box, decision tree, minimum distance and maximum likelihood. In addition to these, the following algorithms were developed during the course of the research: deviant distance, look-up table and an automated decision tree classifier using expert systems technology. Clustering techniques for unsupervised classification were also investigated. Contextual enhancements investigated were: mode filters, small area replacement and Wharton's CONAN algorithm. Additionally, methods for noise- and edge-based declassification and contextual reclassification, non-probabilistic relaxation and relaxation based on Markov chain theory were developed. The advantages of per-field classifiers and Geographical Information Systems were investigated. The conclusions presented suggest suitable combinations of classifier and contextual enhancement, given user accuracy requirements and time constraints. These were then tested for validity using a different data set. A brief examination of the utility of the recommended contextual algorithms for reducing the effects of data noise was also carried out.
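As a rough illustration of two of the techniques named in this abstract, the sketch below implements a minimum-distance-to-means classifier and a 3x3 mode (majority) filter as a contextual enhancement; the synthetic band values, class means and window size are assumptions for the example, not the algorithms as implemented in the thesis.

```python
# Minimum-distance classification of multispectral pixels plus a mode filter
# applied to the resulting class image. All data below are synthetic.
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """pixels: (n, bands); class_means: (k, bands) -> (n,) nearest-mean class indices."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

def mode_filter(class_image, size=3):
    """Replace each label by the majority label in a size x size neighbourhood."""
    pad = size // 2
    padded = np.pad(class_image, pad, mode="edge")
    out = np.empty_like(class_image)
    for i in range(class_image.shape[0]):
        for j in range(class_image.shape[1]):
            window = padded[i:i + size, j:j + size].ravel()
            out[i, j] = np.bincount(window).argmax()
    return out

# Toy scene: 2 spectral bands, 3 land-cover classes, 10 x 10 pixels.
rng = np.random.default_rng(0)
means = np.array([[30.0, 80.0], [120.0, 60.0], [200.0, 150.0]])   # class means per band
truth = rng.integers(0, 3, size=(10, 10))
scene = means[truth] + rng.normal(0.0, 25.0, size=(10, 10, 2))
labels = minimum_distance_classify(scene.reshape(-1, 2), means).reshape(10, 10)
smoothed = mode_filter(labels)     # contextual enhancement of the classified image
print("per-pixel accuracy after smoothing:", (smoothed == truth).mean())
```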
Abstract:
This thesis presents a thorough and principled investigation into the application of artificial neural networks to the biological monitoring of freshwater. It contains original ideas on the classification and interpretation of benthic macroinvertebrates, and aims to demonstrate their superiority over the biotic systems currently used in the UK to report river water quality. The conceptual basis of a new biological classification system is described, and a full review and analysis of a number of river data sets is presented. The biological classification is compared to the common biotic systems using data from the Upper Trent catchment. These data contained 292 expertly classified invertebrate samples identified to mixed taxonomic levels. The neural network experimental work concentrates on the classification of the invertebrate samples into biological class, where only a subset of the sample is used to form the classification. Other experimentation is conducted into the identification of novel input samples, the classification of samples from different biotopes and the use of prior information in the neural network models. The biological classification is shown to provide an intuitive interpretation of a graphical representation, generated without reference to the class labels, of the Upper Trent data. The selection of key indicator taxa is considered using three different approaches: one novel, one from information theory and one from classical statistics. Good indicators of quality class based on these analyses are found to be in good agreement with those chosen by a domain expert. The change in information associated with different levels of identification and enumeration of taxa is quantified. The feasibility of using neural network classifiers and predictors to develop numeric criteria for the biological assessment of sediment contamination in the Great Lakes is also investigated.
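As an illustration of the information-theoretic route to indicator taxa mentioned in the abstract (not the thesis method itself), the sketch below ranks hypothetical taxa by the mutual information between their presence/absence and an expert-assigned biological quality class.

```python
# Rank taxa as quality-class indicators by mutual information. The taxon names,
# presence/absence matrix and class labels are hypothetical examples.
import numpy as np
from sklearn.metrics import mutual_info_score

taxa = ["Baetis", "Gammarus", "Asellus", "Chironomus"]
# Rows: samples; columns: presence (1) / absence (0) of each taxon.
X = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
quality_class = np.array([1, 1, 2, 3, 3])   # expert biological class per sample

scores = {t: mutual_info_score(X[:, i], quality_class) for i, t in enumerate(taxa)}
for taxon, mi in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{taxon}: {mi:.3f} nats")
```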
Abstract:
Conventional feed-forward neural networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic with a rough interpretation as the number of data points fitted by the model. Rather than seeking optimal descriptions, the cost function forms minimal descriptions in a naive way for computational convenience; it is therefore called the Naive Description Length cost function. Finding minimum description models is shown to be closely related to the identification of clusters in the data. As a consequence, the minimum of this cost function approximates the most probable mode of the data, whereas the sum-of-squares cost function approximates the mean. The new cost function is shown to provide information about the structure of the data, by inspecting the dependence of the error on the amount of regularisation. This structure provides a method of selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem, and on a number of classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time series problems. Description length principles are also used in a similar fashion to derive a regulariser to control network complexity.
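The abstract does not give the form of the Naive Description Length cost, so the sketch below only illustrates the mean- versus mode-seeking distinction it draws: on a two-valued (multi-valued) target, the sum-of-squares minimiser sits at the uninformative average of the branches, while a cost that simply counts the points "fitted" within a narrow window selects one of the modes. The window cost is an illustrative stand-in, not the NDL cost itself.

```python
# Mean-seeking vs. mode-seeking costs for a constant model fitted to a bimodal
# target. The window-count cost below is an illustrative stand-in, not the NDL.
import numpy as np

rng = np.random.default_rng(1)
# Multi-valued target: two equally likely branches around 0.2 and 0.8.
t = np.concatenate([0.2 + 0.02 * rng.normal(size=200),
                    0.8 + 0.02 * rng.normal(size=200)])

candidates = np.linspace(0.0, 1.0, 1001)
sum_of_squares = ((t[None, :] - candidates[:, None]) ** 2).sum(axis=1)
# Count-like cost: (negative) number of points falling inside a narrow window
# around the prediction -- minimising it chooses a branch, not the average.
width = 0.05
window_cost = -(np.abs(t[None, :] - candidates[:, None]) < width).sum(axis=1)

print("sum-of-squares minimiser:", candidates[sum_of_squares.argmin()])  # ~0.5 (mean)
print("window-count minimiser:  ", candidates[window_cost.argmin()])     # ~0.2 or ~0.8 (a mode)
```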
Abstract:
Analyzing geographical patterns by collocating events, objects or their attributes has a long history in surveillance and monitoring, and is particularly applied in environmental contexts such as ecology or epidemiology. The identification of patterns or structures at some scales can be addressed using spatial statistics, particularly marked point process methodologies. Classification and regression trees are also related to this goal of finding "patterns" by deducing the hierarchy of influence of variables on a dependent outcome. Such variable selection methods have been applied to spatial data, but often without explicitly acknowledging the spatial dependence. Many methods routinely used in exploratory point pattern analysis are second-order statistics, used in a univariate context, though there is also a wide literature on modelling methods for multivariate point pattern processes. This paper proposes an exploratory approach for multivariate spatial data using higher-order statistics built from co-occurrences of events or marks given by the point processes. A spatial entropy measure, derived from these multinomial distributions of co-occurrences at a given order, constitutes the basis of the proposed exploratory methods. © 2010 Elsevier Ltd.
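A minimal sketch of the kind of co-occurrence entropy described here, under assumed choices (order k = 2 nearest neighbours, toy coordinates and categorical marks): record the unordered marks of each event's neighbours, form the multinomial distribution of those co-occurrence patterns, and take its Shannon entropy.

```python
# Entropy of mark co-occurrences at a given neighbour order for a marked point
# pattern. The order k and the toy data are illustrative assumptions.
from collections import Counter
import numpy as np
from scipy.spatial import cKDTree

def cooccurrence_entropy(coords, marks, k=2):
    """Shannon entropy (nats) of the distribution of mark co-occurrences at order k."""
    tree = cKDTree(coords)
    # Query k+1 neighbours because the nearest neighbour of a point is itself.
    _, idx = tree.query(coords, k=k + 1)
    patterns = [tuple(sorted(marks[j] for j in neighbours[1:])) for neighbours in idx]
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(2)
coords = rng.uniform(0, 1, size=(100, 2))       # event locations
marks = rng.choice(["A", "B", "C"], size=100)   # categorical marks (e.g. species)
print(cooccurrence_entropy(coords, marks, k=2))
```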
Herbal medicines: physician's recommendation and clinical evaluation of St. John's Wort for depression
Abstract:
Why some physicians recommend herbal medicines while others do not is not well understood. We undertook a survey designed to identify factors which predict recommendation of herbal medicines by physicians in Malaysia. About a third (206 out of 626) of the physicians working at the University of Malaya Medical Centre were interviewed face-to-face, using a structured questionnaire. Physicians were asked about their personal use of, recommendation of, perceived interest in, and usefulness and safety of herbal medicines. Using logistic regression modelling we identified personal use, general interest, interest in receiving training, race and higher level of medical training as significant predictors of recommendation. St. John's wort is one of the most widely used herbal remedies. It is also probably the most widely evaluated herbal remedy, with no fewer than 57 randomised controlled trials. Evidence from the depression trials suggests that St. John's wort is more effective than placebo, while its comparative efficacy to conventional antidepressants is not well established. We updated previous meta-analyses of St. John's wort, described the characteristics of the included trials, applied methods of data imputation and transformation for incomplete trial data and examined sources of heterogeneity in the design and results of those trials. Thirty randomised controlled trials, which were heterogeneous in design, were identified. Our meta-analysis showed that St. John's wort was significantly more effective than placebo [pooled RR 1.90 (1.54 to 2.35)] and [pooled WMD 4.09 (2.33 to 5.84)]. However, the remedy was similar to conventional antidepressants in its efficacy [pooled RR 1.01 (0.93 to 1.10)] and [pooled WMD 0.18 (-0.66 to 1.02)]. Subgroup analyses of the placebo-controlled trials suggested that use of different diagnostic classifications at the inclusion stage led to different estimates of effect. Similarly, a significant difference in the estimates of efficacy was observed when trials were categorised according to length of follow-up. Confounding between the variables diagnostic classification and length of trial was shown by loglinear analysis. Despite extensive study, there is still no consensus on how effective St. John's wort is in depression. However, most experts would agree that it has some effect. Our meta-analysis highlights the problems associated with the clinical evaluation of herbal medicines when the active ingredients are poorly defined or unknown. The problem is compounded when the target disease (e.g. depression) is also difficult to define and different instruments are available to diagnose and evaluate it.
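For readers unfamiliar with pooled estimates such as "RR 1.90 (1.54 to 2.35)", the sketch below shows a generic fixed-effect inverse-variance pooling of trial relative risks; the three trials are made-up numbers, not data from this meta-analysis, and the authors' actual pooling model may differ (e.g. random effects).

```python
# Generic fixed-effect inverse-variance pooling of relative risks across trials.
# The trial counts below are invented for illustration only.
import numpy as np

# (events_treatment, n_treatment, events_placebo, n_placebo) per trial
trials = [(30, 50, 18, 50), (42, 80, 25, 78), (25, 40, 15, 42)]

log_rr, weights = [], []
for a, n1, c, n2 in trials:
    rr = (a / n1) / (c / n2)
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2          # variance of log RR
    log_rr.append(np.log(rr))
    weights.append(1 / var)

log_rr, weights = np.array(log_rr), np.array(weights)
pooled = (weights * log_rr).sum() / weights.sum()   # weighted mean of log RRs
se = np.sqrt(1 / weights.sum())
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled RR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f} to {np.exp(hi):.2f})")
```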
Abstract:
Tuberculosis is one of the most devastating diseases in the world, primarily due to several decades of neglect and the emergence of multidrug-resistant (MDR) strains of M. tuberculosis, together with the increased incidence of disseminated infections produced by other mycobacteria in AIDS patients. This has prompted the search for new antimycobacterial drugs. A series of pyridine-2-, pyridine-3-, pyridine-4-, pyrazine- and quinoline-2-carboxamidrazone derivatives and new classes of carboxamidrazone were prepared in an automated fashion and by traditional synthesis. Over nine hundred synthesized compounds were screened for their antimycobacterial activity against M. fortuitum (NCTC 10394) as a surrogate for M. tuberculosis. The new classes of amidrazones were also screened against M. tuberculosis H37Rv and for antimicrobial activities against various bacteria. Fifteen tested compounds were found to provide 90-100% inhibition of growth of M. tuberculosis H37Rv in the primary screen at 6.25 μg mL-1. The most active compound in the carboxamidrazone amide series had an MIC value of 0.1-2 μg mL-1 against M. fortuitum. The enzyme dihydrofolate reductase (DHFR) has been a drug-design target for decades. Blocking the enzymatic activity of DHFR is a key element in the treatment of many diseases, including cancer and bacterial and protozoal infections. The X-ray structures of DHFR from M. tuberculosis and human DHFR were found to have differences in the substrate binding site. The presence of a glycerol molecule in the X-ray structure of M. tuberculosis DHFR provided an opportunity to design new antifolates. The new antifolates described herein were designed to retain the pharmacophore of pyrimethamine (2,4-diamino-5-(4-chlorophenyl)-6-ethylpyrimidine), but encompassing a range of polar groups that might interact with the M. tuberculosis DHFR glycerol binding pockets. Finally, the research described in this thesis contributes to the preparation of molecularly imprinted polymers for the recognition of 2,4-diaminopyrimidine as the binding target. The formation of hydrogen bonding between the model functional monomer 5-(4-tert-butylbenzylidene)-pyrimidine-2,4,6-trione and 2,4-diaminopyrimidine in the pre-polymerisation stage was verified by 1H-NMR studies. Having proven that 2,4-diaminopyrimidine interacts strongly with the model 5-(4-tert-butylbenzylidene)-pyrimidine-2,4,6-trione, 2,4-diaminopyrimidine-imprinted polymers were prepared using a novel cyclobarbital-derived functional monomer, acrylic acid 4-(2,4,6-trioxo-tetrahydro-pyrimidin-5-ylidenemethyl)phenyl ester, capable of multiple hydrogen bond formation with 2,4-diaminopyrimidine. The recognition properties of the respective polymers toward the template and other test compounds were evaluated by fluorescence. The results demonstrate that the polymers showed dose-dependent enhancement of fluorescence emissions. In addition, the results also indicate that the synthesized MIPs have higher 2,4-diaminopyrimidine binding ability compared with the corresponding non-imprinted polymers.
Abstract:
This research sets out to compare the values in British and German political discourse, especially the discourse of social policy, and to analyse their relationship to political culture through an analysis of the values of health care reform. The work proceeds from the hypothesis that the known differences in political culture between the two countries will be reflected in the values of political discourse, and takes a comparison of two major recent legislative debates on health care reform as a case study. The starting point in the first chapter is a brief comparative survey of the post-war political cultures of the two countries, including a brief account of the historical background to their development and an overview of explanatory theoretical models. From this are developed the expected contrasts in values in accordance with the hypothesis. The second chapter explains the basis for selecting the corpus texts and the contextual information which needs to be recorded to make a comparative analysis, including the context and content of the reform proposals which comprise the case study. It examines any contextual factors which may need to be taken into account in the analysis. The third and fourth chapters explain the analytical method, which is centred on the use of definition-based taxonomies of value items and value appeal methods to identify, on a sentence-by-sentence basis, the value items in the corpus texts and the methods used to make appeals to those value items. The third chapter is concerned with the classification and analysis of values, the fourth with the classification and analysis of value appeal methods. The fifth chapter will present and explain the results of the analysis, and the sixth will summarize the conclusions and make suggestions for further research.
Abstract:
Elevated free fatty acids (FFA) are a feature of ageing and a risk factor for metabolic disorders such as cardiovascular disease (CVD) and type-2 diabetes (T2D). Elevated FFA contribute to insulin resistance, production of inflammatory cytokines and expression of adhesion molecules on immune cells and endothelial cells, all risk factors for CVD and T2D. The molecular mechanisms of FFA effects on monocyte function, and how the FFA phenotype is affected by healthy ageing, remain poorly understood. This thesis evaluated the effects of the two major FFA in plasma, oleate and palmitate, on monocyte viability, cell surface antigen expression and inflammatory activation in THP-1 monocytes. Palmitate, but not oleate, increased cell surface expression of CD11b and CD36 after 24 h, independent of mitochondrial superoxide but dependent on de novo synthesis of ceramides. LPS-mediated cytokine production in THP-1 monocytes was enhanced and decreased following incubation with palmitate and oleate, respectively. In a model of monocyte-macrophage differentiation, palmitate induced a pro-inflammatory macrophage phenotype which required de novo ceramide synthesis, whilst oleate reduced cytokine secretion, producing a macrophage with enhanced clearance of apoptotic cells. Plasma fatty acid analysis in young and mid-life populations revealed age-related increases in both the SFA and MUFA classes, especially the medium- and very-long-chain C14 and C24 fatty acids, which were accompanied by increases in the estimated activities of desaturase enzymes. Changes were independently correlated with increased PBMC CD11b, plasma TNF-α and insulin resistance. In conclusion, the pro-atherogenic phenotype, enhanced LPS responses in monocytes and pro-inflammatory macrophage phenotype in the presence of palmitate, but not oleate, are reliant upon de novo ceramide synthesis. Age-related increases in inflammation and cell surface integrin expression are related to increases in both the MUFA and SFA classes, which may in part be explained by altered de novo fatty acid synthesis.
Abstract:
Understanding the pharmacological principles and safe use of drugs is just as important in surgical practice as in any other medical specialty. With an ageing population with often multiple comorbidities and medications, as well as an expanding list of new pharmacological treatments, it is important that surgeons understand the implications of therapeutic drugs on their daily practice. The increasing emphasis on high quality and safe patient care demands that doctors are aware of preventable adverse drug reactions (ADRs) and interactions, try to minimize the potential for medication errors, and consider the benefits and harms of medicines in their patients. This chapter examines these aspects from the view of surgical practice and expands on the implications of some of the most common medical conditions and drug classes in the perioperative period. The therapeutic care of surgical patients is obvious in many circumstances – for example, antibacterial prophylaxis, thromboprophylaxis, and postoperative analgesia. However, the careful examination of other drug therapies is often critical not only to the sustained treatment of the associated medical conditions but to the perioperative outcomes of patients undergoing surgery. The benefit–harm balance of many therapies may be fundamentally altered by the stress of an operation in one direction or the other; this is not a decision that should wait until the anaesthetist arrives for a preoperative assessment or one that should be left to junior medical or nursing staff on the ward.
Abstract:
Support Vector Machines (SVMs) are widely used classifiers for detecting physiological patterns in Human-Computer Interaction (HCI). Their success is due to their versatility, robustness and the wide availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameter selection are reported, making it impossible to reproduce the study analysis and results. In order to perform an optimized classification and report a proper description of the results, it is necessary to have a comprehensive critical overview of the application of SVM. The aim of this paper is to provide a review of the usage of SVM in the determination of brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant literature implementations. Furthermore, details concerning the reviewed papers are listed in tables, and statistics of SVM use in the literature are presented. The suitability of SVM for HCI is discussed and critical comparisons with other classifiers are reported.
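To make the reproducibility point concrete, the sketch below shows the kind of SVM pipeline the review is concerned with, applied to synthetic "EEG band-power" features, with the kernel, C and gamma selected by cross-validated grid search and reported explicitly; the feature dimensions and parameter grid are illustrative assumptions, not taken from any reviewed study.

```python
# SVM classification of synthetic physiological feature vectors, with the
# hyperparameters chosen by cross-validation and printed for reproducibility.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# 200 trials x 8 features (e.g. band powers from a few channels), two classes.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.3 * rng.normal(size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(pipeline,
                    param_grid={"svc__C": [0.1, 1, 10],
                                "svc__gamma": ["scale", 0.01, 0.1]},
                    cv=5)
grid.fit(X_train, y_train)
print("selected parameters:", grid.best_params_)   # report these for reproducibility
print("test accuracy:", grid.score(X_test, y_test))
```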