32 results for Representation. Rationalities. Race. Recognition. Culture. Classification. Ontology. Fetish.
in the Aston University Research Archive
Abstract:
Many object recognition techniques perform some flavour of point pattern matching between a model and a scene. Such points are usually selected by a feature detection algorithm that is robust to a class of image transformations, and a suitable descriptor is computed over them in order to obtain reliable matching. Moreover, some approaches take an additional step by casting the correspondence problem as a matching between graphs defined over feature points. The motivation is that the relational model should add discriminative power; however, the overall effectiveness strongly depends on the ability to build a graph that is stable with respect to both changes in object appearance and the spatial distribution of interest points. In fact, widely used graph-based representations have been shown to suffer from limitations, especially with respect to changes in the Euclidean organization of the feature points. In this paper we introduce a technique for building relational structures over corner points that does not depend on the spatial distribution of the features. © 2012 ICPR Org Committee.
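The sensitivity referred to above can be seen in the conventional baseline the paper contrasts itself with: a graph whose edges come from a Delaunay triangulation of detected corners, so its topology is tied to the Euclidean layout of the points. The sketch below (Python with OpenCV and SciPy; the detector and its settings are arbitrary assumptions, not the paper's choices) shows that baseline construction, not the spatially independent structure the paper proposes.

```python
# Illustrative sketch (not the paper's method): detect corner features and
# build a conventional Delaunay graph over them -- the kind of spatially
# dependent relational structure whose instability motivates the paper.
import cv2
import numpy as np
from scipy.spatial import Delaunay

def delaunay_graph(image_path, max_corners=200):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Corner detection; any robust detector could be substituted here.
    pts = cv2.goodFeaturesToTrack(img, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5)
    pts = pts.reshape(-1, 2)
    # Delaunay triangulation ties the graph topology to the Euclidean layout
    # of the points, so small shifts in point positions can change the edges.
    tri = Delaunay(pts)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    return pts, edges
```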
Abstract:
Clinical decision support systems (CDSSs) often base their knowledge and advice on human expertise. Knowledge representation needs to be in a format that can be easily understood by human users as well as supporting ongoing knowledge engineering, including evolution and consistency of knowledge. This paper reports on the development of an ontology specification for managing knowledge engineering in a CDSS for assessing and managing risks associated with mental-health problems. The Galatean Risk and Safety Tool, GRiST, represents mental-health expertise in the form of a psychological model of classification. The hierarchical structure was directly represented in the machine using an XML document. Functionality of the model and knowledge management were controlled using attributes in the XML nodes, with an accompanying paper manual specifying how end-user tools should behave when interfacing with the XML. This paper explains the advantages of using the Web Ontology Language (OWL) as the specification, details some of the issues and problems encountered in translating the psychological model to OWL, and shows how OWL benefits knowledge engineering. The conclusions are that OWL can have an important role in managing complex knowledge domains for systems based on human expertise without impeding the end-users' understanding of the knowledge base. The generic classification model underpinning GRiST makes it applicable to many decision domains, and the accompanying OWL specification facilitates its implementation.
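As a rough illustration of moving from an XML hierarchy to OWL, the following sketch uses rdflib to declare a tiny class hierarchy with an annotation standing in for the XML node attributes; the class names and namespace are hypothetical and are not taken from the actual GRiST specification.

```python
# Minimal, hypothetical sketch of expressing a small GRiST-style hierarchy
# in OWL with rdflib; the concept names below are invented for illustration.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/grist#")
g = Graph()
g.bind("ex", EX)

# Declare OWL classes for a fragment of a risk-assessment hierarchy.
for cls in ("RiskConcept", "Suicide", "SelfHarm"):
    g.add((EX[cls], RDF.type, OWL.Class))

# Hierarchical structure (formerly encoded as nested XML nodes).
g.add((EX.Suicide, RDFS.subClassOf, EX.RiskConcept))
g.add((EX.SelfHarm, RDFS.subClassOf, EX.RiskConcept))

# Node attributes that controlled tool behaviour can become annotations.
g.add((EX.Suicide, RDFS.comment,
       Literal("Expanded by default in end-user tools")))

print(g.serialize(format="turtle"))
```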
Abstract:
In the context of the needs of the Semantic Web and Knowledge Management, we consider the requirements placed on ontologies. The ontology as an artifact of knowledge representation is in danger of becoming a Chimera. We present a series of facts concerning the foundations on which automated ontology construction must build. We discuss a number of different functions that an ontology seeks to fulfill, as well as a wish list of ideal functions. Our objective is to stimulate discussion as to the real requirements of ontology engineering, and we take the view that only a selective and restricted set of requirements will enable the beast to fly.
Abstract:
Recently, we have seen an explosion of interest in ontologies as artifacts to represent human knowledge and as critical components in knowledge management, the Semantic Web, business-to-business applications, and several other application areas. Various research communities commonly assume that ontologies are the appropriate modeling structure for representing knowledge. However, little discussion has occurred regarding the actual range of knowledge an ontology can successfully represent.
Abstract:
Automatic Term Recognition (ATR) is a fundamental processing step preceding more complex tasks such as semantic search and ontology learning. Of the large number of methodologies available in the literature, only a few are able to handle both single-word and multi-word terms. In this paper we present a comparison of five such algorithms and propose a combined approach using a voting mechanism. We evaluated the six approaches using two different corpora and show that the voting algorithm performs best on one corpus (a collection of texts from Wikipedia) and less well on the GENIA corpus (a standard life-science corpus). This indicates that the choice and design of corpus have a major impact on the evaluation of term recognition algorithms. Our experiments also showed that single-word terms can be equally important and occupy a fairly large proportion of terms in certain domains; as a result, algorithms that ignore single-word terms may cause problems for tasks built on top of ATR. Effective ATR systems also need to take into account both unstructured text and structured aspects, which means that information extraction techniques need to be integrated into the term recognition process.
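The voting mechanism itself can be illustrated with a toy rank-based aggregation; the sketch below is a generic Borda-style combination over placeholder term lists, not a reproduction of the five algorithms evaluated in the paper.

```python
# Hedged sketch of combining term-ranking algorithms by voting
# (a simple Borda-style aggregation over best-first ranked lists).
from collections import defaultdict

def vote(rankings):
    """rankings: list of term lists, each ordered best-first by one ATR method."""
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for position, term in enumerate(ranking):
            scores[term] += n - position   # higher score for higher-ranked terms
    return sorted(scores, key=scores.get, reverse=True)

# Toy outputs from three hypothetical ATR methods,
# mixing single- and multi-word terms.
method_a = ["gene expression", "protein", "cell cycle"]
method_b = ["protein", "gene expression", "kinase"]
method_c = ["cell cycle", "protein", "gene expression"]
print(vote([method_a, method_b, method_c]))
```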
Abstract:
We describe a method of recognizing handwritten digits by fitting generative models that are built from deformable B-splines with Gaussian "ink generators" spaced along the length of the spline. The splines are adjusted using a novel elastic matching procedure based on the Expectation Maximization (EM) algorithm that maximizes the likelihood of the model generating the data. This approach has many advantages. (1) After identifying the model most likely to have generated the data, the system not only produces a classification of the digit but also a rich description of the instantiation parameters, which can yield information such as the writing style. (2) During the process of explaining the image, generative models can perform recognition-driven segmentation. (3) The method involves a relatively small number of parameters, and hence training is relatively easy and fast. (4) Unlike many other recognition schemes, it does not rely on some form of pre-normalization of input images, but can handle arbitrary scalings, translations and a limited degree of image rotation. We have demonstrated that our method of fitting models to images does not get trapped in poor local minima. The main disadvantage of the method is that it requires much more computation than more standard OCR techniques.
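A much simplified sketch of the elastic-matching mechanics follows: Gaussian "ink generators" are fitted to inked pixel coordinates with EM, with the beads allowed to move freely rather than being carried on a B-spline, so it illustrates only the EM updates, not the full deformable model.

```python
# Simplified numerical sketch of the elastic-matching idea: Gaussian "ink
# generators" (free-moving beads here, not B-spline control points) are
# fitted to inked pixel coordinates with EM.
import numpy as np

def fit_ink_generators(ink_xy, n_beads=8, sigma=1.5, n_iters=25):
    # Initialise beads along the vertical extent of the ink.
    ys = np.linspace(ink_xy[:, 1].min(), ink_xy[:, 1].max(), n_beads)
    beads = np.stack([np.full(n_beads, ink_xy[:, 0].mean()), ys], axis=1)
    for _ in range(n_iters):
        # E-step: responsibility of each bead for each inked pixel.
        d2 = ((ink_xy[:, None, :] - beads[None, :, :]) ** 2).sum(-1)
        resp = np.exp(-d2 / (2 * sigma ** 2))
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: move each bead to the responsibility-weighted mean of the ink.
        beads = (resp.T @ ink_xy) / (resp.sum(axis=0)[:, None] + 1e-12)
    return beads

# Usage: ink_xy is an (N, 2) array of (x, y) coordinates of inked pixels.
```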
Abstract:
Cells undergoing apoptosis in vivo are rapidly detected and cleared by phagocytes. Swift recognition and removal of apoptotic cells is important for normal tissue homeostasis, and failure in the underlying clearance mechanisms has pathological consequences associated with inflammatory and auto-immune diseases. Cell cultures in vitro usually lack the capacity for removal of non-viable cells because of the absence of phagocytes and, as such, fail to emulate the healthy in vivo micro-environment from which dead cells are absent. While a key objective in cell culture is to maintain viability at maximal levels, cell death is unavoidable and non-viable cells frequently contaminate cultures in significant numbers. Here we show that the presence of apoptotic cells in monoclonal antibody-producing hybridoma cultures has markedly detrimental effects on antibody productivity. Removal of apoptotic hybridoma cells by macrophages at the time of seeding resulted in a 100% improvement in antibody productivity that was, surprisingly to us, most pronounced late in the cultures. Furthermore, we were able to recapitulate this effect using novel super-paramagnetic Dead-Cert Nanoparticles to remove non-viable cells simply and effectively at culture seeding. These results (1) provide direct evidence that apoptotic cells have a profound influence on their non-phagocytic neighbors in culture and (2) demonstrate the effectiveness of a simple dead-cell removal strategy for improving antibody manufacture in vitro.
The transformational implementation of JSD process specifications via finite automata representation
Abstract:
Conventional structured methods of software engineering are often based on the use of functional decomposition coupled with the Waterfall development process model. This approach is argued to be inadequate for coping with the evolutionary nature of large software systems. Alternative development paradigms, including the operational paradigm and the transformational paradigm, have been proposed to address the inadequacies of this conventional view of software development, and these are reviewed. JSD is presented as an example of an operational approach to software engineering, and is contrasted with other well documented examples. The thesis shows how aspects of JSD can be characterised with reference to formal language theory and automata theory. In particular, it is noted that Jackson structure diagrams are equivalent to regular expressions and can be thought of as specifying corresponding finite automata. The thesis discusses the automatic transformation of structure diagrams into finite automata using an algorithm adapted from compiler theory, and then extends the technique to deal with areas of JSD which are not strictly formalisable in terms of regular languages. In particular, an elegant and novel method for dealing with so-called recognition (or parsing) difficulties is described. Various applications of the extended technique are described. They include a new method of automatically implementing the dismemberment transformation; an efficient way of implementing inversion in languages lacking a goto statement; and a new in-the-large implementation strategy.
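The correspondence between structure diagrams and regular languages can be illustrated with a small sketch that flattens a sequence/selection/iteration tree into a regular expression and lets Python's re module act as the finite automaton; the tree encoding and record names are invented for illustration and this is not the thesis's transformation algorithm.

```python
# Hedged sketch of the structure-diagram / regular-expression correspondence:
# a Jackson-style process structure built from sequence, selection and
# iteration is flattened to a regular expression over single-character
# "record types" and compiled to a finite automaton by Python's re module.
import re

def to_regex(node):
    kind, children = node[0], node[1:]
    if kind == "leaf":
        return re.escape(children[0])
    parts = [to_regex(c) for c in children]
    if kind == "seq":          # components read in order
        return "".join(parts)
    if kind == "sel":          # exactly one alternative
        return "(?:" + "|".join(parts) + ")"
    if kind == "itr":          # zero or more repetitions of the body
        return "(?:" + "".join(parts) + ")*"
    raise ValueError(kind)

# A process that reads a header 'H', then any number of 'D' or 'C' records,
# then a trailer 'T'.
diagram = ("seq", ("leaf", "H"),
                  ("itr", ("sel", ("leaf", "D"), ("leaf", "C"))),
                  ("leaf", "T"))
automaton = re.compile(to_regex(diagram))
print(bool(automaton.fullmatch("HDDCDT")))   # True
print(bool(automaton.fullmatch("HT")))       # True
print(bool(automaton.fullmatch("HX")))       # False
```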
Abstract:
An uptake system was developed using Caco-2 cell monolayers and the dipeptide, glycyl-[3H]L-proline, as a probe compound. Glycyl-[3H]L-proline uptake was via the di-/tripeptide transport system (DTS) and exhibited concentration-, pH- and temperature-dependency. Dipeptides inhibited uptake of the probe, and the design of the system allowed competitors to be ranked against one another with respect to affinity for the transporter. The structural features required to ensure or increase interaction with the DTS were defined by studying the effect of a series of glycyl-L-proline and angiotensin-converting enzyme (ACE)-inhibitor (SQ-29852) analogues on the uptake of the probe. The SQ-29852 structure was divided into six domains (A-F) and competitors were grouped into series depending on structural variations within specific regions. Domain A was found to prefer a hydrophobic function, such as a phenyl group, and was intolerant to positive charges and H+-acceptors and donors. SQ-29852 analogues were more tolerant of substitutions in the C domain, compared to glycyl-L-proline analogues, suggesting that interactions along the length of the SQ-29852 molecule may override the effects of substitutions in the C domain. SQ-29852 analogues showed a preference for a positive function, such as an amine group, in this region, but dipeptide structures favoured an uncharged substitution. Lipophilic substituents in domain D increased affinity of SQ-29852 analogues for the DTS. A similar effect was observed for ACE-NEP inhibitor analogues. Domain E, corresponding to the carboxyl group, was found to be tolerant of esterification for SQ-29852 analogues but not for dipeptides. Structural features which may increase interaction for one series of compounds may not have the same effect for another series, indicating that the presence of multiple recognition sites on a molecule may override the deleterious effect of any one change. Modifying current, poorly absorbed peptidomimetic structures to fit the proposed hypothetical model may improve oral bioavailability by increasing affinity for the DTS. The stereochemical preference of the transporter was explored using four series of compounds (SQ-29852, lysylproline, alanylproline and alanylalanine enantiomers). The L, L stereochemistry was the preferred conformation for all four series, agreeing with previous studies. However, D, D enantiomers were shown in some cases to be substrates for the DTS, although exhibiting a lower affinity than their L, L counterparts. All the ACE-inhibitors and β-lactam antibiotics investigated produced a degree of inhibition of the probe, and thus showed some affinity for the DTS. This contrasts with previous reports that found several ACE inhibitors to be absorbed via a passive process, suggesting that compounds are capable of binding to the transporter site and inhibiting the probe without being translocated into the cell. This was also shown to be the case for an oligodeoxynucleotide conjugated to a lipophilic group (vitamin E), and highlights the possibility that other orally administered drug candidates may exert non-specific effects on the DTS and possibly have a nutritional impact. Molecular modelling of selected ACE-NEP inhibitors revealed that the three carbonyl functions can be oriented in a similar direction, and this conformation was found to exist in a local energy-minimised state, indicating that the carbonyls may possibly be involved in hydrogen-bond formation with the binding site of the DTS.
Abstract:
Urban regions present some of the most challenging areas for the remote sensing community. Many different types of land cover have similar spectral responses, making them difficult to distinguish from one another. Traditional per-pixel classification techniques suffer particularly badly because they only use these spectral properties to determine a class, and no other properties of the image, such as context. This project presents the results of the classification of a deeply urban area of Dudley, West Midlands, using four methods: Supervised Maximum Likelihood, SMAP, ECHO and Unsupervised Maximum Likelihood. An accuracy assessment method is then developed to allow a fair representation of each procedure and a direct comparison between them. Subsequently, a classification procedure is developed that makes use of the context in the image, through a per-polygon classification. The imagery is broken up into a series of polygons extracted with the Marr-Hildreth zero-crossing edge detector. These polygons are then refined using a region-growing algorithm, and then classified according to the mean class of the fine polygons. The imagery produced by this technique is shown to be of better quality and of a higher accuracy than that of other conventional methods. Further refinements are suggested and examined to improve the aesthetic appearance of the imagery. Finally, a comparison with the results produced from a previous study of the James Bridge catchment, in Darleston, West Midlands, is made, showing that the polygon-classified ATM imagery performs significantly better than the Maximum Likelihood classified videography used in the initial study, despite the presence of geometric correction errors.
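Under simplifying assumptions, the per-polygon step might be sketched as follows: Laplacian-of-Gaussian zero crossings delimit regions, and each region takes the majority class of an existing per-pixel classification; multi-band handling, region growing and the refinements described above are omitted.

```python
# Illustrative sketch only: Marr-Hildreth (Laplacian-of-Gaussian) zero
# crossings split a single band into regions, and each region takes the
# majority class of a per-pixel classification (integer class labels).
import numpy as np
from scipy import ndimage

def per_polygon_classify(band, per_pixel_classes, sigma=2.0):
    # Laplacian of Gaussian, then mark sign changes (zero crossings) as edges.
    log = ndimage.gaussian_laplace(band.astype(float), sigma=sigma)
    edges = np.zeros_like(log, dtype=bool)
    edges[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])
    edges[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])
    # Connected non-edge areas stand in for the "polygons".
    regions, n_regions = ndimage.label(~edges)
    out = per_pixel_classes.copy()
    for r in range(1, n_regions + 1):
        mask = regions == r
        out[mask] = np.bincount(per_pixel_classes[mask]).argmax()
    return out
```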
Abstract:
This thesis presents an investigation into the application of methods of uncertain reasoning to the biological classification of river water quality. Existing biological methods for reporting river water quality are critically evaluated, and the adoption of a discrete biological classification scheme advocated. Reasoning methods for managing uncertainty are explained, in which the Bayesian and Dempster-Shafer calculi are cited as primary numerical schemes. Elicitation of qualitative knowledge on benthic invertebrates is described. The specificity of benthic response to changes in water quality leads to the adoption of a sensor model of data interpretation, in which a reference set of taxa provide probabilistic support for the biological classes. The significance of sensor states, including that of absence, is shown. Novel techniques of directly eliciting the required uncertainty measures are presented. Bayesian and Dempster-Shafer calculi were used to combine the evidence provided by the sensors. The performance of these automatic classifiers was compared with the expert's own discrete classification of sampled sites. Variations of sensor data weighting, combination order and belief representation were examined for their effect on classification performance. The behaviour of the calculi under evidential conflict and alternative combination rules was investigated. Small variations in evidential weight and the inclusion of evidence from sensors absent from a sample improved classification performance of Bayesian belief and support for singleton hypotheses. For simple support, inclusion of absent evidence decreased classification rate. The performance of Dempster-Shafer classification using consonant belief functions was comparable to Bayesian and singleton belief. Recommendations are made for further work in biological classification using uncertain reasoning methods, including the combination of multiple-expert opinion, the use of Bayesian networks, and the integration of classification software within a decision support system for water quality assessment.
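For readers unfamiliar with the calculus, a minimal sketch of Dempster's rule of combination over two invented mass functions is given below; the class labels and masses are illustrative and do not reflect the elicited values used in the thesis.

```python
# Minimal sketch of Dempster's rule of combination for two mass functions
# over water-quality classes; labels and masses are invented for illustration.
from itertools import product

def combine(m1, m2):
    """m1, m2: dicts mapping frozensets of classes (focal elements) to mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalise by the non-conflicting mass (Dempster's rule).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Evidence from two "sensor" taxa about classes A (clean) and B (polluted).
taxon1 = {frozenset("A"): 0.7, frozenset("AB"): 0.3}   # supports A
taxon2 = {frozenset("B"): 0.4, frozenset("AB"): 0.6}   # weakly supports B
print(combine(taxon1, taxon2))
```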
Abstract:
This thesis presents a thorough and principled investigation into the application of artificial neural networks to the biological monitoring of freshwater. It contains original ideas on the classification and interpretation of benthic macroinvertebrates, and aims to demonstrate their superiority over the biotic systems currently used in the UK to report river water quality. The conceptual basis of a new biological classification system is described, and a full review and analysis of a number of river data sets is presented. The biological classification is compared to the common biotic systems using data from the Upper Trent catchment. This data contained 292 expertly classified invertebrate samples identified to mixed taxonomic levels. The neural network experimental work concentrates on the classification of the invertebrate samples into biological class, where only a subset of the sample is used to form the classification. Other experimentation is conducted into the identification of novel input samples, the classification of samples from different biotopes and the use of prior information in the neural network models. The biological classification is shown to provide an intuitive interpretation of a graphical representation, generated without reference to the class labels, of the Upper Trent data. The selection of key indicator taxa is considered using three different approaches: one novel, one from information theory and one from classical statistical methods. Good indicators of quality class based on these analyses are found to be in good agreement with those chosen by a domain expert. The change in information associated with different levels of identification and enumeration of taxa is quantified. The feasibility of using neural network classifiers and predictors to develop numeric criteria for the biological assessment of sediment contamination in the Great Lakes is also investigated.
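One of the three selection approaches, the information-theoretic one, might be sketched roughly as ranking taxa by mutual information with the expert-assigned class and training a small network on the top-ranked subset; the data shapes, taxon counts and scikit-learn estimators below are assumptions for illustration, not the thesis's implementation.

```python
# Rough sketch of information-theoretic indicator selection followed by a
# small neural-network classifier; all data here are placeholders.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neural_network import MLPClassifier

# X: samples x taxa abundance matrix, y: expert biological class per sample.
rng = np.random.default_rng(0)
X = rng.poisson(3.0, size=(292, 20)).astype(float)   # placeholder abundances
y = rng.integers(0, 5, size=292)                     # placeholder classes

mi = mutual_info_classif(X, y, random_state=0)
indicator_order = np.argsort(mi)[::-1]               # most informative taxa first

# A small MLP trained on only the top-ranked subset of taxa.
top = indicator_order[:8]
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X[:, top], y)
```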
Abstract:
This research sets out to compare the values in British and German political discourse, especially the discourse of social policy, and to analyse their relationship to political culture through an analysis of the values of health care reform. The work proceeds from the hypothesis that the known differences in political culture between the two countries will be reflected in the values of political discourse, and takes a comparison of two major recent legislative debates on health care reform as a case study. The starting point in the first chapter is a brief comparative survey of the post-war political cultures of the two countries, including a brief account of the historical background to their development and an overview of explanatory theoretical models. From this are developed the expected contrasts in values in accordance with the hypothesis. The second chapter explains the basis for selecting the corpus texts and the contextual information which needs to be recorded to make a comparative analysis, including the context and content of the reform proposals which comprise the case study. It examines any contextual factors which may need to be taken into account in the analysis. The third and fourth chapters explain the analytical method, which is centred on the use of definition-based taxonomies of value items and value appeal methods to identify, on a sentence-by-sentence basis, the value items in the corpus texts and the methods used to make appeals to those value items. The third chapter is concerned with the classification and analysis of values, the fourth with the classification and analysis of value appeal methods. The fifth chapter will present and explain the results of the analysis, and the sixth will summarize the conclusions and make suggestions for further research.
Abstract:
This paper describes an innovative sensing approach allowing the automatic capture, discrimination, and classification of transients in gait. A walking platform is described which offers an alternative design to that of standard force plates, with advantages that include mechanical simplicity and less restriction on dimensions. The scope of the work is to investigate experimentally the sensitivity of the distributive tactile sensing method, with the potential to provide flexibility in gait assessment, including patient targeting and extension to a variety of ambulatory applications. Using infrared sensors to measure plate deflection, gait patterns are compared with stored templates using a pattern recognition algorithm. This information is input into a neural network to classify normal and affected walking events, with a classification accuracy of just under 90 per cent achieved. The system developed has potential applications in gait analysis and rehabilitation, where it can be used as a tool for early diagnosis of walking disorders or to determine changes between pre- and post-operative gait.
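A hedged sketch of that pipeline, as described in the abstract, is given below: deflection traces are scored against stored templates by normalised correlation, and the scores are passed to a small neural network; sensor counts, template handling and the classifier configuration are assumptions.

```python
# Illustrative sketch: template match scores from deflection traces feed a
# small neural network that labels walking events; data are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

def template_scores(signal, templates):
    """signal: 1-D deflection trace; templates: stored patterns of the same length."""
    s = (signal - signal.mean()) / (signal.std() + 1e-12)
    scores = []
    for t in templates:
        t_n = (t - t.mean()) / (t.std() + 1e-12)
        scores.append(float(np.dot(s, t_n)) / len(s))  # normalised correlation
    return np.array(scores)

# Training: rows of match scores for recorded events, labels 0 = normal gait,
# 1 = affected gait (placeholder data standing in for real plate recordings).
rng = np.random.default_rng(1)
X_train = rng.normal(size=(100, 5))
y_train = rng.integers(0, 2, size=100)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                    random_state=1).fit(X_train, y_train)
```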
Abstract:
The Irish have been relentlessly racialized in their diaspora settings, yet little historical work engages with “race” to understand Irish history on the island of Ireland. This article provides an interpretation of two key periods of Irish history—the second half of the sixteenth century and the period since 1996—through the lens of racialization. I argue that Ireland's history is exceptional in its capacity to reveal key elements of the history of the development of race as an idea and a set of practices. The English colonization of Ireland was underpinned by a form of racism reliant on linking bodies to unchanging, hierarchically stacked cultures, without reference to physical differences. For example, the putative unproductiveness of the Gaelic Irish not only placed them at a lower level of civilization than the industrious English but also authorized increasingly draconian ways of dealing with the Irish populace. The period since 1996, during which Ireland has become a country of immigration, illustrates how racism has undergone a transformation into the object of official state policies to eliminate it. Yet it flourishes as part of a globalized set of power relations that has brought immigrants to the developing Irish economy. In response to immigration, the state simultaneously exerts neoliberal controls and reduces pathways to citizenship through residence while passing antiracism legislation. Today, the indigenous nomadic Travellers and asylum seekers are the ones who are seen as pathologically unproductive. Irish history thus demonstrates that race is not only about color but also very much about culture. It also illustrates notable elements of the West's journey from racism without race to racism without racists.