817 results for Representation. Rationalities. Race. Recognition. Culture. Classification. Ontology. Fetish.
Abstract:
A reliable perception of the real world is a key feature for an autonomous vehicle and for Advanced Driver Assistance Systems (ADAS). Obstacle detection (OD) is one of the main components for the correct reconstruction of the dynamic world. Historical approaches based on stereo vision and other 3D perception technologies (e.g. LIDAR) have been adapted first to ADAS and later to autonomous ground vehicles, providing excellent results. Obstacle detection is a very broad field, and a great deal of work has been devoted to it in recent years. Academic research has clearly established the essential role of these systems in realizing active safety systems for accident prevention, reflecting also the innovative systems introduced by industry. These systems need to accurately assess situational criticalities and, simultaneously, the driver's awareness of them; this requires obstacle detection algorithms that are reliable and accurate, providing real-time output, a stable and robust representation of the environment, and estimates that are independent of lighting and weather conditions. Initial systems relied on a single exteroceptive sensor (e.g. radar or laser for ACC, and a camera for LDW) in addition to proprioceptive sensors such as wheel speed and yaw rate sensors. Current systems, however, such as ACC operating over the entire speed range or autonomous braking for collision avoidance, require multiple sensors, since no single sensor can meet these requirements on its own. This has led the community to combine sensors in order to exploit the benefits of each. Pedestrian and vehicle detection are among the major thrusts in the assessment of situational criticalities and remain an active area of research. ADASs are the most prominent use case of pedestrian and vehicle detection. Vehicles should be equipped with sensing capabilities able to detect and act on objects in dangerous situations where the driver would not be able to avoid a collision. A full ADAS or autonomous vehicle would, with regard to pedestrians and vehicles, include not only detection but also tracking, orientation, intent analysis, and collision prediction. The system presented here detects obstacles using a probabilistic occupancy grid built from a multi-resolution disparity map. Obstacle classification is based on an AdaBoost SoftCascade trained on Aggregate Channel Features. A final stage of tracking and fusion guarantees the stability and robustness of the result.
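As an illustration only (not the system described above), the following sketch shows one common way of accumulating a probabilistic occupancy grid from a dense disparity map; the calibration parameters, grid resolution and evidence increment are all hypothetical placeholders.

```python
# Minimal sketch, assuming a calibrated stereo rig with made-up parameters.
import numpy as np

def occupancy_grid_from_disparity(disparity, focal_px=700.0, baseline_m=0.12,
                                  cx=320.0, cy=240.0, cam_h_m=1.2,
                                  cell_m=0.2, grid_shape=(200, 200)):
    """Accumulate evidence of obstacles in a bird's-eye-view occupancy grid."""
    v, u = np.nonzero(disparity > 0)              # valid disparity pixels
    d = disparity[v, u].astype(np.float64)
    z = focal_px * baseline_m / d                 # forward distance (m)
    x = (u - cx) * z / focal_px                   # lateral offset (m)
    y = (v - cy) * z / focal_px                   # vertical offset, down positive (m)
    above_ground = y < (cam_h_m - 0.3)            # keep points well above the road
    rows = (z[above_ground] / cell_m).astype(int)
    cols = (x[above_ground] / cell_m + grid_shape[1] // 2).astype(int)
    ok = (rows >= 0) & (rows < grid_shape[0]) & (cols >= 0) & (cols < grid_shape[1])
    log_odds = np.zeros(grid_shape)
    np.add.at(log_odds, (rows[ok], cols[ok]), 0.1)    # fixed evidence per 3-D point
    return 1.0 / (1.0 + np.exp(-log_odds))            # occupancy probability

# e.g. grid = occupancy_grid_from_disparity(disparity_map)
```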
Abstract:
In the context of the needs of the Semantic Web and Knowledge Management, we consider what the requirements of ontologies are. The ontology, as an artifact of knowledge representation, is in danger of becoming a Chimera. We present a series of facts concerning the foundations on which automated ontology construction must build. We discuss a number of different functions that an ontology seeks to fulfill, together with a wish list of ideal functions. Our objective is to stimulate discussion as to the real requirements of ontology engineering, and we take the view that only a selective and restricted set of requirements will enable the beast to fly.
Abstract:
Recently, we have seen an explosion of interest in ontologies as artifacts to represent human knowledge and as critical components in knowledge management, the semantic Web, business-to-business applications, and several other application areas. Various research communities commonly assume that ontologies are the appropriate modeling structure for representing knowledge. However, little discussion has occurred regarding the actual range of knowledge an ontology can successfully represent.
Abstract:
Automatic Term Recognition (ATR) is a fundamental processing step preceding more complex tasks such as semantic search and ontology learning. Of the large number of methodologies available in the literature, only a few are able to handle both single- and multi-word terms. In this paper we present a comparison of five such algorithms and propose a combined approach using a voting mechanism. We evaluated the six approaches using two different corpora and show that the voting algorithm performs best on one corpus (a collection of texts from Wikipedia) and less well on the Genia corpus (a standard life-science corpus). This indicates that the choice and design of the corpus have a major impact on the evaluation of term recognition algorithms. Our experiments also showed that single-word terms can be equally important and make up a fairly large proportion of terms in certain domains; as a result, algorithms that ignore single-word terms may cause problems for tasks built on top of ATR. Effective ATR systems also need to take into account both the unstructured text and its structured aspects, which means that information extraction techniques need to be integrated into the term recognition process.
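As a rough illustration of how a voting mechanism can combine the ranked term lists produced by several ATR algorithms, the following sketch sums reciprocal-rank votes; the scoring scheme and the `vote` helper are assumptions for illustration, not the paper's actual method.

```python
# Illustrative sketch: combining ranked ATR outputs by reciprocal-rank voting.
from collections import defaultdict

def vote(rankings, top_k=None):
    """rankings: list of term lists, each ordered best-first by one algorithm.
    Each algorithm contributes a score of 1/rank per term; scores are summed."""
    scores = defaultdict(float)
    for ranked_terms in rankings:
        for rank, term in enumerate(ranked_terms, start=1):
            scores[term] += 1.0 / rank
    combined = sorted(scores, key=scores.get, reverse=True)
    return combined if top_k is None else combined[:top_k]

# e.g. vote([["cell line", "protein"], ["protein", "gene expression"]])
```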
Abstract:
We describe a method of recognizing handwritten digits by fitting generative models that are built from deformable B-splines with Gaussian ``ink generators'' spaced along the length of the spline. The splines are adjusted using a novel elastic matching procedure based on the Expectation Maximization (EM) algorithm that maximizes the likelihood of the model generating the data. This approach has many advantages. (1) After identifying the model most likely to have generated the data, the system produces not only a classification of the digit but also a rich description of the instantiation parameters, which can yield information such as the writing style. (2) During the process of explaining the image, generative models can perform recognition-driven segmentation. (3) The method involves a relatively small number of parameters, so training is relatively easy and fast. (4) Unlike many other recognition schemes, it does not rely on some form of pre-normalization of input images, but can handle arbitrary scalings, translations and a limited degree of image rotation. We have demonstrated that our method of fitting models to images does not get trapped in poor local minima. The main disadvantage of the method is that it requires much more computation than more standard OCR techniques.
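The following is a highly simplified sketch of the elastic-matching idea: Gaussian ink generators are fitted to ink pixels with EM. In the actual method the generators are constrained to lie along a deformable B-spline; here, for brevity, the generator means are updated freely, so this is only an approximation of the technique.

```python
# Simplified EM fit of Gaussian "ink generators" to ink pixels (no spline constraint).
import numpy as np

def em_fit(ink_xy, means, sigma=1.5, n_iters=20):
    """ink_xy: (N, 2) ink pixel coordinates; means: (K, 2) initial generator centres."""
    means = means.copy()
    for _ in range(n_iters):
        # E-step: responsibility of each generator for each ink pixel
        d2 = ((ink_xy[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        resp = np.exp(-d2 / (2 * sigma ** 2))
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: move each generator to the responsibility-weighted mean of its ink
        w = resp.sum(axis=0)[:, None] + 1e-12
        means = (resp.T @ ink_xy) / w
    return means
```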
Abstract:
Cells undergoing apoptosis in vivo are rapidly detected and cleared by phagocytes. Swift recognition and removal of apoptotic cells is important for normal tissue homeostasis, and failure of the underlying clearance mechanisms has pathological consequences associated with inflammatory and auto-immune diseases. Cell cultures in vitro usually lack the capacity to remove non-viable cells because phagocytes are absent and, as such, fail to emulate the healthy in vivo micro-environment from which dead cells are absent. While a key objective in cell culture is to maintain viability at maximal levels, cell death is unavoidable and non-viable cells frequently contaminate cultures in significant numbers. Here we show that the presence of apoptotic cells in monoclonal antibody-producing hybridoma cultures has markedly detrimental effects on antibody productivity. Removal of apoptotic hybridoma cells by macrophages at the time of seeding resulted in a 100% improvement in antibody productivity that was, to our surprise, most pronounced late in the cultures. Furthermore, we were able to recapitulate this effect using novel super-paramagnetic Dead-Cert Nanoparticles to remove non-viable cells simply and effectively at culture seeding. These results (1) provide direct evidence that apoptotic cells have a profound influence on their non-phagocytic neighbors in culture and (2) demonstrate the effectiveness of a simple dead-cell removal strategy for improving antibody manufacture in vitro.
The transformational implementation of JSD process specifications via finite automata representation
Abstract:
Conventional structured methods of software engineering are often based on the use of functional decomposition coupled with the Waterfall development process model. This approach is argued to be inadequate for coping with the evolutionary nature of large software systems. Alternative development paradigms, including the operational paradigm and the transformational paradigm, have been proposed to address the inadequacies of this conventional view of software development, and these are reviewed. JSD is presented as an example of an operational approach to software engineering and is contrasted with other well-documented examples. The thesis shows how aspects of JSD can be characterised with reference to formal language theory and automata theory. In particular, it is noted that Jackson structure diagrams are equivalent to regular expressions and can be thought of as specifying corresponding finite automata. The thesis discusses the automatic transformation of structure diagrams into finite automata using an algorithm adapted from compiler theory, and then extends the technique to deal with areas of JSD which are not strictly formalisable in terms of regular languages. In particular, an elegant and novel method for dealing with so-called recognition (or parsing) difficulties is described. Various applications of the extended technique are described. They include a new method of automatically implementing the dismemberment transformation; an efficient way of implementing inversion in languages lacking a goto statement; and a new in-the-large implementation strategy.
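Since structure diagrams built from sequence, selection and iteration are equivalent to regular expressions, they can be compiled into finite automata much as a compiler compiles a regular expression. The toy Thompson-style construction below illustrates the idea; the node encoding ('seq', 'sel', 'iter', 'leaf') and the NFA representation are assumptions, not the algorithm used in the thesis.

```python
# Toy sketch: compiling a structure-diagram-like expression tree into an NFA.
class NFA:
    def __init__(self):
        self.transitions = []      # (state, symbol_or_None, state); None = epsilon
        self.n_states = 0
    def new_state(self):
        self.n_states += 1
        return self.n_states - 1

def build(node, nfa):
    """Return (start, accept) states for a structure-diagram node."""
    kind = node[0]
    if kind == 'leaf':                       # elementary component: one symbol
        s, a = nfa.new_state(), nfa.new_state()
        nfa.transitions.append((s, node[1], a))
        return s, a
    if kind == 'seq':                        # sequence: chain the parts
        s, a = build(node[1], nfa)
        for child in node[2:]:
            cs, ca = build(child, nfa)
            nfa.transitions.append((a, None, cs))
            a = ca
        return s, a
    if kind == 'sel':                        # selection: epsilon-branch to each part
        s, a = nfa.new_state(), nfa.new_state()
        for child in node[1:]:
            cs, ca = build(child, nfa)
            nfa.transitions.append((s, None, cs))
            nfa.transitions.append((ca, None, a))
        return s, a
    if kind == 'iter':                       # iteration: zero or more repetitions
        s, a = nfa.new_state(), nfa.new_state()
        cs, ca = build(node[1], nfa)
        nfa.transitions += [(s, None, cs), (ca, None, cs), (s, None, a), (ca, None, a)]
        return s, a

# e.g. build(('seq', ('leaf', 'open'), ('iter', ('leaf', 'record')), ('leaf', 'close')), NFA())
```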
Abstract:
An uptake system was developed using Caco-2 cell monolayers and the dipeptide glycyl-[3H]L-proline as a probe compound. Glycyl-[3H]L-proline uptake was via the di-/tripeptide transport system (DTS) and exhibited concentration-, pH- and temperature-dependency. Dipeptides inhibited uptake of the probe, and the design of the system allowed competitors to be ranked against one another with respect to affinity for the transporter. The structural features required to ensure or increase interaction with the DTS were defined by studying the effect of a series of glycyl-L-proline and angiotensin-converting enzyme (ACE)-inhibitor (SQ-29852) analogues on the uptake of the probe. The SQ-29852 structure was divided into six domains (A-F) and competitors were grouped into series depending on structural variations within specific regions. Domain A was found to prefer a hydrophobic function, such as a phenyl group, and was intolerant of positive charges and of H+-acceptors and -donors. SQ-29852 analogues were more tolerant of substitutions in the C domain than glycyl-L-proline analogues, suggesting that interactions along the length of the SQ-29852 molecule may override the effects of substitutions in the C domain. SQ-29852 analogues showed a preference for a positive function, such as an amine group, in this region, whereas dipeptide structures favoured an uncharged substitution. Lipophilic substituents in domain D increased the affinity of SQ-29852 analogues for the DTS. A similar effect was observed for ACE-NEP inhibitor analogues. Domain E, corresponding to the carboxyl group, was found to be tolerant of esterification for SQ-29852 analogues but not for dipeptides. Structural features which may increase interaction for one series of compounds may not have the same effect for another series, indicating that the presence of multiple recognition sites on a molecule may override the deleterious effect of any one change. Modifying current, poorly absorbed peptidomimetic structures to fit the proposed hypothetical model may improve oral bioavailability by increasing affinity for the DTS. The stereochemical preference of the transporter was explored using four series of compounds (SQ-29852, lysylproline, alanylproline and alanylalanine enantiomers). The L,L stereochemistry was the preferred conformation for all four series, agreeing with previous studies. However, D,D enantiomers were shown in some cases to be substrates for the DTS, although exhibiting a lower affinity than their L,L counterparts. All the ACE inhibitors and β-lactam antibiotics investigated produced a degree of inhibition of the probe and thus show some affinity for the DTS. This contrasts with previous reports that found several ACE inhibitors to be absorbed via a passive process, suggesting that compounds are capable of binding to the transporter site and inhibiting the probe without being translocated into the cell. This was also shown to be the case for an oligodeoxynucleotide conjugated to a lipophilic group (vitamin E), and highlights the possibility that other orally administered drug candidates may exert non-specific effects on the DTS and possibly have a nutritional impact. Molecular modelling of selected ACE-NEP inhibitors revealed that the three carbonyl functions can be oriented in a similar direction, and this conformation was found to exist in a local energy-minimised state, indicating that the carbonyls may be involved in hydrogen-bond formation with the binding site of the DTS.
Abstract:
Urban regions present some of the most challenging areas for the remote sensing community. Many different types of land cover have similar spectral responses, making them difficult to distinguish from one another. Traditional per-pixel classification techniques suffer particularly badly because they use only these spectral properties to determine a class, and no other properties of the image, such as context. This project presents the results of the classification of a deeply urban area of Dudley, West Midlands, using four methods: Supervised Maximum Likelihood, SMAP, ECHO and Unsupervised Maximum Likelihood. An accuracy assessment method is then developed to allow a fair representation of each procedure and a direct comparison between them. Subsequently, a classification procedure is developed that makes use of the context in the image through a per-polygon classification. The imagery is broken up into a series of polygons extracted with the Marr-Hildreth zero-crossing edge detector. These polygons are then refined using a region-growing algorithm and classified according to the mean class of the fine polygons. The imagery produced by this technique is shown to be of better quality and higher accuracy than that of the other, conventional methods. Further refinements are suggested and examined to improve the aesthetic appearance of the imagery. Finally, a comparison is made with the results produced from a previous study of the James Bridge catchment in Darleston, West Midlands, showing that the polygon-classified ATM imagery performs significantly better than the Maximum Likelihood-classified videography used in the initial study, despite the presence of geometric correction errors.
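As a sketch of the per-polygon idea (not the thesis implementation), the snippet below assigns each polygon the majority class of its per-pixel classification; the array layout and the classify_polygons helper are hypothetical.

```python
# Minimal sketch: per-polygon classification by majority vote over pixel classes.
import numpy as np

def classify_polygons(pixel_classes, polygon_ids):
    """pixel_classes, polygon_ids: 2-D integer arrays of the same shape.
    Returns a map from polygon id to the majority class of its pixels."""
    labels = {}
    for pid in np.unique(polygon_ids):
        classes_in_poly = pixel_classes[polygon_ids == pid]
        values, counts = np.unique(classes_in_poly, return_counts=True)
        labels[pid] = int(values[np.argmax(counts)])
    return labels
```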
Abstract:
This thesis presents an investigation into the application of methods of uncertain reasoning to the biological classification of river water quality. Existing biological methods for reporting river water quality are critically evaluated, and the adoption of a discrete biological classification scheme is advocated. Reasoning methods for managing uncertainty are explained, in which the Bayesian and Dempster-Shafer calculi are cited as the primary numerical schemes. The elicitation of qualitative knowledge on benthic invertebrates is described. The specificity of benthic response to changes in water quality leads to the adoption of a sensor model of data interpretation, in which a reference set of taxa provides probabilistic support for the biological classes. The significance of sensor states, including that of absence, is shown. Novel techniques for directly eliciting the required uncertainty measures are presented. The Bayesian and Dempster-Shafer calculi were used to combine the evidence provided by the sensors. The performance of these automatic classifiers was compared with the expert's own discrete classification of sampled sites. Variations of sensor data weighting, combination order and belief representation were examined for their effect on classification performance. The behaviour of the calculi under evidential conflict and alternative combination rules was investigated. Small variations in evidential weight and the inclusion of evidence from sensors absent from a sample improved the classification performance of Bayesian belief and of support for singleton hypotheses; for simple support, the inclusion of absent evidence decreased the classification rate. The performance of Dempster-Shafer classification using consonant belief functions was comparable to that of Bayesian and singleton belief. Recommendations are made for further work in biological classification using uncertain reasoning methods, including the combination of multiple-expert opinion, the use of Bayesian networks, and the integration of classification software within a decision support system for water quality assessment.
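For readers unfamiliar with the Dempster-Shafer calculus, the sketch below shows Dempster's rule of combination, which underlies the evidence combination described above; the hypotheses and mass values in the usage comment are toy examples, not elicited data.

```python
# Illustrative sketch of Dempster's rule of combination for two mass functions.
from itertools import product

def combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass. Returns the combined masses."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb               # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# e.g. two sensors supporting water-quality classes 'good' and 'poor':
# combine({frozenset({'good'}): 0.6, frozenset({'good', 'poor'}): 0.4},
#         {frozenset({'poor'}): 0.3, frozenset({'good', 'poor'}): 0.7})
```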
Abstract:
This thesis presents a thorough and principled investigation into the application of artificial neural networks to the biological monitoring of freshwater. It contains original ideas on the classification and interpretation of benthic macroinvertebrates, and aims to demonstrate their superiority over the biotic systems currently used in the UK to report river water quality. The conceptual basis of a new biological classification system is described, and a full review and analysis of a number of river data sets is presented. The biological classification is compared to the common biotic systems using data from the Upper Trent catchment, which contained 292 expertly classified invertebrate samples identified to mixed taxonomic levels. The neural network experimental work concentrates on the classification of the invertebrate samples into biological class, where only a subset of the sample is used to form the classification. Further experimentation addresses the identification of novel input samples, the classification of samples from different biotopes and the use of prior information in the neural network models. The biological classification is shown to provide an intuitive interpretation of a graphical representation of the Upper Trent data, generated without reference to the class labels. The selection of key indicator taxa is considered using three different approaches: one novel, one from information theory and one from classical statistics. Good indicators of quality class based on these analyses are found to be in good agreement with those chosen by a domain expert. The change in information associated with different levels of identification and enumeration of taxa is quantified. The feasibility of using neural network classifiers and predictors to develop numeric criteria for the biological assessment of sediment contamination in the Great Lakes is also investigated.
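Purely as an illustrative sketch of the kind of classifier involved (not the thesis experiments), the snippet below trains a small feed-forward network on synthetic taxon-abundance vectors; the data shapes, class count and network size are assumptions.

```python
# Toy sketch: a small feed-forward network classifying synthetic invertebrate samples.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.poisson(3.0, size=(292, 40))      # 292 samples x 40 taxa (abundance counts)
y = rng.integers(0, 5, size=292)          # five hypothetical biological quality classes

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X[:230], y[:230])                 # train on part of the data
print("held-out accuracy:", clf.score(X[230:], y[230:]))
```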