30 results for face recognition, face detection, face verification, web application

in Aston University Research Archive


Relevance: 100.00%

Abstract:

This paper addresses the problem of novelty detection in the case where the observed data are a mixture of a known 'background' process contaminated with an unknown other process, which generates the outliers, or novel observations. The framework we describe here is quite general, employing univariate classification with incomplete information, based on knowledge of the distribution (the probability density function, pdf) of the data generated by the 'background' process. The relative proportion of this 'background' component (the prior 'background' probability), the pdfs and the prior probabilities of all other components are all assumed unknown. The main contribution is a new classification scheme that identifies the maximum proportion of observed data following the known 'background' distribution. The method exploits the Kolmogorov-Smirnov test to estimate the proportions, after which the data are Bayes-optimally separated. Results, demonstrated with synthetic data, show that this approach can produce more reliable results than a standard novelty detection scheme. The classification algorithm is then applied to the problem of identifying outliers in the SIC2004 data set, in order to detect the radioactive release simulated in the 'joker' data set. We propose this method as a reliable means of novelty detection in the emergency situation, which can also be used to identify outliers prior to the application of a more general automatic mapping algorithm. © Springer-Verlag 2007.
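A rough sketch of the estimation step (not the paper's exact algorithm): rank the observations by their likelihood under the known background pdf, then use the Kolmogorov-Smirnov test to find the largest subset consistent with that distribution. The distributions and the acceptance threshold here are assumed for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimate_background_fraction(x, bg_dist, alpha=0.05):
    """Largest fraction of `x` whose most background-like points pass
    a KS test against the known background distribution `bg_dist`."""
    order = np.argsort(-bg_dist.pdf(x))   # most plausible points first
    n = len(x)
    for k in range(n, 0, -1):             # try the largest subset first
        subset = x[order[:k]]
        _, p = stats.kstest(subset, bg_dist.cdf)
        if p > alpha:                     # subset is KS-consistent with background
            return k / n, order[:k]
    return 0.0, order[:0]

# Synthetic data: 90% standard-normal background plus 10% outliers near 6.
x = np.concatenate([rng.normal(0, 1, 900), rng.normal(6, 0.5, 100)])
frac, bg_idx = estimate_background_fraction(x, stats.norm(0, 1))
print(round(frac, 2))
```

The estimated fraction then fixes the prior used for the final separation of background points from outliers.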

Relevance: 100.00%

Abstract:

The 5-HT3 receptors are members of the cys-loop family of ligand-gated ion channels. Two functional subtypes are known, the homomeric 5-HT3A and the heteromeric 5-HT3A/B receptors, which exhibit distinct biophysical characteristics but are difficult to differentiate pharmacologically. Atomic force microscopy has been used to determine the stoichiometry and architecture of the heteromeric 5-HT3A/B receptor. Each subunit was engineered to express a unique C-terminal epitope tag, together with six sequential histidine residues to facilitate nickel affinity purification. The 5-HT3 receptors, ectopically expressed in HEK293 cells, were solubilised, purified and decorated with antibodies to the subunit-specific epitope tags. Imaging of individual receptors by atomic force microscopy revealed a pentameric arrangement of subunits in the order BBABA, reading anti-clockwise when viewed from the extracellular face. Homology models for the heteromeric receptor were then constructed using both the electron microscopic structure of the nicotinic acetylcholine receptor, from Torpedo marmorata, and the X-ray crystallographic structure of the soluble acetylcholine binding protein, from Lymnaea stagnalis, as templates. These homology models were used, together with equivalent models constructed for the homomeric receptor, to interpret mutagenesis experiments designed to explore the minimal recognition differences of both the natural agonist, 5-HT, and the competitive antagonist, granisetron, for the two human receptor subtypes. The results of this work revealed that the 5-HT3B subunit residues within the ligand binding site, for both the agonist and the antagonist, are accommodating to conservative mutations. They are consistent with the view that the 5-HT3A subunit provides the principal, and the 5-HT3B subunit the complementary, recognition interactions at the binding interface.

Relevance: 100.00%

Abstract:

The present study examines the effect of the goodness of view on the minimal exposure time required to recognize depth-rotated objects. In a previous study, Verfaillie and Boutsen (1995) derived scales of goodness of view, using a new corpus of images of depth-rotated objects. In the present experiment, a subset of this corpus (five views of 56 objects) is used to determine the recognition exposure time for each view, by increasing exposure time across successive presentations until the object is recognized. The results indicate that, for two thirds of the objects, good views are recognized more frequently and have lower recognition exposure times than bad views.
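The ascending-exposure procedure can be sketched as a simple loop; the timing values and the recognise() callback below are hypothetical stand-ins for the experimental apparatus:

```python
def recognition_exposure_time(view, recognise, start_ms=10, step_ms=10, max_ms=500):
    """Present a view for increasing durations until it is recognised;
    returns the first successful exposure time, or None if the ceiling
    is reached (start/step/ceiling values are hypothetical)."""
    t = start_ms
    while t <= max_ms:
        if recognise(view, t):   # observer's response at this exposure
            return t
        t += step_ms
    return None

# A mock observer that needs 70 ms for this (hypothetical) view.
needs_70ms = lambda view, t: t >= 70
print(recognition_exposure_time("bad_view", needs_70ms))
```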

Relevance: 100.00%

Abstract:

Models are central tools for modern scientists and decision makers, and there are many existing frameworks to support their creation, execution and composition. Many frameworks are based on proprietary interfaces and do not lend themselves to the integration of models from diverse disciplines. Web-based systems, or systems based on web services, such as Taverna and Kepler, allow the composition of models based on standard web service technologies. At the same time, the Open Geospatial Consortium has been developing its own service stack, which includes the Web Processing Service, designed to facilitate the execution of geospatial processing - including complex environmental models. The current Open Geospatial Consortium service stack employs Extensible Markup Language as a default data exchange standard, and widely used encodings such as JavaScript Object Notation can often only be used when incorporated with Extensible Markup Language. Similarly, no successful engagement of the Web Processing Service standard with the well-supported technologies of Simple Object Access Protocol and Web Services Description Language has been seen. In this paper we propose a pure Simple Object Access Protocol/Web Services Description Language processing service which addresses some of the issues with the Web Processing Service specification and brings us closer to achieving a degree of interoperability between geospatial models, and thus realising the vision of a useful 'model web'.
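As an illustration of the pure SOAP flavour of such a service, the sketch below builds a SOAP Execute request with the Python standard library; the process name, namespace and parameters are hypothetical, not taken from the paper's service:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
PROC_NS = "http://example.org/processing"  # hypothetical service namespace

def build_execute_request(process_id, inputs):
    """Assemble a SOAP envelope invoking a (hypothetical) geospatial process."""
    ET.register_namespace("soap", SOAP_NS)
    ET.register_namespace("proc", PROC_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    exe = ET.SubElement(body, f"{{{PROC_NS}}}Execute")
    ET.SubElement(exe, f"{{{PROC_NS}}}ProcessId").text = process_id
    for name, value in inputs.items():
        inp = ET.SubElement(exe, f"{{{PROC_NS}}}Input", attrib={"name": name})
        inp.text = str(value)
    return ET.tostring(env, encoding="unicode")

request = build_execute_request("interpolate", {"variance": "0.5"})
print(request)
```

In a WSDL-described service, the shape of this envelope would be dictated by the service description, so generic SOAP tooling can generate and validate such requests automatically.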

Relevance: 100.00%

Abstract:

The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contain some uncertainty, often due to incomplete knowledge; meaningful processing of these data requires the uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for the interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. We adopt a Bayesian perspective and focus on the case where the inputs and outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can easily be extended. Uniform Resource Identifiers (URIs) introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML.
This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
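As a sketch of what such an encoding might look like, the snippet below emits a marginal Gaussian in an UncertML-like XML form; the namespace and element names are illustrative rather than copied from the actual schema:

```python
import xml.etree.ElementTree as ET

UNCERT_NS = "http://www.uncertml.org/distributions"  # illustrative namespace

def gaussian_distribution_xml(mean, variance):
    """Encode a marginal Gaussian as a small UncertML-like fragment."""
    ET.register_namespace("un", UNCERT_NS)
    dist = ET.Element(f"{{{UNCERT_NS}}}GaussianDistribution")
    ET.SubElement(dist, f"{{{UNCERT_NS}}}mean").text = str(mean)
    ET.SubElement(dist, f"{{{UNCERT_NS}}}variance").text = str(variance)
    return ET.tostring(dist, encoding="unicode")

fragment = gaussian_distribution_xml(2.5, 0.09)
print(fragment)
```

A fragment like this would be embedded in the result element of an O&M Observation, so downstream services can parse the uncertainty rather than a bare point value.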

Relevance: 100.00%

Abstract:

Most research in the area of emotion detection in written text has focused on detecting explicit expressions of emotion. In this paper, we present a rule-based pipeline approach, based on the OCC model, for detecting implicit emotions in written text that contains no emotion-bearing words. We have evaluated our approach on three different datasets with five emotion categories. Our results show that the proposed approach outperforms the lexicon matching method consistently across all three datasets by a large margin of 17–30% in F-measure, and gives competitive performance compared to a supervised classifier. In particular, when dealing with formal text which follows grammatical rules strictly, our approach gives an average F-measure of 82.7% on "Happy", "Angry-Disgust" and "Sad", even outperforming the supervised baseline by nearly 17% in F-measure. These preliminary results show the feasibility of the approach for the task of implicit emotion detection in written text.
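For reference, the per-category F-measure used in such evaluations is the harmonic mean of precision and recall; a minimal implementation over parallel label lists (the toy labels are invented for the example):

```python
def f_measure(gold, pred, label):
    """Per-label F1 from parallel lists of gold/predicted emotion labels."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == label)
    fp = sum(1 for g, p in zip(gold, pred) if p == label and g != label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

gold = ["Happy", "Sad", "Happy", "Angry-Disgust", "Sad"]
pred = ["Happy", "Happy", "Happy", "Angry-Disgust", "Sad"]
print(round(f_measure(gold, pred, "Happy"), 2))  # precision 2/3, recall 1 -> 0.8
```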

Relevance: 100.00%

Abstract:

Dimensional and form inspections are key to the manufacturing and assembly of products. Product verification can involve a number of different measuring instruments, each operated using its own dedicated software. Typically, each of these instruments with its associated software is more suitable for the verification of a pre-specified quality characteristic of the product than the others. The number of different systems and software applications needed to perform a complete measurement of products and assemblies within a manufacturing organisation is therefore expected to be large, and becomes even larger as advances in measurement technologies are made. The idea of a universal software application for any instrument still appears to be only a theoretical possibility, so a need for information integration is apparent. In this paper, a design for an information system to consistently manage (store, search, retrieve, secure) measurement results from various instruments and software applications is introduced. The system rests on two main ideas. First, the structures and formats of measurement files are abstracted from the data, so that incompatibility between different approaches to measurement data modelling is avoided. Second, the information within a file is enriched with meta-information to facilitate its consistent storage and retrieval. To demonstrate the designed information system, a web application is implemented. © Springer-Verlag Berlin Heidelberg 2010.
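A toy sketch of the two ideas (class, field and key names are hypothetical, not the paper's schema): measurement files are stored as opaque, format-agnostic payloads, while searchable meta-information is kept alongside them:

```python
import hashlib

class MeasurementStore:
    """Toy store: raw measurement files kept as opaque bytes, with
    searchable meta-information held separately."""
    def __init__(self):
        self._blobs, self._meta = {}, {}

    def put(self, raw: bytes, meta: dict) -> str:
        key = hashlib.sha256(raw).hexdigest()[:12]
        self._blobs[key] = raw    # format-agnostic payload, never parsed here
        self._meta[key] = meta    # instrument, quantity, units, ...
        return key

    def search(self, **criteria):
        """Return keys whose meta-information matches all given fields."""
        return [k for k, m in self._meta.items()
                if all(m.get(f) == v for f, v in criteria.items())]

store = MeasurementStore()
key = store.put(b"1.02,1.05,0.99", {"instrument": "CMM-1", "quantity": "diameter"})
print(store.search(quantity="diameter") == [key])
```

Because the store never interprets the payload, files from any instrument or software package coexist, and retrieval relies entirely on the enriched metadata.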

Relevance: 100.00%

Abstract:

It is well known that even slight changes in nonuniform illumination lead to large image variability and are crucial for many visual tasks. This paper presents a new ICA-related probabilistic model, in which the number of sources exceeds the number of sensors, that performs image segmentation and illumination removal simultaneously. We model illumination and reflectance in log space by a generalized autoregressive process and a hidden Gaussian Markov random field, respectively. The model's ability to deal with the segmentation of illuminated images is compared with a Canny edge detector and homomorphic filtering. We apply the model to two problems: synthetic image segmentation and sea surface pollution detection from intensity images.
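For contrast, the homomorphic-filtering baseline mentioned above can be sketched in a few lines: a low-pass filter in log space estimates the slowly varying illumination, and the residual is treated as reflectance. Here a crude moving average stands in for the usual frequency-domain filter, with assumed kernel size:

```python
import numpy as np

def homomorphic_split(image, kernel=5, eps=1e-6):
    """In log space, a moving-average low-pass estimates slowly varying
    illumination; the residual is taken as reflectance."""
    log_im = np.log(image + eps)
    pad = kernel // 2
    padded = np.pad(log_im, pad, mode="edge")
    illum_log = np.zeros_like(log_im)
    h, w = log_im.shape
    for i in range(h):
        for j in range(w):
            illum_log[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    # Multiplicative model: image = illumination * reflectance
    return np.exp(illum_log), np.exp(log_im - illum_log)

rng = np.random.default_rng(1)
img = rng.uniform(0.2, 1.0, (16, 16)) * np.linspace(0.5, 1.5, 16)  # shaded scene
illum, refl = homomorphic_split(img)
print(np.allclose(illum * refl, img + 1e-6))  # decomposition is exact by construction
```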

Relevance: 100.00%

Abstract:

The use of amplitude-modulated phase-shift-keyed (AM-PSK) optical data transmission is investigated in a sequence of concatenated links in a wavelength-division-multiplexed clockwork-routed network. The narrower channel spacing made possible by using AM-PSK format allows the network to contain a greater number of network nodes. Full differential precoding at the packet source reduces the amount of high-speed electronics required in the network and also offers simplified header recognition and time-to-live mechanisms.
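The benefit of differential precoding can be seen in a toy bit-level sketch, with the optical and electronic details abstracted away: data bits are XOR-accumulated at the source, so the receiver recovers them by comparing adjacent symbols and needs no absolute phase reference:

```python
def precode(data, init=0):
    """Differential precoding: each coded bit is the XOR of the data bit
    with the previous coded bit."""
    out, prev = [], init
    for d in data:
        prev ^= d
        out.append(prev)
    return out

def decode(coded, init=0):
    """Recover data by XOR-ing each symbol with its predecessor."""
    out, prev = [], init
    for c in coded:
        out.append(c ^ prev)
        prev = c
    return out

bits = [1, 0, 1, 1, 0, 0, 1]
assert decode(precode(bits)) == bits
# A constant inversion of every symbol (an unknown absolute phase)
# corrupts only the first decoded bit, which depends on the reference.
inverted = [1 - c for c in precode(bits)]
print(decode(inverted)[1:] == bits[1:])  # True
```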

Relevance: 100.00%

Abstract:

A protein's isoelectric point or pI corresponds to the solution pH at which its net surface charge is zero. Since the early days of solution biochemistry, the pI has been recorded and reported, and thus literature reports of pI abound. The Protein Isoelectric Point database (PIP-DB) has collected and collated these data to provide an increasingly comprehensive database for comparison and benchmarking purposes. A web application has been developed to warehouse this database and provide public access to this unique resource. PIP-DB is a web-enabled SQL database with an HTML GUI front-end. PIP-DB is fully searchable across a range of properties.
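For illustration, a pI can be estimated from a sequence by bisecting the net-charge curve, since net charge decreases monotonically with pH. The pKa values below are one approximate set assumed for this sketch; published sets differ, which is part of the motivation for collating experimental pI data:

```python
# Approximate side-chain and terminal pKa values (assumed for this sketch).
PKA = {"D": 3.65, "E": 4.25, "C": 8.3, "Y": 10.07, "H": 6.0,
       "K": 10.53, "R": 12.48, "Nterm": 8.6, "Cterm": 3.6}

def net_charge(seq, ph):
    """Henderson-Hasselbalch net charge of a sequence at a given pH."""
    pos = 10 ** (PKA["Nterm"] - ph) / (1 + 10 ** (PKA["Nterm"] - ph))
    pos += sum(10 ** (PKA[a] - ph) / (1 + 10 ** (PKA[a] - ph))
               for a in seq if a in "HKR")
    neg = 10 ** (ph - PKA["Cterm"]) / (1 + 10 ** (ph - PKA["Cterm"]))
    neg += sum(10 ** (ph - PKA[a]) / (1 + 10 ** (ph - PKA[a]))
               for a in seq if a in "DECY")
    return pos - neg

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    """Bisect for the pH at which the net charge crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(isoelectric_point("ACDEFGHIKLMNPQRSTVWY"), 2))
```

The spread of pKa sets in the literature means such calculated pIs vary from tool to tool, which is exactly where a benchmarking database of experimental values earns its keep.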

Relevance: 100.00%

Abstract:

Bladder cancer is among the most common cancers worldwide (fourth in men). It is responsible for high patient morbidity and displays rapid recurrence and progression. The lack of sensitivity of the gold-standard techniques (white-light cystoscopy, voided urine cytology) means many early, treatable cases are missed. The result is a large number of advanced cases of bladder cancer which require extensive treatment and monitoring. For this reason, bladder cancer is the single most expensive cancer to treat on a per-patient basis. In recent years, autofluorescence spectroscopy has begun to shed new light on disease research. Of particular interest in cancer research are the fluorescent metabolic cofactors NADH and FAD. Early in tumour development, cancer cells often undergo a metabolic shift (the Warburg effect) resulting in increased NADH. The ratio of NADH to FAD (the "redox ratio") can therefore be used as an indicator of the metabolic status of cells. Redox ratio measurements have been used to differentiate between healthy and cancerous breast cells and to monitor cellular responses to therapies. Here we have demonstrated, using healthy and bladder cancer cell lines, a statistically significant difference in the redox ratio of bladder cancer cells, indicative of a metabolic shift. To do this we customised a standard flow cytometer to excite and record fluorescence specifically from NADH and FAD, and developed a method for automatically calculating the redox ratio of individual cells within large populations. These results could inform the design of novel probes and screening systems for the early detection of bladder cancer.
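The per-cell calculation can be sketched as follows. The intensity values are synthetic, and NADH/(NADH+FAD) is used as one common normalisation of the redox ratio; the paper's exact definition and instrument calibration may differ:

```python
import numpy as np
from scipy import stats

def redox_ratios(nadh, fad):
    """Per-cell optical redox ratio, NADH/(NADH+FAD) normalisation."""
    nadh, fad = np.asarray(nadh, float), np.asarray(fad, float)
    return nadh / (nadh + fad)

rng = np.random.default_rng(2)
# Synthetic per-cell fluorescence intensities for two populations;
# the tumour population is given elevated NADH (Warburg-like shift).
healthy = redox_ratios(rng.normal(100, 10, 500), rng.normal(120, 10, 500))
tumour = redox_ratios(rng.normal(140, 10, 500), rng.normal(110, 10, 500))
t, p = stats.ttest_ind(healthy, tumour)
print(p < 0.01, healthy.mean() < tumour.mean())
```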

Relevance: 90.00%

Abstract:

In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences in the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in retarded processing of metric part relations, as opposed to that of individual parts, and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains.

Relevance: 80.00%

Abstract:

This paper presents a new method for human face recognition by utilizing Gabor-based region covariance matrices as face descriptors. Both pixel locations and Gabor coefficients are employed to form the covariance matrices. Experimental results demonstrate the advantages of this proposed method.
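A minimal sketch of such a descriptor, with assumed filter parameters, combines pixel coordinates with Gabor magnitude responses and takes their covariance:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, lam=4.0, sigma=2.0, size=9):
    """Real part of a Gabor filter (illustrative parameter choices)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def region_covariance(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Covariance of per-pixel features [x, y, |Gabor responses|]."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = [xs.ravel().astype(float), ys.ravel().astype(float)]
    for t in thetas:
        resp = convolve2d(img, gabor_kernel(t), mode="same", boundary="symm")
        feats.append(np.abs(resp).ravel())
    return np.cov(np.stack(feats))

rng = np.random.default_rng(3)
C = region_covariance(rng.uniform(size=(32, 32)))
print(C.shape)  # 6x6 descriptor: 2 coordinates + 4 orientations
```

The resulting symmetric positive semi-definite matrix is a compact, fixed-size region descriptor; matching such descriptors is usually done with a metric suited to covariance matrices rather than plain Euclidean distance.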