20 results for Ontologies Representing the same Conceptualisation

in Aston University Research Archive


Relevance: 100.00%

Abstract:

How do signals from the two eyes combine and interact? Our recent work has challenged earlier schemes in which monocular contrast signals are subject to square-law transduction followed by summation across eyes and binocular gain control. Much more successful was a new 'two-stage' model in which the initial transducer was almost linear and contrast gain control occurred both pre- and post-binocular summation. Here we extend that work by: (i) exploring the two-dimensional stimulus space (defined by left- and right-eye contrasts) more thoroughly, and (ii) performing contrast discrimination and contrast matching tasks for the same stimuli. Twenty-five base-stimuli, made from 1 c/deg patches of horizontal grating, were defined by the factorial combination of five contrasts for the left eye (0.3-32%) with five contrasts for the right eye (0.3-32%). Other than in contrast, the gratings in the two eyes were identical. In a 2IFC discrimination task, the base-stimuli were masks (pedestals), and the contrast increment was presented to one eye only. In a matching task, the base-stimuli were standards to which observers matched the contrast of either a monocular or binocular test grating. In the model, discrimination depends on the local gradient of the observer's internal contrast-response function, while matching equates the magnitude (rather than gradient) of response to the test and standard. With all model parameters fixed by previous work, the two-stage model successfully predicted both the discrimination and the matching data and was much more successful than linear or quadratic binocular summation models. These results show that performance measures and perception (contrast discrimination and contrast matching) can be understood in the same theoretical framework for binocular contrast vision. © 2007 VSP.
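The two-stage account above lends itself to a small numerical sketch. The functional form and parameter values below are illustrative assumptions, not the fitted published model, but they show the two key ideas: matching equates response *magnitude*, and binocular summation means a binocular test matches a higher-contrast monocular standard.

```python
import numpy as np

# Hedged sketch of a generic 'two-stage' binocular gain-control model.
# Parameter values (M, P, Q, S, Z) are illustrative, not the fitted ones.
M, P, Q, S, Z = 1.3, 8.0, 6.5, 1.0, 0.1

def stage1(c_this, c_other):
    # Nearly linear transducer with interocular suppression before summation
    return c_this**M / (S + c_this + c_other)

def response(cl, cr):
    # Binocular summation followed by a second gain-control stage
    b = stage1(cl, cr) + stage1(cr, cl)
    return b**P / (Z + b**Q)

def match_binocular(c_mono, grid=np.linspace(0.01, 64, 20000)):
    # Matching: find the binocular test contrast whose response equals
    # that of a monocular standard (magnitude, not gradient, is equated)
    r_std = response(c_mono, 0.0)
    return grid[np.argmin(np.abs(response(grid, grid) - r_std))]

print(match_binocular(8.0))  # binocular match falls below the monocular 8%
```

Discrimination in such a model would instead depend on the local gradient of `response`, in line with the abstract's distinction between the two tasks.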

Relevance: 100.00%

Abstract:

This paper re-assesses three independently developed approaches that are aimed at solving the problem of zero-weights or non-zero slacks in Data Envelopment Analysis (DEA). The methods are weights restricted, non-radial and extended facet DEA models. Weights restricted DEA models are dual to envelopment DEA models with restrictions on the dual variables (DEA weights) aimed at avoiding zero values for those weights; non-radial DEA models are envelopment models which avoid non-zero slacks in the input-output constraints. Finally, extended facet DEA models recognize that only projections on facets of full dimension correspond to well defined rates of substitution/transformation between all inputs/outputs which in turn correspond to non-zero weights in the multiplier version of the DEA model. We demonstrate how these methods are equivalent, not only in their aim but also in the solutions they yield. In addition, we show that the aforementioned methods modify the production frontier by extending existing facets or creating unobserved facets. Further we propose a new approach that uses weight restrictions to extend existing facets. This approach has some advantages in computational terms, because extended facet models normally make use of mixed integer programming models, which are computationally demanding.
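The envelopment model these methods all start from can be written as a small linear program. The sketch below is the basic input-oriented CCR model, without the weight restrictions or facet extensions the paper discusses, and the data are a toy example:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, k):
    """Input-oriented CCR envelopment efficiency of DMU k.
    X: inputs (m x n), Y: outputs (s x n); columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1..lambda_n]; minimise theta
    c = np.zeros(1 + n); c[0] = 1.0
    # Input constraints:  X @ lam - theta * x_k <= 0
    A_in = np.hstack([-X[:, [k]], X])
    # Output constraints: -Y @ lam <= -y_k  (i.e. Y @ lam >= y_k)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]

X = np.array([[2.0, 4.0, 6.0]])   # one input, three DMUs
Y = np.array([[2.0, 3.0, 3.0]])   # one output
print([round(dea_efficiency(X, Y, k), 3) for k in range(3)])  # → [1.0, 0.75, 0.5]
```

Weights-restricted variants add constraints to the dual of this program; the non-zero-slack issues the paper addresses arise at the optimum of exactly this kind of model.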

Relevance: 100.00%

Abstract:

The aim of this study was to comparatively investigate the impact of the visual-verbal relationships that exist in expository texts on the reading process and comprehension of readers from different language backgrounds: native speakers of English (L1) and speakers of English as a foreign language (EFL). The study focussed, in this respect, on the visual elements (VEs), mainly graphs and tables, that accompanied the selected texts. Two major experiments were undertaken. The first examined the reading process using a post-reading questionnaire technique. Participants were 163 adult readers representing three groups: 77 L1, 56 EFL postgraduates, and 30 EFL undergraduates. The second experiment examined reading comprehension using a cloze procedure. Participants were 123 readers representing the same three groups: 50, 33 and 40 respectively. It was hypothesised that the L1 readers would make use of VEs in the reading process in ways different from both EFL groups, and that this use would enhance each group's comprehension in different respects and to different degrees. In the analysis of the data from both experiments two statistical tests were used: the chi-square test to measure differences between frequencies, and the t-test to measure differences between means. The results indicated a significant relationship between readers' language background and the impact of visual-verbal relationships on their reading processes and comprehension of this type of text. The results also revealed considerable similarities between the two EFL groups in the reading process of texts accompanied by VEs. In reading comprehension, however, the EFL undergraduates seemed to benefit from the visual-verbal relationships more than the postgraduates, suggesting that this effect weakens for older EFL readers. Furthermore, the results showed considerable similarities between the reading process of texts accompanied by VEs and that of whole prose texts. Finally, an evaluation of the study was undertaken, along with practical implications for EFL readers and suggestions for future research.
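The two tests named above can be sketched as follows; the counts and scores are invented for illustration, not the study's data:

```python
from scipy import stats

# Chi-square: reported use of VEs (used / ignored) by reader group.
# All numbers below are made up to show the shape of the analysis.
observed = [[60, 17],   # L1
            [30, 26],   # EFL postgraduates
            [14, 16]]   # EFL undergraduates
chi2, p, dof, _ = stats.chi2_contingency(observed)

# t-test: mean cloze scores of two groups on texts with VEs
g1 = [34, 38, 31, 40, 36, 33]
g2 = [28, 30, 25, 33, 29, 27]
t, p_t = stats.ttest_ind(g1, g2)
print(round(chi2, 2), round(t, 2))
```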

Relevance: 100.00%

Abstract:

One of the key challenges that organizations face when trying to integrate knowledge across different functions is the need to overcome knowledge boundaries between team members. In cross-functional teams, these boundaries, associated with the different knowledge backgrounds of people from various disciplines, create communication problems, requiring team members to engage in complex cognitive processes when integrating knowledge toward a joint outcome. This research investigates the impact of syntactic, semantic, and pragmatic knowledge boundaries on a team’s ability to develop a transactive memory system (TMS)—a collective memory system for knowledge coordination in groups. Results from our survey show that syntactic and pragmatic knowledge boundaries negatively affect TMS development. These findings extend TMS theory beyond the information-processing view, which treats knowledge as an object that can be stored and retrieved, to the interpretive and practice-based views of knowledge, which recognize that knowledge (in particular specialized knowledge) is localized, situated, and embedded in practice.

Relevance: 100.00%

Abstract:

We re-analysed visuo-spatial perspective-taking data from Kessler and Thomson (2010), plus a previously unpublished pilot, with respect to individual and sex differences in embodied processing (defined as body-posture congruence effects). We found that so-called 'systemisers' (males/low social skills) showed weaker embodiment than so-called 'embodiers' (females/high social skills). We conclude that 'systemisers' either have difficulties with embodied processing or, alternatively, have a strategic advantage in selecting different mechanisms or the appropriate level of embodiment. In contrast, 'embodiers' have an advantageous strategy of "deep" embodied processing reflecting their urge to empathise or, alternatively, less flexibility in fine-tuning the involvement of bodily representations. © 2012 Copyright Taylor and Francis Group, LLC.

Relevance: 100.00%

Abstract:

This research describes a computerized model of human classification which has been constructed to represent the process by which assessments are made for psychodynamic psychotherapy. The model assigns membership grades (MGs) to clients so that the most suitable ones have high values in the therapy category. Categories consist of a hierarchy of components, one of which, ego strength, is analysed in detail to demonstrate the way it has captured the psychotherapist's knowledge. The bottom of the hierarchy represents the measurable factors being assessed during an interview. A questionnaire was created to gather the identified information and was completed by the psychotherapist after each assessment. The results were fed into the computerized model, demonstrating a high correlation between the model MGs and the suitability ratings of the psychotherapist (r = .825 for 24 clients). The model has successfully identified the relevant data involved in assessment and simulated the decision-making process of the expert. Its cognitive validity enables decisions to be explained, which means that it has potential for therapist training and also for enhancing the referral process, with benefits in cost effectiveness as well as in the reduction of trauma to clients. An adapted version measuring client improvement would give quantitative evidence for the benefit of therapy, thereby supporting auditing and accountability. © 1997 The British Psychological Society.
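A minimal sketch of a hierarchical membership-grade computation of this kind is given below, together with the correlation check used for validation. The component names, weights and min-combination rule are assumptions for illustration, not the published model:

```python
import numpy as np

def component_mg(factor_scores, weights):
    # A component's membership grade (MG) as a weighted mean of its
    # measured factors, each scored 0-1 from the interview questionnaire
    s, w = np.asarray(factor_scores), np.asarray(weights)
    return float(np.dot(s, w) / w.sum())

def suitability_mg(components):
    # Conservative combination: suitability limited by the weakest component
    return min(components)

# Hypothetical components of the hierarchy (invented factors/weights)
ego_strength = component_mg([0.8, 0.6, 0.9], weights=[2, 1, 1])
motivation = component_mg([0.7, 0.7], weights=[1, 1])
mg = suitability_mg([ego_strength, motivation])

# Validation as in the study: correlate model MGs with expert ratings
model_mgs = [0.9, 0.4, 0.7, 0.2, 0.8]   # invented values
expert = [0.85, 0.5, 0.65, 0.3, 0.75]
r = np.corrcoef(model_mgs, expert)[0, 1]
print(round(mg, 3), round(r, 3))
```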

Relevance: 100.00%

Abstract:

The activation-deactivation pseudo-equilibrium coefficient Qt and constant K0 (= Qt × PaT1,t = ([A1]×[Ox])/([T1]×[T])), as well as the factor of activation (PaT1,t) and the rate constants of the elementary reactions that govern the increase of Mn with conversion in the controlled cationic ring-opening polymerization of oxetane (Ox) in 1,4-dioxane (1,4-D) and in tetrahydropyran (THP) (i.e. cyclic ethers (T) which cannot homopolymerize), were determined using terminal-model kinetics. We show analytically that the dynamic behavior of the two growing species (A1 and T1) competing for the same resources (Ox and T) follows a Lotka-Volterra model of predator-prey interactions. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
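The predator-prey analogy can be illustrated with the classic Lotka-Volterra equations; the coefficients and initial values below are illustrative, not the fitted kinetic constants:

```python
import numpy as np

def lotka_volterra(x0, y0, a, b, c, d, dt=0.001, steps=20000):
    """Classic predator-prey system: dx/dt = x(a - b*y), dy/dt = y(-c + d*x),
    integrated with a simple forward-Euler scheme."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        x, y = x + dt * x * (a - b * y), y + dt * y * (-c + d * x)
        xs.append(x); ys.append(y)
    return np.array(xs), np.array(ys)

# Two 'species' (cf. A1 and T1) competing for shared resources
xs, ys = lotka_volterra(x0=1.0, y0=0.5, a=1.0, b=1.0, c=1.0, d=1.0)
# Populations oscillate around the fixed point (c/d, a/b) = (1, 1)
print(round(xs.max(), 2), round(ys.max(), 2))
```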

Relevance: 100.00%

Abstract:

How are the image statistics of global image contrast computed? We answered this by using a contrast-matching task for checkerboard configurations of ‘battenberg’ micro-patterns where the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 × 20 arrays with various sized cluster widths, matched to standard patterns of uniform contrast. When one of the test patterns contained a pattern with much higher contrast than the other, that determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region where low contrast additions for one pattern to intermediate contrasts of the other caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and masking and summation effects in dipper functions. Those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
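The candidate pooling rules that failed can be sketched directly; the contrasts below are illustrative:

```python
import numpy as np

def global_contrast(c1, c2, rule):
    # Global contrast of a two-component 'battenberg' pattern under
    # four simple pooling rules (all rejected by the matching data)
    c = np.array([c1, c2], dtype=float)
    if rule == "rms":
        return float(np.sqrt(np.mean(c**2)))
    if rule == "energy":
        return float(np.mean(c**2))
    if rule == "linear":
        return float(np.sum(c))
    if rule == "max":
        return float(np.max(c))
    raise ValueError(rule)

# A much higher-contrast component dominates under max(), consistent
# with the first observation; none of these rules, however, can produce
# the paradoxical *reduction* reported for low-contrast additions --
# that requires the gain-control model with wide-field suppression.
print(global_contrast(32, 4, "max"))  # → 32.0
```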

Relevance: 100.00%

Abstract:

Recently, we have seen an explosion of interest in ontologies as artifacts to represent human knowledge and as critical components in knowledge management, the semantic Web, business-to-business applications, and several other application areas. Various research communities commonly assume that ontologies are the appropriate modeling structure for representing knowledge. However, little discussion has occurred regarding the actual range of knowledge an ontology can successfully represent.

Relevance: 100.00%

Abstract:

Whereas the competitive advantage of firms can arise from size and position within their industry as well as physical assets, the pattern of competition in advanced economies has increasingly come to favour those firms that can mobilise knowledge and technological skills to create novelty in their products. At the same time, regions are attracting growing attention as an economic unit of analysis, with firms increasingly locating their functions in select regions within the global space. This article introduces the concept of knowledge competitiveness, defined as an economy’s knowledge capacity, capability and sustainability, and the extent to which this knowledge is translated into economic value and transferred into the wealth of the citizens. The article discusses the way in which the knowledge competitiveness of regions is measured and further introduces the World Knowledge Competitiveness Index, which is the first composite and relative measure of the knowledge competitiveness of the globe’s best performing regions.
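A composite, relative index of this kind can be sketched as follows; the indicators, data and equal weights are invented placeholders, not the WKCI's actual variables:

```python
import numpy as np

def composite_index(indicators, weights, benchmark=100.0):
    """Composite relative index: regions x variables matrix, each
    variable expressed relative to the cross-region average, then
    combined as a weighted mean scaled so 100 = average region."""
    X = np.asarray(indicators, dtype=float)
    z = X / X.mean(axis=0)                     # relative to average
    scores = z @ (np.asarray(weights) / np.sum(weights))
    return benchmark * scores

regions = [[500, 30, 2.1],   # e.g. patents per capita, graduates %, R&D % GDP
           [300, 25, 1.4],   # (hypothetical indicator values)
           [100, 20, 0.7]]
print(np.round(composite_index(regions, weights=[1, 1, 1]), 1))
```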

Relevance: 100.00%

Abstract:

Representing knowledge using domain ontologies has been shown to be a useful mechanism and format for managing and exchanging information. Due to the difficulty and cost of building ontologies, a number of ontology libraries and search engines are coming into existence to facilitate reusing such knowledge structures. The need for ontology ranking techniques is becoming crucial as the number of ontologies available for reuse continues to grow. In this paper we present AKTiveRank, a prototype system for ranking ontologies based on an analysis of their structures. We describe the metrics used in the ranking system and present an experiment on ranking the ontologies returned by a popular search engine for an example query.
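A structure-based ranking in this spirit might look like the following sketch; the two metrics and their weights are simplified assumptions, not AKTiveRank's actual measures:

```python
def class_match(ontology, query_terms):
    # Fraction of query terms that match a class label in the ontology
    labels = {c.lower() for c in ontology["classes"]}
    return sum(t.lower() in labels for t in query_terms) / len(query_terms)

def density(ontology):
    # Average number of direct relations per class (a crude richness measure)
    return len(ontology["relations"]) / max(len(ontology["classes"]), 1)

def rank(ontologies, query_terms, w_match=0.7, w_density=0.3):
    # Combine the structural metrics into a single score and sort
    score = lambda o: (w_match * class_match(o, query_terms)
                       + w_density * min(density(o), 1.0))
    return sorted(ontologies, key=score, reverse=True)

onts = [
    {"name": "O1", "classes": ["Student", "University"],
     "relations": [("Student", "enrolledIn", "University")]},
    {"name": "O2", "classes": ["Student"], "relations": []},
]
print([o["name"] for o in rank(onts, ["student", "university"])])  # → ['O1', 'O2']
```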

Relevance: 100.00%

Abstract:

Brewers' spent grain (BSG) is a widely available feedstock, representing approximately 85% of the total by-products generated in the brewing industry. It is currently either disposed of to landfill or used as cattle feed due to its high protein content. BSG has received little or no attention as a potential energy resource, but increasing disposal costs and environmental constraints are now prompting its consideration. One possibility for the utilisation of BSG for energy is intermediate pyrolysis to produce gases, vapours and chars. Intermediate pyrolysis is characterised by indirect heating in the absence of oxygen for short solids residence times of a few minutes, at temperatures of 350-450 °C. In the present work BSG has been characterised by chemical, proximate, ultimate and thermo-gravimetric analysis. Intermediate pyrolysis of BSG at 450 °C was carried out using a twin coaxial screw reactor known as a Pyroformer, giving yields of 29% char, 51% bio-oil and 19% permanent gases. The bio-oil was found to separate into an aqueous phase and an organic phase. The organic phase contained viscous compounds that could age over time, leading to solid tars that can present problems in CHP applications. The quality of the pyrolysis vapour products before quenching can be much improved for use as a fuel by downstream catalytic reforming. A bench-scale batch pyrolysis reactor was then used to pyrolyse small samples of BSG under a range of heating rates and temperatures simulating the Pyroformer. A small catalytic reformer was added downstream of the reactor, in which the pyrolysis vapours can be further cracked and reformed. A commercial nickel reforming catalyst was used at 500, 750 and 850 °C at a space velocity of about 10,000 L/h, with and without the addition of steam. Results are presented for the properties of BSG and for the products of the pyrolysis process both with and without catalytic post-processing. Results indicate that catalytic reforming produced a significant increase in permanent gases (mainly H2 and CO), with H2 content exceeding 50 vol% at the higher reforming temperatures. Bio-oil yield decreased significantly as reforming temperature increased, while char yield remained the same because the pyrolysis conditions were unchanged. The process shows an increase in the heating value of the product gas, ranging between 10.8 and 25.2 MJ/m³ as reforming temperature increased. © 2012 Elsevier B.V. All rights reserved.
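The reported figures can be checked with simple mass/energy bookkeeping. Only the yields and the heating-value range come from the abstract; the gas volume per kilogram of feed below is a hypothetical figure for illustration:

```python
# Mass balance for the reported pyrolysis yields (from the abstract)
feed_kg = 100.0
yields = {"char": 0.29, "bio_oil": 0.51, "gas": 0.19}

masses = {k: feed_kg * v for k, v in yields.items()}
closure = sum(yields.values())  # mass balance closes to ~0.99

# Energy in the product gas at the reported heating-value range (MJ/m3),
# for an assumed (hypothetical) gas volume per kg of feed
gas_m3_per_kg_feed = 0.15
gas_energy_low = feed_kg * gas_m3_per_kg_feed * 10.8   # MJ
gas_energy_high = feed_kg * gas_m3_per_kg_feed * 25.2  # MJ
print(masses, round(closure, 2), gas_energy_low, gas_energy_high)
```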

Relevance: 100.00%

Abstract:

In this paper we propose algorithms for combining and ranking answers from distributed heterogeneous data sources in the context of a multi-ontology question answering task. Our proposal includes a merging algorithm that aggregates, combines and filters ontology-based search results, and three different ranking algorithms that sort the final answers according to different criteria, such as popularity, confidence and semantic interpretation of results. An experimental evaluation on a large-scale corpus indicates improvements in the quality of the search results with respect to a scenario where the merging and ranking algorithms were not applied. These collective methods for merging and ranking make it possible to answer questions that are distributed across ontologies while, at the same time, filtering irrelevant answers, fusing similar answers together, and eliciting the most accurate answer(s) to a question.
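A merging-and-ranking step of this kind might be sketched as follows; the scoring scheme (popularity plus best source confidence) is a simplified stand-in for the paper's algorithms:

```python
from collections import defaultdict

def merge_and_rank(results):
    """results: list of (answer, source_ontology, confidence) triples.
    Fuses near-duplicate answers, then ranks by popularity + confidence."""
    merged = defaultdict(lambda: {"sources": set(), "conf": 0.0})
    for answer, source, conf in results:
        key = answer.strip().lower()          # fuse near-duplicate answers
        merged[key]["sources"].add(source)
        merged[key]["conf"] = max(merged[key]["conf"], conf)
    # Popularity = number of independent ontologies returning the answer
    scored = [(len(v["sources"]) + v["conf"], k) for k, v in merged.items()]
    return [answer for score, answer in sorted(scored, reverse=True)]

results = [
    ("Madrid", "ont_a", 0.9),
    ("madrid", "ont_b", 0.7),   # same answer from a different ontology
    ("Barcelona", "ont_b", 0.4),
]
print(merge_and_rank(results))  # → ['madrid', 'barcelona']
```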

Relevance: 100.00%

Abstract:

Most of the existing work on information integration in the Semantic Web concentrates on resolving schema-level problems. Specific issues of data-level integration (instance coreferencing, conflict resolution, handling uncertainty) are usually tackled by applying the same techniques as for ontology schema matching or by reusing solutions produced in the database domain. However, data structured according to OWL ontologies has specific features: e.g., classes are organized into a hierarchy, properties are inherited, and data constraints differ from those defined by a database schema. This paper describes how these features are exploited in our architecture, KnoFuss, designed to support data-level integration of semantic annotations.
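One such OWL-specific feature, the class hierarchy, can be exploited in instance coreferencing roughly as sketched below; the class names, data and matching rule are invented examples, not KnoFuss's actual components:

```python
# Toy subclass table: child -> direct parent (None = root)
SUBCLASS = {"Professor": "Person", "Student": "Person", "Person": None}

def ancestors(cls):
    # Walk up the hierarchy collecting the class and all its ancestors
    out = set()
    while cls is not None:
        out.add(cls)
        cls = SUBCLASS.get(cls)
    return out

def compatible(cls_a, cls_b):
    # Classes are compatible if one is an ancestor of the other
    return cls_a in ancestors(cls_b) or cls_b in ancestors(cls_a)

def corefer(a, b):
    # Only attempt label matching for hierarchy-compatible instances
    return compatible(a["type"], b["type"]) and a["label"].lower() == b["label"].lower()

a = {"label": "J. Smith", "type": "Professor"}
b = {"label": "j. smith", "type": "Person"}      # compatible, same label
c = {"label": "J. Smith", "type": "Publication"} # incompatible class
print(corefer(a, b), corefer(a, c))  # → True False
```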