964 results for implicit categorization
Abstract:
Currently many ontologies are available for addressing different domains. However, it is not always possible to deploy such ontologies to support collaborative working, so that their full potential can be exploited to implement intelligent cooperative applications capable of reasoning over a network of context-specific ontologies. The main problem arises from the fact that ontologies are presently created in an isolated way to address specific needs. However, we foresee the need for a network of ontologies which will support the next generation of intelligent applications/devices and the vision of Ambient Intelligence. The main objective of this paper is to motivate the design of a networked ontology (meta)model which formalises ways of connecting available ontologies so that they are easy to search, to characterise, and to maintain. The aim is to make explicit the virtual and implicit network of ontologies serving the Semantic Web.
Abstract:
Alternative meshes of the sphere and adaptive mesh refinement could be immensely beneficial for weather and climate forecasts, but it is not clear how mesh refinement should be achieved. A finite-volume model that solves the shallow-water equations on any mesh of the surface of the sphere is presented. The accuracy and cost effectiveness of four quasi-uniform meshes of the sphere are compared: a cubed sphere, reduced latitude–longitude, hexagonal–icosahedral, and triangular–icosahedral. On some standard shallow-water tests, the hexagonal–icosahedral mesh performs best and the reduced latitude–longitude mesh performs well only when the flow is aligned with the mesh. The inclusion of a refined mesh over a disc-shaped region is achieved using either gradual Delaunay, gradual Voronoi, or abrupt 2:1 block-structured refinement. These refined regions can actually degrade global accuracy, presumably because of changes in wave dispersion where the mesh is highly nonuniform. However, using gradual refinement to resolve a mountain in an otherwise coarse mesh can improve accuracy for the same cost. The model prognostic variables are height and momentum collocated at cell centers, and (to remove grid-scale oscillations of the A grid) the mass flux between cells is advanced from the old momentum using the momentum equation. Quadratic and upwind biased cubic differencing methods are used as explicit corrections to a fast implicit solution that uses linear differencing.
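The finite-volume approach the abstract describes can be illustrated in a much-simplified setting. The sketch below advances the 1D linearized shallow-water equations on a uniform periodic mesh with explicit centered differences; it is an illustrative stand-in only, not the paper's nonlinear, semi-implicit spherical-mesh scheme, and all grid parameters are hypothetical.

```python
# Minimal 1D linearized shallow-water finite-volume step on a periodic mesh.
# Illustrative only: the paper's model is nonlinear, runs on meshes of the
# sphere, and uses a semi-implicit solver with A-grid oscillation control.
import math

def step(h, u, dx, dt, g=9.81, H=1000.0):
    """Advance height h and velocity u one explicit forward-Euler step.
    dh/dt = -H du/dx,  du/dt = -g dh/dx  (centered differences)."""
    n = len(h)
    h_new = [h[i] - dt * H * (u[(i + 1) % n] - u[i - 1]) / (2 * dx)
             for i in range(n)]
    u_new = [u[i] - dt * g * (h[(i + 1) % n] - h[i - 1]) / (2 * dx)
             for i in range(n)]
    return h_new, u_new

# Gravity waves travel at sqrt(g*H); an explicit scheme like this one
# needs dt < dx / sqrt(g*H) (the CFL condition) to stay stable.
n, dx = 100, 10_000.0
dt = 0.5 * dx / math.sqrt(9.81 * 1000.0)
h = [math.exp(-((i - n // 2) * dx / 50_000.0) ** 2) for i in range(n)]
u = [0.0] * n
for _ in range(10):
    h, u = step(h, u, dx, dt)
# On a periodic mesh the centered flux differences telescope, so total
# mass sum(h)*dx is conserved exactly (up to rounding).
```

The exact mass conservation of the flux form is the property that motivates finite-volume discretizations on the irregular meshes compared in the paper.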
Abstract:
In the 12th annual Broadbent Lecture at the Annual Conference, Dianne Berry outlined Broadbent’s explicit and implicit influences on psychological science and scientists.
Abstract:
This article describes an empirical, user-centred approach to explanation design. It reports three studies that investigate what patients want to know when they have been prescribed medication. The question is asked in the context of the development of a drug prescription system called OPADE. The system is aimed primarily at improving the prescribing behaviour of physicians, but will also produce written explanations for indirect users such as patients. In the first study, a large number of people were presented with a scenario about a visit to the doctor, and were asked to list the questions that they would like to ask the doctor about the prescription. On the basis of the results of the study, a categorization of question types was developed in terms of how frequently particular questions were asked. In the second and third studies a number of different explanations were generated in accordance with this categorization, and a new sample of people were presented with another scenario and were asked to rate the explanations on a number of dimensions. The results showed significant differences between the different explanations. People preferred explanations that included items corresponding to frequently asked questions in study 1. For an explanation to be considered useful, it had to include information about side effects, what the medication does, and any lifestyle changes involved. The implications of the results of the three studies are discussed in terms of the development of OPADE's explanation facility.
Abstract:
This paper reports three experiments that examine the role of similarity processing in McGeorge and Burton's (1990) incidental learning task. In the experiments subjects performed a distractor task involving four-digit number strings, all of which conformed to a simple hidden rule. They were then given a forced-choice memory test in which they were presented with pairs of strings and were led to believe that one string of each pair had appeared in the prior learning phase. Although this was not the case, one string of each pair did conform to the hidden rule. Experiment 1 showed that, as in the McGeorge and Burton study, subjects were significantly more likely to select test strings that conformed to the hidden rule. However, additional analyses suggested that rather than having implicitly abstracted the rule, subjects may have been selecting strings that were in some way similar to those seen during the learning phase. Experiments 2 and 3 were designed to try to separate out effects due to similarity from those due to implicit rule abstraction. It was found that the results were more consistent with a similarity-based model than implicit rule abstraction per se.
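The similarity-based account contrasted with implicit rule abstraction can be sketched in a few lines. The hidden rule used here ("contains the digit 7") and the shared-digit similarity measure are hypothetical stand-ins, not McGeorge and Burton's actual materials.

```python
# Sketch of a similarity-based account of incidental learning with
# four-digit strings. The hidden rule ("contains the digit 7") and the
# shared-digit similarity measure are hypothetical stand-ins for the
# original materials.
import random

def conforms(s):
    return "7" in s            # hypothetical hidden rule

def make_string(rng, conforming=True):
    while True:
        s = "".join(str(rng.randrange(10)) for _ in range(4))
        if conforms(s) == conforming:
            return s

def similarity(s, study_set):
    """Mean number of digit types shared with the studied strings."""
    return sum(len(set(s) & set(t)) for t in study_set) / len(study_set)

rng = random.Random(42)
study = [make_string(rng) for _ in range(30)]   # learning phase: all conform

# Forced-choice test: pick whichever member of each pair is more similar
# to the studied strings -- no knowledge of the rule is used.
trials, chose_conforming = 200, 0
for _ in range(trials):
    a, b = make_string(rng, True), make_string(rng, False)
    if similarity(a, study) >= similarity(b, study):
        chose_conforming += 1
rate = chose_conforming / trials
```

Because every studied string contains the rule digit, a purely similarity-driven chooser still selects the rule-conforming test string well above chance, which is the confound the experiments were designed to separate out.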
Abstract:
We develop the linearization of a semi-implicit semi-Lagrangian model of the one-dimensional shallow-water equations using two different methods. The usual tangent linear model, formed by linearizing the discrete nonlinear model, is compared with a model formed by first linearizing the continuous nonlinear equations and then discretizing. Both models are shown to perform equally well for finite perturbations. However, the asymptotic behaviour of the two models differs as the perturbation size is reduced. This leads to difficulties in showing that the models are correctly coded using the standard tests. To overcome this difficulty we propose a new method for testing linear models, which we demonstrate both theoretically and numerically. © Crown copyright, 2003. Royal Meteorological Society
Abstract:
It is argued that the essential aspect of atmospheric blocking may be seen in the wave breaking of potential temperature (θ) on a potential vorticity (PV) surface, which may be identified with the tropopause, and the consequent reversal of the usual meridional temperature gradient of θ. A new dynamical blocking index is constructed using a meridional θ difference on a PV surface. Unlike in previous studies, the central blocking latitude about which this difference is constructed is allowed to vary with longitude. At each longitude it is determined by the latitude at which the climatological high-pass transient eddy kinetic energy is a maximum. Based on the blocking index, at each longitude local instantaneous blocking, large-scale blocking, and blocking episodes are defined. For longitudinal sectors, sector blocking and sector blocking episodes are also defined. The 5-yr annual climatologies of the three longitudinally defined blocking event frequencies and the seasonal climatologies of blocking episode frequency are shown. The climatologies all pick out the eastern North Atlantic–Europe and eastern North Pacific–western North America regions. There is evidence that Pacific blocking shifts into the western central Pacific in the summer. Sector blocking episodes of 4 days or more are shown to exhibit different persistence characteristics to shorter events, showing that blocking is not just the long timescale tail end of a distribution. The PV–θ index results for the annual average location of Pacific blocking agree with synoptic studies but disagree with modern quantitative height field–based studies. It is considered that the index used here is to be preferred anyway because of its dynamical basis. However, the longitudinal discrepancy is found to be associated with the use in the height field index studies of a central blocking latitude that is independent of longitude. 
In particular, the use in the North Pacific of a latitude that is suitable for the eastern North Atlantic leads to spurious categorization of blocking there. Furthermore, the PV–θ index is better able to detect Ω blocking than conventional height field indices.
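A meridional θ-difference index of this kind can be written schematically. The averaging interval, sign convention, and sample profiles below are illustrative assumptions; the paper's index additionally chooses the central latitude at each longitude from the transient eddy kinetic energy climatology.

```python
# Schematic meridional theta-difference blocking index on a PV surface.
# Averaging width, sign convention, and sample profiles are illustrative;
# the paper's index uses a longitude-dependent central latitude chosen
# from the high-pass transient eddy kinetic energy climatology.

def blocking_index(theta, lats, lat_c, half_width=15.0):
    """Mean theta poleward of lat_c minus mean theta equatorward of it.
    On the tropopause, theta normally decreases poleward, so a positive
    value signals a reversed gradient: local instantaneous blocking."""
    north = [t for t, p in zip(theta, lats) if lat_c < p <= lat_c + half_width]
    south = [t for t, p in zip(theta, lats) if lat_c - half_width <= p < lat_c]
    return sum(north) / len(north) - sum(south) / len(south)

lats = list(range(30, 76))                        # degrees north
normal = [330.0 - 0.8 * (p - 30) for p in lats]   # theta falls poleward
blocked = [300.0 + 0.8 * (p - 30) for p in lats]  # reversed gradient

normal_idx = blocking_index(normal, lats, 52.5)    # negative: no block
blocked_idx = blocking_index(blocked, lats, 52.5)  # positive: blocked
```

Letting `lat_c` vary with longitude, as the paper does, is exactly what avoids the spurious Pacific categorization that arises when an Atlantic-appropriate latitude is applied everywhere.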
Abstract:
Three experiments examine whether simple pair-wise comparison judgments, involving the “recognition heuristic” (Goldstein & Gigerenzer, 2002), are sensitive to implicit cues to the nature of the comparison required. Experiments 1 & 2 show that participants frequently choose the recognized option of a pair if asked to make “larger” judgments but are significantly less likely to choose the unrecognized option when asked to make “smaller” judgments. Experiment 3 demonstrates that, overall, participants consider recognition to be a more reliable guide to judgments of a magnitude criterion than lack of recognition and that this intuition drives the framing effect. These results support the idea that, when making pair-wise comparison judgments, inferring that the recognized item is large is simpler than inferring that the unrecognized item is small.
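The recognition heuristic and its frame-dependent complement amount to a one-line decision rule. The sketch below uses hypothetical city names, not the authors' materials:

```python
# Recognition heuristic with framing: a sketch with hypothetical inputs,
# not the authors' experimental materials. The experiments show people
# apply the "larger" branch more consistently than the "smaller" one,
# because "recognized => large" is an easier inference than
# "unrecognized => small".

def choose(pair, recognized, frame="larger"):
    a, b = pair
    a_known, b_known = a in recognized, b in recognized
    if a_known == b_known:
        return None           # heuristic is silent: both or neither known
    known, unknown = (a, b) if a_known else (b, a)
    return known if frame == "larger" else unknown

recognized = {"Munich"}
choose(("Munich", "Herne"), recognized, "larger")    # -> "Munich"
choose(("Munich", "Herne"), recognized, "smaller")   # -> "Herne"
```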
Recategorization and subgroup identification: predicting and preventing threats from common ingroups
Abstract:
Much work has supported the idea that recategorization of ingroups and outgroups into a superordinate category can have beneficial effects for intergroup relations. Recently, however, increases in bias following recategorization have been observed in some contexts. It is argued that such unwanted consequences of recategorization will only be apparent for perceivers who are highly committed to their ingroup subgroups. In Experiments 1 to 3, the authors observed, on both explicit and implicit measures, that an increase in bias following recategorization occurred only for high subgroup identifiers. In Experiment 4, it was found that maintaining the salience of subgroups within a recategorized superordinate group averted this increase in bias for high identifiers and led overall to the lowest levels of bias. These findings are discussed in the context of recent work on the Common Ingroup Identity Model.
Abstract:
We have designed a highly parallel architecture for a simple genetic algorithm using a pipeline of systolic arrays. The systolic design provides high throughput and unidirectional pipelining by exploiting the implicit parallelism in the genetic operators. The design is significant because, unlike other hardware genetic algorithms, it is independent of both the fitness function and the particular chromosome length used in a problem. We have designed and simulated a version of the mutation array using Xilinx FPGA tools to investigate the feasibility of hardware implementation. A simple 5-chromosome mutation array occupies 195 CLBs and is capable of performing more than one million mutations per second.
I. Introduction
Genetic algorithms (GAs) are established search and optimization techniques which have been applied to a range of engineering and applied problems with considerable success [1]. They operate by maintaining a population of trial solutions encoded using a suitable encoding scheme.
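The operator the mutation array pipelines can be sketched in software. This serial version only illustrates bit-flip mutation itself; the chromosome length, population size, and mutation rate are hypothetical, and the paper's contribution is the hardware pipelining, not the operator.

```python
# Serial sketch of the GA bit-flip mutation operator. The paper's
# systolic-array design applies this operator in hardware, independent
# of the fitness function and chromosome length; parameters here are
# hypothetical.
import random

def mutate(chromosome, rate, rng):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ (rng.random() < rate) for bit in chromosome]

rng = random.Random(0)
population = [[rng.randrange(2) for _ in range(16)] for _ in range(5)]
mutated = [mutate(c, rate=0.05, rng=rng) for c in population]
# Chromosome length is preserved; only individual bits change.
```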
Abstract:
Slantwise convective available potential energy (SCAPE) is a measure of the degree to which the atmosphere is unstable to conditional symmetric instability (CSI). It has, until now, been defined by parcel theory in which the atmosphere is assumed to be nonevolving and balanced, that is, two-dimensional. When applying this two-dimensional theory to three-dimensional evolving flows, these assumptions can be interpreted as an implicit assumption that a timescale separation exists between a relatively rapid timescale for slantwise ascent and a slower timescale for the development of the system. An approximate extension of parcel theory to three dimensions is derived and it is shown that calculations of SCAPE based on the assumption of relatively rapid slantwise ascent can be qualitatively in error. For a case study example of a developing extratropical cyclone, SCAPE calculated along trajectories determined without assuming the existence of the timescale separation shows large values for parcels ascending from the warm sector and along the warm front. These parcels ascend into the cloud head, within which there is some evidence consistent with the release of CSI from observational and model cross sections. This region of high SCAPE was not found for calculations along the relatively rapidly ascending trajectories determined by assuming the existence of the timescale separation.
Abstract:
Existing data on animal health and welfare in organic livestock production systems in the European Community countries are reviewed in the light of the demands and challenges of the recently implemented EU regulation on organic livestock production. The main conclusions and recommendations of a three-year networking project on organic livestock production are summarised and the future challenges to organic livestock production in terms of welfare and health management are discussed. The authors conclude that, whilst the available data are limited and the implementation of the EC regulation is relatively recent, there is little evidence to suggest that organic livestock management causes major threats to animal health and welfare in comparison with conventional systems. There are, however, some well-identified areas, like parasite control and balanced ration formulation, where efforts are needed to find solutions that meet with organic standard requirements and guarantee high levels of health and welfare. It is suggested that, whilst organic standards offer an implicit framework for animal health and welfare management, there is a need to solve apparent conflicts between the organic farming objectives in regard to environment, public health, farmer income and animal health and welfare. The key challenges for the future of organic livestock production in Europe are related to the feasibility of implementing improved husbandry inputs and the development of evidence-based decision support systems for health and feeding management.
Abstract:
The complexity of rural economies in developing countries is increasingly recognised, as is the need to tailor poverty reduction policies according to the diversity of rural households and their requirements. By reference to a village in Western India, the paper examines the results of a longitudinal micro-level research approach, employed for the study of livelihood diversification and use of informal finance. Over a 25-year period, livelihoods are shown to have become more complex, in terms of location, types of non-farm activities, and combinations of activities. Moreover, livelihood pathways taken continue to be critically affected by economic and social inequalities implicit in the caste system and tribal economy. A longitudinal micro-level research approach is shown to be one that can effectively identify the many complexities of rural livelihoods and the continued dependence on the informal financial sector, providing important insights into the requirements for rural financial products and services.
Abstract:
Diebold and Lamb (1997) argue that, since the long-run elasticity of supply derived from the Nerlovian model entails a ratio of random variables, it is without moments. They propose minimum expected loss estimation to correct this problem but in so doing ignore the fact that a non-white-noise error is implicit in the model. We show that, as a consequence, the estimator is biased, and demonstrate that Bayesian estimation, which fully accounts for the error structure, is preferable.
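The no-moments point can be illustrated directly: the ratio of two independent zero-mean normal variables is Cauchy distributed, so its sample mean never settles while its sample median does. This is a generic sketch of the phenomenon, not the Nerlovian estimator itself:

```python
# A ratio of two independent zero-mean normals is Cauchy distributed
# and has no finite moments: the sample mean is dominated by rare huge
# ratios, while the sample median is a stable estimate of the center.
# Generic illustration only, not the Nerlovian long-run elasticity.
import random
import statistics

rng = random.Random(7)
ratios = [rng.gauss(0, 1) / rng.gauss(0, 1) for _ in range(100_000)]

median = statistics.median(ratios)   # consistent for the center (0)
mean = statistics.fmean(ratios)      # erratic: no population mean exists
```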
Abstract:
Mitochondrial DNA (mtDNA) is one of the most popular population genetic markers. Its relevance as an indicator of population size and history has recently been questioned by several large-scale studies in animals reporting evidence for recurrent adaptive evolution, at least in invertebrates. Here we focus on mammals, a more restricted taxonomic group for which the issue of mtDNA near-neutrality is crucial. By analyzing the distribution of mtDNA diversity across species and relating it to allozyme diversity, life-history traits, and taxonomy, we show that (i) mtDNA in mammals does not reject the nearly neutral model; (ii) mtDNA diversity, however, is unrelated to any of the 14 life-history and ecological variables that we analyzed, including body mass, geographic range, and The World Conservation Union (IUCN) categorization; (iii) mtDNA diversity is highly variable between mammalian orders and families; (iv) this taxonomic effect is most likely explained by variations of mutation rate between lineages. These results are indicative of a strong stochasticity of effective population size in mammalian species. They suggest that, even in the absence of selection, mtDNA genetic diversity is essentially unpredictable from species biology, and probably uncorrelated with species abundance.