114 results for implicit categorization


Relevance: 20.00%

Abstract:

While an awareness of age-related changes in memory may help older adults gain insight into their own cognitive abilities, it may also have a negative impact on memory performance through a mechanism of stereotype threat (ST). The consequence of ST is under-performance in abilities related to the stereotype. Here, we examined the degree to which explicit and implicit memory were affected by ST across a wide age-range. We found that explicit memory was affected by ST, but only in an Early-Aging group (mean age 67.83), and not in a Later-Aging group (mean age 84.59). Implicit memory was not affected in either the Early or Later Aging group. These results demonstrate that ST for age-related memory decline affects memory processes requiring controlled retrieval while sparing item encoding. Furthermore, this form of ST appears to dissipate as aging progresses. These results have implications for understanding psychological development across the span of aging.

Relevance: 20.00%

Abstract:

Time discretization in weather and climate models introduces truncation errors that limit the accuracy of the simulations. Recent work has yielded a method for reducing the amplitude errors in leap-frog integrations from first-order to fifth-order. This improvement is achieved by replacing the Robert–Asselin filter with the Robert–Asselin–Williams (RAW) filter and using a linear combination of unfiltered and filtered states to compute the tendency term. The purpose of the present article is to apply the composite-tendency RAW-filtered leap-frog scheme to semi-implicit integrations. A theoretical analysis shows that the stability and accuracy are unaffected by the introduction of the implicitly treated mode. The scheme is tested in semi-implicit numerical integrations in both a simple nonlinear stiff system and a medium-complexity atmospheric general circulation model and yields substantial improvements in both cases. We conclude that the composite-tendency RAW-filtered leap-frog scheme is suitable for use in semi-implicit integrations.
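As an illustration of the filter the abstract refers to, the RAW filter's displacement splitting can be sketched in a few lines. The coefficients ν and α below, the test equation, and the Euler bootstrap step are illustrative choices, not the paper's setup, and the composite-tendency variant additionally blends filtered and unfiltered states when evaluating the tendency.

```python
import math

def raw_leapfrog(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    # Bootstrap the three-time-level leap-frog scheme with one Euler step.
    x_prev, x_curr = x0, x0 + dt * f(x0)
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)   # leap-frog step
        # RAW filter: the displacement d is split between levels n and n+1,
        # instead of being applied only at level n as in the RA filter.
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_prev = x_curr + alpha * d              # filtered level n
        x_curr = x_next + (alpha - 1.0) * d      # adjusted level n+1
    return x_curr

# Damped test problem dx/dt = -x with exact solution exp(-t).
approx = raw_leapfrog(lambda x: -x, 1.0, dt=0.01, nsteps=100)
exact = math.exp(-1.0)
```

With α = 1 the scheme reduces to the classical Robert–Asselin filter; values of α slightly above 0.5 recover the higher-order amplitude accuracy discussed above.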

Relevance: 20.00%

Abstract:

Taking a generative perspective, we divide aspects of language into three broad categories: those that cannot be learned (are inherent in Universal Grammar), those that are derived from Universal Grammar, and those that must be learned from the input. Using this framework of language to clarify the "what" of learning, we take the acquisition of null (and overt) subjects in languages like Spanish as an example of how to apply the framework. We demonstrate which properties of a null-subject grammar cannot be learned explicitly and which can, but argue that it remains an open empirical question whether these latter properties are learned using explicit processes, showing how linguistic and psychological approaches may intersect to better understand acquisition.

Relevance: 20.00%

Abstract:

Studies show cross-linguistic differences in motion event encoding, such that English speakers preferentially encode manner of motion more than Spanish speakers, who preferentially encode path of motion. Focusing on native Spanish speaking children (aged 5;00-9;00) learning L2 English, we studied path and manner verb preferences during descriptions of motion stimuli, and tested the linguistic relativity hypothesis by investigating categorization preferences in a non-verbal similarity judgement task of motion clip triads. Results revealed L2 influence on L1 motion event encoding, such that bilinguals used more manner verbs and fewer path verbs in their L1, under the influence of English. We found no effects of linguistic structure on non-verbal similarity judgements, and demonstrate for the first time effects of L2 on L1 lexicalization in child L2 learners in the domain of motion events. This pattern of verbal behaviour supports theories of bilingual semantic representation that postulate a merged lexico-semantic system in early bilinguals.

Relevance: 20.00%

Abstract:

Filter degeneracy is the main obstacle to implementing particle filters in non-linear, high-dimensional models. A new scheme, the implicit equal-weights particle filter (IEWPF), is introduced. In this scheme, samples are drawn implicitly from proposal densities with a different covariance for each particle, such that all particle weights are equal by construction. We test and explore the properties of the new scheme using a 1,000-dimensional simple linear model and the 1,000-dimensional non-linear Lorenz96 model, and compare the performance of the scheme to a Local Ensemble Kalman Filter. The experiments show that the new scheme can easily be implemented in high-dimensional systems and is never degenerate, with good convergence properties in both systems.
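The degeneracy that motivates the scheme is easy to exhibit. The sketch below is not the IEWPF itself; under simple illustrative assumptions (standard-normal particles weighted by a Gaussian likelihood of a zero observation), it shows how the effective sample size of a plain bootstrap particle filter collapses as the state dimension grows.

```python
import math
import random

def effective_sample_size(log_weights):
    # ESS = 1 / sum(w_i^2) for normalized weights, computed stably in log space.
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    s = sum(w)
    w = [wi / s for wi in w]
    return 1.0 / sum(wi * wi for wi in w)

def bootstrap_log_weights(n_particles, dim, obs_noise=1.0, seed=0):
    # Log-likelihood of a zero observation for particles drawn from N(0, I).
    # In high dimensions the log-weights spread widely and one particle
    # dominates -- the classic filter-degeneracy problem.
    rng = random.Random(seed)
    lws = []
    for _ in range(n_particles):
        x = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        lws.append(-0.5 * sum(xi * xi for xi in x) / obs_noise ** 2)
    return lws

ess_low_dim = effective_sample_size(bootstrap_log_weights(100, dim=2))
ess_high_dim = effective_sample_size(bootstrap_log_weights(100, dim=1000))
```

An equal-weights construction sidesteps this by making every log-weight identical, so the ESS stays at the full ensemble size by design.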

Relevance: 10.00%

Abstract:

Many ontologies are currently available for addressing different domains. However, it is not always possible to deploy such ontologies to support collaborative working, so their full potential for enabling intelligent cooperative applications that reason over a network of context-specific ontologies goes unexploited. The main problem arises from the fact that ontologies are presently created in isolation to address specific needs. We foresee, however, the need for a network of ontologies to support the next generation of intelligent applications and devices and the vision of Ambient Intelligence. The main objective of this paper is to motivate the design of a networked ontology (meta)model which formalises ways of connecting available ontologies so that they are easy to search, characterise, and maintain. The aim is to make explicit the virtual and implicit network of ontologies serving the Semantic Web.

Relevance: 10.00%

Abstract:

Alternative meshes of the sphere and adaptive mesh refinement could be immensely beneficial for weather and climate forecasts, but it is not clear how mesh refinement should be achieved. A finite-volume model that solves the shallow-water equations on any mesh of the surface of the sphere is presented. The accuracy and cost effectiveness of four quasi-uniform meshes of the sphere are compared: a cubed sphere, reduced latitude–longitude, hexagonal–icosahedral, and triangular–icosahedral. On some standard shallow-water tests, the hexagonal–icosahedral mesh performs best and the reduced latitude–longitude mesh performs well only when the flow is aligned with the mesh. The inclusion of a refined mesh over a disc-shaped region is achieved using either gradual Delaunay, gradual Voronoi, or abrupt 2:1 block-structured refinement. These refined regions can actually degrade global accuracy, presumably because of changes in wave dispersion where the mesh is highly nonuniform. However, using gradual refinement to resolve a mountain in an otherwise coarse mesh can improve accuracy for the same cost. The model prognostic variables are height and momentum collocated at cell centers, and (to remove grid-scale oscillations of the A grid) the mass flux between cells is advanced from the old momentum using the momentum equation. Quadratic and upwind biased cubic differencing methods are used as explicit corrections to a fast implicit solution that uses linear differencing.

Relevance: 10.00%

Abstract:

In the 12th annual Broadbent Lecture at the Annual Conference Dianne Berry outlined Broadbent’s explicit and implicit influences on psychological science and scientists.

Relevance: 10.00%

Abstract:

This article describes an empirical, user-centred approach to explanation design. It reports three studies that investigate what patients want to know when they have been prescribed medication. The question is asked in the context of the development of a drug prescription system called OPADE. The system is aimed primarily at improving the prescribing behaviour of physicians, but will also produce written explanations for indirect users such as patients. In the first study, a large number of people were presented with a scenario about a visit to the doctor, and were asked to list the questions that they would like to ask the doctor about the prescription. On the basis of the results of the study, a categorization of question types was developed in terms of how frequently particular questions were asked. In the second and third studies a number of different explanations were generated in accordance with this categorization, and a new sample of people were presented with another scenario and were asked to rate the explanations on a number of dimensions. The results showed significant differences between the different explanations. People preferred explanations that included items corresponding to frequently asked questions in study 1. For an explanation to be considered useful, it had to include information about side effects, what the medication does, and any lifestyle changes involved. The implications of the results of the three studies are discussed in terms of the development of OPADE's explanation facility.

Relevance: 10.00%

Abstract:

This paper reports three experiments that examine the role of similarity processing in McGeorge and Burton's (1990) incidental learning task. In the experiments subjects performed a distractor task involving four-digit number strings, all of which conformed to a simple hidden rule. They were then given a forced-choice memory test in which they were presented with pairs of strings and were led to believe that one string of each pair had appeared in the prior learning phase. Although this was not the case, one string of each pair did conform to the hidden rule. Experiment 1 showed that, as in the McGeorge and Burton study, subjects were significantly more likely to select test strings that conformed to the hidden rule. However, additional analyses suggested that rather than having implicitly abstracted the rule, subjects may have been selecting strings that were in some way similar to those seen during the learning phase. Experiments 2 and 3 were designed to try to separate out effects due to similarity from those due to implicit rule abstraction. It was found that the results were more consistent with a similarity-based model than implicit rule abstraction per se.

Relevance: 10.00%

Abstract:

We develop the linearization of a semi-implicit semi-Lagrangian model of the one-dimensional shallow-water equations using two different methods. The usual tangent linear model, formed by linearizing the discrete nonlinear model, is compared with a model formed by first linearizing the continuous nonlinear equations and then discretizing. Both models are shown to perform equally well for finite perturbations. However, the asymptotic behaviour of the two models differs as the perturbation size is reduced. This leads to difficulties in showing that the models are correctly coded using the standard tests. To overcome this difficulty we propose a new method for testing linear models, which we demonstrate both theoretically and numerically. © Crown copyright, 2003. Royal Meteorological Society
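For context, the standard correctness test referred to above checks that the ratio ||M(x+εδx) − M(x)|| / (ε·||Lδx||) tends to 1 as ε → 0, where M is the nonlinear model and L its tangent linear model. A toy sketch follows; the scalar-per-component model here is purely illustrative, not the shallow-water system of the paper.

```python
import math

def nonlinear_model(x):
    # Toy nonlinear step M standing in for the discrete forecast model.
    return [xi + 0.1 * math.sin(xi) for xi in x]

def tangent_linear_model(x, dx):
    # Tangent linear model L: the Jacobian of M at x applied to dx.
    return [(1.0 + 0.1 * math.cos(xi)) * dxi for xi, dxi in zip(x, dx)]

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

x = [0.3, -1.2, 2.0]
dx = [1.0, 0.5, -0.7]

ratios = []
for k in range(1, 6):
    eps = 10.0 ** (-k)
    xp = [xi + eps * dxi for xi, dxi in zip(x, dx)]
    diff = [a - b for a, b in zip(nonlinear_model(xp), nonlinear_model(x))]
    # Ratio should approach 1 as eps shrinks if L is coded correctly.
    ratios.append(norm(diff) / (eps * norm(tangent_linear_model(x, dx))))
```

The difficulty the abstract raises is that when M and L are discretized differently, this ratio need not converge cleanly to 1, which is what motivates an alternative test.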

Relevance: 10.00%

Abstract:

It is argued that the essential aspect of atmospheric blocking may be seen in the wave breaking of potential temperature (θ) on a potential vorticity (PV) surface, which may be identified with the tropopause, and the consequent reversal of the usual meridional temperature gradient of θ. A new dynamical blocking index is constructed using a meridional θ difference on a PV surface. Unlike in previous studies, the central blocking latitude about which this difference is constructed is allowed to vary with longitude. At each longitude it is determined by the latitude at which the climatological high-pass transient eddy kinetic energy is a maximum. Based on the blocking index, at each longitude local instantaneous blocking, large-scale blocking, and blocking episodes are defined. For longitudinal sectors, sector blocking and sector blocking episodes are also defined. The 5-yr annual climatologies of the three longitudinally defined blocking event frequencies and the seasonal climatologies of blocking episode frequency are shown. The climatologies all pick out the eastern North Atlantic–Europe and eastern North Pacific–western North America regions. There is evidence that Pacific blocking shifts into the western central Pacific in the summer. Sector blocking episodes of 4 days or more are shown to exhibit different persistence characteristics to shorter events, showing that blocking is not just the long timescale tail end of a distribution. The PV–θ index results for the annual average location of Pacific blocking agree with synoptic studies but disagree with modern quantitative height field–based studies. It is considered that the index used here is to be preferred anyway because of its dynamical basis. However, the longitudinal discrepancy is found to be associated with the use in the height field index studies of a central blocking latitude that is independent of longitude. 
In particular, the use in the North Pacific of a latitude that is suitable for the eastern North Atlantic leads to spurious categorization of blocking there. Furthermore, the PV–θ index is better able to detect Ω blocking than conventional height field indices.
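In the spirit of the index described above (details illustrative, not the paper's exact formulation), a minimal sketch: B is the θ difference across the central blocking latitude on the PV surface, positive when the usual poleward decrease of θ is reversed. In the paper the central latitude additionally varies with longitude, following the maximum of the climatological high-pass transient eddy kinetic energy.

```python
def blocking_index(theta, lat_c, dlat=15.0):
    # B = theta poleward minus theta equatorward of the central blocking
    # latitude lat_c on the PV surface; B > 0 flags a reversed (blocked)
    # meridional theta gradient.
    return theta(lat_c + dlat / 2.0) - theta(lat_c - dlat / 2.0)

# Illustrative theta profiles on a PV surface (theta normally falls poleward).
def normal_profile(lat):
    return 380.0 - 1.0 * lat

def blocked_profile(lat):
    return 330.0 + 0.8 * (lat - 40.0)  # warm air displaced poleward near 55N

b_normal = blocking_index(normal_profile, lat_c=55.0)
b_blocked = blocking_index(blocked_profile, lat_c=55.0)
```

Thresholding B at each longitude, and requiring spatial extent and persistence, then yields the local, large-scale, and episode definitions listed above.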

Relevance: 10.00%

Abstract:

Three experiments examine whether simple pair-wise comparison judgments, involving the “recognition heuristic” (Goldstein & Gigerenzer, 2002), are sensitive to implicit cues to the nature of the comparison required. Experiments 1 & 2 show that participants frequently choose the recognized option of a pair if asked to make “larger” judgments but are significantly less likely to choose the unrecognized option when asked to make “smaller” judgments. Experiment 3 demonstrates that, overall, participants consider recognition to be a more reliable guide to judgments of a magnitude criterion than lack of recognition and that this intuition drives the framing effect. These results support the idea that, when making pair-wise comparison judgments, inferring that the recognized item is large is simpler than inferring that the unrecognized item is small.
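The pure heuristic itself is a one-line decision rule; the asymmetry the experiments document is a property of human judges, not of the rule. A sketch, with hypothetical city names and recognition set:

```python
def recognition_choice(pair, recognized, frame="larger"):
    # pair: two option names; recognized: set of names the judge recognizes.
    # When exactly one option is recognized, the recognition heuristic picks
    # the recognized one under a "larger" frame and the unrecognized one
    # under a "smaller" frame. The studies above show human judges are less
    # willing to make the second inference than the first.
    a, b = pair
    a_known, b_known = a in recognized, b in recognized
    if a_known == b_known:
        return None  # heuristic is silent: recognition does not discriminate
    known, unknown = (a, b) if a_known else (b, a)
    return known if frame == "larger" else unknown

choice_larger = recognition_choice(("Berlin", "Wuppertal"), {"Berlin"}, "larger")
choice_smaller = recognition_choice(("Berlin", "Wuppertal"), {"Berlin"}, "smaller")
```

The framing effect reported above is precisely the gap between this symmetric rule and participants' reluctance to infer "small" from lack of recognition.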

Relevance: 10.00%

Abstract:

Much work has supported the idea that recategorization of ingroups and outgroups into a superordinate category can have beneficial effects for intergroup relations. Recently, however, increases in bias following recategorization have been observed in some contexts. It is argued that such unwanted consequences of recategorization will only be apparent for perceivers who are highly committed to their ingroup subgroups. In Experiments 1 to 3, the authors observed, on both explicit and implicit measures, that an increase in bias following recategorization occurred only for high subgroup identifiers. In Experiment 4, it was found that maintaining the salience of subgroups within a recategorized superordinate group averted this increase in bias for high identifiers and led overall to the lowest levels of bias. These findings are discussed in the context of recent work on the Common Ingroup Identity Model.

Relevance: 10.00%

Abstract:

We present a highly parallel design for a simple genetic algorithm, built as a pipeline of systolic arrays. The systolic design provides high throughput and unidirectional pipelining by exploiting the implicit parallelism in the genetic operators. The design is significant because, unlike other hardware genetic algorithms, it is independent of both the fitness function and the particular chromosome length used in a problem. We have designed and simulated a version of the mutation array using Xilinx FPGA tools to investigate the feasibility of hardware implementation. A simple 5-chromosome mutation array occupies 195 CLBs and is capable of performing more than one million mutations per second.

I. Introduction

Genetic algorithms (GAs) are established search and optimization techniques which have been applied to a range of engineering and applied problems with considerable success [1]. They operate by maintaining a population of trial solutions encoded using a suitable encoding scheme.
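The mutation stage's independence from the fitness function and the chromosome length is easy to see in software. The following sketch mimics one cell of the mutation array; the rate and sizes are illustrative, and the hardware of course operates on streamed bits rather than Python lists.

```python
import random

def mutation_stage(chromosome, rate, rng):
    # One cell of the mutation array: flip each bit independently with
    # probability `rate`. Note it never inspects fitness and works for any
    # chromosome length -- the property highlighted in the abstract.
    return [bit ^ (rng.random() < rate) for bit in chromosome]

rng = random.Random(42)
population = [[rng.randint(0, 1) for _ in range(16)] for _ in range(8)]
# Chromosomes stream through the stage one after another, as in a pipeline.
mutated = [mutation_stage(c, rate=0.05, rng=rng) for c in population]
```

In the systolic design, selection, crossover, and mutation cells are chained so that each clock cycle pushes one gene forward, which is where the quoted throughput comes from.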