134 results for implicit enumeration
Abstract:
Steep orography can cause noisy solutions and instability in models of the atmosphere. A new technique for modelling flow over orography is introduced which guarantees curl-free gradients on arbitrary grids, implying that the pressure gradient term is not a spurious source of vorticity. This mimetic property leads to better hydrostatic balance and better energy conservation in test cases using terrain-following grids. Curl-free gradients are achieved by using the covariant components of velocity over orography rather than the usual horizontal and vertical components. In addition, gravity and acoustic waves are treated implicitly without the need for mean and perturbation variables or a hydrostatic reference profile, which enables a straightforward description of the implicit treatment of gravity waves. Results are presented for a resting atmosphere over orography, for which the curl-free pressure gradient formulation proves advantageous. Results for gravity waves over orography are insensitive to the placement of terrain-following layers. The model with implicit gravity waves is stable in strongly stratified conditions, with N∆t up to at least 10 (where N is the Brunt–Väisälä frequency). A warm bubble rising over orography is simulated, and the curl-free pressure gradient formulation gives much more accurate results for this test case than a model without this mimetic property.
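To make the mimetic property concrete, the following is a minimal sketch (generic mimetic finite-difference reasoning, not taken from the paper) of why a gradient built from point values of pressure is discretely curl-free: the circulation of the discrete gradient around any closed loop of edges telescopes to zero, so it cannot act as a source of vorticity.

```latex
% Continuous identity the discretization mimics:
%   \nabla \times \nabla p = 0
% Discrete analogue: define the covariant gradient component on an edge e
% joining points i(e) and j(e) as (p_{j(e)} - p_{i(e)}) / |e|. Around any
% closed loop of edges, each point value appears once with + and once
% with -, so the discrete circulation vanishes identically:
\oint \nabla p \cdot \mathrm{d}\mathbf{l}
  \;\longrightarrow\;
  \sum_{e \,\in\, \text{loop}} \bigl(p_{j(e)} - p_{i(e)}\bigr) = 0 .
```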
Abstract:
While an awareness of age-related changes in memory may help older adults gain insight into their own cognitive abilities, it may also have a negative impact on memory performance through a mechanism of stereotype threat (ST). The consequence of ST is under-performance in abilities related to the stereotype. Here, we examined the degree to which explicit and implicit memory were affected by ST across a wide age range. We found that explicit memory was affected by ST, but only in an Early-Aging group (mean age 67.83), and not in a Later-Aging group (mean age 84.59). Implicit memory was not affected in either group. These results demonstrate that ST for age-related memory decline affects memory processes requiring controlled retrieval while sparing item encoding. Furthermore, this form of ST appears to dissipate as aging progresses. These results have implications for understanding psychological development across the span of aging.
Abstract:
Time discretization in weather and climate models introduces truncation errors that limit the accuracy of the simulations. Recent work has yielded a method for reducing the amplitude errors in leapfrog integrations from first order to fifth order. This improvement is achieved by replacing the Robert–Asselin filter with the Robert–Asselin–Williams (RAW) filter and using a linear combination of unfiltered and filtered states to compute the tendency term. The purpose of the present article is to apply the composite-tendency RAW-filtered leapfrog scheme to semi-implicit integrations. A theoretical analysis shows that the stability and accuracy are unaffected by the introduction of the implicitly treated mode. The scheme is tested in semi-implicit numerical integrations of both a simple nonlinear stiff system and a medium-complexity atmospheric general circulation model, and yields substantial improvements in both cases. We conclude that the composite-tendency RAW-filtered leapfrog scheme is suitable for use in semi-implicit integrations.
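As background, here is a minimal sketch of the plain RAW-filtered leapfrog step the abstract builds on. The composite-tendency modification, which evaluates the tendency on a blend of filtered and unfiltered states, is only indicated in a comment because its exact weighting is specific to the paper; the default values of `nu` and `alpha` are illustrative assumptions.

```python
import numpy as np

def raw_leapfrog(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog integration with the Robert-Asselin-Williams (RAW) filter.

    The composite-tendency variant discussed above would additionally
    evaluate f on a linear combination of filtered and unfiltered states,
    with a weighting not reproduced here.
    """
    xm = np.asarray(x0, dtype=float)   # filtered state at level n-1
    x = xm + dt * f(xm)                # forward-Euler bootstrap to level n
    for _ in range(nsteps):
        xp = xm + 2.0 * dt * f(x)              # leapfrog step to level n+1
        d = 0.5 * nu * (xm - 2.0 * x + xp)     # Robert-Asselin displacement
        xm = x + alpha * d                     # RAW: filter level n ...
        x = xp + (alpha - 1.0) * d             # ... and counter-adjust n+1
    return x

# usage sketch: an oscillator dx/dt = i*omega*x represented in R^2
omega = 1.0
f = lambda x: np.array([-omega * x[1], omega * x[0]])
print(raw_leapfrog(f, np.array([1.0, 0.0]), dt=0.1, nsteps=100))
```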
Abstract:
Taking a generative perspective, we divide aspects of language into three broad categories: those that cannot be learned (they are inherent in Universal Grammar), those that are derived from Universal Grammar, and those that must be learned from the input. Using this framework to clarify the "what" of learning, we take the acquisition of null (and overt) subjects in languages like Spanish as an example of how to apply it. We demonstrate which properties of a null-subject grammar cannot be learned explicitly and which can, but argue that it remains an open empirical question whether these latter properties are learned using explicit processes, showing how linguistic and psychological approaches may intersect to better understand acquisition.
Abstract:
Filter degeneracy is the main obstacle to implementing particle filters in non-linear high-dimensional models. A new scheme, the implicit equal-weights particle filter (IEWPF), is introduced. In this scheme samples are drawn implicitly from proposal densities with a different covariance for each particle, such that all particle weights are equal by construction. We test and explore the properties of the new scheme using a simple 1,000-dimensional linear model and the 1,000-dimensional non-linear Lorenz-96 model, and compare its performance to that of a Local Ensemble Kalman Filter. The experiments show that the new scheme can easily be implemented in high-dimensional systems and is never degenerate, with good convergence properties in both systems.
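To illustrate the degeneracy problem that motivates equal-weight constructions like the IEWPF (a generic sketch of the standard bootstrap filter's weight collapse, not of the IEWPF algorithm itself; the Gaussian toy model is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_weight(dim, n_particles=100):
    """Bootstrap-filter weights for one Gaussian observation of a
    dim-dimensional state: w_i proportional to exp(-0.5 * ||y - x_i||^2)."""
    particles = rng.standard_normal((n_particles, dim))  # prior samples
    y = rng.standard_normal(dim)                         # one observation
    log_w = -0.5 * np.sum((y - particles) ** 2, axis=1)
    log_w -= log_w.max()                                 # stabilise exp()
    w = np.exp(log_w)
    return (w / w.sum()).max()

# as the dimension grows, one particle takes nearly all the weight,
# so the effective ensemble size collapses to one (degeneracy)
for dim in (1, 10, 100, 1000):
    print(dim, round(max_weight(dim), 3))
```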
Abstract:
Currently many ontologies are available for addressing different domains. However, it is not always possible to deploy such ontologies to support collaborative working, so that their full potential can be exploited to implement intelligent cooperative applications capable of reasoning over a network of context-specific ontologies. The main problem arises from the fact that ontologies are presently created in isolation to address specific needs. However, we foresee the need for a network of ontologies which will support the next generation of intelligent applications/devices and the vision of Ambient Intelligence. The main objective of this paper is to motivate the design of a networked ontology (meta)model which formalises ways of connecting available ontologies so that they are easy to search, to characterise, and to maintain. The aim is to make explicit the virtual and implicit network of ontologies serving the Semantic Web.
Abstract:
Alternative meshes of the sphere and adaptive mesh refinement could be immensely beneficial for weather and climate forecasts, but it is not clear how mesh refinement should be achieved. A finite-volume model that solves the shallow-water equations on any mesh of the surface of the sphere is presented. The accuracy and cost effectiveness of four quasi-uniform meshes of the sphere are compared: a cubed sphere, reduced latitude–longitude, hexagonal–icosahedral, and triangular–icosahedral. On some standard shallow-water tests, the hexagonal–icosahedral mesh performs best and the reduced latitude–longitude mesh performs well only when the flow is aligned with the mesh. The inclusion of a refined mesh over a disc-shaped region is achieved using either gradual Delaunay, gradual Voronoi, or abrupt 2:1 block-structured refinement. These refined regions can actually degrade global accuracy, presumably because of changes in wave dispersion where the mesh is highly nonuniform. However, using gradual refinement to resolve a mountain in an otherwise coarse mesh can improve accuracy for the same cost. The model prognostic variables are height and momentum collocated at cell centers and, to remove grid-scale oscillations of the A-grid, the mass flux between cells is advanced from the old momentum using the momentum equation. Quadratic and upwind-biased cubic differencing methods are used as explicit corrections to a fast implicit solution that uses linear differencing.
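For reference, a standard flux form of the rotating shallow-water equations that such finite-volume models discretize (the notation is assumed here, not taken from the paper):

```latex
% h: fluid depth, \mathbf{u}: velocity, f: Coriolis parameter,
% g: gravitational acceleration, b: orography (bottom) height
\frac{\partial h}{\partial t} + \nabla \cdot (h\mathbf{u}) = 0,
\qquad
\frac{\partial (h\mathbf{u})}{\partial t}
  + \nabla \cdot (h\mathbf{u} \otimes \mathbf{u})
  = -f\,\hat{\mathbf{k}} \times h\mathbf{u} \;-\; g\,h\,\nabla (h + b).
```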
Abstract:
In the 12th annual Broadbent Lecture at the Annual Conference, Dianne Berry outlined Broadbent's explicit and implicit influences on psychological science and scientists.
Abstract:
This paper reports three experiments that examine the role of similarity processing in McGeorge and Burton's (1990) incidental learning task. In the experiments subjects performed a distractor task involving four-digit number strings, all of which conformed to a simple hidden rule. They were then given a forced-choice memory test in which they were presented with pairs of strings and were led to believe that one string of each pair had appeared in the prior learning phase. Although this was not the case, one string of each pair did conform to the hidden rule. Experiment 1 showed that, as in the McGeorge and Burton study, subjects were significantly more likely to select test strings that conformed to the hidden rule. However, additional analyses suggested that rather than having implicitly abstracted the rule, subjects may have been selecting strings that were in some way similar to those seen during the learning phase. Experiments 2 and 3 were designed to try to separate out effects due to similarity from those due to implicit rule abstraction. It was found that the results were more consistent with a similarity-based model than implicit rule abstraction per se.
Abstract:
We develop the linearization of a semi-implicit semi-Lagrangian model of the one-dimensional shallow-water equations using two different methods. The usual tangent linear model, formed by linearizing the discrete nonlinear model, is compared with a model formed by first linearizing the continuous nonlinear equations and then discretizing. Both models are shown to perform equally well for finite perturbations. However, the asymptotic behaviour of the two models differs as the perturbation size is reduced. This leads to difficulties in showing that the models are correctly coded using the standard tests. To overcome this difficulty we propose a new method for testing linear models, which we demonstrate both theoretically and numerically.
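For context, the standard correctness test the abstract alludes to checks that the ratio of the nonlinear perturbation response to the tangent-linear prediction approaches 1 as the perturbation shrinks. A generic sketch follows (the paper's proposed alternative test is not reproduced; the toy model is an assumption):

```python
import numpy as np

def tl_test(nonlinear, tangent, x, dx, epsilons=(1e-1, 1e-2, 1e-3, 1e-4)):
    """Standard Taylor test for a tangent linear model: the ratio
    ||M(x + eps*dx) - M(x)|| / ||eps * L(x) dx|| should tend to 1."""
    base = nonlinear(x)
    for eps in epsilons:
        num = np.linalg.norm(nonlinear(x + eps * dx) - base)
        den = np.linalg.norm(eps * tangent(x, dx))
        print(f"eps={eps:g}  ratio={num / den:.6f}")

# usage with a toy model M(x) = x**2 and its tangent L(x) dx = 2*x*dx
tl_test(lambda x: x**2, lambda x, dx: 2 * x * dx,
        x=np.array([1.0, 2.0]), dx=np.array([0.1, -0.2]))
```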
Abstract:
Three experiments examine whether simple pair-wise comparison judgments, involving the “recognition heuristic” (Goldstein & Gigerenzer, 2002), are sensitive to implicit cues to the nature of the comparison required. Experiments 1 & 2 show that participants frequently choose the recognized option of a pair if asked to make “larger” judgments but are significantly less likely to choose the unrecognized option when asked to make “smaller” judgments. Experiment 3 demonstrates that, overall, participants consider recognition to be a more reliable guide to judgments of a magnitude criterion than lack of recognition and that this intuition drives the framing effect. These results support the idea that, when making pair-wise comparison judgments, inferring that the recognized item is large is simpler than inferring that the unrecognized item is small.
Recategorization and subgroup identification: predicting and preventing threats from common ingroups
Abstract:
Much work has supported the idea that recategorization of ingroups and outgroups into a superordinate category can have beneficial effects for intergroup relations. Recently, however, increases in bias following recategorization have been observed in some contexts. It is argued that such unwanted consequences of recategorization will only be apparent for perceivers who are highly committed to their ingroup subgroups. In Experiments 1 to 3, the authors observed, on both explicit and implicit measures, that an increase in bias following recategorization occurred only for high subgroup identifiers. In Experiment 4, it was found that maintaining the salience of subgroups within a recategorized superordinate group averted this increase in bias for high identifiers and led overall to the lowest levels of bias. These findings are discussed in the context of recent work on the Common Ingroup Identity Model.
Abstract:
We present a highly parallel design for a simple genetic algorithm, using a pipeline of systolic arrays. The systolic design provides high throughput and unidirectional pipelining by exploiting the implicit parallelism in the genetic operators. The design is significant because, unlike other hardware genetic algorithms, it is independent of both the fitness function and the particular chromosome length used in a problem. We have designed and simulated a version of the mutation array using Xilinx FPGA tools to investigate the feasibility of hardware implementation. A simple 5-chromosome mutation array occupies 195 CLBs and is capable of performing more than one million mutations per second.
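As a software analogue of the mutation stage such an array implements (an illustrative sketch only; bit-string chromosomes and the per-bit mutation rate are assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def mutate(population, rate=0.01):
    """Flip each bit of each chromosome independently with probability
    `rate` -- this per-element independence is the implicit parallelism
    a systolic array exploits to process a stream of chromosomes."""
    flips = rng.random(population.shape) < rate
    return population ^ flips  # XOR flips exactly the selected bits

# usage: 5 chromosomes of 16 bits, echoing the small mutation array above
pop = rng.integers(0, 2, size=(5, 16), dtype=np.uint8).astype(bool)
print(mutate(pop).astype(int))
```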
Abstract:
Slantwise convective available potential energy (SCAPE) is a measure of the degree to which the atmosphere is unstable to conditional symmetric instability (CSI). It has, until now, been defined by parcel theory in which the atmosphere is assumed to be nonevolving and balanced, that is, two-dimensional. When applying this two-dimensional theory to three-dimensional evolving flows, these assumptions can be interpreted as an implicit assumption that a timescale separation exists between a relatively rapid timescale for slantwise ascent and a slower timescale for the development of the system. An approximate extension of parcel theory to three dimensions is derived and it is shown that calculations of SCAPE based on the assumption of relatively rapid slantwise ascent can be qualitatively in error. For a case study example of a developing extratropical cyclone, SCAPE calculated along trajectories determined without assuming the existence of the timescale separation shows large values for parcels ascending from the warm sector and along the warm front. These parcels ascend into the cloud head, within which there is some evidence consistent with the release of CSI from observational and model cross sections. This region of high SCAPE was not found for calculations along the relatively rapidly ascending trajectories determined by assuming the existence of the timescale separation.
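For orientation, a standard parcel-theory form of SCAPE (notation assumed; this is the conventional definition, not the paper's three-dimensional extension):

```latex
% CAPE-like buoyancy integral, evaluated along a slantwise path:
\mathrm{SCAPE} \;=\; \int_{z_{\mathrm{LFC}}}^{z_{\mathrm{LNB}}}
  g \, \frac{T_{v,\mathrm{parcel}} - T_{v,\mathrm{env}}}{T_{v,\mathrm{env}}}
  \,\mathrm{d}z ,
% where the integral is taken along a surface of constant absolute
% momentum (M = v + f x) rather than vertically -- this slantwise path
% is what distinguishes SCAPE from ordinary CAPE.
```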
Abstract:
Existing data on animal health and welfare in organic livestock production systems in the European Community countries are reviewed in the light of the demands and challenges of the recently implemented EU regulation on organic livestock production. The main conclusions and recommendations of a three-year networking project on organic livestock production are summarised and the future challenges to organic livestock production in terms of welfare and health management are discussed. The authors conclude that, whilst the available data are limited and the implementation of the EC regulation is relatively recent, there is little evidence to suggest that organic livestock management causes major threats to animal health and welfare in comparison with conventional systems. There are, however, some well-identified areas, such as parasite control and balanced ration formulation, where efforts are needed to find solutions that comply with organic standard requirements and guarantee high levels of health and welfare. It is suggested that, whilst organic standards offer an implicit framework for animal health and welfare management, there is a need to resolve apparent conflicts between the organic farming objectives in regard to the environment, public health, farmer income, and animal health and welfare. The key challenges for the future of organic livestock production in Europe are related to the feasibility of implementing improved husbandry inputs and the development of evidence-based decision support systems for health and feeding management.