902 results for "Estoppel by representation"


Relevance: 30.00%

Abstract:

It is in the interests of everybody that the environment is protected. In view of the recent leaps in environmental awareness it would seem timely and sensible, therefore, for people to pool vehicle resources to minimise the damaging impact of emissions. However, this is often contrary to how complex social systems behave – local decisions made by self-interested individuals often have emergent effects that are in the interests of nobody. For software engineers a major challenge is to help facilitate individual decision-making such that individual preferences can be met, which, when accumulated, minimise adverse effects at the level of the transport system. We introduce this general problem through a concrete example based on vehicle-sharing. Firstly, we outline the kind of complex transportation problem that is directly addressed by our technology (CO2y™ - pronounced “cosy”), and also show how this differs from other more basic software solutions. The CO2y™ architecture is then briefly introduced. We outline the practical advantages of the advanced, intelligent software technology that is designed to satisfy a number of individual preference criteria and thereby find appropriate matches within a population of vehicle-share users. An example scenario of use is put forward, i.e., minimisation of grey-fleets within a medium-sized company. Here we comment on some of the underlying assumptions of the scenario, and how in a detailed real-world situation such assumptions might differ between different companies, and individual users. Finally, we summarise the paper, and conclude by outlining how the problem of pooled transportation is likely to benefit from the further application of emergent, nature-inspired computing technologies. These technologies allow systems-level behaviour to be optimised with explicit representation of individual actors. With these techniques we hope to make real progress in facing the complexity challenges that transportation problems produce.
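
The CO2y™ architecture itself is not specified in this abstract, so the Python sketch below only illustrates the kind of preference-driven matching it refers to: users whose stated criteria agree on enough points are paired. The field names, scoring rule, and threshold are assumptions made for illustration, not part of the technology described.

    from itertools import combinations

    def match_vehicle_sharers(users, min_score=2):
        # Score a pair of users by how many shared preference criteria agree.
        def score(a, b):
            shared = (set(a) & set(b)) - {"name"}
            return sum(a[k] == b[k] for k in shared)

        # Consider the best-scoring pairs first and match each user at most once.
        ranked = sorted(((score(a, b), a["name"], b["name"])
                         for a, b in combinations(users, 2)), reverse=True)
        matched, pairs = set(), []
        for s, x, y in ranked:
            if s >= min_score and x not in matched and y not in matched:
                pairs.append((x, y, s))
                matched.update({x, y})
        return pairs

    # Hypothetical usage: two commuters from the same site and time window match.
    users = [
        {"name": "A", "site": "HQ", "window": "08:00-09:00", "smoker": False},
        {"name": "B", "site": "HQ", "window": "08:00-09:00", "smoker": True},
        {"name": "C", "site": "Depot", "window": "10:00-11:00", "smoker": False},
    ]
    print(match_vehicle_sharers(users))  # [('A', 'B', 2)]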

Relevance: 30.00%

Abstract:

N.J. Lacey and M.H. Lee, "The Implications of Philosophical Foundations for Knowledge Representation and Learning in Agents", Springer-Verlag Lecture Notes in Artificial Intelligence, Vol. 2636 on Adaptive Agents and Multi-Agent Systems, 2002.

Relevance: 30.00%

Abstract:

X. Fu and Q. Shen, "Knowledge representation for fuzzy model composition", in Proceedings of the 21st International Workshop on Qualitative Reasoning, 2007, pp. 47-54. Sponsorship: EPSRC.

Relevance: 30.00%

Abstract:

Scully, Roger and Farrell, David, Representing Europe's Citizens? Electoral Institutions and the Failure of Parliamentary Representation (Oxford: Oxford University Press, 2007), pp. xiii+230. RAE2008.

Relevance: 30.00%

Abstract:

One finding of user studies is that information on meaning tends to be what dictionary users want most from their dictionaries. This is consistent with the traditional image of the dictionary as a repository of meanings of words, and this is also borne out in definitions of the item DICTIONARY itself as given in dictionaries. While this popular view has not changed much, the growing role of electronic dictionaries can change the lexicographers' approach to meaning representation. Traditionally, paper dictionaries have explained words with words, using either a definition or an equivalent, and occasionally a line-drawn picture. However, a prominent feature of the electronic medium is its multimodality, and this offers potential for the description of meaning. While it is much easier to include pictorial content, electronic dictionaries can also hold media objects which paper cannot carry, such as audio, animation or video. Publishers are drawn by the attraction of these new options, but are they always functionally useful for the dictionary users? In this article, the existing evidence is examined, and informed guesses are offered where evidence is not yet available.

Relevance: 30.00%

Abstract:

The CIL compiler for core Standard ML compiles whole programs using a novel typed intermediate language (TIL) with intersection and union types and flow labels on both terms and types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce customized data representations. However, duplication incurs compile-time space costs that are potentially much greater than are incurred in TILs employing type-level abstraction or quantification. In this paper, we present empirical data on the compile-time space costs of using CIL as an intermediate language. The data shows that these costs can be made tractable by using sufficiently fine-grained flow analyses together with standard hash-consing techniques. The data also suggests that non-duplicating formulations of intersection (and union) types would not achieve significantly better space complexity.
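
The hash-consing credited above with keeping duplication tractable is a standard technique: structurally equal terms or types are interned once and shared, so each further occurrence costs a pointer rather than a copy. A minimal Python sketch follows; the tuple encoding of nodes is purely illustrative and unrelated to CIL's actual data structures.

    class HashConsTable:
        def __init__(self):
            self._table = {}

        def cons(self, tag, *children):
            # Children are assumed to be interned already, so object identity
            # is a sound structural key.
            key = (tag,) + tuple(id(c) for c in children)
            node = self._table.get(key)
            if node is None:
                node = (tag,) + children
                self._table[key] = node
            return node  # canonical, shared node

    # Two structurally equal arrow types come back as the same object.
    hc = HashConsTable()
    int_t = hc.cons("int")
    assert hc.cons("arrow", int_t, int_t) is hc.cons("arrow", int_t, int_t)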

Relevance: 30.00%

Abstract:

This study develops a neuromorphic model of human lightness perception that is inspired by how the mammalian visual system is designed for this function. It is known that biological visual representations can adapt to a billion-fold change in luminance. How such a system determines absolute lightness under varying illumination conditions to generate a consistent interpretation of surface lightness remains an unsolved problem. Such a process, called "anchoring" of lightness, has properties including articulation, insulation, configuration, and area effects. The model quantitatively simulates such psychophysical lightness data, as well as other data such as discounting the illuminant, the double brilliant illusion, and lightness constancy and contrast effects. The model retina embodies gain control at retinal photoreceptors, and spatial contrast adaptation at the negative feedback circuit between mechanisms that model the inner segment of photoreceptors and interacting horizontal cells. The model can thereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. A new anchoring mechanism, called the Blurred-Highest-Luminance-As-White (BHLAW) rule, helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural color images under variable lighting conditions, and is compared with the popular RETINEX model.
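
Operationally, the BHLAW rule can be read as: blur the luminance map, take the highest value in the blurred copy as the white anchor, and scale lightness against that anchor. The Python sketch below follows that reading; the blur width and the lightness value assigned to white are illustrative parameters, not values from the model.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def bhlaw_lightness(luminance, sigma=8.0, white=100.0):
        # Blur the luminance map so a single bright pixel cannot set the anchor.
        blurred = gaussian_filter(np.asarray(luminance, dtype=float), sigma=sigma)
        anchor = blurred.max()  # the blurred highest luminance acts as "white"
        # Scale every pixel against the anchor; values above it read as self-luminous.
        return (np.asarray(luminance, dtype=float) / anchor) * white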

Relevance: 30.00%

Abstract:

This article applies a recent theory of 3-D biological vision, called FACADE Theory, to explain several percepts which Kanizsa pioneered. These include 3-D pop-out of an occluding form in front of an occluded form, leading to completion and recognition of the occluded form; 3-D transparent and opaque percepts of Kanizsa squares, with and without Varin wedges; and interactions between percepts of illusory contours, brightness, and depth in response to 2-D Kanizsa images. These explanations clarify how a partially occluded object representation can be completed for purposes of object recognition, without the completed part of the representation necessarily being seen. The theory traces these percepts to neural mechanisms that compensate for measurement uncertainty and complementarity at individual cortical processing stages by using parallel and hierarchical interactions among several cortical processing stages. These interactions are modelled by a Boundary Contour System (BCS) that generates emergent boundary segmentations and a complementary Feature Contour System (FCS) that fills-in surface representations of brightness, color, and depth. The BCS and FCS interact reciprocally with an Object Recognition System (ORS) that binds BCS boundary and FCS surface representations into attentive object representations. The BCS models the parvocellular LGN→Interblob→Interstripe→V4 cortical processing stream, the FCS models the parvocellular LGN→Blob→Thin Stripe→V4 cortical processing stream, and the ORS models inferotemporal cortex.

Relevance: 30.00%

Abstract:

This paper describes a self-organizing neural network that rapidly learns a body-centered representation of 3-D target positions. This representation remains invariant under head and eye movements, and is a key component of sensory-motor systems for producing motor equivalent reaches to targets (Bullock, Grossberg, and Guenther, 1993).

Relevance: 30.00%

Abstract:

This article presents a new neural pattern recognition architecture based on multichannel data representation. The architecture employs generalized ART modules as building blocks to construct a supervised learning system that generates recognition codes on channels dynamically selected in context, using serial and parallel match tracking led by inter-ART vigilance signals.

Relevance: 30.00%

Abstract:

An extension to the orientational harmonic model is presented as a rotation, translation, and scale invariant representation of geometrical form in biological vision.

Relevance: 30.00%

Abstract:

The proposed model, called the combinatorial and competitive spatio-temporal memory or CCSTM, provides an elegant solution to the general problem of having to store and recall spatio-temporal patterns in which states or sequences of states can recur in various contexts. For example, Fig. 1 shows two state sequences that have a common subsequence, C and D. The CCSTM assumes that any state has a distributed representation as a collection of features. Each feature has an associated competitive module (CM) containing K cells. On any given occurrence of a particular feature, A, exactly one of the cells in CM_A will be chosen to represent it. It is the particular set of cells active on the previous time step that determines which cells are chosen to represent instances of their associated features on the current time step. If we assume that typically S features are active in any state, then any state has K^S different neural representations. This huge space of possible neural representations of any state is what underlies the model's ability to store and recall numerous context-sensitive state sequences. The purpose of this paper is simply to describe this mechanism.
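
To make the context-dependent cell choice concrete, a minimal Python sketch is given below. The deterministic hash used to pick a winner within each competitive module is an illustrative stand-in for the competitive dynamics of the actual model.

    import hashlib

    class CCSTMSketch:
        def __init__(self, k=8):
            self.k = k                      # K cells per competitive module (CM)
            self.prev_active = frozenset()  # cells active on the previous time step

        def encode_state(self, features):
            # The previously active cells form the context that selects, for each
            # feature, which of its K cells represents it on this time step.
            context = ",".join(sorted(f"{f}:{c}" for f, c in self.prev_active))
            active = set()
            for feat in features:
                digest = hashlib.sha256(f"{feat}|{context}".encode()).hexdigest()
                active.add((feat, int(digest, 16) % self.k))  # one winner per CM
            self.prev_active = frozenset(active)
            return active

With S active features this yields on the order of K^S possible codes per state, which is what gives the stored sequences their context sensitivity.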

Relevance: 30.00%

Abstract:

Most associative memory models perform one-level mapping between predefined sets of input and output patterns and are unable to represent hierarchical knowledge. Complex AI systems allow hierarchical representation of concepts, but generally do not have learning capabilities. In this paper, a memory model is proposed which forms a concept hierarchy by learning sample relations between concepts. All concepts are represented in a concept layer. Relations between a concept and its defining lower-level concepts are chunked as cognitive codes represented in a coding layer. By updating memory contents in the concept layer through code firing in the coding layer, the system is able to perform an important class of commonsense reasoning, namely recognition and inheritance.
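
A toy Python sketch of the chunking idea follows; it is meant only to illustrate recognition and inheritance over chunked relations, not the model's concept-layer and coding-layer dynamics, and it assumes the chunk graph is acyclic.

    class ConceptMemory:
        def __init__(self):
            self.chunks = {}  # concept -> set of its defining lower-level concepts

        def learn_relation(self, concept, defining_concepts):
            self.chunks[concept] = set(defining_concepts)

        def recognise(self, active_concepts):
            # Recognition: concepts whose defining parts are all currently active.
            active = set(active_concepts)
            return {c for c, parts in self.chunks.items() if parts <= active}

        def inherits(self, concept, feature):
            # Inheritance: a feature holds if it is among the defining concepts,
            # directly or through a chain of chunked relations.
            parts = self.chunks.get(concept, set())
            return feature in parts or any(self.inherits(p, feature) for p in parts)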

Relevance: 30.00%

Abstract:

A new neural network architecture is introduced for the recognition of pattern classes after supervised and unsupervised learning. Applications include spatio-temporal image understanding and prediction and 3-D object recognition from a series of ambiguous 2-D views. The architecture, called ART-EMAP, achieves a synthesis of adaptive resonance theory (ART) and spatial and temporal evidence integration for dynamic predictive mapping (EMAP). ART-EMAP extends the capabilities of fuzzy ARTMAP in four incremental stages. Stage 1 introduces distributed pattern representation at a view category field. Stage 2 adds a decision criterion to the mapping between view and object categories, delaying identification of ambiguous objects when faced with a low-confidence prediction. Stage 3 augments the system with a field where evidence accumulates in medium-term memory (MTM). Stage 4 adds an unsupervised learning process to fine-tune performance after the limited initial period of supervised network training. Each ART-EMAP stage is illustrated with a benchmark simulation example, using both noisy and noise-free data. A concluding set of simulations demonstrates ART-EMAP performance on a difficult 3-D object recognition problem.
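
The Stage 2-3 behaviour (accumulating view-by-view evidence in medium-term memory and withholding an identification until a confidence criterion is met) can be sketched as below; the decay, threshold, and normalisation are assumptions for illustration, not values from ART-EMAP.

    import numpy as np

    def evidence_integrated_decision(view_predictions, threshold=0.6, decay=0.9):
        # view_predictions: one non-negative distributed prediction per 2-D view.
        mtm = np.zeros_like(np.asarray(view_predictions[0], dtype=float))
        confidence = 0.0
        for pred in view_predictions:
            mtm = decay * mtm + np.asarray(pred, dtype=float)  # accumulate in MTM
            confidence = mtm.max() / mtm.sum()
            if confidence >= threshold:   # decision criterion met: commit to an object
                return int(mtm.argmax()), confidence
        return None, confidence           # still ambiguous after all views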

Relevance: 30.00%

Abstract:

A neural model is described of how the brain may autonomously learn a body-centered representation of 3-D target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation--otherwise known as a parcellated distributed representation--of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of non-foveated target position to learn a visuomotor representation of both foveated and non-foveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors which act as error signals that drive the learning process, as well as control the on-line merging of multimodal information.
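
The opponent combination stated above ("sums and differences of opponent signals define angular and vergence coordinates, respectively") can be written out directly. The sketch below assumes signed horizontal and vertical eye-rotation angles as inputs; that parameterisation is chosen for illustration and is not taken from the model.

    def head_centered_target(left_azim, right_azim, left_elev, right_elev):
        # Sums of the two eyes' outflow signals give the cyclopean angular coordinates.
        horizontal_angle = 0.5 * (left_azim + right_azim)
        vertical_angle = 0.5 * (left_elev + right_elev)
        # Their difference gives vergence, which co-varies with target distance.
        vergence = left_azim - right_azim
        return horizontal_angle, vertical_angle, vergence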