29 results for Modular neural systems

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

Self-consciousness implies not only self or group recognition, but also real knowledge of one’s own identity. Self-consciousness is only possible if an individual is intelligent enough to formulate an abstract self-representation. Moreover, it necessarily entails the capability of referencing and using this self-representation in connection with other cognitive features, such as inference and the anticipation of the consequences of both one’s own and other individuals’ acts. In this paper, a cognitive architecture for self-consciousness is proposed. This cognitive architecture includes several modules: abstraction, self-representation, representation of other individuals, decision, and action. It includes a learning process for self-representation through both direct learning (based on self-experience) and observational learning (based on the observation of other individuals). For model implementation, a new approach is taken using Modular Artificial Neural Networks (MANN). For model testing, a virtual environment has been implemented. This virtual environment can be described as a holonic system or holarchy, meaning that it is composed of autonomous entities that behave both as a whole and as part of a greater whole. The system is composed of a number of interacting holons equipped with cognitive features, such as sensory perception, and a simplified model of personality and self-representation. We explain the holons’ cognitive architecture that enables dynamic self-representation. We analyse the effect of holon interaction, focusing on the evolution of the holons’ abstract self-representation. Finally, the results are presented and analysed, and conclusions are drawn.
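
To make the architecture concrete, the sketch below shows one hypothetical way the modules named in the abstract (abstraction, self-representation, representation of others, decision, action) could be wired together; the module implementations, the update rule, and all names are illustrative assumptions, not the authors' MANN design.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Holon:
    # each attribute stands in for one sub-network of the MANN
    abstraction: Callable[[List[float]], List[float]]   # raw percepts -> abstract features
    self_representation: List[float] = field(default_factory=list)
    others_representation: Dict[str, List[float]] = field(default_factory=dict)

    def perceive_and_act(self, percepts, decide, act):
        features = self.abstraction(percepts)
        # direct (self-experience based) learning: fold new features into the self-model
        self.self_representation = [
            0.9 * s + 0.1 * f for s, f in zip(self.self_representation, features)
        ] or list(features)
        action = decide(features, self.self_representation, self.others_representation)
        return act(action)

    def observe(self, other_id, other_features):
        # observational learning: update the abstract model kept about another holon
        self.others_representation[other_id] = list(other_features)

# toy usage: one holon with an averaging "abstraction" module
h = Holon(abstraction=lambda p: [sum(p) / len(p)])
print(h.perceive_and_act([0.2, 0.8],
                         decide=lambda f, s, o: f[0] > 0.4,
                         act=lambda a: "approach" if a else "wait"))
```

In the actual model, each of these callables would be a trained module of the MANN rather than a hand-written rule.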

Relevance:

90.00%

Publisher:

Abstract:

Within the regression framework, we show how different levels of nonlinearity influence the instantaneous firing rate prediction of single neurons. Nonlinearity can be achieved in several ways. In particular, we can enrich the predictor set with basis expansions of the input variables (enlarging the number of inputs) or train a simple but different model for each area of the data domain. Spline-based models are popular within the first category. Kernel smoothing methods fall into the second category. Whereas the first choice is useful for globally characterizing complex functions, the second is very handy for temporal data and is able to include inner-state subject variations. Also, interactions among stimuli are considered. We compare state-of-the-art firing rate prediction methods with some more sophisticated spline-based nonlinear methods: multivariate adaptive regression splines and sparse additive models. We also study the impact of kernel smoothing. Finally, we explore the combination of various local models in an incremental learning procedure. Our goal is to demonstrate that appropriate nonlinearity treatment can greatly improve the results. We test our hypothesis on both synthetic data and real neuronal recordings in cat primary visual cortex, giving a plausible explanation of the results from a biological perspective.
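
As an illustration of the kernel-smoothing idea mentioned above, the following minimal sketch estimates a firing rate with a Nadaraya-Watson regressor on synthetic data; the kernel, bandwidth, and data are made-up placeholders, not the estimators or recordings used in the paper.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.1):
    """Kernel-smoothed estimate of the firing rate at each query point.

    x_train : stimulus/covariate values (1-D array)
    y_train : observed spike counts or rates (1-D array)
    """
    estimates = []
    for x0 in np.atleast_1d(x_query):
        w = gaussian_kernel((x_train - x0) / bandwidth)
        estimates.append(np.sum(w * y_train) / np.sum(w))
    return np.array(estimates)

# toy example with synthetic "stimulus -> rate" data
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) ** 2 * 30 + rng.poisson(2, 200)   # nonlinear rate plus noise
print(nadaraya_watson(x, y, [0.25, 0.5, 0.75], bandwidth=0.05))
```

Because each prediction is a locally weighted average, the estimator adapts to temporal or inner-state variations without committing to a single global functional form.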

Relevance:

40.00%

Publisher:

Abstract:

Dynamically Reconfigurable Systems are attracting growing interest, mainly due to the emergence of novel applications based on this technology. However, commercial tools do not provide enough flexibility to design such solutions while maintaining acceptable design productivity. In this paper, a novel design flow targeting dynamically reconfigurable systems is proposed. It is fully supported by a tool called Dreams, which is able to implement flexible systems starting from a set of netlists corresponding to the modules, as well as a system description provided by the user. The tool automatically post-processes the netlists, implementing a solution for the communication between reconfigurable regions, as well as the handling of routing conflicts, by means of a custom router. Since the design processes of every module and of the static system are independent, the proposed flow is compatible with system upgrades at run-time. In this paper, a use case corresponding to the design of a highly regular and parallel mesh-type architecture is described, in order to show the architectural flexibility offered by the tool.

Relevance:

30.00%

Publisher:

Abstract:

The aim is to obtain computationally more powerful, neurophysiologically founded, artificial neurons and neural nets. Artificial Neural Nets (ANN) of the Perceptron type evolved from the original proposal in McCulloch and Pitts’ classic paper [1]. Essentially, they keep the computing structure of a linear machine followed by a nonlinear operation. The McCulloch-Pitts formal neuron (which was never considered by the authors to be a model of real neurons) consists of the simplest case: a linear computation on the inputs followed by a threshold. Single-layer networks cannot compute every logical function of the inputs, but only those which are linearly separable. Thus, the simple exclusive OR (contrast detector) function of two inputs requires two layers of formal neurons.
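
To make the separability point concrete, here is a minimal sketch of a McCulloch-Pitts style threshold unit and a two-layer XOR network built from it (an OR unit and a NAND unit feeding an AND unit); the weights and thresholds are just one illustrative choice, not taken from the paper.

```python
import numpy as np

def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: weighted sum followed by a hard threshold."""
    return int(np.dot(inputs, weights) >= threshold)

def xor_two_layers(x1, x2):
    """XOR built from two layers of threshold units."""
    h_or = mcp_neuron([x1, x2], [1, 1], 1)       # fires if at least one input is 1
    h_nand = mcp_neuron([x1, x2], [-1, -1], -1)  # fires unless both inputs are 1
    return mcp_neuron([h_or, h_nand], [1, 1], 2) # AND of the two hidden units

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_two_layers(a, b))
```

No single threshold unit can reproduce this truth table, because no line separates the {01, 10} inputs from the {00, 11} inputs in the plane.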

Relevance:

30.00%

Publisher:

Abstract:

The fuzzy min–max neural network classifier is a supervised learning method that takes the hybrid neural networks and fuzzy systems approach. All input variables in the network are required to be continuously valued, and this can be a significant constraint in many real-world situations where there are not only quantitative but also categorical data. The usual way of dealing with this type of variable is to replace the categorical values with numerical ones and treat them as if they were continuously valued. However, this method implicitly defines a possibly unsuitable metric for the categories. A number of different procedures have been proposed to tackle the problem. In this article, we present a new method. The procedure extends the fuzzy min–max neural network input to categorical variables by introducing new fuzzy sets, a new operation, and a new architecture. This provides greater flexibility and wider applicability. The proposed method is then applied to missing data imputation in voting intention polls. The microdata of this type of poll (the set of the respondents’ individual answers to the questions) are especially suited for evaluating the method, since they include a large number of numerical and categorical attributes.
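
For context, the sketch below implements the standard continuous-variable hyperbox membership function on which fuzzy min-max classifiers are commonly built; it does not include the new fuzzy sets, operation, or architecture for categorical inputs proposed in the article, and the sensitivity parameter value is arbitrary.

```python
import numpy as np

def hyperbox_membership(a, v, w, gamma=4.0):
    """Membership of pattern `a` in the hyperbox with min point `v` and max point `w`.

    Classic continuous-variable membership of the fuzzy min-max classifier;
    the categorical extension described in the abstract adds new fuzzy sets
    on top of constructions like this one.
    """
    a, v, w = map(np.asarray, (a, v, w))
    right = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, a - w)))
    left = np.maximum(0, 1 - np.maximum(0, gamma * np.minimum(1, v - a)))
    return float(np.mean((right + left) / 2.0))

# a point inside the box gets membership 1, points further away get less
print(hyperbox_membership([0.3, 0.4], v=[0.2, 0.2], w=[0.5, 0.5]))  # -> 1.0
print(hyperbox_membership([0.9, 0.9], v=[0.2, 0.2], w=[0.5, 0.5]))  # -> 0.5
```

Points inside the hyperbox receive membership 1, and membership decays with distance at a rate set by the sensitivity parameter gamma.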

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the development of an ontology for autonomous systems, as the initial stage of a research programme on autonomous systems engineering within a model-based control approach. The ontology aims at providing a unified conceptual framework for the autonomous system’s stakeholders, from developers to software engineers. The modular ontology contains both generic and domain-specific concepts for autonomous systems description and engineering. The ontology serves as the basis of a methodology to obtain the autonomous system’s conceptual models. The objective is to obtain and use these models as the main input for the autonomous system’s model-based control system.

Relevance:

30.00%

Publisher:

Abstract:

Modern FPGAs with the Dynamic and Partial Reconfiguration (DPR) feature allow the implementation of complex, yet flexible, hardware systems. Combining this flexibility with evolvable hardware techniques, truly adaptive systems, able to reconfigure themselves according to environmental changes, can be envisaged. In this paper, a highly regular and modular architecture combined with a fast reconfiguration mechanism is proposed, allowing the introduction of dynamic and partial reconfiguration into the evolvable hardware loop. Results and a use case show that, following this approach, evolvable processing IP cores can be built, providing intensive data processing capabilities and reducing data and delay overheads with respect to previous proposals. Results also show that, in the worst case (maximum mutation rate), the average reconfiguration time is 5 times lower than the evaluation time.
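
As a rough software analogue of the evolvable hardware loop described above, the sketch below mutates a grid of processing-element types and tracks which positions changed, since only those would need partial reconfiguration on the device; the element library, grid size, fitness function, and mutation-only scheme are illustrative assumptions, not the paper's architecture.

```python
import random

# Hypothetical candidate: a grid of processing-element identifiers.
GRID, LIBRARY = 4 * 4, ["add", "sub", "max", "min", "avg", "pass"]

def mutate(candidate, rate):
    child, changed = list(candidate), []
    for i in range(len(child)):
        if random.random() < rate:
            child[i] = random.choice(LIBRARY)
            changed.append(i)   # only these positions would trigger partial reconfiguration
    return child, changed

def evolve(evaluate, generations=100, rate=0.1):
    parent = [random.choice(LIBRARY) for _ in range(GRID)]
    best = evaluate(parent)
    for _ in range(generations):
        child, changed = mutate(parent, rate)
        # on the FPGA, the `changed` positions would be rewritten via DPR here,
        # which the paper reports costs far less than evaluating the candidate
        score = evaluate(child)
        if score >= best:
            parent, best = child, score
    return parent, best

# toy fitness: reward grids that use many "max" elements (placeholder only)
print(evolve(lambda g: g.count("max"))[1])
```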

Relevance:

30.00%

Publisher:

Abstract:

A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after profiling a high-level description of the algorithm, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether it is a hardware- or software-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
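
The following sketch illustrates a mutation-only (1+λ) Evolution Strategy loop over transform coefficients of the kind the calibration phase could run; the fitness function, coefficient count, and all parameters are invented placeholders, whereas the real system evaluates candidates directly on the FPGA and may use different genetic operators.

```python
import numpy as np

rng = np.random.default_rng(42)

def compression_error(coeffs, image):
    # hypothetical stand-in for the hardware evaluation: filter the rows with the
    # candidate coefficients and measure the error of a toy separable transform
    filtered = np.apply_along_axis(lambda r: np.convolve(r, coeffs, mode="same"), 1, image)
    return float(np.mean((image - filtered) ** 2))

def one_plus_lambda_es(image, n_coeffs=4, lam=8, sigma=0.1, generations=50):
    # (1+lambda) Evolution Strategy: keep the parent unless an offspring is better
    parent = rng.normal(0.0, 1.0, n_coeffs)
    best = compression_error(parent, image)
    for _ in range(generations):
        offspring = parent + rng.normal(0.0, sigma, (lam, n_coeffs))
        scores = [compression_error(o, image) for o in offspring]
        if min(scores) < best:
            best = min(scores)
            parent = offspring[int(np.argmin(scores))]
    return parent, best

image = rng.random((32, 32))          # made-up test image
coeffs, err = one_plus_lambda_es(image)
print("evolved coefficients:", np.round(coeffs, 3), "error:", round(err, 4))
```

The same loop structure works whether candidates are scored in software or on the device, which is what makes a hardware accelerator for the evaluation step attractive.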

Relevance:

30.00%

Publisher:

Abstract:

We address the problem of developing mechanisms for easily implementing modular extensions to modular (logic) languages. By (language) extensions we refer to different groups of syntactic definitions and translation rules that extend a language. Our use of the concept of modularity in this context is twofold. We would like these extensions to be modular, in the sense above, i.e., we should be able to develop different extensions mostly separately. At the same time, the sources and targets for the extensions are modular languages, i.e., such extensions may take as input separate pieces of code and also produce separate pieces of code. Dealing with this double requirement involves interesting challenges to ensure that modularity is not broken: first, combinations of extensions (as if they were a single extension) must be given a precise meaning. Also, the separate translation of multiple sources (as if they were a single source) must be feasible. We present a detailed description of a code expansion-based framework that proposes novel solutions for these problems. We argue that the approach, while implemented for Ciao, can be adapted for other Prolog-based systems and languages.

Relevance:

30.00%

Publisher:

Abstract:

We propose a modular, assertion-based system for verification and debugging of large logic programs, together with several interesting models for checking assertions statically in modular programs, each with different characteristics and representing different trade-offs. Our proposal is a modular and multivariant extension of our previously proposed abstract assertion checking model, and we also report on its implementation in the CiaoPP system. In our approach, the specification of the program, given by a set of assertions, may be partial, instead of the complete specification required by traditional verification systems. Also, the system can deal with properties which cannot always be determined at compile-time. As a result, the proposed system needs to work with safe approximations: all assertions proved correct are guaranteed to be valid, and all detected errors are actual errors. The use of modular, context-sensitive static analyzers also allows us to introduce a new distinction between assertions checked in a particular context and assertions checked in general.

Relevance:

30.00%

Publisher:

Abstract:

CIAO is an advanced programming environment supporting Logic and Constraint programming. It offers a simple concurrent kernel on top of which declarative and non-declarative extensions are added via libraries. Libraries are available for supporting the ISO Prolog standard, several constraint domains, functional and higher-order programming, concurrent and distributed programming, internet programming, and others. The source language allows declaring properties of predicates via assertions, including types and modes. Such properties are checked at compile-time or at run-time. The compiler and system architecture are designed to natively support modular global analysis, with the two objectives of proving properties in assertions and performing program optimizations, including transparently exploiting parallelism in programs. The purpose of this paper is to report on recent progress made in the context of the CIAO system, with special emphasis on the capabilities of the compiler, the techniques used for supporting such capabilities, and the results in the areas of program analysis and transformation already obtained with the system.

Relevance:

30.00%

Publisher:

Abstract:

Modularity allows the construction of complex designs from simpler, independent units that most of the time can be developed separately. In this paper we are concerned with developing mechanisms for easily implementing modular extensions to modular (logic) languages. By (language) extensions we refer to different groups of syntactic definitions and translation rules that extend a language. Our application of the concept of modularity in this context is twofold. We would like these extensions to be modular, in the above sense, i.e., we should be able to develop different extensions mostly separately. At the same time, the sources and targets for the extensions are modular languages, i.e., such extensions may take as input separate pieces of code and also produce separate pieces of code. Dealing with this double requirement involves interesting challenges to ensure that modularity is not broken: first, combinations of extensions (as if they were a single extension) must be given a precise meaning. Also, the separate translation of multiple sources (as if they were a single source) must be feasible. We present a detailed description of a code expansion-based framework that proposes novel solutions for these problems. We argue that the approach, while implemented for Ciao, can be adapted for other languages and Prolog-based systems.

Relevance:

30.00%

Publisher:

Abstract:

The development of new-generation intelligent vehicle technologies will lead to a better level of road safety and to CO2 emission reductions. However, the weak point of all these systems is their need for comprehensive and reliable data. For traffic data acquisition, two sources are currently available: 1) infrastructure sensors and 2) floating vehicles. The former consists of a set of fixed point detectors installed in the roads, and the latter consists of the use of mobile probe vehicles as mobile sensors. However, both systems still have some deficiencies. The infrastructure sensors retrieve information from static points of the road, which are spaced, in some cases, kilometers apart. This means that the picture they give of the actual traffic situation is not a real one. This deficiency is corrected by floating cars, which retrieve dynamic information on the traffic situation. Unfortunately, the number of floating data vehicles currently available is too small and insufficient to give a complete picture of the road traffic. In this paper, we present a floating car data (FCD) augmentation system that combines information from floating data vehicles and infrastructure sensors and that, by using neural networks, is capable of increasing the amount of FCD with virtual information. This system has been implemented and tested on actual roads, and the results show little difference between the data supplied by the floating vehicles and by the virtual vehicles.
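
As a toy illustration of the augmentation idea, the sketch below trains a small multilayer perceptron to map hypothetical detector readings to a "virtual" floating-car speed; the feature set, network size, and synthetic data are assumptions for illustration only, and scikit-learn's MLPRegressor stands in for whatever network the authors actually used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Hypothetical features from two fixed detectors bracketing a road segment:
# [flow_up, occupancy_up, flow_down, occupancy_down]; the target is the segment
# speed reported by the few real floating vehicles available for training.
X = rng.random((500, 4))
y = 120 - 60 * X[:, 1] - 50 * X[:, 3] + rng.normal(0, 3, 500)   # synthetic speeds

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X, y)

# "virtual" floating-car datum for a segment with no probe vehicle present
virtual_speed = model.predict([[0.4, 0.3, 0.5, 0.2]])
print(f"virtual FCD speed estimate: {virtual_speed[0]:.1f} km/h")
```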

Relevance:

30.00%

Publisher:

Abstract:

We present in this paper a neural-like membrane system solving the SAT problem in linear time. These neural P systems are nets of cells working with multisets. Each cell has a finite-state memory, processes multisets of symbol-impulses, and can send impulses ("excitations") to the neighboring cells. The maximal mode of rule application and the replicative mode of communication between cells are at the core of the efficiency of these systems.

Relevance:

30.00%

Publisher:

Abstract:

We present ARGoS, a novel open source multi-robot simulator. The main design focus of ARGoS is the real-time simulation of large heterogeneous swarms of robots. Existing robot simulators obtain scalability by imposing limitations on their extensibility and on the accuracy of the robot models. By contrast, in ARGoS we pursue a deeply modular approach that allows the user both to easily add custom features and to allocate computational resources where needed by the experiment. A unique feature of ARGoS is the possibility to use multiple physics engines of different types and to assign them to different parts of the environment. Robots can migrate from one engine to another transparently. This feature enables entirely novel classes of optimizations to improve scalability and paves the way for a new approach to parallelism in robotics simulation. Results show that ARGoS can simulate about 10,000 simple wheeled robots 40% faster than real-time.