47 results for Intelligent systems


Relevance: 60.00%

Abstract:

This paper describes a new technique, referred to as watched subgraphs, which improves the performance of BBMC, a leading state-of-the-art exact solver for the maximum clique problem (MCP). It is based on the watched literals employed by modern SAT solvers for Boolean constraint propagation: in efficient SAT algorithms, a list of clauses is kept for each literal (the clauses are said to watch the literal), so that only the clauses in the list are checked for constraint propagation when a watched literal is assigned during search. BBMC encodes vertex sets as bit strings, each bit block the size of a CPU register word representing a subset of vertices (and the corresponding induced subgraph). The paper proposes to watch two subgraphs of critical sets during MCP search in order to compute a number of basic operations efficiently. Reported results validate the approach as the size and density of the problem instances rise, while achieving comparable performance in the general case.
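As an illustration of the bit-string encoding the abstract describes, the sketch below (illustrative Python, not the actual BBMC code; the toy graph and helper names are assumptions) shows how, once vertex sets are integers whose bit i marks vertex i, pruning the candidate set against a vertex's neighbourhood becomes a single bitwise AND:

```python
# Illustrative sketch (not the BBMC implementation) of bit-string vertex sets:
# a set is an integer whose bit i is 1 iff vertex i belongs to the set.

def to_mask(vertices):
    """Encode an iterable of vertex indices as an integer bit mask."""
    mask = 0
    for v in vertices:
        mask |= 1 << v
    return mask

def from_mask(mask):
    """Decode a bit mask back into a sorted list of vertex indices."""
    vertices, i = [], 0
    while mask:
        if mask & 1:
            vertices.append(i)
        mask >>= 1
        i += 1
    return vertices

# Adjacency of a toy 4-vertex graph: adj[v] is the neighbourhood of v as a mask.
adj = {
    0: to_mask([1, 2]),
    1: to_mask([0, 2, 3]),
    2: to_mask([0, 1]),
    3: to_mask([1]),
}

# When vertex 2 is added to the growing clique, the candidate set shrinks to
# the neighbours of 2 with one AND operation per register-sized block.
candidates = to_mask([0, 2, 3])
print(from_mask(candidates & adj[2]))  # [0]
```

Real solvers such as BBMC apply this idea per 64-bit block, which is what makes watching only a few critical blocks (subgraphs) profitable.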

Relevance: 60.00%

Abstract:

There is a proliferation of so-called Smart Products, driven by a growing commitment to this type of product both in everyday life and in industry. However, the term Smart Product is used with different meanings in different contexts or application domains, and using it with semantics different from the usual ones in a given context can lead to serious comprehension problems. The goal of this work is to analyze the different definitions of Smart Products that appear in the literature, in order to study the nuances and scope each one offers, to assess whether a consensus definition satisfying all parties can be obtained, and to specify it. To cover related definitions we introduce the concept Smart Thing, which encompasses definitions that may be related to Smart Products, such as Intelligent Products, Smart Objects, Intelligent Systems, and Intelligent Objects. To analyze the different definitions in the existing literature we carried out a Systematic Literature Review. The Autonomic Computing approach has several aspects in common with Smart Products; therefore, after analyzing the definitions found in the literature, we studied the points they share with Autonomic Computing, in order to assess whether Autonomic Computing is a suitable approach on which to rely when specifying and designing Smart Products.

Relevance: 60.00%

Abstract:

Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities is found. We illustrate and study the methods using data sampled from known parametric distributions, and we demonstrate their applicability by learning models based on real neuroscience data. Finally, we compare the performance of the proposed methods with an approach for learning mixtures of truncated basis functions (MoTBFs). The empirical results show that the proposed methods generally yield models that are comparable to or significantly better than those found using the MoTBF-based method.
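The quotient construction behind the conditional approximations can be illustrated numerically. The sketch below uses a toy polynomial density (an assumption for illustration, not the paper's learning algorithm) and checks that dividing a polynomial joint density by the marginal of the conditioning variable yields a proper conditional density:

```python
# Toy numeric illustration of f(y|x) = f(x, y) / f(x) with polynomial densities.
# The densities are assumptions chosen so the algebra is easy to verify.

def joint(x, y):
    """A polynomial joint density on [0,1]^2: f(x, y) = x + y."""
    return x + y

def marginal(x):
    """Marginal over y in [0,1]: integral of (x + y) dy = x + 1/2."""
    return x + 0.5

def conditional(y, x):
    """Conditional density f(y|x) as the quotient of joint and marginal."""
    return joint(x, y) / marginal(x)

def integrate(f, a, b, n=10000):
    """Simple midpoint-rule numerical integration."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# For any fixed x, the quotient integrates to 1 over y, i.e. it is a density.
for x in (0.1, 0.5, 0.9):
    total = integrate(lambda y: conditional(y, x), 0.0, 1.0)
    assert abs(total - 1.0) < 1e-6
```

The paper's two methods differ precisely in how a MoP approximation of this quotient is found when joint and marginal are themselves learned from data rather than known in closed form.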

Relevance: 60.00%

Abstract:

An important part of human intelligence is the ability to use language. Humans learn how to use language in a society of language users, which is probably the most effective way to learn a language from the ground up. Principles that might allow artificial agents to learn language this way are not known at present. Here we present a framework which begins to address this challenge. Our auto-catalytic, endogenous, reflective architecture (AERA) supports the creation of agents that can learn natural language by observation. We present results from two experiments in which our S1 agent learns human communication by observing two humans interacting in a real-time mock television interview, using gesture and situated language. The results show that S1 can learn complex multimodal language and multimodal communicative acts, using a vocabulary of 100 words with numerous sentence formats, by observing unscripted interaction between the humans, with no grammar provided to it a priori and only high-level information about the format of the interaction, in the form of the high-level goals of the interviewer and interviewee and a small ontology. The agent learns the pragmatics, semantics, and syntax of complex sentences spoken by the human subjects on the topic of recycling objects such as aluminum cans, glass bottles, plastic, and wood, as well as the use of manual deictic reference and anaphora.

Relevance: 60.00%

Abstract:

Earthquakes affect structures according to their intensity. Structures are normally expected to suffer irreparable damage, for reasons of ductility, under nominal or design earthquakes, in order to protect people and property. However, structures in seismic areas continually experience earthquakes of low or medium intensity, and these can affect their residual load-bearing capacity. This work therefore addresses the following: a) identifying the strategy or level of protection that the different codes and regulations consider against earthquakes of low or medium intensity, since during its service life a structure may be affected by such earthquakes, which also cause damage; it is thus very important to know and study the contributions, strategies and other parameters considered by the codes, which is done here through a review of the literature; b) identifying how an earthquake of low or medium intensity affects the load-bearing capacity of structures, its signs, its damage symptoms, etc., through three research techniques: literature review, brainstorming with a group of experts in the field, and the Delphi technique. Finally, a refinement method is applied to produce a list and a map of the symptoms to be expected in structures as a consequence of seismic events of low or medium intensity. These symptoms could be monitored with intelligent systems, so as to track constructions and anticipate their future behavior.

Relevance: 60.00%

Abstract:

Clinicians could model the brain injury of a patient through his brain activity. However, how this model is defined and how it changes as the patient recovers are questions that remain unanswered. In this paper, the use of the MedVir framework is proposed with the aim of answering these questions. Based on complex data mining techniques, the framework provides not only the differentiation between TBI patients and control subjects (with 72% accuracy using 0.632 Bootstrap validation), but also the ability to detect whether a patient may recover or not, all in a quick and easy way through an interactive visualization technique.

Relevance: 60.00%

Abstract:

Nowadays a large and growing amount of information can be found on social networks. Most of this information is unstructured or not properly organized, which makes it difficult to reach consensus in argumentations and hinders the rapid participation of new agents. Different solutions for reaching consensus have been studied in specific areas, mostly focused on the academic context; however, few applications attempt to address the problem in an open context such as social networks. The social network context is complex: there is no control over users, argumentation threads can degrade, and reaching consensus is hard when there is no well-defined expert figure, as there usually is in academia. This work seeks to create a web tool in the form of a social network, built on intelligent systems, that allows users to obtain enough information about a conversation to participate actively with minimal effort.

Relevance: 40.00%

Abstract:

Providing security for the emerging field of ambient intelligence will be difficult if we rely only on existing techniques, given the dynamic and heterogeneous nature of these systems. Moreover, their security demands are expected to grow, as many applications will require accurate context modeling. In this work we propose an enhancement to the reputation systems traditionally deployed to secure these systems: different anomaly detectors are combined using the immunological paradigm to optimize reputation system performance in response to evolving security requirements. As an example, the experiments show how a combination of detectors based on unsupervised techniques (self-organizing maps and genetic algorithms) can significantly reduce the global response time of the reputation system. The proposed solution offers many benefits: scalability, fast response to adversarial activities, the ability to detect unknown attacks, high adaptability, and a strong capacity for detecting and confining attacks. For these reasons, we believe our solution can cope with the dynamism of ambient intelligence systems and their growing security demands.

Relevance: 30.00%

Abstract:

The TALISMAN+ project, financed by the Spanish Ministry of Science and Innovation, aims to research and demonstrate innovative, transferable solutions that offer services and products based on information and communication technologies in order to promote personal autonomy in prevention and monitoring scenarios. It will solve critical interoperability problems among systems and emerging technologies in a context where heterogeneity creates accessibility barriers, not yet overcome, whose removal is demanded by scientific, technological and social-health settings.

Relevance: 30.00%

Abstract:

The main objective of this project is to study parameters such as body temperature and electrocardiogram (ECG) signals for the diagnosis of stress, a frequent problem in today's society whose level can be an early indicator of later health problems. Several studies relate these parameters and their levels to possible cases of stress and anxiety. Three ECG sensors are placed on the right arm, the left arm and the left leg, forming Einthoven's triangle, a configuration known to yield an electrocardiogram signal; an additional sensor on a finger measures body temperature in order to detect certain anomalies. Because data-collection systems with sensors wired to the patient restrict mobility, the wires are eliminated and the Bluetooth protocol is chosen to transmit and receive data between stations. The sensors are connected to a module with a microcontroller that conditions the analogue signals, combines them and digitizes them so that the Bluetooth transmitter can send the data wirelessly to a receiver in a nearby area; the module's electronics also help to mitigate problems such as noise and interference. The receiver is a dongle connected to a computer through the serial port. A computer application, written entirely in Java and implementing the HCI protocol, has been developed to receive the Bluetooth data: it creates and manages Bluetooth connections between devices, uses an event mechanism that fires whenever data arrive at the receiver, and, if connections are interrupted, handles the data collected so far as well as possible. The received data are interpreted and stored in a .bin file for later use, such as plotting and parameter analysis. The .bin format was chosen because of its small size: although more laborious to use, it is much more efficient than a .txt file, which in this case could occupy several megabytes.

Relevance: 30.00%

Abstract:

This report addresses speculative parallelism (the assignment of spare processing resources to tasks which are not known to be strictly required for the successful completion of a computation) at the user and application level. At this level, the execution of a program is seen as a (dynamic) tree, or a graph in general. A solution to a problem is a traversal of this graph from the initial state to a node known to be the answer. Speculative parallelism then represents the assignment of resources to multiple branches of this graph even if they are not positively known to be on the path to a solution. In highly non-deterministic programs the branching factor can be very high, and a naive assignment will very soon use up all the resources. This report presents work assignment strategies other than the usual depth-first and breadth-first ones. Instead, best-first strategies are used. Since their definition is application-dependent, the application language contains primitives that allow the user (or application programmer) to a) indicate when intelligent OR-parallelism should be used; b) provide the functions that define "best"; and c) indicate when to use them. An abstract architecture enables those primitives to perform the search in a "speculative" way, using several processors, synchronizing them, killing the siblings of the path leading to the answer, etc. The user is freed from worrying about these interactions. Several search strategies are proposed and their implementation issues are addressed. "Armageddon," a global pruning method, is introduced, together with both a software and a hardware implementation of it. The concepts exposed are applicable to areas of Artificial Intelligence such as extensive expert systems, planning, game playing, and large search problems in general. The proposed strategies, although promising, have not been evaluated by simulation or experimentation.
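A best-first strategy of the kind described can be sketched as follows. The graph, heuristic scores, and goal test below are illustrative assumptions, not taken from the report; the point is that the frontier always expands the node the user-defined "best" function prefers:

```python
# Minimal best-first search sketch: the caller supplies the function that
# defines "best" (lower score is better), mirroring the user-provided "best"
# primitive described above. Sequential; a speculative version would expand
# several frontier nodes in parallel and kill sibling branches on success.
import heapq

def best_first_search(start, neighbours, score, is_goal):
    """Expand nodes in order of user-defined score until a goal is found."""
    frontier = [(score(start), start)]
    parent = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if is_goal(node):
            # Reconstruct the path from start to the answer node.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for n in neighbours(node):
            if n not in parent:
                parent[n] = node
                heapq.heappush(frontier, (score(n), n))
    return None

# Toy graph and heuristic (assumptions for illustration).
graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d', 'e'], 'd': [], 'e': []}
h = {'a': 3, 'b': 2, 'c': 1, 'd': 1, 'e': 0}
path = best_first_search('a', lambda n: graph[n],
                         lambda n: h[n], lambda n: n == 'e')
print(path)  # ['a', 'c', 'e']
```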

Relevance: 30.00%

Abstract:

In recent decades there has been increasing interest in systems comprised of several autonomous mobile robots, and as a result there has been substantial development in the field of Artificial Intelligence, especially in Robotics. Several studies in the literature by researchers from the scientific community focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to accomplish with a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis studies the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest in these systems is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and individually, select a particular task so that all tasks are optimally distributed. In general, to distribute the multi-tasks among a team of robots, the robots have to synchronize their actions and exchange information. Under this approach we speak of multi-task selection instead of multi-task assignment: the agents or robots select the tasks instead of being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally, depending on the load or the number of pending tasks to be performed. It is also of interest to evaluate the results of each approach by introducing noise into the number of pending loads, in order to simulate the robot's error in estimating the real number of pending tasks. The main contribution of this thesis lies in the approach based on self-organization and the division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks are presented and discussed. The particular issues studied are:
Threshold models: experiments conducted to test the response threshold model, analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise was introduced into the number of pending loads and dynamic tasks were generated over time.
Learning automata methods: experiments to test the learning automata-based probabilistic algorithms, evaluating the system performance index with additive noise and with dynamic task generation for the same distribution problem.
Ant colony optimization: experiments to test the ant colony optimization-based deterministic algorithms for the distribution of heterogeneous multi-tasks, again evaluating the system performance index under additive noise and dynamic task generation over time.
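The response threshold rule from the division of labour in social insects can be sketched concretely. In the classic formulation, a robot engages in a task with probability s²/(s² + θ²), where s is the task stimulus (e.g. the estimated pending load) and θ is the robot's threshold; the numeric values below are illustrative assumptions, not the thesis's experimental settings:

```python
# Sketch of the classic response threshold model: low-threshold robots respond
# to weak stimuli, high-threshold robots only engage when the load grows.
# Also sketches the additive-noise corruption of the pending-load estimate
# used in the robustness experiments described above.
import random

def engagement_probability(stimulus, threshold):
    """P(engage) = s^2 / (s^2 + theta^2)."""
    return stimulus ** 2 / (stimulus ** 2 + threshold ** 2)

def noisy_stimulus(true_pending, noise_std, rng):
    """Pending-load estimate corrupted by additive Gaussian noise,
    clipped so a load can never be negative."""
    return max(0.0, true_pending + rng.gauss(0.0, noise_std))

# When the stimulus equals the threshold, the robot engages half the time.
print(engagement_probability(5.0, 5.0))  # 0.5

# For the same stimulus, a low-threshold robot is far more likely to engage.
rng = random.Random(42)
s = noisy_stimulus(10.0, 1.0, rng)
print(engagement_probability(s, 2.0) > engagement_probability(s, 20.0))  # True
```

The adaptive part of the algorithms then updates each θ over time, specializing robots in tasks they perform often; the update rule itself is thesis-specific and not reproduced here.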

Relevance: 30.00%

Abstract:

This article proposes an agent-oriented methodology called MAS-CommonKADS and develops a case study. The methodology extends the knowledge engineering methodology CommonKADS with techniques from object-oriented and protocol engineering methodologies. It comprises the development of seven models: the Agent Model, which describes the characteristics of each agent; the Task Model, which describes the tasks the agents carry out; the Expertise Model, which describes the knowledge the agents need to achieve their goals; the Organisation Model, which describes the structural relationships between agents (software agents and/or human agents); the Coordination Model, which describes the dynamic relationships between software agents; the Communication Model, which describes the dynamic relationships between human agents and their respective personal assistant software agents; and the Design Model, which refines the previous models and determines the most suitable agent architecture for each agent, as well as the requirements of the agent network.

Relevance: 30.00%

Abstract:

The area of Human-Machine Interfaces is growing fast due to its high importance in all technological systems. The basic idea behind designing human-machine interfaces is to enrich communication with technology in a natural and easy way. Gesture interfaces are a good example of transparent interfaces. Such interfaces must properly identify the action the user wants to perform, so accurate gesture recognition is of the highest importance. However, most systems based on gesture recognition use complex methods that require high-resource devices. In this work, we propose to model gestures by capturing their temporal properties, which significantly reduces storage requirements, and to use clustering techniques, namely self-organizing maps and an unsupervised genetic algorithm, for their classification. We further propose to train a number of algorithms with different parameters and to combine their decisions using majority voting in order to decrease the false positive rate. The main advantage of the approach is its simplicity, which enables implementation on devices with limited resources and therefore at low cost. The testing results demonstrate its high potential.
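The majority-voting combination rule the abstract describes can be sketched in a few lines. The stand-in predictions below are illustrative assumptions; the paper's base classifiers are self-organizing maps and an unsupervised genetic algorithm, not reproduced here:

```python
# Minimal sketch of the ensemble decision: several classifiers trained with
# different parameters each emit a label, and the combined decision is the
# most frequent one, so a single spurious detection cannot on its own cause
# a false positive.
from collections import Counter

def majority_vote(predictions):
    """Return the label chosen by most of the individual classifiers."""
    return Counter(predictions).most_common(1)[0][0]

# Three stand-in classifiers disagree on one gesture sample; the vote resolves it.
print(majority_vote(['circle', 'circle', 'swipe']))  # circle
```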

Relevance: 30.00%

Abstract:

The Cross-Entropy (CE) method is an efficient technique for the estimation of rare-event probabilities and for combinatorial optimization. This work presents a novel application of CE to the optimization of a soft-computing controller. A fuzzy controller was designed to command an unmanned aerial system (UAS) in a collision-avoidance task, using a forward-facing camera as the only sensor. CE is used to reach a near-optimal controller by modifying the scaling factors of the controller inputs. The optimization was carried out in the ROS-Gazebo simulation system, and a large number of tests with a real quadcopter were performed to evaluate it.
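The CE optimization loop can be sketched for a single scaling factor. The toy cost function below stands in for the expensive ROS-Gazebo evaluation and all parameter values are illustrative assumptions: candidates are sampled from a Gaussian, the elite fraction is kept, and the sampling distribution is refit to the elite until it concentrates near the optimum.

```python
# Minimal Cross-Entropy method sketch for a 1-D parameter (e.g. one controller
# input scaling factor). The cost function is a toy stand-in for the simulated
# collision-avoidance evaluation; lower cost is better.
import random
import statistics

def cross_entropy_minimize(cost, mean, std, n_samples=50, elite_frac=0.2,
                           iterations=30, rng=None):
    """Iteratively refit a Gaussian sampling distribution to the elite set."""
    rng = rng or random.Random(0)
    n_elite = max(2, int(n_samples * elite_frac))
    for _ in range(iterations):
        samples = [rng.gauss(mean, std) for _ in range(n_samples)]
        elite = sorted(samples, key=cost)[:n_elite]
        mean = statistics.mean(elite)
        std = statistics.stdev(elite) + 1e-6  # keep a little exploration
    return mean

# Toy cost whose optimum is a scaling factor of 2.5.
best = cross_entropy_minimize(lambda k: (k - 2.5) ** 2, mean=0.0, std=2.0)
print(round(best, 2))  # close to 2.5
```

In the paper's setting the cost would be a simulated flight metric and several scaling factors would be optimized jointly, but the sample-select-refit loop is the same.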