23 results for Human information processing.
at Universidad Politécnica de Madrid
Abstract:
Based on a previously reported logic cell structure (see SPIE, vol. 2038, p. 67-77, 1993), the two types of cells present in the inner and ganglion cell layers of the vertebrate retina, their intracellular responses, and their connections with each other have been simulated. These cells are amacrine and ganglion cells. The main scheme of the authors' configuration is shown in a figure. These two types of cells, as well as some of their possible interconnections, have been implemented with the authors' previously reported optical-processing element. As previously shown, the authors' logic structure is able to process two optical binary input signals, the outputs being two logical functions. Moreover, if a delayed feedback from one of the two possible outputs to one or both of the inputs is introduced, a very different behaviour is obtained. Depending on the value of the time delay, an oscillatory output can be obtained from a constant optical input signal. Pulse period and length depend on the delay values, both external and internal, as well as on other control signals. Moreover, chaotic behaviour can also be obtained under certain conditions.
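A toy discrete-time sketch of the delayed-feedback behaviour described above (the XOR/AND output pair and the FIFO delay line are illustrative assumptions; the authors' element is optical, not digital): a constant input combined with a feedback delay of d steps yields a square-wave output of period 2d, matching the abstract's dependence of pulse period on delay.

```python
# Minimal sketch, not the authors' optical implementation: a two-input logic
# cell with two Boolean outputs and a delayed feedback loop from one output
# back to the second input.
from collections import deque

def simulate(steps=16, delay=3):
    feedback = deque([0] * delay, maxlen=delay)  # feedback line as a FIFO
    trace = []
    for _ in range(steps):
        x1 = 1                       # constant optical input
        x2 = feedback[0]             # delayed feedback replaces second input
        q1 = x1 ^ x2                 # first output: XOR (assumed)
        q2 = x1 & x2                 # second output: AND (assumed)
        feedback.append(q1)          # q1 re-enters the loop after `delay` steps
        trace.append(q1)
    return trace

print(simulate(steps=16, delay=3))   # 1,1,1,0,0,0,... -> period 2*delay = 6
```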
Abstract:
Brain oscillations are closely correlated with human information processing and fundamental aspects of cognition. Previous literature shows that, owing to the relation between brain oscillations and memory processes, spectral dynamics during memory tasks are good candidates for studying and characterizing memory-related pathologies. Mild cognitive impairment (MCI), defined as a clinical condition characterized by memory impairment and/or deterioration of additional cognitive domains, is considered a preliminary stage in the dementia process. Consequently, the study of its brain patterns could help to achieve an early diagnosis of Alzheimer's disease.
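As a hedged illustration of the kind of spectral dynamics the abstract refers to, the sketch below computes relative power in the classical frequency bands from a single channel using Welch's method (the band limits, sampling rate and synthetic signal are assumptions, not the study's data):

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                    # sampling rate (assumed)
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy channel

freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
total = psd[(freqs >= 1) & (freqs < 30)].sum()
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name}: {psd[mask].sum() / total:.2f}")   # relative band power
```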
Abstract:
In order to establish an active internal know-how reserve in an information processing and engineering services company, a training architecture tailored to the company as a whole must be defined. When a company's earnings come from advisory services dynamically structured in the form of projects, as is the case at hand, difficulties arise that must be taken into account in the architectural design. The first difficulties are of a psychological nature, and the design method proposed here begins with the definition of the highest training metasystem, which is aimed at making adjustments for the variety of perceptions of the company's human components, before the architecture can be designed. This approach may be considered an application of the cybernetic Law of Requisite Variety (Ashby) and of the Principle of Conceptual Integrity (Brooks). Also included is a description of some of the results of the first steps of metasystems at the level of company organization.
Abstract:
Once the advantages of object-based classification over pixel-based classification are admitted, the need arises for simple and affordable methods to define and characterize the objects to be classified. This paper presents a new methodology for the identification and characterization of objects at different scales, through the integration of the spectral information provided by the multispectral image and the textural information from the corresponding panchromatic image. In this way, a set of objects has been defined that yields a simplified representation of the information contained in the two source images. These objects can be characterized by different attributes that allow discrimination between different spectral-textural patterns. This methodology facilitates information processing from both a conceptual and a computational point of view. Thus, the attribute vectors defined can be used directly as training-pattern input for certain classifiers, for example artificial neural networks. Growing Cell Structures have been used to classify the merged information.
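A hedged sketch of the attribute-vector idea: each object is described by spectral means (from the multispectral bands) plus texture statistics (from the panchromatic image). The arrays below are synthetic stand-ins, and a standard MLP replaces the paper's Growing Cell Structures classifier, which has no off-the-shelf implementation here:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_objects = 200
spectral = rng.random((n_objects, 4))   # mean of 4 multispectral bands per object
texture = rng.random((n_objects, 2))    # e.g. panchromatic variance and entropy
X = np.hstack([spectral, texture])      # merged spectral-textural attribute vector
y = rng.integers(0, 3, n_objects)       # synthetic class labels

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))
```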
Abstract:
One of the most challenging problems that must be solved by any theoretical model purporting to explain the competence of the human brain for relational tasks is the analysis and representation of the internal structure of an extended spatial layout of multiple objects. Some of these problems concern specific questions, such as how spatial relationships among objects can be extracted and represented, or how the movement of a selected object can be represented. The main objective of this paper is the study of some plausible brain structures that can provide answers to these problems. Moreover, in order to achieve more concrete knowledge, our study focuses on the response of the retinal layers in optical information processing and on how this information can be processed in the first cortical layers. The model reported here is just a first trial, and some major additions are needed to complete the whole vision process.
Abstract:
In this thesis I apply concepts from mathematics, physics and statistics to the neurosciences. This field benefits from the collaborative work of multidisciplinary teams in which physicians, psychologists, engineers and other specialists work towards a common goal: the understanding of the brain. Research in this field is still in its early years, its birth being attributed to the neuronal theory of Santiago Ramón y Cajal in 1888. In more than one hundred years only a very small fraction of brain functioning has been uncovered, and much more remains to be explored. Isolated techniques aim at unravelling the system that supports our cognition; nevertheless, in order to provide solid evidence in such a field, multimodal techniques have arisen, and with them we can improve current knowledge about human cognition. Here we focus on the multimodal integration of magnetoencephalography (MEG) and diffusion-weighted magnetic resonance imaging (dMRI). These techniques are sensitive, respectively, to the magnetic fields emitted by neuronal currents and to the microstructure of cerebral white matter. The combination of these techniques can reveal structural-functional synergies in brain information processing, and which part of this synergy fails in specific neurological pathologies. In particular, we are interested in the relationship between functional and structural connectivity, and in how to integrate this information. We quantify functional connectivity by studying the phase synchronization or the amplitude correlation between time series obtained by MEG, yielding an index of the similarity between neuronal groups or brain regions. In addition, we quantify structural connectivity by performing diffusion tensor estimation on the diffusion-weighted images, thus obtaining an indicator of the integrity of the white matter or, if preferred, of the strength of the structural connections between regions. These quantifications are combined in chapters 3, 4 and 5 following three approaches, from the lowest to the highest level of integration. We finally apply the fused MEG and dMRI information to the characterization and prediction of mild cognitive impairment, a clinical entity considered an early step in the continuous pathological process of dementia; its detection is relevant for the early identification of Alzheimer's disease.
The dissertation is divided into six chapters. In chapter 1 I introduce connectomics within the fields of neuroimaging and neuroscience, and describe the objectives of this thesis and the specific objectives of each of the scientific publications that resulted from this work. In chapter 2 I describe the methods for each of the techniques employed, namely structural connectivity, resting-state functional connectivity, complex brain networks and graph theory, and finally I describe the clinical condition of mild cognitive impairment and the current state of the art in the search for early diagnostic biomarkers. Chapters 3, 4 and 5 contain the scientific publications generated during this work; they are included in the format of the journals in which they were published and are divided into introduction, materials and methods, results and discussion. All methods employed in these papers are described in chapter 2.
Finally, in chapter 6 I summarize all the results of this thesis, both locally for each of the scientific publications and globally for the whole work.
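A hedged sketch of two of the functional-connectivity measures named above, the phase-locking value and the amplitude correlation between two time series (the synthetic signals and sampling rate are assumptions; the thesis' actual pipeline is not reproduced here):

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
fs = 600.0                                   # MEG-like sampling rate (assumed)
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * rng.standard_normal(t.size)

# Instantaneous phases and envelopes via the analytic signal.
ax, ay = hilbert(x), hilbert(y)

# Phase-locking value: modulus of the mean phase-difference vector, in [0, 1].
plv = np.abs(np.mean(np.exp(1j * (np.angle(ax) - np.angle(ay)))))

# Amplitude correlation: Pearson correlation between the two envelopes.
amp_corr = np.corrcoef(np.abs(ax), np.abs(ay))[0, 1]
print(f"PLV = {plv:.2f}, amplitude correlation = {amp_corr:.2f}")
```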
Abstract:
The project arises from a previous project in which a model was constructed to represent information about higher education through a network of ontologies, providing a common definition of important concepts. The present project consists of developing a tool capable of generating educational data from the aforementioned ontology network, following the Linked Data paradigm [1]. The tool extracts data from different educational sources and transforms those educational data into linked data. To carry out this work, GATE Developer [2] has been used: a development environment that provides a complete set of interactive graphical tools for creating, measuring and maintaining software components for human language processing.
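As a hedged illustration of the transformation step (the namespace, class and property names below are hypothetical, not the project's actual ontology vocabulary), an extracted educational record could be emitted as RDF triples with rdflib:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EDU = Namespace("http://example.org/edu#")    # hypothetical ontology namespace
g = Graph()
g.bind("edu", EDU)

# One extracted record turned into linked data (all values illustrative).
course = URIRef(EDU["course/ProgrammingI"])
g.add((course, RDF.type, EDU.Course))
g.add((course, RDFS.label, Literal("Programming I")))
g.add((course, EDU.ects, Literal(6)))

print(g.serialize(format="turtle"))
```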
Abstract:
The failure detector class Omega (Ω) provides an eventual leader election functionality, i.e., eventually all correct processes permanently trust the same correct process. An algorithm is communication-efficient if the number of links that carry messages forever is bounded by n, where n is the number of processes in the system. An algorithm is crash-quiescent if it eventually stops sending messages to crashed processes. In this regard, it has recently been shown that Ω cannot be implemented crash-quiescently without a majority of correct processes. We say that the membership is unknown if each process pi only knows its own identity and the number of processes in the system (that is, i and n), but does not know the identities of the rest of the processes in the system. There is a type of link (denoted ADD link) on which a bounded (but unknown) number of consecutive messages can be delayed or lost. In this work we present the first implementation (to our knowledge) of Ω in partially synchronous systems with ADD links and unknown membership. Furthermore, it is the first implementation of Ω that combines two very interesting properties: communication-efficiency, and crash-quiescence when a majority of processes are correct. Finally, with the same algorithm we also obtain a failure detector () such that every correct process eventually and permanently outputs the set of all correct processes.
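For readers unfamiliar with Ω, the sketch below shows the classic heartbeat baseline behind many Omega implementations (this is not the paper's communication-efficient, crash-quiescent algorithm, and the timeout mechanism is an assumption): every process broadcasts heartbeats, and each process trusts the smallest identifier among those heard from recently.

```python
import time

class OmegaProcess:
    """Baseline heartbeat-based eventual leader election (illustrative only)."""
    def __init__(self, pid, timeout=3.0):
        self.pid = pid
        self.timeout = timeout
        self.last_heard = {pid: time.monotonic()}   # heartbeat timestamps seen

    def on_heartbeat(self, sender_pid):
        # Called when a heartbeat arrives from another process.
        self.last_heard[sender_pid] = time.monotonic()

    def leader(self):
        # Trust the smallest id among processes heard from within the timeout;
        # eventually all correct processes converge on the same correct leader.
        now = time.monotonic()
        alive = [p for p, t in self.last_heard.items() if now - t < self.timeout]
        return min(alive)
```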
Abstract:
Attentional control and information processing speed are central concepts in cognitive psychology and neuropsychology. Functional neuroimaging and neuropsychological assessment have produced theoretical models that consider attention a complex, non-unitary process. One of its component processes, attentional set-shifting, is commonly assessed using the Trail Making Test (TMT). Performance on the TMT decreases with increasing age in adults, in Mild Cognitive Impairment (MCI) and in Alzheimer's disease (AD). Besides, speed of information processing (SIP) seems to modulate attentional performance. While the neural correlates of attentional control have been widely studied, there is little evidence about the neural substrates of SIP in these groups of patients. Different authors have suggested that SIP could be a property of cerebral white matter; thus, deterioration of the white matter tracts connecting the brain regions involved in set-shifting may underlie the decrease in performance seen with ageing, MCI and AD. The aim of this study was to examine the anatomical dissociation of attentional and speed mechanisms. Diffusion tensor imaging (DTI) provides a unique insight into the cellular integrity of the brain, offering an in vivo view of the microarchitecture of cerebral white matter. At the same time, the study of ageing, characterized by white matter decline, provides the opportunity to study the anatomical substrates of speeded or slowed information processing. We hypothesized that FA values would be inversely correlated with time to completion on Parts A and B of the TMT, but not with the derived scores B/A and B-A.
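A hedged sketch of the stated hypothesis test (all values are synthetic; the study's FA extraction and statistics are not reproduced): correlate FA with TMT completion times and with the derived scores B-A and B/A.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 40
fa = rng.uniform(0.3, 0.6, n)                  # fractional anisotropy (synthetic)
tmt_a = 60 - 50 * fa + rng.normal(0, 5, n)     # seconds to complete Part A
tmt_b = 140 - 120 * fa + rng.normal(0, 10, n)  # seconds to complete Part B

scores = {"TMT-A": tmt_a, "TMT-B": tmt_b,
          "B-A": tmt_b - tmt_a, "B/A": tmt_b / tmt_a}
for name, score in scores.items():
    r, p = pearsonr(fa, score)                 # hypothesis: r < 0 for A and B only
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")
```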
Abstract:
This paper presents some brief considerations on the role of Computational Logic in the construction of Artificial Intelligence systems and in programming in general. It does not address how the many problems in AI can be solved but, rather more modestly, tries to point out some advantages of Computational Logic as a tool for the AI scientist in his quest. It addresses the interaction between declarative and procedural views of programs (deduction and action), the impact of the intrinsic limitations of logic, the relationship with other apparently competing computational paradigms, and finally discusses implementation-related issues, such as the efficiency of current implementations and their capability for efficiently exploiting existing and future sequential and parallel hardware. The purpose of the discussion is in no way to present Computational Logic as the unique overall vehicle for the development of intelligent systems (in the firm belief that such a panacea is yet to be found) but rather to stress its strengths in providing reasonable solutions to several aspects of the task.
Abstract:
The Internet of Things (IoT), as part of the Future Internet, has become one of the main research topics nowadays, partly thanks to the attention society is paying to the development of a particular kind of services (smart metering, smart grids, eHealth, etc.) and to the recent business forecasts that place some players, such as telecom operators (which are desperately seeking new opportunities), at the forefront, pushing interrelated technologies such as Machine-to-Machine (M2M) communications. In this context, a considerable number of research activities are taking place worldwide at different levels: sensor network communications, information processing, big-data storage, semantics, service-level architectures, etc. All of them, in isolation, are reaching a level of maturity that makes the Internet of Things look less like a dream and more like a tangible goal. However, the aforementioned services cannot wait to be developed until these research actions deliver complete, holistic solutions. It is important to produce intermediate results that avoid vertical solutions tailored to particular deployments. In the present work, we focus on the creation of a service-level platform intended to facilitate, on the one hand, the integration of heterogeneous and geographically dispersed sensor and actuator networks (SANs), and, on the other, the development of horizontal services using those networks and the information they provide. This enabler will be used for horizontal service development and for IoT experimentation. Prior to the definition of the platform, we carried out an extensive study covering not only research works and projects but also standardization activities. The results can be summarized in the following assertions: a) the data models defined by the Sensor Web Enablement (SWE™) group of the Open Geospatial Consortium (OGC®) currently represent the most complete solution for describing SANs and their observations; b) OGC interfaces, despite limitations that require changes and extensions, could be used as the basis for accessing sensors and data; c) Next Generation Networks (NGN) offer a good substrate that facilitates the integration of SANs and the development of services. Consequently, a new service-layer platform, called Ubiquitous Sensor Networks (USN), has been defined in this Thesis, seeking to help fill the previous gaps. The main highlights of the proposed USN Platform are: a) from an architectural point of view, it follows a two-layer approach (Enabler and Gateway) similar to other enablers that run on top of NGN (like the OMA Presence enabler); b) its data models and interfaces are based on the OGC SWE standards; c) it is integrated in NGN but can also be used without it, over open IP infrastructures; d) its main functions are sensor discovery, observation storage, publish-subscribe-notify, homogeneous remote execution, security, data dictionary handling, monitoring facilities, authorization support, protocol conversion utilities, synchronous and asynchronous interactions, streaming support and basic resource arbitration. In order to demonstrate the functionalities that the proposed USN Platform can offer to future IoT scenarios, experimental results are presented for three real-life, small-scale proofs of concept (smart metering, smart places and environmental monitoring) and for a study on semantics (an in-vehicle information system). Furthermore, the platform is currently being used as an enabler to develop both experimentation and real services in the SmartSantander EU project (which aims at integrating around 20,000 IoT devices).
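A hedged sketch of the publish-subscribe-notify function listed above (class and method names are illustrative, not the USN Platform's actual API): sensors publish observations to the enabler, which notifies the services subscribed to each phenomenon.

```python
from collections import defaultdict

class UsnEnabler:
    """Toy publish-subscribe-notify core (illustrative names only)."""
    def __init__(self):
        self.subscribers = defaultdict(list)          # phenomenon -> callbacks

    def subscribe(self, phenomenon, callback):
        self.subscribers[phenomenon].append(callback)

    def publish(self, phenomenon, observation):
        for notify in self.subscribers[phenomenon]:   # push notification
            notify(observation)

enabler = UsnEnabler()
enabler.subscribe("temperature", lambda obs: print("alert service got", obs))
enabler.publish("temperature", {"sensor": "node-17", "value": 21.4, "uom": "Cel"})
```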
Abstract:
In fact, much of the attraction of network theory initially stemmed from the fact that many networks seem to exhibit some sort of universality, as most of them belong to one of three classes: random, scale-free and small-world networks. Structural properties have been shown to translate into different important properties of a given system, including efficiency, speed of information processing, vulnerability to various forms of stress, and robustness. For example, scale-free and random topologies were shown to be...
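A hedged sketch of the three network classes named above, comparing the structural fingerprints usually used to tell them apart (the generator parameters are arbitrary choices for illustration):

```python
import networkx as nx

n = 1000
graphs = {
    "random (Erdős–Rényi)": nx.erdos_renyi_graph(n, 0.01, seed=0),
    "scale-free (Barabási–Albert)": nx.barabasi_albert_graph(n, 5, seed=0),
    "small-world (Watts–Strogatz)": nx.watts_strogatz_graph(n, 10, 0.1, seed=0),
}
for name, g in graphs.items():
    # Average clustering (C) and characteristic path length (L) are the usual
    # structural fingerprints: small-world graphs pair high C with low L.
    print(f"{name}: C = {nx.average_clustering(g):.3f}, "
          f"L = {nx.average_shortest_path_length(g):.2f}")
```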
Abstract:
Recommender systems play an important role in reducing the negative impact of information overload on those websites where users have the possibility of voting for their preferences on items. The most common technique for producing recommendations is collaborative filtering, in which it is essential to discover the users most similar to the one to whom you wish to make recommendations. The hypothesis of this paper is that the results obtained by applying traditional similarity measures can be improved by taking contextual information, drawn from the entire body of users, and using it to calculate the singularity which exists, for each item, in the votes cast by each pair of users being compared. As such, the greater the singularity of the votes cast by two given users, the greater their impact on the similarity. The results, tested on the Movielens, Netflix and FilmAffinity databases, corroborate the excellent behaviour of the proposed singularity measure.
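A hedged sketch of one plausible reading of the idea (the abstract does not give the exact formula, so the weighting below is an assumption): votes that few users share are treated as more "singular", and agreement on singular votes contributes more to the similarity between two users.

```python
import numpy as np

# rows = users, columns = items; 1 = positive vote, 0 = negative, nan = no vote
votes = np.array([[1, 0, 1, np.nan],
                  [1, 0, np.nan, 1],
                  [0, 0, 1, 1],
                  [1, 1, 1, 0]])

def singularity_similarity(u, v):
    sim, count = 0.0, 0
    for i in range(votes.shape[1]):
        a, b = votes[u, i], votes[v, i]
        if np.isnan(a) or np.isnan(b) or a != b:
            continue                          # only co-rated, agreeing votes count
        voters = votes[:, i][~np.isnan(votes[:, i])]
        share = np.mean(voters == a)          # fraction of users voting the same way
        sim += (1 - share) ** 2               # rarer (more singular) votes weigh more
        count += 1
    return sim / count if count else 0.0

print(singularity_similarity(0, 1))
```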
Abstract:
Today's knowledge management (KM) systems seldom account for language management and, especially, multilingual information processing. Document management is one of the strongest components of KM systems. If these systems do not include a multilingual knowledge management policy, intranet searches, excessive document space occupancy and redundant information slow down processes that are highly effective in a single-language environment. In this paper, we model information flow from the sources of knowledge to the persons/systems searching for specific information. Within this framework, we focus on the importance of multilingual information processing, which is a hugely complex component of modern organizations.
Abstract:
Knowledge management (KM) is based on the collection, filtering, processing and analysis of raw data which, through such refinement, can be turned into knowledge or wisdom. In this final-year project these practices take place in a WSN (Wireless Sensor Network) composed of sophisticated devices commonly known as "motes", whose main characteristic is their limited memory, battery and autonomy. The primary objective of this project has been to combine a WSN with knowledge management, and to show that large amounts of information can be processed with such limited capabilities if the processing is properly distributed. First, basic concepts about WSNs and the main elements of such networks are introduced. After presenting the communications architecture model, knowledge management is described in theory, followed by the interpretation, drawn from several bibliographic references, that was used to carry out the project implementation. The next step is to describe, point by point, all the components of the simulator: libraries, operation and the remaining configuration and tuning issues. As an application scenario, a basic wireless sensor network is proposed whose topology and node locations are completely configurable. The network-level configuration is based on the 6LoWPAN protocol, with the possibility of simplifying it. Data are processed according to a pyramidal knowledge management model adaptable to the user's needs. The hardware elements exhibit greater or lesser energy dependence depending on their role and activity in the network. Through the various options provided by the implemented graphical interface and the result documents that are generated, a detailed post-hoc study of the simulation can be carried out to verify whether the stated expectations are met.
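A hedged sketch of the pyramidal data-to-knowledge refinement mentioned above, distributed over low-capacity nodes (the thresholds and aggregation rules are illustrative assumptions, not the project's model): each mote reduces its raw readings to a summary, and the sink turns the summaries into an actionable statement.

```python
# DIKW-style pyramid: raw data -> information (per-mote summary) -> knowledge
# (network-level rule). Values and thresholds are illustrative only.
raw_readings = {                       # data layer: one sample list per mote
    "mote-1": [21.0, 21.4, 22.1],
    "mote-2": [24.8, 25.3, 25.1],
}

def summarize(samples):                # information layer, runs on each mote
    return sum(samples) / len(samples)

summaries = {m: summarize(s) for m, s in raw_readings.items()}

# Knowledge layer, runs on the sink: a simple rule over the mote summaries.
hot = [m for m, avg in summaries.items() if avg > 24.0]
print(f"averages: {summaries}; motes above threshold: {hot}")
```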