929 results for computational model


Relevance: 60.00%

Abstract:

El funcionamiento interno del cerebro es todavía hoy en día un misterio, siendo su comprensión uno de los principales desafíos a los que se enfrenta la ciencia moderna. El córtex cerebral es el área del cerebro donde tienen lugar los procesos cerebrales de más alto nivel, como la imaginación, el juicio o el pensamiento abstracto. Las neuronas piramidales, un tipo específico de neurona, suponen cerca del 80% de los cerca de 10.000 millones de neuronas que componen el córtex cerebral, haciendo de ellas un objetivo principal en el estudio del funcionamiento del cerebro. La morfología neuronal, y más específicamente la morfología dendrítica, determina cómo estas neuronas procesan la información y los patrones de conexión entre neuronas, siendo los modelos computacionales herramientas imprescindibles para el estudio de su rol en el funcionamiento del cerebro. En este trabajo hemos creado un modelo computacional, con más de 50 variables relativas a la morfología dendrítica, capaz de simular el crecimiento de arborizaciones dendríticas basales completas a partir de reconstrucciones de neuronas piramidales reales, abarcando desde el número de dendritas hasta el crecimiento de los árboles dendríticos. A diferencia de los trabajos anteriores, nuestro modelo basado en redes Bayesianas contempla la arborización dendrítica en su conjunto, teniendo en cuenta las interacciones entre dendritas y detectando de forma automática las relaciones entre las variables morfológicas que caracterizan la arborización. Además, el análisis de las redes Bayesianas puede ayudar a identificar relaciones hasta ahora desconocidas entre variables morfológicas. Motivado por el estudio de la orientación de las dendritas basales, en este trabajo se introduce una regularización L1 generalizada, aplicada al aprendizaje de la distribución von Mises multivariante, una de las principales distribuciones de probabilidad direccional multivariante. También se propone una distancia circular multivariante que puede utilizarse para estimar la divergencia de Kullback-Leibler entre dos muestras de datos circulares. Comparamos los modelos con y sin regularización en el estudio de la orientación de las dendritas basales en neuronas humanas, comprobando que, en general, el modelo regularizado obtiene mejores resultados. El muestreo, ajuste y representación de la distribución von Mises multivariante se implementan en un nuevo paquete de R denominado mvCircular.

---ABSTRACT---

The inner workings of the brain are, as of today, a mystery. Understanding the brain is one of the main challenges faced by modern science. The cerebral cortex is the region of the brain where the higher brain processes, such as imagination, judgment and abstract reasoning, take place. Pyramidal neurons, a specific type of neuron, constitute approximately 80% of the more than 10,000 million neurons that compose the cerebral cortex. This makes the study of pyramidal neurons crucial to understanding how the brain works. Neuronal morphology, and specifically dendritic morphology, determines how information is processed in neurons, as well as the connection patterns among them. Computational models are one of the main tools for studying dendritic morphology and its role in brain function. We have built a computational model that contains more than 50 morphological variables of the dendritic arborization.
This model is able to simulate the growth of complete dendritic arborizations from real neuron reconstructions, starting with the number of basal dendrites and ending with the growth of the dendritic trees. One of the main differences between our approach, based on Bayesian networks, and other models in the state of the art is that we model the whole dendritic arborization instead of focusing on individual trees, which allows us to take into account the interactions between dendrites and to automatically detect relationships between the morphological variables that characterize the arborization. Moreover, subsequent analysis of the relationships captured by the model can help to identify previously unknown relations between morphological variables. Motivated by the study of basal dendrite orientation, a generalized L1 regularization applied to the multivariate von Mises distribution, one of the most widely used distributions in multivariate directional statistics, is also introduced in this work. We also propose a multivariate circular distance that can be used to estimate the Kullback-Leibler divergence between two circular data samples. We compare the regularized and unregularized models on the basal dendrite orientation of human neurons and show that, in general, the regularized model achieves better results than the unregularized von Mises model. Sampling, fitting and plotting functions for the multivariate von Mises distribution are implemented in a new R package called mvCircular.
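
The multivariate von Mises distribution mentioned above has the convenient property that each full conditional is a univariate von Mises, so samples can be drawn with a simple Gibbs sampler. The sketch below uses plain numpy with illustrative parameters; it is independent of the mvCircular package, whose interface is not described here.

```python
import numpy as np

def sample_mv_vonmises(mu, kappa, lam, n_samples=1000, burn_in=500, rng=None):
    """Gibbs sampler for a multivariate von Mises distribution.

    Density (up to normalization):
        p(theta) ~ exp( kappa . cos(theta - mu)
                        + 0.5 * sin(theta - mu)^T Lam sin(theta - mu) )
    Each full conditional is a univariate von Mises, which numpy can sample.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = len(mu)
    theta = np.array(mu, dtype=float)
    out = np.empty((n_samples, p))
    for it in range(burn_in + n_samples):
        for j in range(p):
            # interaction of angle j with all the other angles
            b = lam[j] @ np.sin(theta - mu) - lam[j, j] * np.sin(theta[j] - mu[j])
            conc = np.hypot(kappa[j], b)               # conditional concentration
            center = mu[j] + np.arctan2(b, kappa[j])   # conditional mean direction
            theta[j] = rng.vonmises(center, conc)
        if it >= burn_in:
            out[it - burn_in] = np.mod(theta, 2 * np.pi)
    return out

# Example: two coupled angles (parameters are illustrative)
mu = np.array([0.0, np.pi / 2])
kappa = np.array([2.0, 3.0])
lam = np.array([[0.0, 1.5], [1.5, 0.0]])   # symmetric, zero diagonal
samples = sample_mv_vonmises(mu, kappa, lam, n_samples=2000)
print(np.angle(np.exp(1j * samples).mean(axis=0)))   # circular means
```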

Relevance: 60.00%

Abstract:

El concepto de algoritmo es básico en informática, por lo que es crucial que los alumnos profundicen en él desde el inicio de su formación. Por tanto, contar con una herramienta que guíe a los estudiantes en su aprendizaje puede suponer una gran ayuda en su formación. La mayoría de los autores coinciden en que, para determinar la eficacia de una herramienta de visualización de algoritmos, es esencial cómo se utiliza. Así, los estudiantes que participan activamente en la visualización superan claramente a los que la contemplan de forma pasiva. Por ello, pensamos que uno de los mejores ejercicios para un alumno consiste en simular la ejecución del algoritmo que desea aprender mediante el uso de una herramienta de visualización, i. e. consiste en realizar una simulación visual de dicho algoritmo. La primera parte de esta tesis presenta los resultados de una profunda investigación sobre las características que debe reunir una herramienta de ayuda al aprendizaje de algoritmos y conceptos matemáticos para optimizar su efectividad: el conjunto de especificaciones eMathTeacher, además de un entorno de aprendizaje que integra herramientas que las cumplen: GRAPHs. Hemos estudiado cuáles son las cualidades esenciales para potenciar la eficacia de un sistema e-learning de este tipo. Esto nos ha llevado a la definición del concepto eMathTeacher, que se ha materializado en el conjunto de especificaciones eMathTeacher. Una herramienta e-learning cumple las especificaciones eMathTeacher si actúa como un profesor virtual de matemáticas, i. e. si es una herramienta de autoevaluación que ayuda a los alumnos a aprender de forma activa y autónoma conceptos o algoritmos matemáticos, corrigiendo sus errores y proporcionando pistas para encontrar la respuesta correcta, pero sin dársela explícitamente. En estas herramientas, la simulación del algoritmo no continúa hasta que el usuario introduce la respuesta correcta. Para poder reunir en un único entorno una colección de herramientas que cumplan las especificaciones eMathTeacher hemos creado GRAPHs, un entorno ampliable, basado en simulación visual, diseñado para el aprendizaje activo e independiente de los algoritmos de grafos y creado para que en él se integren simuladores de diferentes algoritmos. Además de las opciones de creación y edición del grafo y la visualización de los cambios producidos en él durante la simulación, el entorno incluye corrección paso a paso, animación del pseudocódigo del algoritmo, preguntas emergentes, manejo de las estructuras de datos del algoritmo y creación de un log de interacción en XML. Otro problema que nos planteamos en este trabajo, por su importancia en el proceso de aprendizaje, es el de la evaluación formativa. El uso de ciertos entornos e-learning genera gran cantidad de datos que deben ser interpretados para llegar a una evaluación que no se limite a un recuento de errores. Esto incluye el establecimiento de relaciones entre los datos disponibles y la generación de descripciones lingüísticas que informen al alumno sobre la evolución de su aprendizaje. Hasta ahora sólo un experto humano era capaz de hacer este tipo de evaluación. Nuestro objetivo ha sido crear un modelo computacional que simule el razonamiento del profesor y genere un informe sobre la evolución del aprendizaje que especifique el nivel de logro de cada uno de los objetivos definidos por el profesor. 
Como resultado del trabajo realizado, la segunda parte de esta tesis presenta el modelo granular lingüístico de la evaluación del aprendizaje, capaz de modelizar la evaluación y generar automáticamente informes de evaluación formativa. Este modelo es una particularización del modelo granular lingüístico de un fenómeno (GLMP), en cuyo desarrollo y formalización colaboramos, basado en la lógica borrosa y en la teoría computacional de las percepciones. Esta técnica, que utiliza sistemas de inferencia basados en reglas lingüísticas y es capaz de implementar criterios de evaluación complejos, se ha aplicado a dos casos: la evaluación, basada en criterios, de logs de interacción generados por GRAPHs y de cuestionarios de Moodle. Como consecuencia, se han implementado, probado y utilizado en el aula sistemas expertos que evalúan ambos tipos de ejercicios. Además de la calificación numérica, los sistemas generan informes de evaluación, en lenguaje natural, sobre los niveles de competencia alcanzados, usando sólo datos objetivos de respuestas correctas e incorrectas. Además, se han desarrollado dos aplicaciones capaces de ser configuradas para implementar los sistemas expertos mencionados. Una procesa los archivos producidos por GRAPHs y la otra, integrable en Moodle, evalúa basándose en los resultados de los cuestionarios.

ABSTRACT

The concept of algorithm is one of the core subjects in computer science. It is extremely important, then, for students to get a good grasp of this concept from the very start of their training. In this respect, having a tool that helps and shepherds students through the process of learning this concept can make a huge difference to their instruction. Much has been written about how helpful algorithm visualization tools can be. Most authors agree that the most important part of the learning process is how students use the visualization tool. Learners who are actively involved in visualization consistently outperform other learners who view the algorithms passively. Therefore we think that one of the best exercises to learn an algorithm is for the user to simulate the algorithm execution while using a visualization tool, thus performing a visual algorithm simulation. The first part of this thesis presents the eMathTeacher set of requirements together with an eMathTeacher-compliant tool called GRAPHs. For some years, we have been developing a theory about what the key features of an effective e-learning system for teaching mathematical concepts and algorithms are. This led to the definition of the eMathTeacher concept, which has materialized in the eMathTeacher set of requirements. An e-learning tool is eMathTeacher compliant if it works as a virtual math trainer. In other words, it has to be an on-line self-assessment tool that helps students to actively and autonomously learn math concepts or algorithms, correcting their mistakes and providing them with clues to find the right answer. In an eMathTeacher-compliant tool, algorithm simulation does not continue until the user enters the correct answer. GRAPHs is an extendible environment designed for active and independent visual simulation-based learning of graph algorithms, set up to integrate tools to help the user simulate the execution of different algorithms.
Apart from the options of creating and editing the graph, and visualizing the changes made to the graph during simulation, the environment also includes step-by-step correction, algorithm pseudo-code animation, pop-up questions, data structure handling and XML-based interaction log creation features. On the other hand, assessment is a key part of any learning process. Through the use of e-learning environments huge amounts of data can be output about this process. Nevertheless, this information has to be interpreted and represented in a practical way to arrive at a sound assessment that is not confined to merely counting mistakes. This includes establishing relationships between the available data and also providing instructive linguistic descriptions about learning evolution. Additionally, formative assessment should specify the level of attainment of the learning goals defined by the instructor. Till now, only human experts were capable of making such assessments. While facing this problem, our goal has been to create a computational model that simulates the instructor’s reasoning and generates an enlightening learning evolution report in natural language. The second part of this thesis presents the granular linguistic model of learning assessment to model the assessment of the learning process and implement the automated generation of a formative assessment report. The model is a particularization of the granular linguistic model of a phenomenon (GLMP) paradigm, based on fuzzy logic and the computational theory of perceptions, to the assessment phenomenon. This technique, useful for implementing complex assessment criteria using inference systems based on linguistic rules, has been applied to two particular cases: the assessment of the interaction logs generated by GRAPHs and the criterion-based assessment of Moodle quizzes. As a consequence, several expert systems to assess different algorithm simulations and Moodle quizzes have been implemented, tested and used in the classroom. Apart from the grade, the designed expert systems also generate natural language progress reports on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. In addition, two applications, capable of being configured to implement the expert systems, have been developed. One is geared up to process the files output by GRAPHs and the other one is a Moodle plug-in set up to perform the assessment based on the quizzes results.
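
As a flavour of how linguistic rules can turn objective counts into a formative report, the toy Python sketch below granulates a percentage of correct simulation steps into fuzzy labels and picks a report sentence. Membership shapes, labels and templates are invented for illustration; the actual GLMP composes many such perceptions hierarchically and was applied to GRAPHs logs and Moodle quizzes.

```python
def ramp(x, a, b):
    """Rising ramp: 0 below a, 1 above b."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

def tri(x, a, b, c):
    """Triangular membership peaking at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def assess(correct_steps, total_steps):
    score = 100.0 * correct_steps / total_steps
    memberships = {                       # fuzzy granulation of the score
        "low":    1.0 - ramp(score, 30, 60),
        "medium": tri(score, 30, 60, 90),
        "high":   ramp(score, 60, 90),
    }
    best = max(memberships, key=memberships.get)
    templates = {                         # invented report templates
        "low":    "has not yet grasped the algorithm and should repeat the simulation",
        "medium": "understands the main steps but still makes occasional mistakes",
        "high":   "simulates the algorithm correctly and autonomously",
    }
    return f"The student {templates[best]} ({score:.0f}% of the steps were correct)."

print(assess(correct_steps=17, total_steps=20))
```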

Relevance: 60.00%

Abstract:

Los resultados presentados en la memoria de esta tesis doctoral se enmarcan en la denominada computación celular con membranas, una nueva rama de investigación dentro de la computación natural creada por Gh. Paun en 1998, de ahí que estos dispositivos reciban habitualmente el nombre de sistemas P. Este nuevo modelo de cómputo distribuido está inspirado en la estructura y funcionamiento de la célula. El objetivo de esta tesis ha sido analizar el poder y la eficiencia computacional de estos sistemas de computación celular. En concreto, se han analizado dos tipos de sistemas P: por un lado los sistemas P de neuronas de impulsos, y por otro los sistemas P con proteínas en las membranas. Para el primer tipo, los resultados obtenidos demuestran que es posible que estos sistemas mantengan su universalidad aunque muchas de sus características se limiten o incluso se eliminen. Para el segundo tipo, se analiza la eficiencia computacional y se demuestra que son capaces de resolver problemas de la clase de complejidad ESPACIO-P (PSPACE) en tiempo polinómico. Análisis del poder computacional: Los sistemas P de neuronas de impulsos (en adelante SN P, acrónimo procedente del inglés «Spiking Neural P Systems») son sistemas inspirados en el funcionamiento neuronal y en la forma en la que los impulsos se propagan por las redes sinápticas. Los SN P bioinspirados poseen un numeroso abanico de características que hacen que dichos sistemas sean universales y por tanto equivalentes, en poder computacional, a una máquina de Turing. Estos sistemas son potentes a nivel computacional, pero tal y como se definen incorporan numerosas características, quizás demasiadas. En (Ibarra et al. 2007) se demostró que en estos sistemas sus funcionalidades podrían ser limitadas sin comprometer su universalidad. Los resultados presentados en esta memoria son continuistas con la línea de trabajo de (Ibarra et al. 2007) y aportan nuevas formas normales, esto es, nuevas variantes simplificadas de los sistemas SN P con un conjunto mínimo de funcionalidades pero que mantienen su poder computacional universal. Análisis de la eficiencia computacional: En esta tesis se ha estudiado la eficiencia computacional de los denominados sistemas P con proteínas en las membranas. Se muestra que este modelo de cómputo es equivalente a las máquinas de acceso aleatorio paralelas (PRAM) o a las máquinas de Turing alternantes, ya que se demuestra que un sistema P con proteínas es capaz de resolver un problema ESPACIO-P completo como el QSAT (problema de satisfacibilidad de fórmulas lógicas cuantificadas) en tiempo polinómico. Esta variante de sistemas P con proteínas es muy eficiente gracias al poder de las proteínas a la hora de catalizar los procesos de comunicación intercelulares.

ABSTRACT

The results presented in this thesis belong to membrane computing, a new research branch within natural computing. This new branch was created by Gh. Paun in 1998, hence these devices usually receive the name of P systems. This new distributed computing model is inspired by the structure and functioning of the cell. The aim of this thesis is to analyze the efficiency and computational power of these cellular computing systems. Specifically, two different classes of P systems have been analyzed: on the one hand, spiking neural P systems, and on the other hand, P systems with proteins on membranes. For the first class, it is shown that it is possible to reduce or restrict the characteristics of these kinds of systems without loss of computational power.
For the second class, the computational efficiency is analyzed, showing that PSPACE problems can be solved in polynomial time. Computational power analysis: Spiking neural P systems (SN P systems for short) are systems inspired by the way neural cells operate, sending spikes through synaptic networks. The bio-inspired SN P systems possess a large range of features that make these systems universal and therefore equivalent in computational power to a Turing machine. Such systems are computationally powerful, but by definition they incorporate many features, perhaps too many. In (Ibarra et al. 2007) it was shown that their functionality may be limited without compromising their universality. The results presented herein continue the (Ibarra et al. 2007) line of work, providing new normal forms, that is, new simplified SN P variants with a minimum set of functionalities that keep the universal computational power. Computational efficiency analysis: In this thesis we study the computational efficiency of P systems with proteins on membranes. We show that this computational model is equivalent to the parallel random access machine (PRAM) or the alternating Turing machine, since we show that P systems with proteins can solve a PSPACE-complete problem such as QSAT (the quantified propositional satisfiability problem) in polynomial time. This variant of P systems with proteins is very efficient thanks to the power of proteins to catalyze intercellular communication processes.
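
To make the spiking neural P (SN P) machinery concrete, the following toy Python sketch simulates a two-neuron system: each neuron holds a number of spikes, applies its first enabled firing rule, and sends the produced spikes to its neighbours. Delays, forgetting rules and regular-expression guards are omitted; this is a teaching illustration, not one of the restricted normal forms studied in the thesis.

```python
# Minimal, illustrative SN P system simulator (no delays or forgetting rules).
class Neuron:
    def __init__(self, spikes, rules):
        # each rule: (guard, consumed, produced); the guard tests the spike count
        self.spikes, self.rules = spikes, rules

def step(neurons, synapses):
    emitted = {name: 0 for name in neurons}
    for name, nrn in neurons.items():          # firing phase
        for guard, consumed, produced in nrn.rules:
            if guard(nrn.spikes):
                nrn.spikes -= consumed
                emitted[name] = produced
                break
    for src, targets in synapses.items():      # communication phase
        for dst in targets:
            neurons[dst].spikes += emitted[src]
    return {n: v.spikes for n, v in neurons.items()}

# A tiny two-neuron loop that passes one spike back and forth.
neurons = {
    "n1": Neuron(1, [(lambda n: n >= 1, 1, 1)]),
    "n2": Neuron(0, [(lambda n: n >= 1, 1, 1)]),
}
synapses = {"n1": ["n2"], "n2": ["n1"]}
for t in range(4):
    print(t, step(neurons, synapses))
```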

Relevance: 60.00%

Abstract:

Una Red de Procesadores Evolutivos o NEP (por sus siglas en inglés) es un modelo computacional inspirado por el modelo evolutivo de las células, específicamente por las reglas de multiplicación de las mismas. Esta inspiración hace que el modelo sea una abstracción sintáctica de la manipulación de información de las células. En particular, una NEP define una máquina de cómputo teórica capaz de resolver problemas NP completos de manera eficiente en términos de tiempo. En la práctica, se espera que las NEP simuladas en máquinas computacionales convencionales puedan resolver problemas reales complejos (que requieran ser altamente escalables) a cambio de una alta complejidad espacial. En el modelo NEP, las células están representadas por palabras que codifican sus secuencias de ADN. Informalmente, en cualquier momento de cómputo del sistema, su estado evolutivo se describe como una colección de palabras, donde cada una de ellas representa una célula. Estos momentos fijos de evolución se denominan configuraciones. De manera similar al modelo biológico, las palabras (células) mutan y se dividen en base a bio-operaciones sencillas, pero solo aquellas palabras aptas (como ocurre de forma parecida en el proceso de selección natural) serán conservadas para la siguiente configuración. Una NEP, como herramienta de computación, define una arquitectura paralela y distribuida de procesamiento simbólico; en otras palabras, una red de procesadores de lenguajes. Desde el momento en que el modelo fue propuesto a la comunidad científica en el año 2001, múltiples variantes se han desarrollado y sus propiedades respecto a la completitud computacional, eficiencia y universalidad han sido ampliamente estudiadas y demostradas. En la actualidad, por tanto, podemos considerar que el modelo teórico NEP se encuentra en el estadio de la madurez. La motivación principal de este Proyecto de Fin de Grado es proponer una aproximación práctica que permita dar el salto del modelo teórico NEP a una implantación real que permita su ejecución en plataformas computacionales de alto rendimiento, con el fin de solucionar problemas complejos que demanda la sociedad actual. Hasta el momento, las herramientas desarrolladas para la simulación del modelo NEP, si bien correctas y con resultados satisfactorios, normalmente están atadas a su entorno de ejecución, ya sea por el uso de hardware específico o por implementaciones particulares de un problema. En este contexto, el propósito fundamental de este trabajo es el desarrollo de Nepfix, una herramienta genérica y extensible para la ejecución de cualquier algoritmo de un modelo NEP (o alguna de sus variantes), ya sea de forma local, como una aplicación tradicional, o distribuida utilizando los servicios de la nube. Nepfix es una aplicación software desarrollada durante 7 meses que actualmente se encuentra en su segunda iteración, una vez abandonada la fase de prototipo. Nepfix ha sido diseñada como una aplicación modular escrita en Java 8 y autocontenida, es decir, no requiere de un entorno de ejecución específico (cualquier máquina virtual de Java es un contenedor válido). Nepfix contiene dos componentes o módulos. El primer módulo corresponde a la ejecución de una NEP y es, por lo tanto, el simulador. Para su desarrollo se ha tenido en cuenta el estado actual del modelo, es decir, las definiciones de los procesadores y filtros más comunes que conforman la familia del modelo NEP.
Adicionalmente, este componente ofrece flexibilidad en la ejecución, pudiendo ampliar las capacidades del simulador sin modificar Nepfix, usando para ello un lenguaje de scripting. Dentro del desarrollo de este componente también se ha definido un estándar de representación del modelo NEP basado en el formato JSON y se propone una forma de representación y codificación de las palabras, necesaria para la comunicación entre servidores. Adicionalmente, una característica importante de este componente es que se puede considerar una aplicación aislada y, por tanto, las estrategias de distribución y ejecución son totalmente independientes. El segundo módulo corresponde a la distribución de Nepfix en la nube. Este desarrollo es el resultado de un proceso de I+D que tiene una componente científica considerable. Vale la pena resaltar el desarrollo de este módulo no solo por los resultados prácticos esperados, sino por el proceso de investigación que se debe abordar con esta nueva perspectiva para la ejecución de sistemas de computación natural. La principal característica de las aplicaciones que se ejecutan en la nube es que son gestionadas por la plataforma y normalmente se encapsulan en un contenedor. En el caso de Nepfix, este contenedor es una aplicación Spring que utiliza el protocolo HTTP o AMQP para comunicarse con el resto de instancias. Como valor añadido, Nepfix aborda dos perspectivas de implementación distintas (que han sido desarrolladas en dos iteraciones diferentes) del modelo de distribución y ejecución, que tienen un impacto muy significativo en las capacidades y restricciones del simulador. En concreto, la primera iteración utiliza un modelo de ejecución asíncrono. En esta perspectiva asíncrona, los componentes de la red NEP (procesadores y filtros) son considerados como elementos reactivos a la necesidad de procesar una palabra. Esta implementación es una optimización de una topología común en el modelo NEP que permite utilizar herramientas de la nube para lograr un escalado transparente (en lo referente al balance de carga entre procesadores), pero produce efectos no deseados, como la indeterminación en el orden de los resultados o la imposibilidad de distribuir eficientemente redes fuertemente interconectadas. Por otro lado, la segunda iteración corresponde al modelo de ejecución síncrono. Los elementos de una red NEP siguen un ciclo inicio-cómputo-sincronización hasta que el problema se ha resuelto. Esta perspectiva síncrona representa fielmente al modelo teórico NEP, pero el proceso de sincronización es costoso y requiere de infraestructura adicional; en concreto, se requiere un servidor de colas de mensajes RabbitMQ. Sin embargo, en esta perspectiva los beneficios para problemas suficientemente grandes superan a los inconvenientes, ya que la distribución es inmediata (no hay restricciones), aunque el proceso de escalado no es trivial. En definitiva, el concepto de Nepfix como marco computacional se puede considerar satisfactorio: la tecnología es viable y los primeros resultados confirman que las características que se buscaban originalmente se han conseguido. Muchos frentes quedan abiertos para futuras investigaciones. En este documento se proponen algunas aproximaciones a la solución de los problemas identificados, como la recuperación de errores y la división dinámica de una NEP en diferentes subdominios.
Por otra parte, otros problemas, lejos del alcance de este proyecto, quedan abiertos a un futuro desarrollo, como por ejemplo la estandarización de la representación de las palabras y optimizaciones en la ejecución del modelo síncrono. Finalmente, algunos resultados preliminares de este Proyecto de Fin de Grado han sido presentados recientemente en formato de artículo científico en la "International Work-Conference on Artificial Neural Networks (IWANN) 2015" y publicados en "Advances in Computational Intelligence", volumen 9094 de "Lecture Notes in Computer Science" de Springer International Publishing. Lo anterior es una confirmación de que este trabajo, más que un Proyecto de Fin de Grado, es solo el inicio de un trabajo que puede tener mayor repercusión en la comunidad científica.

Abstract

A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species and their community evolves according to mutations and division, which are defined by operations on words. Only those cells which are represented by a word in a given set of words, called the genotype space of the species, are accepted as surviving (correct) ones. This feature is analogous to the natural process of evolution. Formally, a NEP is based on an architecture for parallel and distributed processing, in other words, a network of language processors. Since the date when NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP. Specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity. The main motivation for this End of Grade project (EOG project in short) is to propose a practical approach that closes the gap between the theoretical NEP model and a practical implementation on high-performance computing platforms, in order to solve some of the high-complexity problems society requires today. Up until now, tools developed to simulate NEPs, while correct and successful, are usually tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants, either locally, like a traditional application, or in a distributed cloud environment. Nepfix was developed during a 7-month cycle and is undergoing its second iteration, the prototype phase having been abandoned. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made of two components or modules. The first module corresponds to the NEP execution and therefore simulation.
During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulator, a definition language for NEPs has been defined based on JSON, as well as a mechanism to represent words and their possible manipulations. The NEP simulator is isolated from distribution and, as mentioned before, different applications that include it as a dependency are possible; the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud. The development carried a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider that Nepfix as a computational framework is successful: cloud technology is ready for the challenge and the first results reassure that the properties the Nepfix project pursued were met. Many research branches are left open for future investigation. In this EOG, implementation guidelines are proposed for some of them, such as error recovery or dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, like word representation standardization or NEP model optimizations. As a confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published in the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015. Development has not stopped since that point, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems and solutions that were produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
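
As a rough illustration of the kind of object Nepfix simulates, the sketch below implements a toy network of evolutionary processors in Python: substitution rules rewrite words, and input/output filters decide which words migrate in the communication step. Names, filters and rules are invented for illustration; this is not Nepfix's API, its JSON format, nor its Java implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Processor:
    words: set = field(default_factory=set)
    rules: list = field(default_factory=list)     # (old_symbol, new_symbol) pairs
    out_ok: callable = lambda w: True             # output filter
    in_ok: callable = lambda w: True              # input filter

def evolution_step(node):
    new_words = set(node.words)
    for w in node.words:
        for old, new in node.rules:
            if old in w:
                new_words.add(w.replace(old, new, 1))   # one substitution per rule
    node.words = new_words

def communication_step(net, edges):
    moving = {name: {w for w in node.words if node.out_ok(w)}
              for name, node in net.items()}
    for src, dst in edges:
        accepted = {w for w in moving[src] if net[dst].in_ok(w)}
        net[dst].words |= accepted
        net[src].words -= accepted

net = {
    "mutate": Processor(words={"aab"}, rules=[("a", "b")],
                        out_ok=lambda w: w.count("b") >= 2),
    "collect": Processor(in_ok=lambda w: "a" not in w),
}
edges = [("mutate", "collect")]
for _ in range(3):                      # alternate evolution and communication
    for node in net.values():
        evolution_step(node)
    communication_step(net, edges)
print(net["collect"].words)             # words fully rewritten to b's
```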

Relevance: 60.00%

Abstract:

It has long been assumed that the red cell membrane is highly permeable to gases because the molecules of gases are small, uncharged, and soluble in lipids, such as those of a bilayer. The disappearance of 12C18O16O from a red cell suspension as the 18O exchanges between labeled CO2 + HCO3− and unlabeled HOH provides a measure of the carbonic anhydrase (CA) activity (acceleration, or A) inside the cell and of the membrane self-exchange permeability to HCO3− (Pm,HCO3−). To test this technique, we added sufficient 4,4′-diisothiocyanato-stilbene-2,2′-disulfonate (DIDS) to inhibit all the HCO3−/Cl− transport protein (Band III or capnophorin) in a red cell suspension. We found that DIDS reduced Pm,HCO3− as expected, but also appeared to reduce intracellular A, although separate experiments showed it has no effect on CA activity in homogeneous solution. A decrease in Pm,CO2 would explain this finding. With a more advanced computational model, which solves for CA activity and membrane permeabilities to both CO2 and HCO3−, we found that DIDS inhibited both Pm,HCO3− and Pm,CO2, whereas intracellular CA activity remained unchanged. The mechanism by which DIDS reduces CO2 permeability may not be through an action on the lipid bilayer itself, but rather on a membrane transport protein, implying that this is a normal route for at least part of red cell CO2 exchange.

Relevance: 60.00%

Abstract:

We model experience-dependent plasticity in the cortical representation of whiskers (the barrel cortex) in normal adult rats, and in adult rats that were prenatally exposed to alcohol. Prenatal exposure to alcohol (PAE) caused marked deficits in experience-dependent plasticity in a cortical barrel-column. Cortical plasticity was induced by trimming all whiskers on one side of the face except two. This manipulation produces high activity from the intact whiskers that contrasts with low activity from the cut whiskers while avoiding any nerve damage. By a computational model, we show that the evolution of neuronal responses in a single barrel-column after this sensory bias is consistent with the synaptic modifications that follow the rules of the Bienenstock, Cooper, and Munro (BCM) theory. The BCM theory postulates that a neuron possesses a moving synaptic modification threshold, θM, that dictates whether the neuron's activity at any given instant will lead to strengthening or weakening of its input synapses. The current value of θM changes proportionally to the square of the neuron's activity averaged over some recent past. In the model of alcohol impaired cortex, the effective θM has been set to a level unattainable by the depressed levels of cortical activity leading to “impaired” synaptic plasticity that is consistent with experimental findings. Based on experimental and computational results, we discuss how elevated θM may be related to (i) reduced levels of neurotransmitters modulating plasticity, (ii) abnormally low expression of N-methyl-d-aspartate receptors (NMDARs), and (iii) the membrane translocation of Ca2+/calmodulin-dependent protein kinase II (CaMKII) in adult rat cortex subjected to prenatal alcohol exposure.
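
The BCM rule summarized above can be written as dw/dt = η·x·y·(y − θM), with θM tracking the recent average of y². The sketch below is a minimal, illustrative implementation of that rule under a whisker-trimming-like input bias; parameter values and input statistics are assumptions, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.8, 0.8])          # weights: [intact whisker, trimmed whisker]
theta_m = 0.5                     # sliding modification threshold theta_M
eta, tau = 1e-3, 200.0            # learning rate, threshold time constant

for t in range(50000):
    if rng.random() < 0.5:        # intact whisker deflected (high activity)
        x = np.array([1.0, 0.0]) * rng.poisson(5) / 5.0
    else:                         # trimmed whisker: weak residual activity
        x = np.array([0.0, 0.2]) * rng.poisson(5) / 5.0
    y = max(float(w @ x), 0.0)            # rectified response
    w += eta * x * y * (y - theta_m)      # BCM weight update
    w = np.clip(w, 0.0, None)
    theta_m += (y**2 - theta_m) / tau     # theta_M ~ running average of y^2

print("intact-whisker weight :", round(float(w[0]), 3))
print("trimmed-whisker weight:", round(float(w[1]), 3))   # driven toward zero
print("theta_M               :", round(float(theta_m), 3))
```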

Relevance: 60.00%

Abstract:

Efficient and reliable classification of visual stimuli requires that their representations reside in a low-dimensional and, therefore, computationally manageable feature space. We investigated the ability of the human visual system to derive such representations from the sensory input, a highly nontrivial task, given the million or so dimensions of the visual signal at its entry point to the cortex. In a series of experiments, subjects were presented with sets of parametrically defined shapes; the points in the common high-dimensional parameter space corresponding to the individual shapes formed regular planar (two-dimensional) patterns such as a triangle, a square, etc. We then used multidimensional scaling to arrange the shapes in planar configurations, dictated by their experimentally determined perceived similarities. The resulting configurations closely resembled the original arrangements of the stimuli in the parameter space. This achievement of the human visual system was replicated by a computational model derived from a theory of object representation in the brain, according to which similarities between objects, and not the geometry of each object, need to be faithfully represented.
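
The reconstruction step described above is essentially metric multidimensional scaling applied to a matrix of perceived dissimilarities. The sketch below illustrates the idea with scikit-learn on simulated data (a noisy square in parameter space); the data and noise model are invented for illustration, not the experimental similarity judgments.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
square = np.array([[0, 0], [0, 1], [1, 1], [1, 0]], dtype=float)   # true layout
stimuli = np.repeat(square, 3, axis=0) + 0.05 * rng.normal(size=(12, 2))

# "Perceived" dissimilarities: noisy pairwise distances between the stimuli
d = np.linalg.norm(stimuli[:, None] - stimuli[None, :], axis=-1)
d += 0.05 * rng.normal(size=d.shape)
d = np.clip((d + d.T) / 2, 0, None)      # keep it a symmetric dissimilarity
np.fill_diagonal(d, 0.0)

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
recovered = embedding.fit_transform(d)
print(recovered.round(2))                # should resemble the original square
```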

Relevance: 60.00%

Abstract:

A computational model is presented that can be used as a tool in the design of safer chemicals. This model predicts the rate of hydrogen-atom abstraction by cytochrome P450 enzymes. Excellent correlations between biotransformation rates and the calculated activation energies (ΔHact) of the cytochrome P450-mediated hydrogen-atom abstractions were obtained for the in vitro biotransformation of six halogenated alkanes (1-fluoro-1,1,2,2-tetrachloroethane, 1,1-difluoro-1,2,2-trichloroethane, 1,1,1-trifluoro-2,2-dichloroethane, 1,1,1,2-tetrafluoro-2-chloroethane, 1,1,1,2,2-pentafluoroethane, and 2-bromo-2-chloro-1,1,1-trifluoroethane) with both rat and human enzyme preparations: ln(rate, rat liver microsomes) = 44.99 - 1.79(ΔHact), r2 = 0.86; ln(rate, human CYP2E1) = 46.99 - 1.77(ΔHact), r2 = 0.97 (rates are in nmol of product per min per nmol of cytochrome P450 and energies are in kcal/mol). Correlations were also obtained for five inhalation anesthetics (enflurane, sevoflurane, desflurane, methoxyflurane, and isoflurane) for both in vivo and in vitro metabolism by humans: ln[F−]peak plasma = 42.87 - 1.57(ΔHact), r2 = 0.86. To our knowledge, these are the first in vivo human metabolic rates to be quantitatively predicted. Furthermore, this is one of the first examples where computational predictions and in vivo and in vitro data have been shown to agree in any species. The model presented herein provides an archetype for the methodology that may be used in the future design of safer chemicals, particularly hydrochlorofluorocarbons and inhalation anesthetics.
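
As a worked example of the quoted correlations, the snippet below evaluates the two fitted lines for a hypothetical calculated activation energy (the 25 kcal/mol value is illustrative; only the regression coefficients come from the abstract).

```python
import math

# Rates in nmol of product per min per nmol of P450; energies in kcal/mol.
def rate_rat_liver_microsomes(delta_h_act):
    return math.exp(44.99 - 1.79 * delta_h_act)

def rate_human_cyp2e1(delta_h_act):
    return math.exp(46.99 - 1.77 * delta_h_act)

delta_h = 25.0  # hypothetical calculated activation energy, kcal/mol
print(f"rat liver microsomes: {rate_rat_liver_microsomes(delta_h):.3f}")
print(f"human CYP2E1        : {rate_human_cyp2e1(delta_h):.3f}")
```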

Relevance: 60.00%

Abstract:

Any active motor task is produced by the activation of a population of motor units. However, owing to several technical and ethical difficulties, it is not possible to measure the synaptic input to the motoneurons in humans. For these reasons, realistic computational models of a motoneuron pool and its associated muscle fibers play an important role in the study of human muscle control. Such models, however, are complex, and their mathematical analysis is difficult. This text presents a system-identification approach applied to a realistic model of a motor unit pool, with the goal of obtaining a simpler model capable of representing the transduction of the motor unit pool inputs into the force of the muscle associated with the pool. The system identification was based on an orthogonal least squares algorithm to find a NARMAX model, where the input considered was the total dendritic excitatory synaptic conductance of the motoneurons and the output was the muscle force produced by the motor unit pool. The identified model reproduced the mean behavior of the output of the realistic computational model, even for input-output signal pairs not used during the model identification process, such as sinusoidally modulated muscle force signals. Generalized frequency response functions of the motoneuron pool were obtained from the NARMAX model, and they led to the inference that cortical oscillations in the beta band (20 Hz) can influence the control of force generation by the spinal cord, a behavior of the motoneuron pool that was hitherto unknown.
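
The identification step can be pictured with a much-simplified stand-in: fitting a polynomial NARX model by ordinary least squares on synthetic input-output data. The thesis itself uses an orthogonal least squares NARMAX procedure on the realistic motoneuron-pool model; everything below (system, regressors, data) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
u = rng.normal(size=n)                  # stand-in for synaptic conductance input
y = np.zeros(n)                         # stand-in for muscle force output
for k in range(2, n):                   # synthetic "true" system to identify
    y[k] = 0.6 * y[k-1] + 0.4 * u[k-1] + 0.1 * u[k-2] - 0.05 * y[k-1]**2

def regressors(y, u, k):
    # candidate terms: linear lags plus a few quadratic terms
    return [y[k-1], u[k-1], u[k-2], y[k-1]**2, u[k-1]**2, y[k-1]*u[k-1]]

X = np.array([regressors(y, u, k) for k in range(2, n)])
target = y[2:]
theta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(np.round(theta, 3))   # should recover roughly [0.6, 0.4, 0.1, -0.05, 0, 0]
```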

Relevance: 60.00%

Abstract:

The development of applications and services for mobile systems faces a wide range of devices with very heterogeneous capabilities whose response times are difficult to predict. The research described in this work aims to address this issue by developing a computational model that formalizes the problem and defines methods for adjusting the computation. The proposal combines imprecise-computation strategies with cloud computing paradigms in order to provide flexible implementation frameworks for embedded or mobile devices. As a result, imprecise-computation scheduling of the embedded system's workload is used to decide when to move computation to the cloud, according to the priority and response time of the tasks to be executed, and thereby meet the desired productivity and quality of service. A technique to estimate network delays and to schedule tasks more accurately is illustrated in this paper. An application example is described in which the technique is tested in execution contexts with heterogeneous workloads to check the validity of the proposed model.
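
A hedged sketch of the kind of decision the model supports is shown below: each task has a mandatory and an optional part (imprecise computation), and the optional part is offloaded only when the estimated network delay still allows the deadline to be met. All class names, parameters and numbers are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int
    deadline_ms: float
    mandatory_ms: float      # must run locally
    optional_ms: float       # can be refined locally or offloaded

def estimate_network_delay_ms(payload_kb, bandwidth_kbps, rtt_ms):
    return rtt_ms + 1000.0 * payload_kb * 8.0 / bandwidth_kbps

def plan(tasks, cloud_speedup=4.0, payload_kb=64.0,
         bandwidth_kbps=2000.0, rtt_ms=60.0):
    delay = estimate_network_delay_ms(payload_kb, bandwidth_kbps, rtt_ms)
    schedule = []
    for t in sorted(tasks, key=lambda t: t.priority, reverse=True):
        remote = t.mandatory_ms + delay + t.optional_ms / cloud_speedup
        if remote <= t.deadline_ms:
            schedule.append((t.name, "offload optional part", remote))
        elif t.mandatory_ms <= t.deadline_ms:
            # degrade gracefully: run only the mandatory part locally
            schedule.append((t.name, "mandatory only (imprecise)", t.mandatory_ms))
        else:
            schedule.append((t.name, "cannot meet deadline",
                             t.mandatory_ms + t.optional_ms))
    return schedule

tasks = [Task("video_analysis", 2, 500, 40, 300),
         Task("ui_update",      3,  50, 10,  20)]
for row in plan(tasks):
    print(row)
```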

Relevance: 60.00%

Abstract:

Master's thesis (integrated master's degree) in Energy and Environmental Engineering, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2016.

Relevance: 60.00%

Abstract:

It has been suggested that growth cones navigating through the developing nervous system might display adaptation, so that their response to gradient signals is conserved over wide variations in ligand concentration. Recently however, a new chemotaxis assay that allows the effect of gradient parameters on axonal trajectories to be finely varied has revealed a decline in gradient sensitivity on either side of an optimal concentration. We show that this behavior can be quantitatively reproduced with a computational model of axonal chemotaxis that does not employ explicit adaptation. Two crucial components of this model required to reproduce the observed sensitivity are spatial and temporal averaging. These can be interpreted as corresponding, respectively, to the spatial spread of signaling effects downstream from receptor binding, and to the finite time over which these signaling effects decay. For spatial averaging, the model predicts that an effective range of roughly one-third of the extent of the growth cone is optimal for detecting small gradient signals. For temporal decay, a timescale of about 3 minutes is required for the model to reproduce the experimentally observed sensitivity.
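
The two ingredients highlighted above can be sketched directly: spatial averaging of receptor signals over roughly one-third of the growth cone, and a leaky temporal average with a time constant of a few minutes. The Python sketch below is an illustration under assumed numbers and function shapes, not the paper's model.

```python
import numpy as np

width_um = 10.0                      # growth cone width (assumed)
n_sites = 50                         # receptor sites across the growth cone
x = np.linspace(-width_um / 2, width_um / 2, n_sites)

def bound_fraction(conc, kd=1.0):    # simple saturating receptor binding
    return conc / (conc + kd)

def spatial_average(signal, sites, sigma_um):
    # weight each site's neighbours with a Gaussian of width sigma_um
    w = np.exp(-0.5 * ((sites[:, None] - sites[None, :]) / sigma_um) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ signal

dt_s, tau_s = 1.0, 180.0             # 1 s steps, ~3 min temporal decay
sigma = width_um / 3.0               # spatial range ~1/3 of the growth cone
filtered = np.zeros(n_sites)
rng = np.random.default_rng(0)

for step in range(600):              # 10 minutes in a shallow gradient
    conc = 1.0 + 0.02 * x            # ~2 %/um concentration gradient
    raw = bound_fraction(conc) + 0.01 * rng.normal(size=n_sites)
    smoothed = spatial_average(raw, x, sigma)
    filtered += dt_s * (smoothed - filtered) / tau_s   # leaky temporal average

turn_signal = filtered[-1] - filtered[0]   # up-gradient minus down-gradient side
print(f"steering signal after averaging: {turn_signal:.4f}")
```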

Relevance: 60.00%

Abstract:

We demonstrate an end-to-end computational model of the HEG shock tunnel as a way to extract more precise test flow conditions and to predict new operating conditions. For a selection of established operating conditions, the L1d program was used to simulate the one-dimensional gas-dynamic processes within the whole of the facility. The program reproduces the compression tube performance reliably and, with the inclusion of a loss factor near the upstream end of the compression tube, it provides a good estimate of the equilibrium pressure in the shock-reflection region over the set of six standard operating conditions for HEG.

Relevance: 60.00%

Abstract:

Neural networks can be regarded as statistical models, and can be analysed in a Bayesian framework. Generalisation is measured by the performance on independent test data drawn from the same distribution as the training data. Such performance can be quantified by the posterior average of the information divergence between the true and the model distributions. Averaging over the Bayesian posterior guarantees internal coherence; using information divergence guarantees invariance with respect to representation. The theory generalises the least mean squares theory for linear Gaussian models to general problems of statistical estimation. The main results are: (1) the ideal optimal estimate is always given by the average over the posterior; (2) the optimal estimate within a computational model is given by the projection of the ideal estimate onto the model. This incidentally shows that some currently popular methods dealing with hyperpriors are in general unnecessary and misleading. The extension of information divergence to positive normalisable measures reveals a remarkable relation between the dlt dual affine geometry of statistical manifolds and the geometry of the dual pair of Banach spaces Ld and Ldd. It therefore offers conceptual simplification to information geometry. The general conclusion on the issue of evaluating neural network learning rules and other statistical inference methods is that such evaluations are only meaningful under three assumptions: the prior P(p), describing the environment of all the problems; the divergence Dd, specifying the requirement of the task; and the model Q, specifying the available computing resources.
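
For the Kullback-Leibler special case of the divergence, results (1) and (2) can be stated compactly as follows; the notation is assumed here for illustration and may differ from the thesis.

```latex
% Kullback--Leibler special case; notation assumed for illustration.
% Generalisation measure: posterior average of the divergence to the truth.
\[
  R(q) \;=\; \mathbb{E}_{p \sim P(p \mid \mathcal{D})}\!\left[ \mathrm{KL}\!\left(p \,\middle\|\, q\right) \right]
\]
% (1) Over all distributions, R is minimised by the posterior average:
\[
  q^{*}(x) \;=\; \mathbb{E}_{p \sim P(p \mid \mathcal{D})}\!\left[ p(x) \right]
\]
% (2) Within a computational model Q, the optimum is the projection of q^{*}:
\[
  \hat{q} \;=\; \operatorname*{arg\,min}_{q \in Q} \; \mathrm{KL}\!\left(q^{*} \,\middle\|\, q\right)
\]
```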

Relevance: 60.00%

Abstract:

The human visual system combines contrast information from the two eyes to produce a single cyclopean representation of the external world. This task requires both summation of congruent images and inhibition of incongruent images across the eyes. These processes were explored psychophysically using narrowband sinusoidal grating stimuli. Initial experiments focussed on binocular interactions within a single detecting mechanism, using contrast discrimination and contrast matching tasks. Consistent with previous findings, dichoptic presentation produced greater masking than monocular or binocular presentation. Four computational models were compared, two of which performed well on all data sets. Suppression between mechanisms was then investigated, using orthogonal and oblique stimuli. Two distinct suppressive pathways were identified, corresponding to monocular and dichoptic presentation. Both pathways impact prior to binocular summation of signals, and differ in their strengths, tuning, and response to adaptation, consistent with recent single-cell findings in cat. Strikingly, the magnitude of dichoptic masking was found to be spatiotemporally scale invariant, whereas monocular masking was dependent on stimulus speed. Interocular suppression was further explored using a novel manipulation, whereby stimuli were presented in dichoptic antiphase. Consistent with the predictions of a computational model, this produced weaker masking than in-phase presentation. This allowed the bandwidths of suppression to be measured without the complicating factor of additive combination of mask and test. Finally, contrast vision in strabismic amblyopia was investigated. Although amblyopes are generally believed to have impaired binocular vision, binocular summation was shown to be intact when stimuli were normalized for interocular sensitivity differences. An alternative account of amblyopia was developed, in which signals in the affected eye are subject to attenuation and additive noise prior to binocular combination.
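
One widely used way to capture "interocular suppression before binocular summation, then a second gain-control stage" is a two-stage gain-control equation of the kind sketched below in Python. The functional form is a generic example and the parameter values are placeholders, not the fits reported in the thesis.

```python
def two_stage_response(c_left, c_right,
                       m=1.3, s=1.0,          # stage 1: exponent, saturation
                       p=8.0, q=6.5, z=0.01): # stage 2: output nonlinearity
    # Stage 1: each eye's contrast is divided by the summed contrast in both
    # eyes (interocular suppression) before binocular summation.
    stage1_l = c_left ** m / (s + c_left + c_right)
    stage1_r = c_right ** m / (s + c_left + c_right)
    binocular = stage1_l + stage1_r            # binocular summation
    # Stage 2: binocular gain control shaping the contrast-response function.
    return binocular ** p / (z + binocular ** q)

# Same nominal contrast delivered three ways (values in percent contrast):
c = 10.0
print("monocular :", two_stage_response(c, 0.0))
print("binocular :", two_stage_response(c, c))
print("dichoptic :", two_stage_response(1.0, c))   # weak test, strong mask
```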