19 results for Case Base Reasoning
at Universidad Politécnica de Madrid
Abstract:
Case-based reasoning (CBR) is a unique tool for the evaluation of possible failure of firms (EOPFOF) for its ease of interpretation and implementation. Ensemble computing, a variation of group decision-making in society, provides a potential means of improving the predictive performance of CBR-based EOPFOF. This research aims to integrate bagging and proportion case-basing with CBR to generate a method of proportion bagging CBR for EOPFOF. Diverse multiple case bases are first produced by multiple case-basing, in which a volume parameter is introduced to control the size of each case base. Then, the classic case retrieval algorithm is implemented to generate diverse member CBR predictors. Majority voting, the most frequently used mechanism in ensemble computing, is finally used to aggregate the outputs of the member CBR predictors in order to produce the final prediction of the CBR ensemble. In an empirical experiment, we statistically validated the results of the CBR ensemble from multiple case bases by comparing them with those of multivariate discriminant analysis, logistic regression, classic CBR, the best member CBR predictor and a bagging CBR ensemble. The results on Chinese EOPFOF data from up to three years prior to failure indicate that the new CBR ensemble, which significantly improved CBR's predictive ability, outperformed all the comparative methods.
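The ensemble scheme the abstract describes (bootstrap case bases whose size is controlled by a volume parameter, nearest-neighbour case retrieval, majority voting) can be sketched as follows. The function and parameter names are invented for illustration, and the paper's actual case-basing and retrieval details may differ:

```python
import numpy as np

def bagged_cbr_predict(cases, labels, query, n_members=15, volume=0.8, seed=0):
    """Sketch of a bagging CBR ensemble: each member predictor retrieves the
    nearest case from a bootstrap case base whose size is set by `volume`;
    majority voting aggregates the member predictions."""
    rng = np.random.default_rng(seed)
    n = len(cases)
    size = max(1, int(volume * n))        # volume parameter controls case-base size
    votes = []
    for _ in range(n_members):
        idx = rng.choice(n, size=size, replace=True)   # bootstrap a case base
        base, base_labels = cases[idx], labels[idx]
        # classic case retrieval: nearest neighbour by Euclidean distance
        nearest = np.argmin(np.linalg.norm(base - query, axis=1))
        votes.append(base_labels[nearest])
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]      # majority vote
```

With two well-separated clusters of cases, a query near one cluster is assigned that cluster's label by the ensemble.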
Abstract:
In the presence of a river flood, operators in charge of control must take decisions based on imperfect and incomplete sources of information (e.g., data provided by a limited number of sensors) and partial knowledge about the structure and behavior of the river basin. This is a case of reasoning about a complex dynamic system with uncertainty and real-time constraints, where Bayesian networks can provide effective support. In this paper we describe a solution based on spatio-temporal Bayesian networks to be used in the context of emergencies produced by river floods. We first describe a set of types of causal relations for hydrologic processes, with spatial and temporal references, to represent the dynamics of the river basin. Then we describe how this was included in a computer system called SAIDA to provide assistance to operators in charge of control in a river basin. Finally, the paper shows experimental results on the performance of the model.
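The abstract does not detail SAIDA's network, but the kind of Bayesian reasoning it describes can be illustrated with a toy causal chain (rain causes runoff, runoff causes flooding), where a query probability is obtained by summing out the intermediate variable. All variable names and CPT numbers below are invented:

```python
# Illustrative conditional probability tables (not from the SAIDA system).
p_runoff_given_rain = {True: 0.8, False: 0.1}    # P(runoff = 1 | rain)
p_flood_given_runoff = {True: 0.7, False: 0.05}  # P(flood = 1 | runoff)

def p_flood(rain: bool) -> float:
    """P(flood | rain), computed by summing out the intermediate
    runoff variable of the rain -> runoff -> flood chain."""
    return sum(
        (p_runoff_given_rain[rain] if runoff else 1 - p_runoff_given_rain[rain])
        * p_flood_given_runoff[runoff]
        for runoff in (True, False)
    )
```

For these numbers, observing rain raises the flood probability from 0.115 to 0.57.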
Abstract:
Enabling Subject Matter Experts (SMEs) to formulate knowledge without the intervention of Knowledge Engineers (KEs) requires providing SMEs with methods and tools that abstract the underlying knowledge representation and allow them to focus on modeling activities. Bridging the gap between SME-authored models and their representation is challenging, especially in the case of complex knowledge types like processes, where aspects like frame management, data, and control flow need to be addressed. In this paper, we describe how SME-authored process models can be given an operational semantics and grounded in a knowledge representation language like F-logic in order to support process-related reasoning. The main results of this work include a formalism for process representation and a mechanism for automatically translating process diagrams into executable code following this formalism. Of all the process models authored by SMEs during evaluation, 82% were well-formed, and all of these executed correctly. Additionally, the two optimizations applied to the code generation mechanism produced performance improvements at reasoning time of 25% and 30%, respectively, with respect to the base case.
Abstract:
Embedded context management in resource-constrained devices (e.g. mobile phones, autonomous sensors or smart objects) imposes special requirements in terms of lightness for data modelling and reasoning. In this paper, we explore the state of the art in data representation and reasoning tools for embedded mobile reasoning and propose a light inference system (LIS) that aims to simplify embedded inference processes by offering a set of functionalities to avoid redundancy in context management operations. The system is part of a service-oriented mobile software framework, conceived to facilitate the creation of context-aware applications; it decouples sensor data acquisition and context processing from the application logic. LIS, composed of several modules, encapsulates existing lightweight tools for ontology data management and rule-based reasoning, and it is ready to run on Java-enabled handheld devices. Data management and reasoning processes are designed to handle a general ontology that enables communication among framework components. Both the applications running on top of the framework and the framework components themselves can configure the rule and query sets in order to retrieve the information they need from LIS. In order to test LIS features in a real application scenario, an 'Activity Monitor' has been designed and implemented: a personal health-persuasive application that provides feedback on the user's lifestyle, combining data from physical and virtual sensors. In this use case, LIS is used to evaluate the user's activity level in a timely manner, to decide on the convenience of triggering notifications and to determine the best interface or channel to deliver these context-aware alerts.
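In the spirit of LIS's rule-based reasoning, a minimal forward-chaining engine can derive context decisions such as whether to trigger a notification. The facts and rules below are invented for illustration and do not come from the paper:

```python
def forward_chain(facts, rules):
    """Apply rules (conditions -> conclusion) until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical context rules for an activity-monitor scenario.
rules = [
    (("steps_low", "afternoon"), "activity_low"),
    (("activity_low", "phone_in_use"), "send_notification"),
]
```

Given the facts `steps_low`, `afternoon` and `phone_in_use`, the engine chains both rules and concludes `send_notification`.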
Abstract:
The calculus of binary relations was created by De Morgan in 1860 and later developed to a great extent by Peirce and Schröder. Tarski, Givant, Freyd and Scedrov proved that relation algebras are capable of formalizing first-order logic, higher-order logic and set theory. Building on the mathematical results of Tarski and Freyd, this thesis develops denotational and operational semantics for constraint logic programming using relation algebra as its foundation. The central idea is the notion of executable semantics: semantics whose defining feature is that execution is possible using the standard reasoning of the semantic universe, in this case equational reasoning. This work shows that distributive relation algebras with a fixed-point operator capture all the standard theory and metatheory of constraint logic programming, including the trees used in proof search. Most program optimization, partial evaluation and abstract interpretation techniques can be carried out within the semantics presented here. Proving the correctness of the implementation becomes extremely simple. In the first part of the thesis, a constraint logic program is translated into a set of relational terms. The standard set-theoretic interpretation of these relations coincides with the standard semantics of CLP. Queries against the translated program are carried out by rewriting relations. To conclude the first part, the correctness and operational equivalence of this new semantics are proved, and a unification algorithm based on relation rewriting is defined.
The second part of the thesis develops a semantics for constraint logic programming using Freyd's theory of allegories, the categorical version of the algebra of relations. To this end, two new notions are defined, Regular Lawvere Categories and _-allegories, in which a logic program can be interpreted. The fundamental advantage of the categorical approach is the definition of a categorical machine that improves the rewriting system presented in the first part. Thanks to the use of tabular relations, the machine models efficient execution without leaving a strictly formal framework. Using diagram rewriting, an algorithm for computing pullbacks in Regular Lawvere Categories is defined. The domains of the tabulations carry information about memory usage and free variables, while shared state is captured by the diagrams. The specification of the machine induces the formal derivation of an efficient instruction set. The categorical framework brings other important advantages, such as the possibility of incorporating algebraic data types, functions and other extensions into Prolog, while preserving the fully declarative character of our semantics. ABSTRACT The calculus of binary relations was introduced by De Morgan in 1860, to be greatly developed by Peirce and Schröder, as well as many others in the twentieth century. Using different formulations of relational structures, Tarski, Givant, Freyd, and Scedrov have shown how relation algebras can provide a variable-free way of formalizing first-order logic, higher-order logic and set theory, among other formal systems. Building on those mathematical results, we develop denotational and operational semantics for Constraint Logic Programming using relation algebra. The idea of executable semantics plays a fundamental role in this work, both as a philosophical and technical foundation.
We call a semantics executable when program execution can be carried out using the regular theory and tools that define the semantic universe. Throughout this work, the use of pure algebraic reasoning is the basis of denotational and operational results, eliminating all the classical non-equational meta-theory associated with traditional semantics for Logic Programming. All reasoning, including execution, is performed algebraically, to the point that we could state that the denotational semantics of a CLP program is directly executable. Techniques like optimization, partial evaluation and abstract interpretation find a natural place in our algebraic models. Other properties, like correctness of the implementation or program transformation, are easy to check, as they are carried out using instances of the general equational theory. In the first part of the work, we translate Constraint Logic Programs to binary relations in a modified version of the distributive relation algebras used by Tarski. Execution is carried out by a rewriting system. We prove adequacy and operational equivalence of the semantics. In the second part of the work, the relation algebraic approach is improved by using allegory theory, a categorical version of the algebra of relations developed by Freyd and Scedrov. The use of allegories lifts the semantics to typed relations, which capture the number of logical variables used by a predicate or program state in a declarative way. A logic program is interpreted in a _-allegory, which is in turn generated from a new notion of Regular Lawvere Category. As in the untyped case, program translation coincides with program interpretation. Thus, we develop a categorical machine directly from the semantics. The machine is based on relation composition, with a pullback calculation algorithm at its core. The algorithm is defined with the help of a notion of diagram rewriting.
In this operational interpretation, types represent information about memory allocation and the execution mechanism is more efficient, thanks to the faithful representation of shared state by categorical projections. We finish the work by illustrating how the categorical semantics allows the incorporation into Prolog of constructs typical of Functional Programming, like abstract data types, and strict and lazy functions.
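The abstract does not spell out the translation itself, but the classical relational reading of a Horn clause, on which such semantics are typically built, renders a conjunction with a shared variable as relation composition; a sketch:

```latex
% Classical relational reading of a clause with a shared variable:
%   p(X,Z) :- q(X,Y), r(Y,Z).
\[
  P \;\supseteq\; Q \mathbin{;} R,
  \qquad\text{where}\qquad
  (x,z) \in Q \mathbin{;} R
  \iff
  \exists y .\; (x,y) \in Q \,\wedge\, (y,z) \in R .
\]
```

The existential quantifier over the shared variable Y is absorbed into the composition operator, which is what makes a variable-free, purely equational treatment possible.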
Abstract:
We have no knowledge of any pre-Roman road network that could have served as the basis of a possible territorial mesh of Spain. However, a pre-Roman society without roads, however fragmented and isolated, is improbable, and even more so in the Iron Age. In pre-Roman times there were therefore countless roads, many of which have since disappeared while others have survived, almost always with improved routes. The pre-Roman peoples took advantage of natural communication routes (rivers, fords, valleys, natural passes, plains, etc.) to lay their roads. Originally these followed no particular pattern: paths simply arose from the movement of people, livestock and goods from one place to another. The pre-Roman road network was thus chaotic and anarchic: every road had numerous branches and variants according to need. Excessive gradients, variable widths, and so on: they were spontaneous routes that emerged without any apparent planning. Routes were generally short, although recent research is showing that some of the most important livestock droveways, such as the Galiana, covering long distances, were of pre-Roman origin. In the Iberian Peninsula, and more specifically on the Meseta, the territory was fragmented among diverse peoples and tribes, grouped by ethnic and cultural criteria, whose contacts with neighbouring peoples favoured the predominance of short-distance roads. Only the need to move flocks (mainly goats and sheep) from the highlands in summer to the plains in winter would motivate longer journeys, in which some droveways would play a more important role. With the arrival of the Romans, a dense road network was established in Hispania, whose construction continued throughout the Roman domination, with many roads repaired on several occasions.
In Roman times the road network was varied. It comprised the "calzadas" (paved roads), which connected important points and were heavily travelled; the Roman administration therefore kept them in good repair to secure commercial exchange between different regions, tax collection, and so on. Alongside the calzadas, which we may liken to today's first- and second-order roads, the "dirt roads (viae terrenae)" made up the countless local and district routes. Some of these were laid out in Roman times, while many others followed pre-Roman paths; they were not built to find the shortest route between two points, nor the most comfortable, and their surfaces were structurally less substantial than those of the calzadas. Nor were they built for a particular kind of transport, so some were wide enough for carts while others allowed passage only on foot, on horseback or by donkey. As noted, they were usually dirt roads finished with gravel, mostly of short or medium length. Within the territorial mesh of Spain, the calzadas made up the so-called "viae publicae", the main network and structural backbone of Hispania. The dirt roads made up the so-called "actus", roads of regional character that formed most of the network. Many of the "viae publicae" and "actus" had their origin in the "viae militares", the first to be built, often on pre-Roman paths, by the Romans to carry out the conquest of Hispania; with the Roman Peace they later served other uses. Among these "viae militares", those used in the conquest of Celtiberia, culminating in the fall of Numantia, were especially important.
Among these, the Roman road of the Alhama river, the subject of this Thesis, was of fundamental importance, facilitating the movement of Roman armies from Graccurris, the first Roman city founded on the middle Ebro, to Numantia. From the Augustan period onwards, the Roman road of the Alhama river became part of the so-called "actus", forming part of the territorial mesh of the Iberian Peninsula as a communication route between the Meseta and the middle Ebro. ABSTRACT We have no knowledge of any pre-Roman road network that could have served as the basis of a possible territorial mesh of Spain. Nevertheless, a pre-Roman society without roads, however fragmented and isolated, is improbable, all the more so in the Iron Age. In pre-Roman times there were therefore countless roads, many of which have since disappeared while others have survived, almost always with improved routes. The pre-Roman peoples took advantage of natural communication routes (rivers, fords, valleys, natural passes, plains, etc.) to lay their roads. Originally they followed no particular pattern: paths arose from the movement of people, livestock and goods from one place to another. The pre-Roman road network was thus chaotic and anarchic: every road had numerous branches and variants according to need. Excessive gradients, variable widths, and so on: they were spontaneous routes that emerged without apparent planning. Routes were generally short, although recent research is showing that some of the most important livestock droveways, such as the Galiana, covering long distances, were of pre-Roman origin. In the Iberian Peninsula, and more specifically on the Meseta, the territory was fragmented among diverse peoples and tribes, grouped by ethnic and cultural criteria, whose contacts with neighbouring peoples favoured the predominance of short-distance roads.
Only the need to move flocks (mainly goats and sheep) from the highlands in summer to the plains in winter would motivate longer journeys, in which some droveways would play a more important role. With the arrival of the Romans, a dense road network was established in Hispania, whose construction continued throughout the Roman domination, with many roads repaired on several occasions. In Roman times the road network was varied. It comprised the "calzadas" (paved roads), which connected important points and were heavily travelled; the Roman administration therefore kept them in good repair to secure commercial exchange between different regions, tax collection, and so on. Alongside the calzadas, which we may liken to today's first- and second-order roads, the "dirt roads (viae terrenae)" made up the countless local and district routes. Some were laid out in Roman times, while many others followed pre-Roman paths; they were not built to find the shortest route between two points, nor the most comfortable, and their surfaces were structurally less substantial than those of the calzadas. Nor were they built for a particular kind of transport, so some were wide enough for carts while others allowed passage only on foot, on horseback or by donkey. As noted, they were usually dirt roads finished with gravel, mostly of short or medium length. Within the territorial mesh of Spain, the calzadas made up the so-called "viae publicae", the main network and structural backbone of Hispania. The dirt roads made up the so-called "actus", roads of regional character that formed most of the network.
Many of the "viae publicae" and "actus" had their origin in the "viae militares", the first to be built, often on pre-Roman paths, by the Romans to carry out the conquest of Hispania; with the Roman Peace they later served other uses. Among these "viae militares", those used in the conquest of Celtiberia, culminating in the fall of Numantia, were especially important. Among them, the Roman road of the Alhama river, the subject of this Thesis, was of fundamental importance, facilitating the movement of Roman armies from Graccurris, the first Roman city founded on the middle Ebro, to Numantia. From the Augustan period onwards, the Roman road of the Alhama river became part of the so-called "actus", forming part of the territorial mesh of the Iberian Peninsula as a communication route between the Meseta and the middle Ebro.
Abstract:
The World Health Organization actively stresses the importance of the health, nutrition and well-being of the mother to foster child development. This issue is critical in the rural areas of developing countries, where the health status of children is hardly monitored since the population suffers from a lack of access to health care. The aim of this research is to design, implement and deploy an e-health information and communication system to support health care in 26 rural communities of Cusmapa, Nicaragua. The final solution consists of a hybrid WiMAX/WiFi architecture that provides good-quality communications through VoIP, taking advantage of low-cost WiFi mobile devices. A WiMAX base station was installed in the health center to provide a radio link with the rural health post "El Carrizo", sited 7.4 km away in line of sight. This service makes personal broadband voice and data communication with the health center possible through WiFi-enabled devices such as laptops and cellular phones, at no communication cost. A free software PBX was installed at the "San José de Cusmapa" health care site to enable communications for physicians, nurses and a technician through mobile telephones with the IEEE 802.11b/g protocol and SIP, provided by the project. Additionally, the rural health post staff (midwives, brigade) received two mobile phones with these same features. In a complementary way, the deployed health information system is ready to analyze the distribution of the maternal-child population at risk and the distribution of diseases on a geographical baseline. The system works with four information layers: fertile women, children, people with disabilities and diseases. Thus, authorized staff can obtain reports about prenatal monitoring tasks, the status of the communities, malnutrition, and immunization control.
Health care staff need to keep the data updated so that the sources of problems can be detected in time and measures can be implemented to permanently alleviate them and improve the population's health status. Ongoing research focuses on a mobile platform that collects the height and weight of children in the remote communities and automatically updates the information system. This research is funded by the Millennium Rural Communities program of the Technical University of Madrid.
Abstract:
Neuronal morphology is a key feature in the study of brain circuits, as it is highly related to information processing and functional identification. Neuronal morphology affects the process of integration of inputs from other neurons and determines which neurons receive a neuron's output. Different parts of a neuron can operate semi-independently according to the spatial location of the synaptic connections. As a result, there is considerable interest in the analysis of the microanatomy of nervous cells, since it constitutes an excellent tool for better understanding cortical function. However, the morphologies, molecular features and electrophysiological properties of neuronal cells are extremely variable. Except for some special cases, this variability makes it hard to find a set of features that unambiguously define a neuronal type. In addition, there are distinct types of neurons in particular regions of the brain. This morphological variability makes the analysis and modeling of neuronal morphology a challenge. Uncertainty is a key feature of many complex real-world problems. Probability theory provides a framework for modeling and reasoning with uncertainty. Probabilistic graphical models combine statistical theory and graph theory to provide a tool for managing domains with uncertainty. In particular, we focus on Bayesian networks, the most commonly used probabilistic graphical model. In this dissertation, we design new methods for learning Bayesian networks and apply them to the problem of modeling and analyzing morphological data from neurons. The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as discrete or continuous data. The continuous data can be linear (e.g., the length or the width of a dendrite) or directional (e.g., the direction of the axon).
These data may follow complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem using hybrid Bayesian networks with discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc. In this dissertation, we propose a method for modeling and simulating basal dendritic trees from pyramidal neurons using Bayesian networks to capture the interactions between the variables in the problem domain. A complete set of variables is measured from the dendrites, and a learning algorithm is applied to find the structure and estimate the parameters of the probability distributions included in the Bayesian networks. Then, a simulation algorithm is used to build the virtual dendrites by sampling values from the Bayesian networks, and a thorough evaluation is performed to show the model's ability to generate realistic dendrites. In this first approach, the variables are discretized so that discrete Bayesian networks can be learned and simulated. Then, we address the problem of learning hybrid Bayesian networks with different kinds of variables. Mixtures of polynomials have been proposed as a way of representing probability densities in hybrid Bayesian networks. We present a method for learning mixture-of-polynomials approximations of one-dimensional, multidimensional and conditional probability densities from data. The method is based on basis spline interpolation, where a density is approximated as a linear combination of basis splines. The proposed algorithms are evaluated using artificial datasets. We also use the proposed methods as a non-parametric density estimation technique in Bayesian network classifiers. Next, we address the problem of including directional data in Bayesian networks. These data have some special properties that rule out the use of classical statistics.
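As a rough illustration of the spline idea (a density approximated as a linear combination of basis splines), one can interpolate histogram density estimates with a cubic spline and renormalise. This is only a sketch of the concept, not the dissertation's learning algorithm:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import make_interp_spline

def spline_density(samples, n_bins=20, degree=3):
    """Approximate a 1-D density by interpolating histogram density
    estimates with a degree-`degree` B-spline, then renormalising."""
    heights, edges = np.histogram(samples, bins=n_bins, density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    spline = make_interp_spline(centers, heights, k=degree)
    xs = np.linspace(centers[0], centers[-1], 500)
    ys = np.clip(spline(xs), 0.0, None)   # densities cannot be negative
    ys /= trapezoid(ys, xs)               # renormalise to integrate to 1
    return xs, ys
```

On a standard-normal sample, the resulting curve is non-negative, integrates to 1 over its support, and peaks near 0.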
Therefore, different distributions and statistics, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, should be used to deal with this kind of information. In particular, we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We consider the simple scenario, where only directional predictive variables are used, and the hybrid case, where discrete, Gaussian and directional distributions are mixed. The classifier decision functions and their decision surfaces are studied at length. Artificial examples are used to illustrate the behavior of the classifiers. The proposed classifiers are empirically evaluated over real datasets. We also study the problem of interneuron classification. An extensive group of experts is asked to classify a set of neurons according to their most prominent anatomical features. A web application is developed to retrieve the experts’ classifications. We compute agreement measures to analyze the consensus between the experts when classifying the neurons. Using Bayesian networks and clustering algorithms on the resulting data, we investigate the suitability of the anatomical terms and neuron types commonly used in the literature. Additionally, we apply supervised learning approaches to automatically classify interneurons using the values of their morphological measurements. Then, a methodology for building a model which captures the opinions of all the experts is presented. First, one Bayesian network is learned for each expert, and we propose an algorithm for clustering Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network which represents the opinions of each group of experts is induced. Finally, a consensus Bayesian multinet which models the opinions of the whole group of experts is built. 
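A minimal sketch of a naive Bayes decision rule with a single von Mises-distributed directional predictor, using a standard one-formula approximation to the concentration MLE; the class names and data below are illustrative, and the dissertation's classifiers handle richer hybrid cases:

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of order 0

def fit_von_mises(theta):
    """Estimate the mean direction mu and concentration kappa of a
    univariate von Mises distribution from a sample of angles."""
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mu = np.arctan2(S, C)
    R = np.hypot(C, S)                    # mean resultant length
    kappa = R * (2 - R**2) / (1 - R**2)   # common approximation to the MLE
    return mu, kappa

def von_mises_logpdf(theta, mu, kappa):
    return kappa * np.cos(theta - mu) - np.log(2 * np.pi * i0(kappa))

def classify(theta, class_params, priors):
    """Naive Bayes decision: maximise log prior + class-conditional log pdf."""
    scores = {c: np.log(priors[c]) + von_mises_logpdf(theta, *p)
              for c, p in class_params.items()}
    return max(scores, key=scores.get)
```

Trained on two angular clusters concentrated around 0 and around pi, the classifier assigns a query angle to the nearer cluster.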
A thorough analysis of the consensus model identifies different behaviors between the experts when classifying the interneurons in the experiment. A set of characterizing morphological traits for the neuronal types can be defined by performing inference in the Bayesian multinet. These findings are used to validate the model and to gain some insights into neuron morphology. Finally, we study a classification problem where the true class label of the training instances is not known. Instead, a set of class labels is available for each instance. This is inspired by the neuron classification problem, where a group of experts is asked to individually provide a class label for each instance. We propose a novel approach for learning Bayesian networks using count vectors which represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems. Abstract Neuronal morphology is a key feature in the study of brain circuits, as it is closely related to information processing and functional roles. Neuronal morphology affects the integration of incoming signals and determines which neurons receive a neuron's outputs. Different parts of the neuron can operate semi-independently according to the spatial location of the synaptic connections. There is therefore considerable interest in the analysis of the microanatomy of nervous cells, as it constitutes an excellent tool for better understanding how the cerebral cortex works. However, the morphological, molecular and electrophysiological properties of neuronal cells are extremely variable. Except in some special cases, this morphological variability makes it difficult to define a set of features that clearly distinguish a neuronal type.
Moreover, different types of neurons coexist in particular regions of the brain. This neuronal variability makes the analysis and modeling of neuronal morphology an important scientific challenge. Uncertainty is a key property of many real-world problems. Probability theory provides a framework for modeling and reasoning under uncertainty. Probabilistic graphical models combine statistical theory and graph theory to provide a tool for working under uncertainty. In particular, we focus on Bayesian networks, the most widely used probabilistic graphical model. In this thesis we have designed new methods for learning Bayesian networks, inspired by and applied to the problem of modeling and analyzing morphological data from neurons. The morphology of a neuron can be quantified using a number of measurements, e.g., the length of the dendrites and the axon, the number of bifurcations, the direction of the dendrites and the axon, etc. These measurements can be modeled as continuous or discrete data. In turn, continuous data can be linear (e.g., the length or width of a dendrite) or directional (e.g., the direction of the axon). These data may follow very complex probability distributions and may not fit any known parametric distribution. Modeling this kind of problem with hybrid Bayesian networks that include discrete, linear and directional variables poses a number of challenges regarding learning from data, inference, etc. In this thesis we propose a method for modeling and simulating basal dendritic trees of pyramidal neurons using Bayesian networks to capture the interactions between the variables of the problem.
To this end, a large set of dendritic variables is measured and a learning algorithm is applied to learn the structure and estimate the parameters of the probability distributions that make up the Bayesian networks. A simulation algorithm is then used to build virtual dendrites by sampling values from the Bayesian networks. Finally, a thorough evaluation is carried out to verify the model's ability to generate realistic dendrites. In this first approach, the variables were discretized so that the Bayesian networks could be learned and sampled. Next, we address the problem of learning Bayesian networks with different types of variables. Mixtures of polynomials are a method for representing probability densities in hybrid Bayesian networks. We present a method for learning approximations of one-dimensional, multidimensional and conditional densities from data using mixtures of polynomials. The method is based on spline interpolation, which approximates a density as a linear combination of splines. The proposed algorithms are evaluated using artificial datasets. In addition, mixtures of polynomials are used as a nonparametric density estimation technique for Bayesian network classifiers. We then study the problem of including directional information in Bayesian networks. This kind of data has a number of special characteristics that prevent the use of classical statistical techniques. Therefore, specific statistics and probability distributions, such as the univariate von Mises and the multivariate von Mises–Fisher distributions, must be used to handle this kind of information.
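The spline-interpolation idea behind mixtures of polynomials can be sketched in a few lines: approximate a density by a piecewise polynomial that interpolates its values at a set of knots, then renormalize so it integrates to one. The following is a minimal degree-1 (piecewise-linear) illustration under assumed names and a Gaussian example; it is not the thesis implementation, which learns higher-degree approximations from data:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integral of sampled values y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

def mop_approximation(pdf, a, b, n_knots=20):
    """Approximate `pdf` on [a, b] by a degree-1 piecewise polynomial
    (a linear spline) interpolating its values at equally spaced knots,
    renormalized so the approximation integrates to one over [a, b]."""
    knots = np.linspace(a, b, n_knots)
    grid = np.linspace(a, b, 2001)
    approx = np.interp(grid, knots, pdf(knots))    # piecewise-linear interpolant
    return grid, approx / trapezoid(approx, grid)  # renormalize to a density

# Example: approximate a standard Gaussian density on [-4, 4].
gauss = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
grid, approx = mop_approximation(gauss, -4.0, 4.0)
max_err = float(np.max(np.abs(approx - gauss(grid))))  # small for 20 knots
```

With only 20 knots the pointwise error stays below about 0.01 here; higher-degree polynomial pieces sharpen the fit further at the same number of knots.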
In particular, in this thesis we extend the naive Bayes classifier to the case where the conditional probability distributions of the predictive variables given the class follow either of these distributions. We study the base case, where only directional variables are used, and the hybrid case, where discrete, linear and directional variables are mixed. The classifiers are also studied from a theoretical point of view, deriving their decision functions and the associated decision surfaces. The behavior of the classifiers is illustrated using artificial datasets. In addition, the classifiers are empirically evaluated using real datasets. We also study the problem of interneuron classification. We developed a web application that allows a group of experts to classify a set of neurons according to their most prominent morphological features. Agreement measures are used to analyze the consensus between the experts when classifying the neurons. The suitability of the anatomical terms and neuronal types commonly used in the literature is investigated through the analysis of Bayesian networks and the application of clustering algorithms. In addition, supervised learning techniques are applied in order to automatically classify interneurons from their morphological measurements. Next, a methodology for building a model that captures the opinions of all the experts is presented. First, a Bayesian network is learned for each expert, and we propose an algorithm for clustering the Bayesian networks corresponding to experts with similar behaviors. Then, a Bayesian network that models the opinion of each group of experts is induced. Finally, a Bayesian multinet that models the opinions of the whole group of experts is built.
The analysis of the consensus model identifies different behaviors between the experts when classifying the neurons. It also allows a set of characterizing morphological traits to be extracted for each neuronal type by performing inference with the Bayesian multinet. These findings are used to validate the model and provide relevant insights into neuronal morphology. Finally, we study a classification problem in which the class label of the training instances is uncertain. Instead, a set of labels is available for each instance. This problem is inspired by the neuron classification problem, in which a group of experts individually provides a class label for each instance. We propose a method for learning Bayesian networks using count vectors, which represent the number of experts who selected each class label for each instance. These Bayesian networks are evaluated using artificial datasets from supervised learning problems.
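As a concrete illustration of the directional naive Bayes idea described in this abstract, the sketch below fits a von Mises class-conditional density to a single angular predictor per class and classifies by maximum posterior. It is a minimal sketch, not the thesis implementation: the single-feature layout, sample sizes and use of the Best-Fisher kappa approximation are illustrative assumptions.

```python
import numpy as np

def fit_von_mises(theta):
    """Estimate mean direction mu and concentration kappa for an angular
    sample, using the standard Best-Fisher approximation for kappa."""
    z = np.mean(np.exp(1j * theta))
    mu, r = np.angle(z), np.abs(z)
    if r < 0.53:
        kappa = 2 * r + r**3 + 5 * r**5 / 6
    elif r < 0.85:
        kappa = -0.4 + 1.39 * r + 0.43 / (1 - r)
    else:
        kappa = 1 / (r**3 - 4 * r**2 + 3 * r)
    return mu, kappa

def von_mises_pdf(theta, mu, kappa):
    """von Mises density: exp(kappa*cos(theta-mu)) / (2*pi*I0(kappa))."""
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))

class VonMisesNB:
    """Naive Bayes with one angular predictor (directional base case, sketch)."""
    def fit(self, theta, y):
        self.classes_ = np.unique(y)
        self.params_ = {c: fit_von_mises(theta[y == c]) for c in self.classes_}
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        return self
    def predict(self, theta):
        scores = np.column_stack([
            self.priors_[c] * von_mises_pdf(theta, *self.params_[c])
            for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]

rng = np.random.default_rng(0)
# Two synthetic classes of angles concentrated around 0 and pi.
theta = np.concatenate([rng.vonmises(0.0, 4.0, 200),
                        rng.vonmises(np.pi, 4.0, 200)])
y = np.concatenate([np.zeros(200, int), np.ones(200, int)])
clf = VonMisesNB().fit(theta, y)
acc = np.mean(clf.predict(theta) == y)   # well-separated classes: high accuracy
```

Under the naive independence assumption, the hybrid case simply multiplies densities of different families (discrete, Gaussian, von Mises) per predictor in the same posterior score.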
Resumo:
This work studies the potential biomass production from rye and triticale crops in the six agricultural regions of the Community of Madrid (CM) and the possibility of applying it to bioelectricity production in each of them. First, a bibliographical review of the current situation of bioelectricity is carried out. One of the main facts to bear in mind is that the PER 2011-2020 estimates that the total electric power installed from biomass in Spain in 2020 will be 1,350 MW, about two and a half times the capacity existing at the end of 2010. The status of incentives for the use of biomass from energy crops for electricity production is also discussed; this is currently regulated by Real Decreto-ley 9/2013, of 12 July, which adopted urgent measures to guarantee the financial stability of the electrical system. Sustainability criteria in the use of solid biofuels are also considered. The six agricultural regions that make up the Community of Madrid are characterized: Área Metropolitana, Campiña, Guadarrama, Lozoya-Somosierra, Sur-Occidental and Vegas. The characterization consists of two parts: a description of the climatology and a description of the distribution of the area devoted to fallow and arable crops. A bibliographic review is made of the most representative crop growth simulation models (CERES and Cereal YES), of trials carried out with rye and triticale crops for biomass production, and of studies using GIS tools and multicriteria analysis techniques for siting bioelectricity plants and studying biomass logistics. A simulation model of the biomass productivity of rye and triticale in the CM is proposed, resulting from the combination of a grain production model based on climatological data and the average biomass/grain ratio of both crops obtained in a previous experiment. The models obtained correspond to the following equations (where TN = average temperature normalized to 9.9 °C and PN = accumulated precipitation normalized to 496.7 mm):
- Rye biomass production (t d.m./ha) = 2.785 * [1.078 * ln(TN + 2*PN) + 2.3256]
- Triticale biomass production (t d.m./ha) = 2.595 * [2.4495 * ln(TN + 2*PN) + 2.6103]
Subsequently, applying the developed models, the potential biomass production of rye and triticale is quantified in each agricultural region of the CM under each of the scenarios established, which are defined by the fraction of the available rainfed fallow area used (25%, 50%, 75% and 100%). The potential biomass production that could be achieved in the CM using 100% of the rainfed fallow area with rye and triticale crops was estimated at 169,710.72, 149,811.59, 140,217.54, 101,583.01, 26,961.88 and 1,886.40 t per year for the regions of Campiña, Vegas, Sur-Occidental, Área Metropolitana, Lozoya-Somosierra and Guadarrama, respectively. A multicriteria analysis based on compromise programming is performed to identify the agricultural regions best suited to host bioelectricity plants, according to the criteria of biomass potential, electrical infrastructure, road network, protected areas and urban area. The multicriteria analysis yields the following ranking: Campiña, Sur-Occidental, Vegas, Área Metropolitana, Lozoya-Somosierra and Guadarrama. Using GIS techniques, the most suitable location for a 2.2 MW bioelectricity plant is then studied in each agricultural region and for each fraction of the available rainfed fallow area used (25%, 50%, 75% and 100%), provided sufficient potential exists. For rye and triticale biomass on a dry basis, a lower heating value (PCI) of 3,500 kcal/kg is assumed, so at least 17,298.28 t will be needed to satisfy the requirements of each 2.2 MW plant. Finally, the maximum bioelectricity potential of each agricultural region based on rye and triticale as biomass crops is analyzed. Depending on whether 25% or 100% of the rainfed fallow area is used for biomass production, the maximum bioelectricity capacity that could be installed ranges from 5.4 to 21.58 MW in Campiña, from 4.76 to 19.05 MW in Vegas, from 4.46 to 17.83 MW in Sur-Occidental, from 3.23 to 12.92 MW in Área Metropolitana, from 0.86 to 3.43 MW in Lozoya-Somosierra, and from 0.06 to 0.24 MW in Guadarrama. The total capacity that could be installed in the CM from rye and triticale biomass would thus range from 18.76 to 75.06 MW, depending on whether 25% or 100% of the rainfed fallow land is used for these crops.
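The two fitted yield models are straightforward to evaluate under the stated normalizations (TN = mean temperature divided by 9.9 °C, PN = accumulated precipitation divided by 496.7 mm). The sketch below applies the equations exactly as given in the abstract to an illustrative "average year" (TN = PN = 1); the function names and the example inputs are not from the study:

```python
import math

def rye_biomass(tn, pn):
    """Rye biomass (t d.m./ha) from the fitted model in this work."""
    return 2.785 * (1.078 * math.log(tn + 2 * pn) + 2.3256)

def triticale_biomass(tn, pn):
    """Triticale biomass (t d.m./ha) from the fitted model in this work."""
    return 2.595 * (2.4495 * math.log(tn + 2 * pn) + 2.6103)

# An exactly average year: mean temperature 9.9 C and accumulated
# precipitation 496.7 mm give TN = PN = 1.
tn = 9.9 / 9.9
pn = 496.7 / 496.7
rye = rye_biomass(tn, pn)        # ~9.8 t d.m./ha
tri = triticale_biomass(tn, pn)  # ~13.8 t d.m./ha
```

Note that triticale's larger logarithmic coefficient (2.4495 vs 1.078) makes its predicted yield more sensitive to wetter- or warmer-than-average years.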
Case study on mobile applications UX: effect of the usage of a cross-platform development framework
Resumo:
Cross-platform development frameworks for mobile applications promise important advantages in cost cutting and ease of maintenance, making them a very attractive option for organizations interested in developing mobile applications for several platforms. Given that platform conventions are especially important for the User eXperience (UX) of mobile applications, using a framework in which the same code defines the behavior of the app on different platforms could have a negative impact on the UX. The objective of this study is to compare the cross-platform and native approaches in order to determine whether the chosen development approach has any impact on users in terms of UX. To establish a baseline, a study of cross-platform frameworks was first performed to select the most appropriate one from a UX point of view. To achieve the objectives of this work, two development teams developed two versions of the same application: one team used a framework that generates the Android and iOS versions automatically, and the other developed native versions of the same application. The alternative versions for each platform were evaluated with 37 users through a combination of a laboratory usability test and a longitudinal study. The results show that the differences are minimal in the Android version; in iOS, however, even though a reasonably good UX can be obtained with this framework by a UX-conscious design team, a higher level of UX can be achieved by developing directly in native code.
Resumo:
This paper presents the knowledge model of a distributed decision support system that has been designed for the management of a national network in Ukraine. It shows how advanced Artificial Intelligence techniques (multiagent systems and knowledge modelling) have been applied to solve this real-world decision support problem: on the one hand, its distributed nature, implied by the different loci of decision-making at the network nodes, suggested a multiagent solution; on the other, the complexity of problem-solving for local network administration made it useful to apply knowledge modelling techniques in order to structure the different knowledge types and reasoning processes involved. The paper sets out from a description of our particular management problem. Subsequently, our agent model is described, pointing out the local problem-solving and coordination knowledge models. Finally, the dynamics of the approach are illustrated by an example.
Resumo:
The Bologna Declaration and the implementation of the European Higher Education Area are promoting the use of active learning methodologies such as cooperative learning and project-based learning. This study was motivated by the comparison of the results obtained after applying Cooperative Learning (CL) and Project Based Learning (PBL) to a subject of Computer Engineering. The fundamental hypothesis tested was whether the academic success achieved by first-year students was higher when CL was applied than when PBL was applied. A practical case, by means of which the effectiveness of CL and PBL is compared, is presented in this work. The study was carried out at the Universidad Politécnica de Madrid, where these mechanisms were applied to the Operating Systems I subject of the Technical Engineering in Computer Systems degree (OSIS) and to the same subject of the Technical Engineering in Computer Management degree (OSIM). Both subjects have the same syllabus, are taught in the same year and semester, and also share formative objectives. From this study we can conclude that students' academic performance (in terms of the grades awarded) is greater with PBL than with CL. To be more specific, the difference is between 0.5 and 1 point for the individual tests, and between 2.5 and 3 points for the group tests. This study therefore refutes the fundamental hypothesis formulated at the beginning. Some possible interpretations of these results are discussed in the study.
Resumo:
The growing complexity, heterogeneity and dynamism inherent in telecommunications networks, distributed systems and the emerging advanced information and communication services, as well as their increased criticality and strategic importance, call for the adoption of increasingly sophisticated technologies for their management, coordination and integration by network operators, service providers and end-user companies, in order to assure adequate levels of functionality, performance and reliability. The management strategies adopted traditionally follow models that are excessively static and centralised, have a high supervision component and are difficult to scale. The pressing need to make management more flexible and, at the same time, more scalable and robust has in recent years generated considerable interest in developing new paradigms based on hierarchical and distributed models, as a natural evolution from the first weakly distributed hierarchical models that succeeded the centralised paradigm. Thus new models came into being, based on management by delegation, the mobile code paradigm, distributed object technologies and web services. These alternatives have proved enormously robust, flexible and scalable compared with traditional management strategies, but many problems still remain unsolved. Current research lines start from the fact that many problems of robustness, scalability and flexibility have yet to be solved by the hierarchical-distributed paradigm, and advocate migration towards a strongly distributed cooperative paradigm. These lines have their origin in Distributed Artificial Intelligence (DAI) and, more specifically, in the autonomous agent paradigm and Multi-Agent Systems (MAS). They all revolve around a set of objectives that can be summarised as: achieving a greater degree of autonomy in management functionality and a greater self-configuration capability, thereby solving the scalability problems and the need for supervision present in current systems; evolving towards strongly distributed, goal-driven cooperative control techniques; and endowing information models with greater semantic richness. More and more researchers are starting to use agents for the management of networks and distributed systems. However, the boundaries established in their work between mobile agents (which follow the mobile code paradigm) and autonomous agents (which really follow the cooperative paradigm) are fuzzy. Many of these works focus on the use of mobile agents, which, as with the mobile code techniques mentioned above, allows them to inject a greater dynamic component into the traditional concept of management by delegation. This makes management more flexible, distributes the management logic close to the data and distributes control; however, it remains within the distributed hierarchical paradigm. Although a management architecture faithful to the strongly distributed cooperative paradigm has yet to be defined, these lines of research have revealed serious adequacy problems in the information, communication and organisational models of existing management architectures. In this context, this thesis presents an architectural model for the holonic management of distributed systems and services through societies of autonomous agents. The fundamental objectives of the model are to increase the degree of automation associated with management tasks, to increase the scalability of management solutions, to support delegation both by domains and by macro-tasks, and to achieve a high degree of interoperability in open environments. Based on these objectives, a formal semantic information model, grounded in description logic, has been developed; it enables a greater degree of automation in management through the use of rational autonomous agents capable of reasoning, inferring and dynamically integrating knowledge and services conceptualised by means of the CIM model and formalised at the semantic level using description logic. The information model also includes a mapping, at the CIM metamodel level, to the OWL ontology specification language, which represents a significant advance in the area of XML-based representation and exchange of models and meta-information. At the interaction level, the model contributes a formal specification language (ACSL) for conversations between agents, based on speech act theory, together with an operational semantics for this language that eases the task of verifying formal properties associated with the interaction protocol. A holonic, role-oriented organisational model has also been developed, whose main features are aligned with those demanded by emerging distributed services, including the absence of central control, dynamic restructuring capabilities, cooperation capabilities, and facilities for adaptation to different organisational cultures. The model includes a normative submodel suited to the autonomous character of the management holons, based on the deontic and action modal logics.
Resumo:
This paper presents an adaptation of the Cross-Entropy (CE) method to optimize fuzzy logic controllers. The CE method is a recently developed optimization technique based on a general Monte Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling. This work applies the method to optimize the input gains, the location and size of the membership function sets of each variable, and the weight of each rule in the rule base of a fuzzy logic controller (FLC). The control system presented in this work was designed to command the orientation of an unmanned aerial vehicle (UAV) in order to modify its trajectory for collision avoidance. An onboard forward-looking camera was used to sense the environment of the UAV. The information extracted by the image processing algorithm is the only input of the fuzzy control approach used to avoid collision with a predefined object. Real tests with a quadrotor have been carried out to corroborate the improved behavior of the optimized controllers at different stages of the optimization process.
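The core CE loop is simple to sketch: sample candidate parameter vectors from a parametric distribution, keep an elite fraction, and refit the distribution to the elite set. The minimal Python illustration below uses a toy quadratic objective as a stand-in for a controller cost (e.g., accumulated tracking error); it is not the authors' FLC setup, and all names and values are illustrative.

```python
import numpy as np

def cross_entropy_minimize(f, mu, sigma, n_samples=100, elite_frac=0.1,
                           n_iters=50, rng=None):
    """Cross-Entropy method for continuous minimization with a Gaussian
    sampling family. Each iteration: sample candidates, keep the best
    `elite_frac`, and refit the Gaussian mean/std to the elite set."""
    rng = rng or np.random.default_rng(0)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        samples = rng.normal(mu, sigma, size=(n_samples, mu.size))
        scores = np.apply_along_axis(f, 1, samples)
        elite = samples[np.argsort(scores)[:n_elite]]
        # Refit the sampling distribution; the small floor keeps it exploring.
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# Toy objective with minimum at (3, -2), standing in for an FLC cost over
# two tunable parameters (e.g., an input gain and a membership-set center).
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2
best = cross_entropy_minimize(f, mu=[0.0, 0.0], sigma=[5.0, 5.0])
```

In the paper's setting the sampled vector would encode all FLC parameters at once (gains, membership-set locations and sizes, rule weights), with the cost evaluated by simulating or flying the controller.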