36 results for 280213 Other Artificial Intelligence
Abstract:
Since the beginning of history, human beings have thought about their future, whether out of concern for their survival or for more philosophical reasons, such as their destiny as a species. Thinking about the future is anticipation, and science fiction is the genre that relies most heavily on anticipation. Many of its sub-genres deal with futuristic themes: space travel to other planets, artificial intelligence, and so on. At the same time, we live in a period in which changes in ICT (Information and Communication Technologies) occur practically every day, or even from one hour to the next. Many of these advances seem to be the product of fantasy or science fiction, as if taken from some futuristic film; yet they are perfectly possible and explainable through current science. This project analyses such advances, relating them to different works of science fiction, since a large number of technological developments have their origin in a work of science fiction, whether literature or film. A second objective is to study technological proposals from works of this genre that have not yet been realised, and to analyse their feasibility, their usefulness and the sociological changes they would produce in the world we live in. The third objective is to evaluate the aptitude and attitude of telecommunication engineers with regard to innovation and the projection into the future of these possible technological changes.
Abstract:
Objective: The main purpose of this research is the novel use of artificial metaplasticity on a multilayer perceptron (AMMLP) as a data mining tool for predicting the outcome of patients with acquired brain injury (ABI) after cognitive rehabilitation. The final goal is to increase knowledge in the field of rehabilitation theory based on cognitive affectation. Methods and materials: The data set used in this study contains records belonging to 123 ABI patients with moderate to severe cognitive affectation (according to the Glasgow Coma Scale) who underwent rehabilitation at the Institut Guttmann Neurorehabilitation Hospital (IG) using the tele-rehabilitation platform PREVIRNEC©. The variables included in the analysis comprise the initial neuropsychological evaluation of the patient (cognitive affectation profile), the results of the rehabilitation tasks performed by the patient in PREVIRNEC© and the outcome of the patient after a 3–5 month treatment. To predict the treatment outcome, we apply and compare three different data mining techniques: the AMMLP model, a backpropagation neural network (BPNN) and a C4.5 decision tree. Results: The prediction performance of the models was measured by ten-fold cross-validation and several architectures were tested. The results obtained by the AMMLP model are clearly superior, with an average predictive performance of 91.56%; the BPNN and C4.5 models achieve average prediction accuracies of 80.18% and 89.91%, respectively. The best single AMMLP model provided a specificity of 92.38%, a sensitivity of 91.76% and a prediction accuracy of 92.07%. Conclusions: The proposed prediction model increases knowledge about the factors that contribute to the recovery of ABI patients and makes it possible to estimate treatment efficacy in individual patients. The ability to predict treatment outcomes may provide new insights toward improving effectiveness and creating personalized therapeutic interventions based on clinical evidence.
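As an illustration of the evaluation protocol only (the AMMLP algorithm itself is not publicly available), the sketch below compares a standard multilayer perceptron and a decision tree, standing in for the BPNN and C4.5 models, under ten-fold cross-validation on synthetic data; the data set and model settings are placeholders, not those of the study.

```python
# Hedged sketch: a generic MLP and decision tree stand in for AMMLP/BPNN and
# C4.5; the synthetic data stands in for the 123 PREVIRNEC(c) patient records.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: features ~ neuropsychological scores and task results,
# binary target ~ rehabilitation outcome.
X, y = make_classification(n_samples=123, n_features=20, n_informative=8,
                           random_state=0)

models = {
    "MLP (placeholder for AMMLP/BPNN)": MLPClassifier(hidden_layer_sizes=(10,),
                                                      max_iter=2000,
                                                      random_state=0),
    "Decision tree (placeholder for C4.5)": DecisionTreeClassifier(random_state=0),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # ten-fold CV
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```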
Abstract:
Numerous authors have proposed functions to quantify the degree of similarity between two fuzzy numbers using various descriptive parameters, such as the geometric distance, the distance between the centers of gravity or the perimeter. However, these similarity functions show drawbacks in specific situations. We propose a new similarity measure for generalized trapezoidal fuzzy numbers aimed at overcoming such drawbacks. This new measure accounts for the distance between the centers of gravity and the geometric distance, and also incorporates a new term based on the area shared by the fuzzy numbers. The proposed measure is compared against other measures in the literature.
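The abstract does not reproduce the actual formula, so the sketch below is only an illustrative guess at how the named ingredients (geometric distance, distance between centers of gravity, and a shared-area term) might be combined, assuming generalized trapezoidal fuzzy numbers (a1, a2, a3, a4; w) defined on the universe [0, 1]; the final combination is an invented one, not the proposed measure.

```python
import numpy as np

def membership(x, pts, w=1.0):
    """Membership of a generalized trapezoidal fuzzy number (a1, a2, a3, a4; w)."""
    a1, a2, a3, a4 = pts
    y = np.zeros_like(x, dtype=float)
    if a2 > a1:
        left = (x >= a1) & (x < a2)
        y[left] = w * (x[left] - a1) / (a2 - a1)
    y[(x >= a2) & (x <= a3)] = w
    if a4 > a3:
        right = (x > a3) & (x <= a4)
        y[right] = w * (a4 - x[right]) / (a4 - a3)
    return y

def similarity(A, B, wA=1.0, wB=1.0, n=4001):
    x = np.linspace(0.0, 1.0, n)                     # universe assumed to be [0, 1]
    muA, muB = membership(x, A, wA), membership(x, B, wB)
    d_geo = np.mean(np.abs(np.array(A) - np.array(B)))          # geometric distance
    def cog(mu):                                     # center of gravity (numerical)
        area = np.trapz(mu, x)
        return np.trapz(x * mu, x) / area, np.trapz(mu ** 2 / 2, x) / area
    d_cog = np.hypot(*np.subtract(cog(muA), cog(muB)))
    # Jaccard-style shared-area term between the two membership functions.
    shared = np.trapz(np.minimum(muA, muB), x) / np.trapz(np.maximum(muA, muB), x)
    # Illustrative combination only; the 0.5 offset keeps disjoint numbers nonzero.
    return (1 - d_geo) * (1 - min(d_cog, 1.0)) * (0.5 + 0.5 * shared)

print(similarity((0.1, 0.2, 0.3, 0.4), (0.15, 0.25, 0.35, 0.45)))
```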
Abstract:
Currently, there is a great deal of well-founded explicit knowledge formalizing general notions, such as time concepts and the part_of relation. Yet it is often the case that, instead of reusing ontologies that implement such notions (the so-called general ontologies), engineers create procedural programs that implicitly implement this knowledge. They do not save time and code by reusing explicit knowledge, and they devote effort to solving problems that other people have already solved adequately. Consequently, we have developed a methodology that helps engineers to: (a) identify the type of general ontology to be reused; (b) find out which axioms and definitions should be reused; (c) decide, using formal concept analysis, which general ontology is going to be reused; and (d) adapt and integrate the selected general ontology into the domain ontology to be developed. To illustrate our approach we have employed use cases. For each use case, we provide a set of heuristics with examples; each of these heuristics has been tested in either OWL or Prolog. Our methodology has been applied to develop a pharmaceutical product ontology. Additionally, we have carried out a controlled experiment with graduate students taking an MSc in Artificial Intelligence. This experiment has yielded some interesting findings about the features that future extensions of the methodology should have.
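As a rough sketch of step (c), the snippet below applies formal concept analysis to a small, invented context relating candidate general ontologies (objects) to the notions an engineer needs to reuse (attributes); the ontology names, attributes and the "needed notions" set are illustrative assumptions, not data from the paper.

```python
from itertools import combinations

CONTEXT = {                       # invented example context
    "OWL-Time": {"instant", "interval", "before"},
    "SUMO":     {"instant", "before", "part_of"},
    "DOLCE":    {"part_of", "participation"},
}
ATTRS = set().union(*CONTEXT.values())

def extent(intent_set):           # ontologies providing every notion in intent_set
    return {o for o, attrs in CONTEXT.items() if intent_set <= attrs}

def intent(objs):                 # notions shared by every ontology in objs
    return set.intersection(*(CONTEXT[o] for o in objs)) if objs else set(ATTRS)

# Enumerate formal concepts as closed (extent, intent) pairs.
concepts = set()
for r in range(len(ATTRS) + 1):
    for attrs in combinations(sorted(ATTRS), r):
        e = extent(set(attrs))
        concepts.add((frozenset(e), frozenset(intent(e))))

needed = {"instant", "before"}    # notions the domain ontology must reuse
for e, i in sorted(concepts, key=lambda c: len(c[1])):
    if needed <= i and e:
        print(sorted(e), "share", sorted(i))   # candidate ontologies to reuse
```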
Abstract:
This paper presents a proposal for a model for recognizing the appraisal value of sentences. It is based on splitting the text into independent sentences (at full stops) and then analysing the appraisal elements contained in each sentence according to their value in an appraisal lexicon. In this lexicon, positive words are assigned a positive coefficient (+1) and negative words a negative coefficient (-1). We also take into account words such as "too", "little" (when it does not mean "a bit"), "less" and "nothing", which can modify the polarity degree of a lexical unit when they appear in its nearby environment. If any of these elements is present, the previous coefficient is multiplied by (-1), that is, it changes its sign. Our results show an effectiveness of nearly 90%, despite not recognizing (or misrecognizing) implicit elements; these represent approximately 4% of all the sentences analysed for appraisal and include the errors in the recognition of coordinated sentences. On the one hand, we found that 3.6% of the sentences could not be recognized because they use connectors other than those included in the model; on the other hand, we found that in 8.6% of the sentences the rules we developed could not be applied, even though some of the described connectors were used. The corresponding percentage relative to the whole set of appraisal sentences in the corpus was approximately 5%.
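A minimal sketch of the scoring rule described above, assuming a toy lexicon and a three-token window for detecting polarity shifters; both are illustrative choices, not the lexicon or window actually used in the paper.

```python
import re

# Toy lexicon and shifter list (illustrative, not the paper's resources).
LEXICON = {"good": 1, "great": 1, "useful": 1, "bad": -1, "poor": -1, "boring": -1}
SHIFTERS = {"too", "little", "less", "nothing"}
WINDOW = 3  # how many preceding tokens are inspected for a shifter

def sentence_polarity(sentence):
    tokens = re.findall(r"[a-z']+", sentence.lower())
    score = 0
    for i, tok in enumerate(tokens):
        if tok not in LEXICON:
            continue
        coeff = LEXICON[tok]
        # A shifter in the nearby environment multiplies the coefficient by -1.
        if any(t in SHIFTERS for t in tokens[max(0, i - WINDOW):i]):
            coeff *= -1
        score += coeff
    return score

def appraise(text):
    # The text is first split into independent sentences (full stops).
    return [(s.strip(), sentence_polarity(s)) for s in text.split(".") if s.strip()]

print(appraise("The camera is great. The battery is less useful than expected."))
```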
Abstract:
This paper proposes a novel combination of artificial intelligence planning and other techniques for improving decision-making in the context of multi-step multimedia content adaptation. In particular, it describes a method that allows decision-making (selecting the adaptation to perform) in situations where third-party pluggable multimedia conversion modules are involved and the multimedia adaptation planner does not know their exact adaptation capabilities. In this approach, the multimedia adaptation planner module is only responsible for a part of the required decisions; the pluggable modules make additional decisions based on different criteria. We demonstrate that partial decision-making is not only attainable, but also introduces advantages with respect to a system in which these conversion modules are not capable of providing additional decisions. This means that transferring decisions from the multi-step multimedia adaptation planner to the pluggable conversion modules increases the flexibility of the adaptation. Moreover, by allowing conversion modules to be only partially described, the range of problems that these modules can address increases, while significantly decreasing both the description length of the adaptation capabilities and the planning decision time. Finally, we specify the conditions under which knowing the partial adaptation capabilities of a set of conversion modules will be enough to compute a proper adaptation plan.
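A minimal sketch of the chaining idea under stated assumptions: the converter modules below are invented and advertise only their input and output formats, the planner finds a conversion chain by breadth-first search, and finer decisions (bitrate, resolution) are left to the modules themselves, mirroring the partial decision-making described above.

```python
from collections import deque

# (module name, input format, output format); finer parameters are decided
# internally by each module at execution time, not by the planner.
CONVERTERS = [
    ("demuxer",      "mp4/h264",        "raw-video"),
    ("downscaler",   "raw-video",       "raw-video-small"),
    ("webm-encoder", "raw-video-small", "webm/vp9"),
    ("gif-encoder",  "raw-video-small", "gif"),
]

def plan(source, target):
    """Return a list of module names converting `source` into `target`, or None."""
    queue = deque([(source, [])])
    seen = {source}
    while queue:
        fmt, chain = queue.popleft()
        if fmt == target:
            return chain
        for name, fin, fout in CONVERTERS:
            if fin == fmt and fout not in seen:
                seen.add(fout)
                queue.append((fout, chain + [name]))
    return None

print(plan("mp4/h264", "webm/vp9"))   # ['demuxer', 'downscaler', 'webm-encoder']
```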
Abstract:
This paper presents the knowledge model of a distributed decision support system that has been designed for the management of a national network in Ukraine. It shows how advanced Artificial Intelligence techniques (multiagent systems and knowledge modelling) have been applied to solve this real-world decision support problem: on the one hand, its distributed nature, implied by the different loci of decision-making at the network nodes, suggested applying a multiagent solution; on the other hand, given the complexity of problem solving for local network administration, it was useful to apply knowledge modelling techniques in order to structure the different knowledge types and reasoning processes involved. The paper starts with a description of our particular management problem. Subsequently, our agent model is described, highlighting the local problem-solving and coordination knowledge models. Finally, the dynamics of the approach are illustrated with an example.
Abstract:
Shopping agents are web-based applications that help consumers find appropriate products in the context of e-commerce. In this paper we argue for the utility of advanced model-based techniques, recently proposed in the fields of Artificial Intelligence and Knowledge Engineering, for increasing the level of support provided by this type of application. We illustrate this approach with a virtual sales assistant that dynamically configures a product according to the needs and preferences of customers.
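A hedged sketch of one configuration step such a sales assistant might perform: enumerate component combinations, keep those that satisfy compatibility constraints and the customer's stated needs, and rank by preference (price here). The catalogue, constraints and preferences are invented for illustration and are not the system described in the paper.

```python
from itertools import product

CPUS = [("cpu-basic", 120, 35), ("cpu-fast", 260, 95)]     # (id, price, wattage)
PSUS = [("psu-300", 40, 300), ("psu-550", 70, 550)]        # (id, price, capacity)
GPUS = [(None, 0, 0), ("gpu-gaming", 350, 220)]            # optional component

def configure(needs_gaming, budget):
    valid = []
    for cpu, psu, gpu in product(CPUS, PSUS, GPUS):
        price = cpu[1] + psu[1] + gpu[1]
        load = cpu[2] + gpu[2] + 100                        # 100 W for the rest
        if needs_gaming and gpu[0] is None:
            continue                                        # need: must include a GPU
        if load > psu[2] or price > budget:
            continue                                        # constraints violated
        valid.append((price, cpu[0], psu[0], gpu[0]))
    return sorted(valid)                                    # cheapest configuration first

print(configure(needs_gaming=True, budget=800))
```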
Abstract:
The continuous improvement of manufacturing processes is fundamental to achieving optimal levels of productivity, quality and cost in the production of components and products. This requires models that accurately relate the variables involved in the cutting process. This research aims to determine the influence of cutting speed and feed on the flank wear of GC1115 and GC2015 coated carbide inserts and on the surface roughness of the machined part in dry high-speed turning of AISI 316L steel. Scientific observation, experimental, measurement, statistical and artificial intelligence methods were used, among others. According to the plot of means and the multiple regression equations for flank wear, the GC1115 insert achieves the best result for v = 350 m/min, while for the remaining speeds the GC2015 insert performs best. Regarding the surface roughness of the machined part, the best behaviour was obtained with the GC1115 insert at the speeds of 350 m/min and 400 m/min; at 450 m/min the best result corresponded to the GC2015 insert. Two new criteria were analysed: the ratio of cutting tool life to the volume of metal removed, and the ratio of the surface roughness of the machined part to the volume of metal removed. Multiple regression models were determined that allow the machining time of the inserts to be calculated without exceeding the flank wear criterion. The developed models were evaluated for their predictive capability against the experimentally measured values.
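As a sketch of the modelling step only, a multiple regression of flank wear on cutting speed, feed and cutting time can be fitted and then inverted to estimate the machining time before a wear criterion is reached; the measurements, the linear model form and the 0.3 mm wear limit below are invented placeholders, not the thesis data or its final models.

```python
import numpy as np

# Columns: cutting speed v [m/min], feed f [mm/rev], cutting time t [min] (invented)
X = np.array([
    [350, 0.10, 2], [350, 0.10, 6], [350, 0.20, 2], [350, 0.20, 6],
    [400, 0.10, 2], [400, 0.10, 6], [400, 0.20, 2], [400, 0.20, 6],
    [450, 0.10, 2], [450, 0.10, 6], [450, 0.20, 2], [450, 0.20, 6],
], dtype=float)
vb = np.array([0.05, 0.11, 0.07, 0.15, 0.07, 0.14, 0.09, 0.19,
               0.09, 0.18, 0.12, 0.25])          # flank wear VB [mm], invented

A = np.column_stack([np.ones(len(X)), X])        # VB = b0 + b1*v + b2*f + b3*t
coef, *_ = np.linalg.lstsq(A, vb, rcond=None)
print("fitted coefficients:", coef)

def time_to_limit(v, f, vb_limit=0.3):
    """Machining time until the flank-wear criterion is reached, from the fit."""
    b0, b1, b2, b3 = coef
    return (vb_limit - b0 - b1 * v - b2 * f) / b3

print("t(VB=0.3) at v=350, f=0.10:", round(time_to_limit(350, 0.10), 1), "min")
```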
Abstract:
This paper describes a new technique, referred to as watched subgraphs, which improves the performance of BBMC, a leading state-of-the-art exact solver for the maximum clique problem (MCP). It is based on the watched literals employed by modern SAT solvers for Boolean constraint propagation: in efficient SAT algorithms, a list of clauses is kept for each literal (the clauses are said to watch the literal), so that only the clauses in that list are checked for constraint propagation when a (watched) literal is assigned during search. BBMC encodes vertex sets as bit strings, with each bit block, the size of a CPU register word, representing a subset of vertices (and the corresponding induced subgraph). The paper proposes watching two subgraphs of critical sets during MCP search in order to compute a number of basic operations efficiently. Reported results validate the approach as the size and density of the problem instances rise, while achieving comparable performance in the general case.
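A toy sketch of the underlying data structure, assuming 64-bit blocks and two watched positions (first and last possibly non-empty block) that bound the range scanned by set operations; this only illustrates the bit-string idea and is not the actual BBMC implementation.

```python
BLOCK = 64

class WatchedBitSet:
    def __init__(self, n_vertices):
        self.blocks = [0] * ((n_vertices + BLOCK - 1) // BLOCK)
        self.lo = len(self.blocks)   # watched: first possibly non-empty block
        self.hi = -1                 # watched: last possibly non-empty block

    def add(self, v):
        b = v // BLOCK
        self.blocks[b] |= 1 << (v % BLOCK)
        self.lo, self.hi = min(self.lo, b), max(self.hi, b)

    def intersect_count(self, other):
        """Count common vertices, scanning only the watched block range."""
        lo, hi = max(self.lo, other.lo), min(self.hi, other.hi)
        return sum(bin(self.blocks[b] & other.blocks[b]).count("1")
                   for b in range(lo, hi + 1))

p, q = WatchedBitSet(256), WatchedBitSet(256)
for v in (3, 70, 200):
    p.add(v)
for v in (70, 71, 200, 255):
    q.add(v)
print(p.intersect_count(q))   # 2 common vertices; only blocks 1..3 are scanned
```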
Abstract:
Data mining is a field of computer science concerned with the process of discovering patterns in large volumes of data; it seeks to generate information similar to what a human expert could produce. It is also the process of discovering interesting knowledge, such as patterns, associations, changes, anomalies and significant structures, from large amounts of data stored in databases, data warehouses or any other information storage medium. Machine learning is a branch of artificial intelligence whose objective is to develop techniques that allow computers to learn; more specifically, it is about creating programs capable of generalizing behaviours from unstructured information supplied in the form of examples. Data mining uses machine learning methods to discover and enumerate the patterns present in data. In recent years, classification and machine learning techniques have been applied in a large number of domains, such as healthcare, commerce and security; a very topical example is the detection of fraudulent behaviour and transactions in banks. An application of interest is the use of the techniques developed for detecting fraudulent behaviour in order to identify the users present inside intelligent environments without the need for an authentication process. To verify that these techniques are effective during the analysis phase of a given solution, it is necessary to create a platform that supports the development, validation and evaluation of learning and classification algorithms in the application environments under study. The proposed project consists of creating a platform for evaluating machine learning algorithms as identification mechanisms in intelligent spaces. Both the algorithms of this kind and the existing platforms will be studied in order to define a set of specific requirements for the platform to be developed. After this analysis, the platform will be partially developed; it will then be validated with proofs of concept and finally verified in a research environment to be defined.
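A minimal sketch of the kind of experiment such a platform should support: a supervised classifier trained on behavioural feature vectors is used to identify which known user produced a new observation, with no explicit authentication step. The features and data below are synthetic stand-ins for real sensor readings from an intelligent space.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, samples_per_user, n_features = 5, 80, 12

# Each user gets a distinct "behavioural profile" (a different feature mean).
profiles = rng.normal(0, 2, size=(n_users, n_features))
X = np.vstack([profiles[u] + rng.normal(0, 1, size=(samples_per_user, n_features))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), samples_per_user)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("identification accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```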
Abstract:
This thesis studies in depth the development of support models for distance collaborative learning, leading to an architecture based on the principles of the CSCL (Computer Supported Collaborative Learning) paradigm. The proposed architecture addresses a specific kind of problem, coordinating groups of students in collaborative distance learning activities, which requires techniques drawn from Cooperative Work, Artificial Intelligence and user interfaces, as well as ideas taken from Pedagogy and Psychology. A complete, open and generic solution has been designed. The architecture exploits new technologies to achieve an effective system for supporting distance education, and it is organised into four levels: Configuration, Experience, Organisation and Analysis. Based on this architecture, a system called DEGREE has been implemented. In DEGREE, each level of the architecture gives rise to an independent subsystem related to the others, and the application benefits from the use of structured shared workspaces. The experience configuration subsystem allows the elements of a workspace and an experience to be defined and adapted to each type of user. The experience management subsystem gathers the users' contributions in order to build a joint solution to a given problem; the students' interventions are structured on the basis of a generic conversation graph. Moreover, all user actions are recorded in order to represent explicitly the complete process leading to the group solution. These data are also stored in a common memory, which constitutes the subsystem called the Organisational Memory of Experiences. The analysis subsystem studies the users' interventions; this analysis makes it possible to infer conclusions about the way groups work and their attitudes towards collaboration, also taking into account the observer's subjective knowledge. The parallel development of the architecture and the system followed a refinement cycle in five phases, with successive stages of prototyping and formative evaluation. Each phase of this process was carried out with real users, and their feedback was taken into account to improve the functionalities of the architecture as well as the system interface. This approach also made it possible to verify the practical usefulness and the validity of the proposals that underpin this work.
Abstract:
Objectives: A recently introduced pragmatic scheme promises to be a useful catalogue of interneuron names. We sought to automatically classify digitally reconstructed interneuronal morphologies according to this scheme. Simultaneously, we sought to discover possible subtypes of these types that might emerge during automatic classification (clustering). We also investigated which morphometric properties were most relevant for this classification. Materials and methods: We used a set of 118 digitally reconstructed interneuronal morphologies classified into the common basket (CB), horse-tail (HT), large basket (LB), and Martinotti (MA) interneuron types by 42 of the world's leading neuroscientists, quantified by five simple morphometric properties of the axon and four of the dendrites. We labeled each neuron with the type most commonly assigned to it by the experts. We then removed this class information for each type separately, and applied semi-supervised clustering to those cells (keeping the others' cluster membership fixed), to assess separation from other types and look for the formation of new groups (subtypes). We performed this same experiment unlabeling the cells of two types at a time, and of half the cells of a single type at a time. The clustering model is a finite mixture of Gaussians which we adapted for the estimation of local (per-cluster) feature relevance. We performed the described experiments on three different subsets of the data, formed according to how many experts agreed on type membership: at least 18 experts (the full data set), at least 21 (73 neurons), and at least 26 (47 neurons). Results: Interneurons with more reliable type labels were classified more accurately. We classified HT cells with 100% accuracy, MA cells with 73% accuracy, and CB and LB cells with 56% and 58% accuracy, respectively. We identified three subtypes of the MA type, one subtype each of the CB and LB types, and no subtypes of HT (it was a single, homogeneous type). We obtained maximum (adapted) Silhouette width and ARI values of 1, 0.83, 0.79, and 0.42 when unlabeling the HT, CB, LB, and MA types, respectively, confirming the quality of the formed cluster solutions. The subtypes identified when unlabeling a single type also emerged when unlabeling two types at a time, confirming their validity. Axonal morphometric properties were more relevant than dendritic ones, with the axonal polar histogram length in the [pi, 2pi) angle interval being particularly useful. Conclusions: The applied semi-supervised clustering method can accurately discriminate among the CB, HT, LB, and MA interneuron types while discovering potential subtypes, and is therefore useful for neuronal classification. The discovery of potential subtypes suggests that some of these types are more heterogeneous than previously thought. Finally, axonal variables seem to be more relevant than dendritic ones for distinguishing among the CB, HT, LB, and MA interneuron types.
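A simplified sketch of semi-supervised Gaussian-mixture clustering in the same spirit, assuming diagonal covariances and synthetic data, and omitting the per-cluster feature-relevance estimation of the actual method: labeled cells are pinned to their component, while unlabeled cells receive soft responsibilities from EM.

```python
import numpy as np

rng = np.random.default_rng(0)

def semi_supervised_gmm(X, labels, k, n_iter=100):
    """labels[i] = component index for pinned points, -1 for unlabeled points."""
    n, d = X.shape
    resp = np.full((n, k), 1.0 / k)
    known = labels >= 0
    resp[known] = np.eye(k)[labels[known]]          # pinned responsibilities
    for _ in range(n_iter):
        # M-step: weights, means and diagonal variances from responsibilities.
        nk = resp.sum(axis=0) + 1e-9
        pi = nk / n
        mu = (resp.T @ X) / nk[:, None]
        var = (resp.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6
        # E-step (unlabeled points only): diagonal-Gaussian log densities + log weights.
        log_p = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                         + np.log(2 * np.pi * var)).sum(axis=2) + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        new_resp = np.exp(log_p)
        new_resp /= new_resp.sum(axis=1, keepdims=True)
        resp[~known] = new_resp[~known]
    return resp

# Three synthetic "types"; half of type 2 is unlabeled to look for regrouping.
X = np.vstack([rng.normal(c, 0.5, size=(40, 4)) for c in (0.0, 3.0, 6.0)])
labels = np.repeat([0, 1, 2], 40)
unlabeled = rng.choice(np.where(labels == 2)[0], size=20, replace=False)
labels[unlabeled] = -1
resp = semi_supervised_gmm(X, labels, k=3)
print("unlabeled cells assigned to component:",
      np.bincount(resp[unlabeled].argmax(axis=1), minlength=3))
```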
Abstract:
In the smart building control industry, creating a platform that integrates different communication protocols and eases the interaction between users and devices is becoming increasingly important. BATMP is a platform designed to achieve this goal. In this paper, the authors describe a novel mechanism for information exchange which introduces a new concept, the Parameter, and uses it as the common object among all the BATMP components: Gateway Manager, Technology Manager, Application Manager, Model Manager and Data Warehouse. A Parameter is an object that represents a physical magnitude and contains information about its presentation, available actions, access type, etc. Each component of BATMP has a copy of the parameters. In the Technology Manager, three drivers for different communication protocols, KNX, CoAP and Modbus, are implemented to convert devices into parameters. In the Gateway Manager, users can control the parameters directly or by defining a scenario. In the Application Manager, applications can subscribe to parameters and decide their values by negotiating. Finally, a Negotiator is implemented in the Model Manager to notify the other components about changes taking place in any component. By applying this mechanism, BATMP ensures simultaneous and concurrent communication among users, applications and devices.
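A minimal sketch of what a Parameter object might look like, with field names and a subscribe/notify interface that are illustrative guesses rather than the actual BATMP API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Parameter:
    name: str                 # e.g. "living_room.temperature" (illustrative id)
    unit: str                 # presentation information
    access: str               # "read", "write" or "read/write"
    actions: List[str]        # actions available on the underlying device
    value: float = 0.0
    _subscribers: List[Callable[["Parameter"], None]] = field(default_factory=list)

    def subscribe(self, callback):
        """Applications (e.g. via the Application Manager) register interest in changes."""
        self._subscribers.append(callback)

    def update(self, new_value):
        """Drivers (e.g. in the Technology Manager) push readings; subscribers are notified."""
        self.value = new_value
        for cb in self._subscribers:
            cb(self)

temp = Parameter("living_room.temperature", "degC", "read", ["poll"])
temp.subscribe(lambda p: print(f"{p.name} changed to {p.value} {p.unit}"))
temp.update(21.5)
```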
Abstract:
Due to the ever-increasing transportation of people and goods, automatic traffic surveillance is becoming a key issue both for providing safety to road users and for improving traffic control in an efficient way. In this paper, we propose a new system that, exploiting the capabilities offered by both computer vision and machine learning, is able to detect and track different types of real incidents on a highway. Specifically, it accurately detects not only stopped vehicles, but also drivers and passengers leaving the stopped vehicle, and other pedestrians present in the roadway. Additionally, a theoretical approach for detecting vehicles that may leave the road in an unexpected way is also presented. The system works in real time and has been optimized for outdoor operation, making it appropriate for deployment in a real-world environment such as a highway. First experimental results on a dataset created with videos provided by two Spanish highway operators demonstrate the effectiveness of the proposed system and its robustness against noise and low-quality videos.
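One common building block for this kind of system is background subtraction followed by a persistence check on large foreground blobs; the sketch below (OpenCV 4.x API, invented thresholds, placeholder video path) only illustrates that crude "stopped object" cue, not the detection and tracking pipeline actually developed in the paper.

```python
import cv2

cap = cv2.VideoCapture("highway.mp4")             # placeholder path
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32,
                                        detectShadows=True)
still_counters = {}                               # coarse grid cell -> frames seen

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    seen = set()
    for c in contours:
        if cv2.contourArea(c) < 1500:             # ignore small blobs / noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        cell = (x // 50, y // 50)                 # coarse spatial bin
        seen.add(cell)
        still_counters[cell] = still_counters.get(cell, 0) + 1
        if still_counters[cell] > 100:            # roughly 4 s at 25 fps in one place
            print("possible stopped object near", (x, y, w, h))
    # Reset counters for cells with no detection in this frame.
    still_counters = {c: n for c, n in still_counters.items() if c in seen}
cap.release()
```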