886 results for Engineering, Industrial | Artificial Intelligence


Relevance:

100.00%

Publisher:

Abstract:

This paper describes the architecture of a computer system conceived as an intelligent assistant for public transport management. The goal of the system is to help operators of a control center make strategic decisions about how to solve problems affecting a fleet of buses in an urban network. The system uses artificial intelligence techniques to simulate the decision processes. In particular, a complex knowledge model that integrates three main tasks - diagnosis, prediction and planning - has been designed using advanced knowledge engineering methods. Finally, the paper describes two particular applications developed following this architecture for the cities of Torino (Italy) and Vitoria (Spain).

Relevance:

100.00%

Publisher:

Abstract:

We review Web Mining techniques and describe a Bootstrap Statistics methodology applied to the optimization and verification of pattern-model classifiers for supervised learning in Tour-Guide Robot knowledge repository management. It is virtually impossible to test Web page classifiers and many other Internet applications thoroughly with purely empirical data, because human intervention is needed to generate training and test sets. We propose using the computer-based Bootstrap paradigm to design a test environment in which such classifiers can be checked with greater reliability.
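
As a rough illustration of the idea, the sketch below (with a made-up labelled sample and generic classifier output, not the tour-guide robot's repository) uses bootstrap resampling to attach a confidence interval to a classifier's measured accuracy:

```python
# Minimal sketch: bootstrap confidence interval for a page classifier's accuracy.
# The labels and predictions below are illustrative placeholders.
import random

def bootstrap_accuracy(labels, predictions, n_resamples=1000, seed=42):
    """Resample (label, prediction) pairs with replacement and collect accuracies."""
    rng = random.Random(seed)
    pairs = list(zip(labels, predictions))
    accuracies = []
    for _ in range(n_resamples):
        sample = [rng.choice(pairs) for _ in pairs]
        correct = sum(1 for y, y_hat in sample if y == y_hat)
        accuracies.append(correct / len(sample))
    accuracies.sort()
    lo = accuracies[int(0.025 * n_resamples)]   # 95% percentile interval
    hi = accuracies[int(0.975 * n_resamples)]
    return sum(accuracies) / n_resamples, (lo, hi)

labels      = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # hypothetical ground truth
predictions = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]   # hypothetical classifier output
mean_acc, ci = bootstrap_accuracy(labels, predictions)
print(f"accuracy ~ {mean_acc:.2f}, 95% CI {ci}")
```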

Relevance:

100.00%

Publisher:

Abstract:

The conformance of semantic technologies has to be systematically evaluated to measure and verify the real adherence of these technologies to the Semantic Web standards. Current evaluations of semantic technology conformance are not exhaustive enough and do not directly cover user requirements and use scenarios, which raises the need for a simple, extensible and parameterizable method to generate test data for such evaluations. To address this need, this paper presents a keyword-driven approach for generating ontology language conformance test data that can be used to evaluate semantic technologies, details the definition of a test suite for evaluating OWL DL conformance using this approach, and describes the use and extension of this test suite during the evaluation of some tools.
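
One possible reading of the keyword-driven idea is sketched below: each keyword names a small generator of ontology fragments, and a test case is a list of keyword invocations. The keywords and fragments are illustrative assumptions, not the paper's actual test suite:

```python
# Minimal sketch of keyword-driven test data generation: each keyword names a
# small generator that emits an ontology fragment. Keywords and fragments below
# are invented for illustration.
KEYWORDS = {
    "class":      lambda name: f"<{name}> rdf:type owl:Class .",
    "subclass":   lambda sub, sup: f"<{sub}> rdfs:subClassOf <{sup}> .",
    "individual": lambda ind, cls: f"<{ind}> rdf:type <{cls}> .",
}

def generate_test(case):
    """Expand a keyword-driven test case (a list of keyword invocations) into triples."""
    triples = []
    for keyword, *args in case:
        triples.append(KEYWORDS[keyword](*args))
    return "\n".join(triples)

# One keyword-driven case: a small class hierarchy with one instance.
case = [("class", "Person"), ("class", "Student"),
        ("subclass", "Student", "Person"), ("individual", "Ana", "Student")]
print(generate_test(case))
```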

Relevance:

100.00%

Publisher:

Abstract:

When users face a problem that requires a product, service, or action to solve it, selecting the best alternative among them can be a difficult task due to the uncertainty of their quality. This is especially the case in domains in which users lack expertise, such as Software Engineering. Multiple-criteria decision making (MCDM) methods help in making better decisions when facing the complex problem of selecting the best solution among a group of alternatives that can be compared according to different conflicting criteria. In MCDM problems, alternatives represent concrete products, services or actions that will help in achieving a goal, while criteria represent the characteristics of these alternatives that are important for making a decision.
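
As a minimal illustration of an MCDM method, the sketch below applies a simple weighted-sum model; the alternatives, criteria and weights are invented for the example and are not taken from the paper:

```python
# Minimal sketch of one common MCDM method (weighted sum). Alternatives, criteria
# and weights are made up; "cost" is treated as a cost-type criterion (lower is
# better), so it is inverted on a 0-10 scale.
alternatives = {
    "tool_a": {"cost": 3, "quality": 9, "support": 7},
    "tool_b": {"cost": 8, "quality": 6, "support": 5},
}
weights = {"cost": 0.3, "quality": 0.5, "support": 0.2}

def score(alt):
    return sum(weights[c] * (10 - v if c == "cost" else v)
               for c, v in alt.items())

best = max(alternatives, key=lambda name: score(alternatives[name]))
print({name: round(score(a), 2) for name, a in alternatives.items()}, "->", best)
```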

Relevance:

100.00%

Publisher:

Abstract:

Since the beginning of history, human beings have thought about their future, whether out of concern for survival or for more philosophical reasons, such as their future as a species. Thinking about the future is anticipation, and science fiction is the genre that relies most heavily on it. Many science-fiction sub-genres deal with futuristic themes: space travel to other planets, artificial intelligence, and so on. Today we live in a period in which changes in ICT (Information and Communication Technologies) occur almost daily, or even from one hour to the next. Many of these advances may seem the product of fantasy or science fiction, straight out of a futuristic film, yet they are perfectly possible and explainable through current science. This project analyzes such advances, relating them to different works of science fiction, since a large number of technological advances have their origin in some work of science fiction, whether literature or film. A second objective is to study technological proposals from works of this genre that have not yet been realized, and to analyze their feasibility, their usefulness and the sociological changes they would produce in the world we live in. The third objective is to evaluate the aptitude and attitude of telecommunication engineers regarding innovation and the projection into the future of these possible technological changes.

Relevance:

100.00%

Publisher:

Abstract:

Objective: The main purpose of this research is the novel use of artificial metaplasticity on a multilayer perceptron (AMMLP) as a data mining tool for predicting the outcome of patients with acquired brain injury (ABI) after cognitive rehabilitation. The final goal is to increase knowledge in the field of rehabilitation theory based on cognitive affectation. Methods and materials: The data set used in this study contains records belonging to 123 ABI patients with moderate to severe cognitive affectation (according to the Glasgow Coma Scale) who underwent rehabilitation at Institut Guttmann Neurorehabilitation Hospital (IG) using the tele-rehabilitation platform PREVIRNEC©. The variables included in the analysis comprise the initial neuropsychological evaluation of the patient (cognitive affectation profile), the results of the rehabilitation tasks performed by the patient in PREVIRNEC© and the outcome of the patient after a 3–5 month treatment. To achieve the treatment outcome prediction, we apply and compare three different data mining techniques: the AMMLP model, a backpropagation neural network (BPNN) and a C4.5 decision tree. Results: The prediction performance of the models was measured by ten-fold cross-validation and several architectures were tested. The results obtained by the AMMLP model are clearly superior, with an average predictive performance of 91.56%; the BPNN and C4.5 models have average prediction accuracies of 80.18% and 89.91%, respectively. The best single AMMLP model provided a specificity of 92.38%, a sensitivity of 91.76% and a prediction accuracy of 92.07%. Conclusions: The proposed prediction model allows us to increase knowledge about the contributing factors of ABI patient recovery and to estimate treatment efficacy in individual patients. The ability to predict treatment outcomes may provide new insights toward improving effectiveness and creating personalized therapeutic interventions based on clinical evidence.
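
A minimal sketch of the comparison protocol (ten-fold cross-validation over several models) is shown below; since AMMLP is not a standard library model, an ordinary scikit-learn MLP stands in for the neural models, and the data is synthetic rather than the PREVIRNEC records:

```python
# Minimal sketch of the evaluation protocol, not the paper's models or data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=123, n_features=20, random_state=0)

models = {
    "MLP (stand-in for AMMLP/BPNN)": MLPClassifier(max_iter=2000, random_state=0),
    "Decision tree (C4.5-like)":     DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```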

Relevance:

100.00%

Publisher:

Abstract:

Currently, there is a great deal of well-founded explicit knowledge formalizing general notions, such as time concepts and the part_of relation. Yet, it is often the case that, instead of reusing ontologies that implement such notions (the so-called general ontologies), engineers create procedural programs that implicitly implement this knowledge. They do not save time and code by reusing explicit knowledge, and devote effort to solving problems that other people have already adequately solved. Consequently, we have developed a methodology that helps engineers to: (a) identify the type of general ontology to be reused; (b) find out which axioms and definitions should be reused; (c) make a decision, using formal concept analysis, on which general ontology is going to be reused; and (d) adapt and integrate the selected general ontology into the domain ontology to be developed. To illustrate our approach we have employed use cases. For each use case, we provide a set of heuristics with examples. Each of these heuristics has been tested in either OWL or Prolog. Our methodology has been applied to develop a pharmaceutical product ontology. Additionally, we have carried out a controlled experiment with graduate students taking an MSc in Artificial Intelligence. This experiment has yielded some interesting findings concerning what kind of features future extensions of the methodology should have.

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a novel combination of artificial intelligence planning and other techniques for improving decision-making in the context of multi-step multimedia content adaptation. In particular, it describes a method that allows decision-making (selecting the adaptation to perform) in situations where third-party pluggable multimedia conversion modules are involved and the multimedia adaptation planner does not know their exact adaptation capabilities. In this approach, the multimedia adaptation planner module is only responsible for a part of the required decisions; the pluggable modules make additional decisions based on different criteria. We demonstrate that partial decision-making is not only attainable, but also introduces advantages with respect to a system in which these conversion modules are not capable of providing additional decisions. This means that transferring decisions from the multi-step multimedia adaptation planner to the pluggable conversion modules increases the flexibility of the adaptation. Moreover, by allowing conversion modules to be only partially described, the range of problems that these modules can address increases, while significantly decreasing both the description length of the adaptation capabilities and the planning decision time. Finally, we specify the conditions under which knowing the partial adaptation capabilities of a set of conversion modules will be enough to compute a proper adaptation plan.
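
The sketch below illustrates one way such partial decision-making could look: the planner chains modules using only their declared input/output formats (a partial capability description), leaving the remaining conversion parameters to the modules themselves. Module names and formats are invented for the example and are not the paper's actual conversion modules:

```python
# Minimal sketch of planning over partially described conversion modules.
from collections import deque

MODULES = {
    "transcoder": {"in": "mpeg2", "out": "h264"},
    "resizer":    {"in": "h264",  "out": "h264_small"},
}

def plan(src, dst):
    """Breadth-first search over module chains from src format to dst format."""
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        fmt, chain = queue.popleft()
        if fmt == dst:
            return chain
        for name, cap in MODULES.items():
            if cap["in"] == fmt and cap["out"] not in seen:
                seen.add(cap["out"])
                queue.append((cap["out"], chain + [name]))
    return None

print(plan("mpeg2", "h264_small"))   # ['transcoder', 'resizer']
```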

Relevance:

100.00%

Publisher:

Abstract:

The aim of this paper is to describe an intelligent system for the problem of real-time road traffic control. The purpose of the system is to help traffic engineers select the state of traffic control devices in real time, using data recorded by traffic detectors on motorways. The system follows an advanced knowledge-based approach that implements an abstract generic problem-solving method, called propose-and-revise, which was proposed in Artificial Intelligence, within the knowledge engineering field, as a standard cognitive structure oriented to solving configuration design problems. The paper presents the knowledge model of such a system together with the inference strategy and describes how it was applied to the case of the M-40 urban ring road of Madrid.
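
A minimal, generic sketch of the propose-and-revise loop is given below; the devices, constraints and fixes are illustrative placeholders, not the M-40 knowledge model:

```python
# Minimal sketch of propose-and-revise: propose a configuration, check constraints,
# apply a fix for the first violated constraint, and repeat.
def propose_and_revise(devices, constraints, fixes, max_iters=20):
    design = {d: "default" for d in devices}        # propose initial states
    for _ in range(max_iters):
        violated = [c for c in constraints if not c(design)]
        if not violated:
            return design                            # consistent configuration
        design = fixes[violated[0]](design)          # revise the first violation
    raise RuntimeError("no consistent configuration found")

# Example: if a congestion constraint is violated, lower the speed limit sign.
congestion_ok = lambda d: d.get("vms_speed_sign") == "60"
constraints = [congestion_ok]
fixes = {congestion_ok: lambda d: {**d, "vms_speed_sign": "60"}}

print(propose_and_revise(["vms_speed_sign", "ramp_meter"], constraints, fixes))
```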

Relevance:

100.00%

Publisher:

Abstract:

Due to the intensive use of mobile phones for different purposes, these devices usually contain confidential information which must not be accessed by anyone other than the owner of the device. Furthermore, new-generation phones commonly incorporate an accelerometer which may be used to capture the acceleration signals produced by the owner's gait. Nowadays, gait identification on the basis of acceleration signals is being considered as a new biometric technique which allows the device to be blocked when another person is carrying it. Although distance-based approaches such as Euclidean distance or dynamic time warping have been applied to solve this identification problem, they show difficulties when dealing with gaits at different speeds. For this reason, in this paper, a method to extract an average template from instances of the gait at different velocities is presented. This method has been tested with the gait signals of 34 subjects walking at different motion speeds (slow, normal and fast) and it has been shown to improve the performance of Euclidean distance and classical dynamic time warping.
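
For reference, a classical dynamic time warping distance between two short acceleration sequences can be sketched as follows (toy data; the paper's averaged-template construction is not reproduced here):

```python
# Minimal sketch of classical DTW with an absolute-difference local cost.
def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

slow_gait = [0.0, 0.2, 0.9, 0.3, 0.0, 0.1]   # toy acceleration samples
fast_gait = [0.0, 0.8, 0.2, 0.0]
print(dtw(slow_gait, fast_gait))
```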

Relevance:

100.00%

Publisher:

Abstract:

Breast cancer is the most common type of cancer worldwide and one of the leading causes of death among women; its early detection remains a key element in improving prognosis and survival. Currently, the most reliable and practical method for early detection of breast cancer is mammography, which contributes decisively to the early diagnosis of a disease that, when caught in time, has a very high probability of cure. Microcalcifications are among the main and most frequent findings in a mammogram and are considered an important indicator of breast cancer. However, their detection and classification remain difficult because mammograms show poor contrast between microcalcifications and the surrounding tissue, and factors such as visualization conditions, fatigue or the radiologist's level of experience increase the risk of missing lesions that are present. To reduce this risk it is important to have alternatives such as a second opinion from another specialist or a double reading by the same one; the first raises the cost and both prolong the diagnosis time. This is a strong motivation for the development of decision support systems. This thesis proposes, develops and justifies a system capable of detecting microcalcifications in regions of interest extracted from digitized mammograms, in order to contribute to the early detection of breast cancer. The system is based on digital image processing, pattern recognition and artificial intelligence techniques. Its development takes the following considerations into account:
1. To train and test the proposed system, an image database is created from regions of interest extracted from digitized mammograms.
2. The top-hat transform, an image processing technique based on mathematical morphology operations, is applied in order to improve the contrast between the microcalcifications and the tissue present in the image.
3. A novel algorithm called sub-segmentation is proposed, based on pattern recognition techniques and an unsupervised clustering algorithm, the Possibilistic Fuzzy c-Means (PFCM). The goal is to find the regions corresponding to microcalcifications and distinguish them from healthy tissue. To show its advantages and disadvantages, the algorithm is compared with two algorithms of the same type, k-means and Fuzzy c-Means (FCM). Moreover, this work is the first to use sub-segmentation to detect regions belonging to microcalcifications in mammography images.
4. Finally, a classifier based on an artificial neural network, specifically a multilayer perceptron (MLP), is used to discriminate, in a binary fashion, the patterns built from the grey-level intensities of the original image, distinguishing between microcalcifications and healthy tissue.
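
As an illustration of the contrast-enhancement step, the sketch below applies a white top-hat transform to a synthetic region of interest (assuming a recent scikit-image that accepts the footprint keyword); it is not the thesis' actual pipeline or data:

```python
# Minimal sketch: a white top-hat transform highlights small bright structures
# (microcalcification-like spots) against the surrounding tissue. Synthetic image.
import numpy as np
from skimage.morphology import white_tophat, disk

roi = np.full((64, 64), 100, dtype=np.uint8)   # uniform "tissue" background
roi[30:33, 30:33] = 180                        # small bright spot
roi[10:40, 10:40] += 20                        # larger, smooth structure

enhanced = white_tophat(roi, footprint=disk(5))
print(enhanced.max(), enhanced[31, 31])        # the small spot stands out
```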

Relevance:

100.00%

Publisher:

Abstract:

One of the main difficulties in software development is the lack of an adequate conceptual framework for its study. One proposal is the transformational model, which conceives software development as an iterative process of specification transformation: an initial specification is successively transformed until a final specification is obtained and taken as the program. This basic model can be put into practice in several ways. In particular, the deductive approach takes a logical sentence as the initial specification, and its proof constitutes the transformation process; as a by-product of the proof, a program that satisfies the initial specification is derived. This thesis develops a deductive method for the derivation of functional programs with patterns, written in a Hope-like language. The method uses a many-sorted logic, whose relation to the programming language is studied. The proof schemes needed for deriving functions with patterns, based on the independent proof of several subsentences, are also identified; each subsentence provides a subspecification of one equation of the program to be derived. Our deductive method is inspired by a previous one by Zohar Manna and Richard Waldinger, known as the deductive tableau, which derives programs in a Lisp-like language. The new method modifies their tableau by incorporating sorts and by allowing a specification to be proved with several tableaux; each tableau proves a subspecification and therefore derives one equation of the program. Mechanisms are provided so that derived programs may contain local definitions with patterns as well as anonymous and synonymous variables, and so that derived auxiliary functions do not use variables of their main functions. The thesis is completed with several application examples, a mechanism that makes the method independent of the programming language, and a prototype of an interactive environment for deductive derivation. CR categories and subject descriptors: D.1.1 [Programming techniques]: Functional programming; D.2.10 [Software engineering]: Design - methodologies; F.3.1 [Logics and meanings of programs]: Specifying and verifying and reasoning about programs - logics of programs; F.3.3 [Logics and meanings of programs]: Studies of program constructs - functional constructs; program and recursion schemes; I.2.2 [Artificial intelligence]: Automatic programming - program synthesis; I.2.3 [Artificial intelligence]: Deduction and theorem proving - answer/reason extraction; mathematical induction. General terms: Functional programming, program synthesis, theorem proving. Additional key words and phrases: Functions with patterns, deductive tableau, partial specification, structural induction, decomposition theorem.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a robust approach for recognition of thermal face images based on decision-level fusion of 34 different region classifiers. The region classifiers concentrate on local variations and use singular value decomposition (SVD) for feature extraction. Fusion of the decisions of the region classifiers is done using a majority voting technique. The algorithm is tolerant of false exclusion of thermal information produced by inconsistent distributions of temperature statistics, which generally make the identification process difficult. The algorithm is extensively evaluated on the UGC-JU thermal face database and the Terravic facial infrared database, and the recognition performances are found to be 95.83% and 100%, respectively. A comparative study has also been made with existing works in the literature.
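
The fusion idea can be sketched as follows: singular values of a region serve as its feature vector, and the identities returned by the region classifiers are fused by majority voting. The data and votes below are invented, not drawn from the UGC-JU or Terravic databases:

```python
# Minimal sketch: SVD-based region features and majority voting over region
# classifier decisions. Toy data only.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
region = rng.random((16, 16))                       # one face region (toy data)
features = np.linalg.svd(region, compute_uv=False)  # singular values as features

# Suppose the 34 region classifiers each returned an identity label:
votes = ["subject_07"] * 20 + ["subject_12"] * 9 + ["subject_03"] * 5
identity, count = Counter(votes).most_common(1)[0]
print(features[:3], identity, count)
```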

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the application of language translation technologies for generating bus information in Spanish Sign Language (LSE: Lengua de Signos Española). In this work, two main systems have been developed: the first for translating text messages from information panels, and the second for translating spoken Spanish in natural conversations at the information point of the bus company. Both systems are made up of a natural language translator (for converting a word sentence into a sequence of LSE signs) and a 3D avatar animation module (for playing back the signs). For the natural language translator, two technological approaches have been analyzed and integrated: an example-based strategy and a statistical translator. When translating spoken utterances, it is also necessary to incorporate a speech recognizer for decoding the spoken utterance into a word sequence, prior to the language translation module. This paper includes a detailed description of the field evaluation carried out in this domain. This evaluation was carried out at the customer information office in Madrid, involving both real bus company employees and deaf people, and it includes objective measurements from the system as well as information from questionnaires. In the field evaluation, the whole translation presents an SER (Sign Error Rate) of less than 10% and a BLEU greater than 90%.
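
For reference, the Sign Error Rate can be computed like a word error rate over sign glosses, i.e. the edit distance between the reference sign sequence and the system output, normalised by the reference length; the sketch below uses invented glosses and illustrates only the metric, not the deployed system:

```python
# Minimal sketch of a Sign Error Rate computation (WER-style edit distance).
def sign_error_rate(reference, hypothesis):
    n, m = len(reference), len(hypothesis)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[n][m] / n

reference  = ["BUS", "LINE", "27", "ARRIVE", "10-MINUTES"]   # invented glosses
hypothesis = ["BUS", "LINE", "27", "ARRIVE", "FUTURE", "10-MINUTES"]
print(f"SER = {sign_error_rate(reference, hypothesis):.2f}")  # 0.20
```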

Relevance:

100.00%

Publisher:

Abstract:

Predicting failures in a distributed system from previous events through logistic regression is a standard approach in the literature. This technique is not reliable, though, in two situations: in the prediction of rare events, which do not appear in a high enough proportion for the algorithm to capture them, and in environments with too many variables, where logistic regression tends to overfit; manually selecting a subset of variables to build the model is, in turn, error-prone. In this paper, we solve an industrial research case that presented this situation with a combination of elastic net logistic regression, a method that allows us to automatically select useful variables, a process of cross-validation on top of it, and the application of a rare events prediction technique to reduce computation time. This process provides two layers of cross-validation that automatically obtain the optimal model complexity and the optimal model parameter values, while ensuring that even rare events will be correctly predicted with a low number of training instances. We tested this method against real industrial data, obtaining a total of 60 out of 80 possible models with a 90% average model accuracy.
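
A minimal sketch of the core ingredient (elastic net logistic regression with the regularisation chosen by cross-validation) is given below; class weighting stands in for the paper's rare-events technique, and the data is synthetic rather than the industrial data set:

```python
# Minimal sketch, not the paper's pipeline: elastic net logistic regression with
# cross-validated regularisation, plus class weighting for the rare failure class.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

# Imbalanced synthetic data: the failure class is rare.
X, y = make_classification(n_samples=2000, n_features=50, weights=[0.95, 0.05],
                           random_state=0)

model = LogisticRegressionCV(
    penalty="elasticnet", solver="saga", l1_ratios=[0.1, 0.5, 0.9],
    Cs=10, cv=5, class_weight="balanced", max_iter=5000, scoring="roc_auc",
)
model.fit(X, y)
print("selected C:", model.C_[0], "selected l1_ratio:", model.l1_ratio_[0])
print("non-zero coefficients:", (model.coef_ != 0).sum())
```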