Abstract:
Two complementary benchmarks have been proposed so far for the evaluation and continuous improvement of RDF stream processors: SRBench and LSBench. They focus on different features of the evaluated systems, including coverage of the streaming extensions of SPARQL supported by each processor, query processing throughput, and an early analysis of query evaluation correctness based on comparing the results obtained by different processors for a set of queries. However, neither of them has analysed the operational semantics of these processors in order to assess the correctness of query evaluation results. In this paper, we propose a characterization of the operational semantics of RDF stream processors, adapting well-known models from the stream processing engine community: CQL and SECRET. Through this formalization, we address correctness in RDF stream processor benchmarks, making it possible to determine the multiple answers that systems should provide. Finally, we present CSRBench, an extension of SRBench that verifies query result correctness using an automatic method.
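For readers unfamiliar with the window semantics at issue, the hypothetical Python sketch below (plain code, not tied to any of the benchmarked engines) shows how two SECRET-style report policies over the same time-based sliding window yield different, equally defensible answer sequences, which is why a correctness benchmark must accept multiple answers per query.

    # Hypothetical sketch: two SECRET-style report policies over one window.
    stream = [(1, "s1"), (2, "s2"), (4, "s3"), (4, "s4"), (7, "s5")]  # (time, item)
    RANGE, SLIDE = 3, 2  # window covers 3 time units, slides every 2

    def contents(t_close):
        """Items inside the window that closes at time t_close."""
        return [x for t, x in stream if t_close - RANGE < t <= t_close]

    # Report policy A: emit results periodically, at every window close.
    periodic = [(t, contents(t)) for t in range(RANGE, 9, SLIDE)]

    # Report policy B: emit results only when the window content changes.
    on_change, last = [], None
    for t in range(1, 9):
        c = contents(t)
        if c and c != last:
            on_change.append((t, c))
            last = c

    print("window-close reports:", periodic)
    print("content-change reports:", on_change)

Both traces are consistent with the same declarative query; they differ only in the engine's reporting behaviour, which is exactly the dimension that SECRET makes explicit.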
Abstract:
Networks of Evolutionary Processors (NEP) were proposed in [Mitrana et al., 2001] as a computational model inspired by the evolution of cell populations, capturing some properties of evolving cell communities at the syntactic level. In this model, cells are represented by words encoding their DNA sequences. Informally, at any moment in time the evolutionary system is described by a collection of words, each of which represents one cell. Cells belong to species, and their community evolves through biological processes such as mutation and cell division, defined as operations on words; these processes mirror the natural process of evolution and expose an intrinsic feature of nature: parallelism. Only those cells accepted as survivors (i.e. as "correct") are represented by words in a given set called the genotype space of the species. Formally, NEP is a parallel and distributed symbolic-processing architecture inspired by the Connection Machine [Hillis, 1981], the Flow Logic Paradigm [Errico and Jesshope, 1994] and the Networks of Parallel Language Processors (RPPL) [Csuhaj-Varju and Salomaa, 1997]. Since the model was proposed, several extensions and variants have appeared, giving rise to a family of models named Networks of Bio-inspired Processors (NBP) [Mitrana et al., 2012b]. A considerable number of works over recent years have established the computational power of the NBP family; in general, these models are computationally complete, universal and efficient [Manea et al., 2007], [Manea et al., 2010b], [Mitrana and Martín-Vide, 2005]. The NEP model can therefore be said to have reached considerable maturity. Nevertheless, although the model is biologically inspired, its goals remain motivated by Formal Language Theory and Computer Science: biological aspects have been addressed only from a qualitative perspective, and the approach to biological reality is merely syntactic. To address these aspects, the NEP model needs a broader perspective that incorporates the interplay of qualitative and quantitative aspects. The contribution of this Thesis can be regarded as a step forward into a new stage of NEPs, in which the quantitative character of the model is of primary interest and the problem domain of interest can visibly shift between computer science and biological simulation/modelling, and vice versa. The computational framework proposed in this Thesis extends the NEP model and defines an architecture inspired by the functional blocks of cellular signalling, aimed at solving complex computational problems and at modelling cellular phenomena from a discrete perspective. In particular, two extensions are proposed: (1) Transducers based on Networks of Evolutionary Processors (NEPT), and (2) Parametrized Networks of Polarized Evolutionary Processors (PNPEP). It is formally proved that both NEPT and PNPEP preserve the properties and computational power of NEP. Several simulations of processes related to cellular signalling are addressed syntactically and computationally in order to show the applicability and suitability of these two extensions.
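To make the word-level operations concrete, the following hypothetical toy sketch (alphabet, rules and sets invented for illustration, not taken from the thesis) runs one evolution-plus-filtering step of a single evolutionary processor in Python:

    # Hypothetical toy sketch of one evolutionary-processor step: apply point
    # mutations (symbol substitutions) to a population of words, then keep
    # only the words accepted by the genotype space (the survivor filter).
    rules = [("A", "T"), ("G", "C")]  # substitution rules a -> b

    def mutate(word):
        """All words obtained by applying one rule at one position."""
        out = set()
        for a, b in rules:
            for i, ch in enumerate(word):
                if ch == a:
                    out.add(word[:i] + b + word[i + 1:])
        return out or {word}

    def step(population, genotype_space):
        evolved = set().union(*(mutate(w) for w in population))
        return {w for w in evolved if w in genotype_space}  # survivors only

    population = {"GATA", "AAGG"}
    genotype_space = {"GATT", "GACA", "TAGG", "CATA"}
    print(step(population, genotype_space))  # the surviving mutants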
Abstract:
Programming languages are the medium programmers use to tell computers what we want them to do. Starting from assembly language, which translates instructions one by one for the computer, up to high-level languages, the goal has been to develop languages closer to the way humans think and express themselves. Logic programming languages such as Prolog use first-order logic, so that programmers can state the premises of the problem to be solved without worrying about how the problem will be solved. Solving the problem amounts to finding a deduction of the goal from the premises, which corresponds to what we understand as the execution of a program. Ciao is an implementation of Prolog (http://www.ciao-lang.org) and uses SLD resolution, which traverses the search tree depth-first; this can send execution down an infinite search branch (an infinite loop) without ever producing answers. Since Ciao is a modular system, it supports extensions implementing alternative resolution strategies such as tabling (OLDT). Tabling is an alternative method based on memorizing the calls made and their answers, so that calls are not repeated and answers can be reused without recomputation. Some programs that loop forever under SLD produce all their answers and terminate under tabling. The tabling package is an implementation of tabling via the CHAT algorithm. This implementation is a beta version with no memory manager. Memory management matters greatly in the tabling package, since resolution with tabling reduces computation time (by not repeating calls) at the price of higher memory requirements (to store the calls and answers). The objective of this work is therefore to implement a memory-management mechanism in Ciao with the tabling package loaded. To this end, the following were implemented: an error-capture mechanism that detects when the computer runs out of memory and triggers re-initialization of the system; a procedure that adjusts the tabling-package pointers into the WAM after some of the WAM memory areas have been reallocated; and a memory manager for the tabling package that detects when its memory areas must be enlarged, requests more memory, and adjusts the pointers. To help readers unfamiliar with this topic, we describe the data that Ciao and the tabling package allocate in the dynamic memory areas we want to manage. The test cases developed to evaluate the memory-manager implementation show that having a dynamic memory manager allows programs to run in a larger number of cases, and that the memory-management policy affects program execution speed.
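As background on why tabling terminates where depth-first SLD resolution loops, here is a hypothetical Python analogy (memoization of calls; not Ciao code and not the CHAT algorithm):

    # Hypothetical analogy to tabling: reachability in a cyclic graph.
    graph = {"a": ["b"], "b": ["a", "c"], "c": []}

    def reachable_naive(x, y):
        """SLD-like depth-first search: on the cycle a -> b -> a this loops
        forever when y is unreachable, e.g. reachable_naive('a', 'd')."""
        return any(z == y or reachable_naive(z, y) for z in graph[x])

    def reachable_tabled(x, y, table=None):
        """Tabling analogue: record calls already made and cut off repeats,
        so the query always terminates."""
        table = set() if table is None else table
        if x in table:          # call already in the table: no recomputation
            return False
        table.add(x)
        return any(z == y or reachable_tabled(z, y, table) for z in graph[x])

    print(reachable_tabled("a", "c"))  # True
    print(reachable_tabled("a", "d"))  # False; the naive version never returns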
Abstract:
Hybrid stepper motors are widely used in open-loop position applications. They are the choice of actuation for the collimators in the Large Hadron Collider, the largest particle accelerator, at CERN. In this case the positioning requirements and the highly radioactive operating environment are unique. The latter both forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring the prevention of step loss in the motors and maintenance to be foreseen in case of mechanical degradation. To make this possible, an approach is proposed for the application of an Extended Kalman Filter to a sensorless stepper motor drive when the motor is separated from its drive by long cables. When long cables and high-frequency pulse-width-modulated control voltage signals are used together, the electrical signals differ greatly between the motor side and the drive side of the cable. Since in the considered case only drive-side data is available, it is necessary to estimate the motor-side signals. Modelling the entire cable and motor system in an Extended Kalman Filter is too computationally intensive for standard embedded real-time platforms. It is therefore proposed to divide the problem into an Extended Kalman Filter based only on the motor model, plus separate motor-side signal estimators, a combination that is less computationally demanding. The effectiveness of this approach is shown in simulation. Its validity is then demonstrated experimentally via implementation in a DSP-based drive. A testbench to assess its performance when driving an axis of a Large Hadron Collider collimator is presented along with the results achieved. It is shown that the proposed method achieves position and load-torque estimates that allow step loss to be detected and mechanical degradation to be evaluated without the need for physical sensors. Such estimation algorithms require a precise model of the motor, but the standard electrical model used for hybrid stepper motors is limited when currents high enough to saturate the magnetic circuit are present. New model extensions are proposed to obtain a more precise model of the motor independently of the current level, whilst maintaining a low computational cost. It is shown that a significant improvement in model fit is achieved with these extensions, and their computational performance is compared to weigh model improvement against computational cost. The applicability of the proposed model extensions is demonstrated via their use in an Extended Kalman Filter running in real time for closed-loop current control and mechanical state estimation. An additional problem arises from the use of stepper motors: the mechanics of the collimators can wear due to the abrupt motion and torque profiles applied when the motors are used in the standard way, i.e. stepping in open loop. Closed-loop position control, more specifically Field Oriented Control, would allow smoother profiles, gentler on the mechanics, to be applied, but requires position feedback. As mentioned already, the use of sensors in radioactive environments is very limited for reliability reasons.
Sensorless control is a known option, but when the speed is very low or zero, as is the case most of the time for the motors used in the LHC collimators, the loss of observability prevents its use. In order to allow the use of position sensors without reducing the long-term reliability of the whole system, the possibility of switching between closed and open loop is proposed and validated, allowing closed-loop control when the position sensors function correctly and open-loop control when a sensor fails. A different approach to dealing with the switched drive working over long cables is also presented. Switched-mode stepper motor drives tend to perform poorly, or even fail completely, when the motor is fed through a long cable, due to the high oscillations in the drive-side current. The design of a stepper motor output filter which solves this problem is thus proposed. A two-stage filter, one stage devoted to the differential mode and the other to the common mode, is designed and validated experimentally. With this filter the drive performance is greatly improved, achieving a positioning repeatability even better than with the drive working without a long cable; the radiated emissions are reduced and the overvoltages at the motor terminals are eliminated.
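For readers unfamiliar with the estimator at the heart of this work, one generic Extended Kalman Filter iteration is sketched below in hypothetical numpy code (generic state model f, measurement model h and their Jacobians; the thesis' motor and cable models are not reproduced here):

    import numpy as np

    def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
        """One generic Extended Kalman Filter iteration (hypothetical sketch;
        f/h would be the stepper-motor state and measurement models)."""
        # Predict: propagate state and covariance through the nonlinear model.
        x_pred = f(x, u)
        F = F_jac(x, u)
        P_pred = F @ P @ F.T + Q
        # Update: correct the prediction with the measured/estimated signals.
        H = H_jac(x_pred)
        S = H @ P_pred @ H.T + R             # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new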
Abstract:
Due to the continuous development of mobile telecommunications, mobile networks have undergone progressive changes to adapt to new mobile technologies that offer better service. The change in the mobile network is not driven only by the development of new mobile generations: the network also adapts to user demand, which has kept growing over recent years. Operators therefore have to enlarge their networks by installing nodes that support both the new technologies and the previous ones, and, besides creating new nodes, they must also upgrade their old nodes so that they support a larger number of users. Nowadays in Spain new nodes with 2G, 3G and 4G are being installed, and carrier extensions for 3G are being carried out as well. This project is divided into four parts. The first focuses on explaining the process to follow when installing a new urban node; the process is very similar whichever technology is installed, and in this project the steps for installing a node with 2G and 3G are detailed. The second part explains the measurements to be carried out to verify the proper performance of a rural node, and compares them with the measurements of an urban area using captures from a specific node. The third part studies indoor coverage and the solutions usually adopted to improve it in buildings, warehouses and shopping centres. The final part presents the conclusions of the project and future work, with an overview of possible studies related to this project and of how the network may be shaped in a few years.
Abstract:
Examples of global solutions of the shell equations are presented, such as those based on the well-known Levy series expansion. Some natural extensions of the Levy method are also discussed, as well as the inherent limitations of these methods concerning the shell model assumptions, boundary conditions and geometric regularity. Finally, some additional open design questions are noted, mainly related to the simultaneous use in analysis of these global techniques and local methods (like finite elements) to find the optimal shell shape and to determine the reinforcement layout.
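As background on the expansion mentioned, Levy-type solutions take a separated-series form. It is written here for the rectangular plate analogue, as an illustrative assumption (the abstract does not give the shell formulation):

    % Levy-type series: sine expansion in x, satisfying simply supported
    % conditions at x = 0 and x = a; each coefficient function f_m(y)
    % solves an ordinary differential equation obtained by substitution
    % into the governing equation.
    w(x,y) = \sum_{m=1}^{\infty} f_m(y)\,\sin\frac{m\pi x}{a}

This structure also makes the noted limitations visible: the expansion presupposes specific boundary conditions along two opposite edges and a geometrically regular domain.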
Abstract:
UML 1.4 is widely accepted as the standard for representing the various software artifacts generated by a development process. For this reason, there have been attempts to use this language to represent the software architecture of systems as well. Unfortunately, these attempts have ended in representations (boxes and lines) already criticized by the software architecture community. Recently, OMG has published a draft that will constitute the future UML 2.0 specification. In this paper we compare the capacities of UML 1.4 and UML 2.0 to describe software architectures. In particular, we study extensions of both UML versions to describe the static view of the C3 architectural style (a simplification of the C2 style). One of the results of this study is the set of difficulties found when using the UML 2.0 metamodel to describe the concept of connector in a software architecture.
Abstract:
The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. In robotics a similar role has been played by modules that fit point cloud data to the superquadric family of shapes and its various extensions. We developed a model of shape tuning in AIP based on cosine tuning to superquadric parameters. However, the model did not fit the data well, and we also found that it was difficult to accurately reproduce these parameters using neural networks with the appropriate inputs (modelled on the caudal intraparietal area, CIP). The latter difficulty was related to the fact that there are large discontinuities in the superquadric parameters between very similar shapes. To address these limitations we adopted an alternative shape parameterization based on an Isomap nonlinear dimension reduction. The Isomap was built using gradients and curvatures of object surface depth. This alternative parameterization was low-dimensional (like superquadrics), but data-driven (similar to an alternative clustering approach that is also sometimes used in robotics) and lacked large discontinuities. Isomaps with 16 or more dimensions reproduced the AIP data fairly well. Moreover, we found that the Isomap parameters could be approximated from CIP-like input much more accurately than the superquadric parameters. We conclude that Isomaps, or perhaps alternative dimension reductions of CIP signals, provide a promising model of AIP tuning. We have now started to integrate our model with a robot hand, to explore the efficacy of Isomap shape reductions in grasp planning. Future work will consider dynamics of spike responses and integration with related visual and motor area models.
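As context for the parameterization described, the same kind of nonlinear reduction can be sketched with off-the-shelf tools; below is a minimal, hypothetical example using scikit-learn's Isomap (the construction of features from depth gradients and curvatures is replaced by placeholder random data):

    import numpy as np
    from sklearn.manifold import Isomap

    # Hypothetical placeholder for per-object surface features: in the paper
    # these are gradients and curvatures of object surface depth.
    rng = np.random.default_rng(0)
    surface_features = rng.normal(size=(200, 1024))  # 200 shapes, 1024 features

    # Nonlinear reduction to a low-dimensional, data-driven shape space.
    iso = Isomap(n_neighbors=10, n_components=16)  # 16 dimensions, per the text
    shape_params = iso.fit_transform(surface_features)
    print(shape_params.shape)  # (200, 16)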
Abstract:
Monte Carlo (MC) methods are widely used in signal processing, machine learning and communications for statistical inference and stochastic optimization. A well-known class of MC methods is composed of importance sampling and its adaptive extensions (e.g., population Monte Carlo). In this work, we introduce an adaptive importance sampler using a population of proposal densities. The novel algorithm provides a global estimation of the variables of interest iteratively, using all the samples generated. The cloud of proposals is adapted by learning from a subset of previously generated samples, in such a way that local features of the target density can be better taken into account compared to single global adaptation procedures. Numerical results show the advantages of the proposed sampling scheme in terms of mean absolute error and robustness to initialization.
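As background, the sketch below gives a hypothetical, minimal adaptive importance sampler with a population of Gaussian proposals (a generic population-Monte-Carlo-style scheme; the specific adaptation rule of the paper's algorithm is not reproduced):

    import numpy as np

    rng = np.random.default_rng(1)

    def target_logpdf(x):
        """Unnormalized log-density of the target, here N(3, 0.5^2)."""
        return -0.5 * ((x - 3.0) / 0.5) ** 2

    means, std, N = np.array([-5.0, 0.0, 5.0]), 2.0, 1000  # proposal population

    for _ in range(20):
        # Sample from the mixture of proposals.
        idx = rng.integers(len(means), size=N)
        x = rng.normal(means[idx], std)
        # Importance weights: target over mixture proposal density.
        mix = np.mean([np.exp(-0.5 * ((x - m) / std) ** 2) for m in means], axis=0)
        mix /= std * np.sqrt(2.0 * np.pi)
        w = np.exp(target_logpdf(x)) / mix
        w /= w.sum()
        est = np.sum(w * x)  # global self-normalized estimate of the mean
        # Adapt: resample new proposal means from the weighted sample cloud.
        means = x[rng.choice(N, size=len(means), p=w)]

    print("estimated target mean ~", est)  # approaches 3.0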
Abstract:
Workflows are increasingly used to manage and share scientific computations and methods. Workflow tools can be used to design, validate, execute and visualize scientific workflows and their execution results; other tools manage workflow libraries or mine their contents. There has been a lot of recent work on workflow system integration as well as on common workflow interlinguas, but interoperability among workflow systems remains a challenge. Ideally, these tools would form a workflow ecosystem in which it is possible to create a workflow with one tool, execute it with another, visualize it with yet another, and use a further tool to mine a repository of such workflows or their executions. In this paper, we describe our approach to creating a workflow ecosystem through the use of standard models for provenance (OPM and W3C PROV) and extensions (P-PLAN and OPMW) to represent workflows. The ecosystem integrates different workflow tools with diverse functions (workflow generation, execution, browsing, mining, and visualization) created by a variety of research groups. This is, to our knowledge, the first time that such a variety of workflow systems and functions has been integrated.
Abstract:
Rights and conditions present in licenses for software, data and general works are expressed with the Open Digital Rights Language (ODRL) 2.0 vocabulary and extensions thereof. The dataset contains licenses identified by a dereferenceable URI, which are served with content negotiation, providing a double representation for humans and machines alike. This feature enables generalized machine-to-machine commerce if widely adopted.
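As an illustration of the content negotiation described, a client can request either representation from the same URI by varying the Accept header (hypothetical URI; the actual license URIs are those published in the dataset):

    import requests

    license_uri = "http://example.org/licenses/cc-by-4.0"  # hypothetical URI

    # Machine-readable representation: ask for RDF (e.g. Turtle).
    rdf = requests.get(license_uri, headers={"Accept": "text/turtle"})

    # Human-readable representation: ask for HTML from the same URI.
    html = requests.get(license_uri, headers={"Accept": "text/html"})

    print(rdf.headers.get("Content-Type"), html.headers.get("Content-Type"))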
Abstract:
The family of Boosting algorithms is a type of classification and regression approach that has proven very effective in Computer Vision problems, such as detection, tracking and recognition of faces, people, deformable objects and actions. The first and most popular Boosting algorithm, AdaBoost, was introduced in the context of binary classification. Since then, many works have extended it to more general domains: multi-class, multi-label, cost-sensitive, etc. Our interest centres on extending AdaBoost to multi-class classification, as a first step towards further generalizations. In this dissertation we propose two Boosting algorithms for multi-class classification based on new generalizations of the concept of margin. The first of them, PIBoost, is conceived to tackle the multi-class problem by solving many binary sub-problems. We use a vectorial codification to represent class labels and a multi-class exponential loss function to evaluate classifier responses. This representation produces a set of margin values that provide a range of penalties for failures and rewards for successes. The stagewise optimization of this model yields an asymmetric Boosting procedure whose costs depend on the number of classes separated by each weak learner; in this way the Boosting procedure takes class imbalance into account when building the ensemble. The resulting algorithm is a well-grounded method that canonically extends the original AdaBoost. The second proposed algorithm, BAdaCost, is conceived for multi-class problems endowed with a cost matrix. Motivated by the few cost-sensitive extensions of AdaBoost to the multi-class field, we propose a new margin that, in turn, yields a loss function appropriate for evaluating costs. Since BAdaCost generalizes the SAMME, Cost-Sensitive AdaBoost and PIBoost algorithms, we consider it the most canonical extension of AdaBoost to this kind of problem. We additionally suggest a simple procedure to compute cost matrices that improve the performance of Boosting on standard and imbalanced problems. A set of experiments demonstrates the effectiveness of both methods against other relevant multi-class Boosting algorithms in their respective areas, using benchmark data sets from the Machine Learning community, first for minimizing classification errors and then for minimizing costs. In addition, we successfully applied BAdaCost to a segmentation task, a particular problem in the presence of imbalanced data. We conclude by justifying the future prospects of the presented framework, owing to its applicability and theoretical flexibility.
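As a point of reference for the algorithms discussed, below is a hypothetical, minimal sketch of SAMME, the multi-class AdaBoost scheme that BAdaCost generalizes (toy code, not the authors' implementation):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def samme_fit(X, y, n_rounds=50):
        """Toy SAMME: multi-class AdaBoost with decision stumps."""
        n, K = len(y), len(np.unique(y))
        w = np.full(n, 1.0 / n)                 # uniform sample weights
        ensemble = []
        for _ in range(n_rounds):
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            miss = stump.predict(X) != y
            err = w[miss].sum() / w.sum()
            if err >= 1.0 - 1.0 / K:            # must beat random guessing
                break
            alpha = np.log((1 - err) / max(err, 1e-12)) + np.log(K - 1)
            ensemble.append((alpha, stump))
            w *= np.exp(alpha * miss)           # up-weight the mistakes
            w /= w.sum()
        return ensemble

    def samme_predict(ensemble, X, classes):
        """Weighted vote over the weak learners."""
        votes = np.zeros((len(X), len(classes)))
        for alpha, stump in ensemble:
            pred = stump.predict(X)
            votes += alpha * (pred[:, None] == classes[None, :])
        return classes[votes.argmax(axis=1)]

The log(K - 1) term in alpha is what distinguishes SAMME from binary AdaBoost: it keeps the weight update meaningful when a weak learner only needs to beat 1/K accuracy rather than 1/2.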
Abstract:
In most of the decision-making problems we face, the best choice is not evident because of their complexity. This complexity is associated with the existence of multiple conflicting objectives and with the fact that, in many cases, only incomplete or imprecise information about the decision-model parameters is available. Moreover, the decision-making process may be performed by a group, so the model must incorporate the individual preferences of each decision-maker (DM) and then aggregate them to reach a final consensus, which makes the decision process even harder. Decision Analysis (DA) is a systematic and logical methodology for structuring and simplifying the decision-making task. It uses existing information, collected data, models and professional judgement to quantify the probability of the alternatives' values or impacts, and Utility Theory to quantify the DMs' preferences over the possible values of the alternatives. This PhD thesis focuses on developing extensions of the multicriteria additive utility model for group decision-making with veto, based on DA and on the concept of dominance intensity, which makes it possible to exploit the incomplete or imprecise information associated with the model parameters. We consider the possibility that the relative importance of the criteria for the DMs is represented by value intervals, by ordinal information, or by trapezoidal fuzzy numbers. Additionally, we consider that DMs are entitled to veto values of the criteria under consideration, but that only a subset of the vetoes is effective, the remainder being taken into account only partially.
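For reference, the multicriteria additive utility model that these extensions build on has the standard form below (notation assumed for illustration, not taken from the thesis):

    % Additive utility of alternative a over n criteria: x_i(a) is the
    % performance of a on criterion i, u_i the per-criterion utility
    % function, and w_i the weights elicited from the decision-makers.
    u(a) = \sum_{i=1}^{n} w_i\, u_i\bigl(x_i(a)\bigr),
    \qquad w_i \ge 0, \qquad \sum_{i=1}^{n} w_i = 1

The thesis studies the case where the weights w_i are only imprecisely known (value intervals, ordinal information or trapezoidal fuzzy numbers) and where some criteria additionally carry veto thresholds.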
Abstract:
The In Vessel Viewing System (IVVS) will be one of the essential machine diagnostic systems at ITER, providing information about the status of in-vessel and plasma-facing components and evaluating the dust inside the Vacuum Vessel. The current design consists of six scanning probes and their deployment systems, which are placed in dedicated ports at the divertor level. These units are located in resident guiding tubes 10 m long, which allow the IVVS probes to go from their storage location to the scanning position by means of a simple straight translation. Moreover, each resident tube is supported inside the corresponding Vacuum Vessel and Cryostat port extensions, which are part of the primary confinement barrier. As the Vacuum Vessel and the Cryostat will move with respect to each other during operation (especially during baking) and during incidents and accidents (disruptions, vertical displacement events, seismic events), the structural integrity of the resident tube and the surrounding vacuum boundaries would be compromised if the required flexibility and supports were not appropriately assured. This paper focuses on the integration of the present design of the IVVS into the Vacuum Vessel and Cryostat environment. It presents the strategy adopted to withstand all the main interfacing loads without damaging the confinement barriers, together with the corresponding supporting analysis.
Abstract:
The vast built legacy of historical adobe constructions spread across the Iberian Peninsula reaches its fullest expression in the region of Aveiro (Portugal), both quantitatively, accounting for some 30% of the region's existing buildings, and typologically, comprising a wide range of construction types (residential buildings, military buildings, churches, schools, theatres, industrial buildings, etc.), many of recognized historical, architectural and heritage value. After adobe masonry gradually fell out of use as a construction technique during the second half of the 20th century, the necessary conservation and/or rehabilitation of the remaining buildings was neglected. As a consequence of this generalized and sustained passivity, a large share of these constructions today exhibits a pronounced state of deterioration and damage, which has left many of them abandoned or in ruins. As a rule, the solution adopted for buildings in this condition has been demolition. In recent years, however, the various agents involved in decision-making about these buildings have shown a growing interest in preserving them and promoting their rehabilitation. It has thus become important to develop intervention strategies that extend the useful life of these structures: on the one hand, methodologies for analysing their safety conditions, making it possible to determine the measures needed to secure them against existing actions and, where applicable, against new demands due to changes of use or extensions; on the other, responses to the main damage mechanisms to which they are subject. Because these structures were built with techniques and materials that have been little studied and have meanwhile been abandoned, a prior multidisciplinary study and investigation is also needed to characterize and understand them, and to diagnose the main pathological agents and processes driving their deterioration. In this context, the study carried out aimed to broaden the knowledge of the properties and strength parameters of historical adobe masonry, and to analyse the main damage mechanisms associated with it, in the light of the current state of these constructions, together with their repercussion on the load-bearing behaviour of the masonry, with special emphasis on the influence of water. The goal was thus to build a database of results that can, on the one hand, support rehabilitation and/or consolidation interventions on these structures, enabling a quick evaluation of the safety levels of adobe walls from expedite data and calculation resources, and, on the other, support the study of solutions for improving or correcting deficiencies in their structural behaviour.