43 resultados para Execute

em Universidad Politécnica de Madrid


Relevância:

10.00%

Publicador:

Resumo:

A diferencia de otros parámetros, el efecto de la existencia de huecos en la aparición y desarrollo de los procesos de fisuración en los paños de fábrica no ha sido considerado por las distintas normativas existentes en la actualidad. En nuestros días se emplea una variada gama de tipologías de elementos de cerramiento para realizar las particiones en las obras de edificación, cada una de ellas con características mecánicas diferentes y distinta metodología de ejecución, siendo de aplicación la misma normativa relativa al cálculo y control de las deformaciones. Tal y como expresamos en el Capítulo 1, en el que se analiza el Estado del Conocimiento, los códigos actuales determinan de forma analítica la flecha probable que se alcanza en los elementos portantes estructurales bajo diferentes condiciones de servicio. Las distintas propuestas que existen respecto a la limitación de la flecha activa, una vez realizado el cálculo de las deformaciones, bien por el método de Branson o mediante los métodos de integración de curvaturas, no contemplan como parámetro a considerar en la limitación de la flecha activa la existencia y tipología de huecos en un paño de fábrica soportado por la estructura. Sin embargo, se intuye, y podríamos afirmar, que una discontinuidad en cualquier elemento sometido a esfuerzos tiene influencia en el estado tensional del mismo. Si consideramos que, de forma general, los procesos de fisuración se producen al superarse la resistencia a tracción del material constitutivo de la fábrica soportada, es claro que la variación tensional inducida por la existencia de huecos ha de tener cierta influencia en la aparición y desarrollo de los procesos de fisuración en los elementos de partición o de cerramiento de las obras de edificación.
En los Capítulos 2 y 3, tras justificar la necesidad de realizar una investigación encaminada a confirmar la relación entre la existencia de huecos en un paño de fábrica y el desarrollo de procesos de fisuración en el mismo, se establece este aspecto como principal Objetivo y se expone la Metodología para su análisis. Hemos definido y justificado en el Capítulo 4 el modelo de cálculo que hemos utilizado para determinar las deformaciones y los procesos de fisuración que se producen en los casos a analizar, en los que se han considerado como variables: los valores de la luz del modelo, el estado de fisuración de los elementos portantes, los efectos de la fluencia y el porcentaje de transmisión de cargas desde el forjado superior al paño de fábrica en estudio. Además se adoptan dos valores de la resistencia a tracción de las fábricas, 0.75 MPa y 1.00 MPa. La capacidad de representar la fisuración, así como la robustez y fiabilidad, han condicionado y justificado la selección del programa de elementos finitos que se ha utilizado para realizar los cálculos. Aprovechando la posibilidad de reproducir de forma ajustada las características introducidas para cada parámetro, hemos planteado y realizado un análisis paramétrico que considera 360 cálculos iterativos, de cuya exposición es objeto el Capítulo 5, para obtener una serie representativa de resultados sobre los que se realizará el análisis posterior. En el Capítulo 6, de análisis de los resultados, hemos estudiado los valores de deformaciones y estados de fisuración obtenidos para los casos analizados. Hemos determinado la influencia que tiene la presencia de huecos en la aparición de los procesos de fisuración y en las deformaciones que se producen en las diferentes configuraciones estructurales.
Las conclusiones que hemos obtenido tras analizar los resultados, incluidas en el Capítulo 7, no dejan lugar a dudas: la presencia, la posición y la tipología de los huecos en los elementos de fábricas soportadas sobre estructuras deformables son factores determinantes respecto de la fisuración y pueden tener influencia en las deformaciones que constituyen la flecha activa del elemento, lo que obliga a plantear una serie de recomendaciones frente al proyecto y frente a la reglamentación técnica. La investigación desarrollada para esta Tesis Doctoral y la metodología aplicada para su desarrollo abren nuevas líneas de estudio, que se esbozan en el Capítulo 8, para el análisis de otros aspectos que no han sido cubiertos por esta investigación, a fin de mejorar las limitaciones que deberían establecerse para los Estados Límite de Servicio de Deformaciones correspondientes a las estructuras de edificación. SUMMARY. Unlike other parameters, the effect of the existence of voids on the appearance and development of cracking processes in masonry walls has not been considered by current codes. Nowadays, a wide variety of enclosure element types is used to execute partitions in building works, each with different mechanical characteristics and a different execution methodology, while the same rules concerning deflection calculation and control apply to all of them. As indicated in Chapter 1, which analyzes the State of the Art, current codes analytically determine the deflection likely to be reached in structural supporting elements under different service conditions. The existing proposals for limiting the live deflection, once the deformations have been calculated, either by Branson's method or by curvature integration methods, do not consider the existence and typology of voids in a masonry wall supported by the structure as a parameter in that limitation. However, it can be intuited, and indeed affirmed, that a discontinuity in any element under stress influences its stress state.
If we consider that, in general, cracking processes occur when the tensile strength of the masonry material is exceeded, it is clear that the stress variation induced by the existence of voids must have some influence on the emergence and development of cracking processes in the enclosure elements of building works. In Chapters 2 and 3, after justifying the need for an investigation to confirm the relationship between the existence of voids in a masonry wall and the development of cracking processes in it, this aspect is set as the main Objective and the analysis Methodology is presented. In Chapter 4 we have defined and justified the calculation model used to determine the deformations and cracking processes that occur in the cases analyzed, in which the following were considered as variables: the model span values, the cracking state of the bearing elements, creep effects and the percentage of load transmitted from the upper floor to the masonry wall under study. In addition, two masonry tensile strength values, 0.75 MPa and 1.00 MPa, have been considered. The ability to represent cracking, together with robustness and reliability, determined and justified the selection of the finite element program used for the calculations. Taking advantage of the possibility of accurately reproducing the characteristics introduced for each parameter, we have performed a parametric analysis comprising 360 iterative calculations, whose results are presented in Chapter 5, in order to obtain a representative set of results to be analyzed later. In Chapter 6, devoted to the analysis of results, we studied the deformation values and cracking states obtained for the cases analyzed. We determined the influence of the presence of voids on the occurrence of cracking processes and deformations in the different structural configurations.
The conclusions we have obtained after analyzing the results, included in Chapter 7, leave no doubt: the presence, position and type of voids in masonry elements supported on deformable structures are determining factors for cracking and can influence the deformations that constitute the element's live deflection, which makes it necessary to put forward a number of recommendations concerning design and technical regulation. The research undertaken for this Doctoral Thesis and the methodology applied in its development open up new lines of study, outlined in Chapter 8, for the analysis of other aspects not covered by this research, in order to improve the limits that should be established for the Deflection Serviceability Limit States of building structures.

Relevância:

10.00%

Publicador:

Resumo:

Decreasing accidents in highway and urban environments is the main motivation for the research and development of driving assistance systems, also called ADAS (Advanced Driver Assistance Systems). In recent years, many of these systems have appeared in commercial vehicles: ABS, Cruise Control (CC), parking assistance and warning systems (including GPS), among others. However, the implementation of driving assistance systems acting on the steering wheel is more limited, because of their complexity and sensitivity. This paper focuses on the development, testing and implementation of a driver assistance system for controlling the steering wheel in curves. The system is divided into two levels: an inner control loop that executes the position and speed targets, softening the action on the steering wheel, and an outer control loop (based on fuzzy logic) that sends the reference to the inner loop according to the environment and vehicle conditions. Tests have been carried out on different curves and at different speeds. The system has been validated in a commercial vehicle with satisfactory results.
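
The two-level architecture described in the abstract can be sketched schematically. The following Python fragment is an illustrative toy, not the paper's controller: the outer-loop heuristic stands in for the fuzzy rule base, and all gains and limits (`kp`, the 45-degree cap, the speed scaling) are assumed values chosen only for the example.

```python
# Sketch of a two-level steering architecture: outer loop produces a
# steering-angle reference, inner loop tracks it smoothly.
# All numeric parameters are hypothetical, for illustration only.

def outer_loop(curve_radius_m: float, speed_kmh: float) -> float:
    """Outer loop: map road curvature and speed to a steering-angle
    reference (degrees). A crude stand-in for a fuzzy controller."""
    curvature = 1.0 / curve_radius_m
    # Heuristic: sharper curves demand larger angles; higher speeds reduce them.
    return min(45.0, 500.0 * curvature * (1.0 - min(speed_kmh, 100.0) / 200.0))

def inner_loop(reference_deg: float, current_deg: float, kp: float = 0.4) -> float:
    """Inner loop: proportional action that moves gradually toward the
    reference, softening the action on the steering wheel."""
    return current_deg + kp * (reference_deg - current_deg)

# Simulate approaching a curve of 100 m radius at 60 km/h.
angle = 0.0
for _ in range(20):
    ref = outer_loop(100.0, 60.0)
    angle = inner_loop(ref, angle)
print(round(angle, 2))  # converges toward the reference of 3.5 degrees
```

The separation mirrors the paper's design choice: the inner loop only needs to track a set-point, so it can run fast and simply, while the slower outer loop encodes driving knowledge about the environment.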

Relevância:

10.00%

Publicador:

Resumo:

En las últimas décadas se han producido importantes avances tecnológicos, que han propiciado un crecimiento importante de las Redes Inalámbricas de Sensores (RIS), conocidas en inglés como Wireless Sensor Networks (WSN). Estas redes están formadas por un conjunto de pequeños nodos, también conocidos como motas, compuestos por diversos tipos de sensores. Las Redes Inalámbricas de Sensores pueden resultar muy útiles en entornos donde el despliegue de redes cableadas, formadas por ordenadores, encaminadores u otros dispositivos de red, no sea posible. Sin embargo, este tipo de redes presentan una serie de carencias o problemas que dificultan, en ocasiones, su implementación y despliegue. Este Proyecto Fin de Carrera tiene como principales objetivos: diseñar e implementar un agente que haga uso de la tecnología Bluetooth para que se pueda comunicar, vía radio, tanto con la arquitectura orientada a servicios como con el módulo Bioharness para obtener parámetros fisiológicos; ofrecer una serie de servicios simples a la Red Inalámbrica de Sensores; diseñar un algoritmo para un sistema de alarmas; y realizar e implementar una pasarela entre protocolos que usen el estándar IEEE 802.15.4 (ZigBee) y el estándar IEEE 802.15.1 de la tecnología Bluetooth. Por último, implementar una aplicación Android para el reloj WiMM, de modo que éste pueda recibir alarmas en tiempo real a través de la interfaz Bluetooth. Para lograr estos objetivos, en primer lugar realizaremos un estudio del Estado del Arte de las Redes Inalámbricas de Sensores, con el fin de estudiar su arquitectura, el estándar Bluetooth y los dispositivos Bluetooth que se han utilizado en este Proyecto. Seguidamente, describiremos detalladamente el firmware iWRAP versión 4, centrándonos en sus modos de operación, comandos AT y posibles errores que puedan ocurrir. A continuación, se describirá la arquitectura y la especificación nSOM, para adentrarnos en la arquitectura orientada a servicios.
Por último, ejecutaremos la fase de validación del sistema y se analizarán los resultados obtenidos durante la fase de pruebas. ABSTRACT In recent decades there have been significant technological advances, which have resulted in an important growth of Wireless Sensor Networks (WSN). These networks consist of a set of small nodes, also known as motes, equipped with various types of sensors. Wireless Sensor Networks can be very useful in environments where the deployment of wired networks, formed by computers, routers or other network devices, is not possible. However, these networks have a number of shortcomings that sometimes hinder their implementation and deployment. The main objectives of this Final Project are: to design and implement an agent that makes use of Bluetooth technology so that it can communicate, via radio, both with the service-oriented architecture and with the Bioharness module in order to obtain physiological parameters; to offer a set of simple services to the Wireless Sensor Network; to design an algorithm for an alarm system; and to design and implement a gateway between protocols using the IEEE 802.15.4 standard (ZigBee) and the IEEE 802.15.1 Bluetooth standard. Finally, to implement an Android application for the WiMM watch so that it can receive real-time alarms through the Bluetooth interface. In order to achieve these objectives, we first carry out a study of the State of the Art of Wireless Sensor Networks, studying their architecture, the Bluetooth standard and the Bluetooth devices used in this project. Then, we describe in detail the iWRAP firmware version 4, focusing on its operation modes, AT commands and the errors that may occur. Next, we describe the nSOM architecture and specification, to go deeper into the service-oriented architecture. Finally, we execute the validation phase of the system in a real application scenario, analyzing the results obtained during the testing phase.
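
The alarm algorithm mentioned among the project's objectives can be illustrated with a minimal range-check sketch. This is not the project's actual algorithm: the parameter name, the threshold values and the sample format are all assumptions made for the example.

```python
# Hypothetical sketch of an alarm check on physiological parameters
# (e.g. heart rate read from a Bioharness-like module).
# The safe ranges below are illustrative, not clinical values.

SAFE_RANGE = {"heart_rate": (50, 120)}  # beats per minute (assumed limits)

def check_alarm(sample: dict) -> list:
    """Return the list of parameters in `sample` outside their safe range."""
    alarms = []
    for name, (low, high) in SAFE_RANGE.items():
        value = sample.get(name)
        if value is not None and not (low <= value <= high):
            alarms.append(name)
    return alarms

print(check_alarm({"heart_rate": 135}))  # → ['heart_rate']
print(check_alarm({"heart_rate": 72}))   # → []
```

In the architecture described above, a routine like this would run on the agent, with the resulting alarms forwarded over the Bluetooth interface to the watch application.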

Relevância:

10.00%

Publicador:

Resumo:

Matlab, uno de los paquetes de software matemático más utilizados actualmente en el mundo de la docencia y de la investigación, dispone, entre sus muchas herramientas, de una específica para el procesado digital de imágenes. Esta toolbox de procesado digital de imágenes está formada por un conjunto de funciones adicionales que amplían la capacidad del entorno numérico de Matlab y permiten realizar un gran número de operaciones de procesado digital de imágenes directamente a través del programa principal. Sin embargo, pese a que MATLAB cuenta con un buen apartado de ayuda tanto online como dentro del propio programa principal, la bibliografía disponible en castellano es muy limitada y, en el caso particular de la toolbox de procesado digital de imágenes, es prácticamente nula y altamente especializada, lo que requiere que los usuarios tengan una sólida formación en matemáticas y en procesado digital de imágenes. Partiendo de una labor de análisis de todas las funciones y posibilidades disponibles en la herramienta del programa, el proyecto clasificará, resumirá y explicará cada una de ellas a nivel de usuario, definiendo todas las variables de entrada y salida posibles, describiendo las tareas más habituales en las que se emplea cada función, comparando resultados y proporcionando ejemplos aclaratorios que ayuden a entender su uso y aplicación. Además, se introducirá al lector en el uso general de Matlab explicando las operaciones esenciales del programa, y se aclararán los conceptos más avanzados de la toolbox para que no sea necesaria una extensa formación previa. De este modo, cualquier alumno o profesor que se quiera iniciar en el procesado digital de imágenes con Matlab dispondrá de un documento que le servirá tanto para consultar y entender el funcionamiento de cualquier función de la toolbox como para implementar las operaciones más recurrentes dentro del procesado digital de imágenes.
Matlab, one of the most widely used numerical computing environments in the world of research and teaching, has among its many tools a specific one for digital image processing. This digital image processing toolbox consists of a set of additional functions that extend the capabilities of the Matlab numerical environment and make it possible to execute a large number of digital image processing operations directly through the main program. However, despite the fact that MATLAB has a good help section both online and within the main program itself, the bibliography available in Spanish is very limited and, in the particular case of the image processing toolbox, practically nonexistent and highly specialized, which requires users to have a strong background in mathematics and digital image processing. Starting from an analysis of all the functions and possibilities available in the tool, this document will classify, summarize and explain each of them at user level, defining all possible input and output variables, describing the most common tasks in which each function is used, comparing results and providing illustrative examples that help to understand its use and application. In addition, the reader will be introduced to the general use of Matlab through an explanation of the essential operations of the program, and the most advanced concepts of the toolbox will be clarified so that extensive prior training is not necessary. Thus, any student or teacher who wants to get started in digital image processing with Matlab will have a document that serves both to consult and understand the operation of any function of the toolbox and to implement the most common operations in digital image processing.
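
To make the kind of point operation documented in the toolbox concrete, here is a minimal sketch written in Python (rather than Matlab, so the example is self-contained and runnable without the toolbox): binarizing a small grayscale image against a threshold, conceptually analogous to what Matlab's `im2bw` does. The image is represented as nested lists of pixel intensities in 0-255.

```python
# Minimal illustration of a thresholding point operation, analogous to
# Matlab's im2bw: 1 where the pixel is at or above the threshold, else 0.

def binarize(image, threshold=128):
    """Return a binary image from a grayscale one (nested lists, 0-255)."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

gray = [[ 10, 200, 128],
        [255,  30,  90]]
print(binarize(gray))  # → [[0, 1, 1], [1, 0, 0]]
```

The toolbox functions the document catalogues follow this same shape: an input image, a few tuning parameters, and a transformed image as output.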

Relevância:

10.00%

Publicador:

Resumo:

La instalación de Infraestructuras Comunes de Telecomunicación (IICCTT) en el interior de las edificaciones para el acceso a los servicios de telecomunicación facilitó la incorporación a las viviendas de las nuevas tecnologías de forma económica y transparente para los usuarios. Actualmente, todos los edificios de nueva construcción deben presentar un proyecto ICT firmado por un Ingeniero Técnico de Telecomunicación de la especialidad correspondiente o un Ingeniero de Telecomunicación. La legislación que las regula afecta a todo tipo de viviendas con independencia del poder adquisitivo del comprador, y contribuye de manera decisiva a que disminuyan a corto y medio plazo las desigualdades sociales en lo relativo al acceso a servicios de telecomunicación tales como telefonía, Internet, telecomunicación por cable, radiodifusión sonora y televisión analógica, digital, terrestre o por satélite, etc. Desde 1997, el Colegio Oficial de Ingenieros de Telecomunicación junto con otras organizaciones públicas y privadas ha participado en la elaboración de la normativa aplicable a las Infraestructuras Comunes de Telecomunicación, dando lugar al actual decreto, el Real Decreto 346/2011, de 11 de marzo. El propósito general de este proyecto es diseñar una red Wi-Fi a partir de las canalizaciones e instalaciones del proyecto ICT de un conjunto de viviendas unifamiliares, para que todas ellas dispongan de conexión a internet de forma inalámbrica. Para llevar a cabo este diseño, se ha realizado un estudio de las características del estándar IEEE 802.11, conocido como Wi-Fi, analizando las posibilidades de comunicación inalámbrica que ofrece, así como las limitaciones que presenta en la actualidad. Se ha analizado el proyecto ICT del conjunto de viviendas, estudiando la viabilidad de utilizar sus instalaciones para implementar la red Wi-Fi, añadiendo tanto las canalizaciones como los dispositivos comerciales necesarios para llevar a cabo dicha implementación.
Además, se ha estudiado la posibilidad de integrar la red Wi-Fi utilizando el cableado de televisión de la propia ICT. Por último, se ha estudiado la gran importancia que se da al Hogar Digital en el Real Decreto 346/2011, de 11 de marzo, por el que se aprueba el Reglamento regulador de las Infraestructuras Comunes de Telecomunicaciones para el acceso a los servicios de telecomunicación en el interior de las edificaciones, presentando los aspectos fundamentales que se persiguen con la domotización de la vivienda como mejora de la calidad de vida de sus habitantes. Abstract The installation of Telecommunications Common Infrastructures (TCIs, in Spanish Infraestructuras Comunes de Telecomunicación, IICCTT) in buildings, in order to gain access to telecommunications services, facilitated the incorporation of new technologies into houses in a way that is economical and transparent for users. Nowadays, every newly constructed building must have a TCI project signed by a Telecommunications Engineer or a Technical Telecommunications Engineer with the appropriate specialization. The legislation that regulates TCIs affects every kind of house, independently of the buyer's purchasing power, and contributes decisively to decreasing, in the short and medium term, the social inequalities concerning access to telecommunications services, such as telephony, Internet, cable telecommunications, sound broadcasting and analogue, digital, terrestrial or satellite television. Since 1997, the Official College of Telecommunications Engineers, together with other public and private organizations, has participated in drawing up the regulations applicable to TCIs, giving rise to the current decree, Royal Decree 346/2011, of 11 March. The general purpose of this project is to design a Wi-Fi network based on the conduits and installations of the TCI project of a housing development, so that every house is provided with a wireless connection to the Internet.
In order to carry out this design, the characteristics of the IEEE 802.11 standard, known as Wi-Fi, have been studied, analyzing the wireless communication possibilities that it offers, as well as the constraints that it currently presents. The TCI project has been analyzed, studying the feasibility of using its installations to implement the Wi-Fi network, adding the conduits and commercial devices required to execute the implementation. In addition, the possibility of integrating the Wi-Fi network using the television wiring of the TCI itself has been investigated. Finally, the great importance given to the Digital Home in Royal Decree 346/2011, of 11 March, which approves the Regulations governing Telecommunications Common Infrastructures for access to telecommunications services inside buildings, has been studied, presenting the essential aspects pursued with home automation as a way to improve the quality of life of the inhabitants.

Relevância:

10.00%

Publicador:

Resumo:

This paper presents some fundamental properties of independent and-parallelism and extends its applicability by enlarging the class of goals eligible for parallel execution. A simple model of (independent) and-parallel execution is proposed, and issues of correctness and efficiency are discussed in the light of this model. Two conditions, "strict" and "non-strict" independence, are defined and then proved sufficient to ensure correctness and efficiency of parallel execution: if goals which meet these conditions are executed in parallel, the solutions obtained are the same as those produced by standard sequential execution. Also, in the absence of failure, the parallel proof procedure does not generate any additional work (with respect to standard SLD-resolution) while the actual execution time is reduced. Finally, in case of failure of any of the goals, no slow-down will occur. For strict independence the results are shown to hold independently of whether the parallel goals execute in the same environment or in separate environments. In addition, a formal basis is given for the automatic compile-time generation of independent and-parallelism: compile-time conditions to efficiently check goal independence at run time are proposed and proved sufficient. Also, rules are given for constructing simpler conditions if information regarding the binding context of the goals to be executed in parallel is available to the compiler.
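
The intuition behind "strict" independence can be sketched with a toy check. This is only an illustrative model, not the paper's formal machinery: goals are represented by the sets of logical variables they mention, and two goals may run in parallel when every variable they share is already bound to a ground term at the time of the check.

```python
# Illustrative model of strict independence between two goals:
# they may run in parallel if all variables they share are already bound.

def strictly_independent(goal_vars_a: set, goal_vars_b: set,
                         bound_vars: set) -> bool:
    """True if every variable shared by the two goals is bound (ground),
    so neither parallel goal can affect the other's bindings."""
    shared = goal_vars_a & goal_vars_b
    return shared <= bound_vars

# Goals p(X, Y) and q(Y, Z) share Y.
print(strictly_independent({"X", "Y"}, {"Y", "Z"}, bound_vars=set()))   # → False
print(strictly_independent({"X", "Y"}, {"Y", "Z"}, bound_vars={"Y"}))  # → True
```

A compile-time version of this test, as the abstract describes, would emit such checks only where the compiler cannot prove independence statically from the binding context.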

Relevância:

10.00%

Publicador:

Resumo:

In this paper we study, through a concrete case, the feasibility of using a high-level, general-purpose logic language in the design and implementation of applications targeting wearable computers. The case study is a "sound spatializer" which, given real-time signals for monaural audio and heading, generates stereo sound which appears to come from a position in space. The use of advanced compile-time transformations and optimizations made it possible to execute code written in a clear style without efficiency or architectural concerns on the target device, while meeting strict existing time and memory constraints. The final executable compares favorably with a similar implementation written in C. We believe that this case is representative of a wider class of common pervasive computing applications, and that the techniques we show here can be put to good use in a range of scenarios. This points to the possibility of applying high-level languages, with their associated flexibility, conciseness, ability to be automatically parallelized, sophisticated compile-time tools for analysis and verification, etc., to the embedded systems field without paying an unnecessary performance penalty.
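
The core idea of a sound spatializer can be sketched in a few lines. The following Python fragment is a stripped-down illustration, not the paper's actual algorithm: it pans one monaural sample into a stereo pair using constant-power gains derived from the angle between the source and the listener's heading, with the angle convention and clamping chosen for the example.

```python
import math

# Toy constant-power panner: mono sample + heading → (left, right).
# Angle convention (assumed): 0° is straight ahead, +90° fully to the right.

def pan(sample: float, source_deg: float, heading_deg: float):
    """Return (left, right) gains applied to one mono sample."""
    angle = max(-90.0, min(90.0, source_deg - heading_deg))
    theta = math.radians(angle + 90.0) / 2.0        # maps to 0..pi/2
    return sample * math.cos(theta), sample * math.sin(theta)

left, right = pan(1.0, source_deg=90.0, heading_deg=0.0)  # source hard right
print(round(left, 3), round(right, 3))  # → 0.0 1.0
```

A real spatializer would also apply interaural time delays and filtering per sample block; the constant-power property (left² + right² stays constant as the source moves) is what keeps perceived loudness steady.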

Relevância:

10.00%

Publicador:

Resumo:

While logic programming languages offer a great deal of scope for parallelism, there is usually some overhead associated with the execution of goals in parallel because of the work involved in task creation and scheduling. In practice, therefore, the "granularity" of a goal, i.e. an estimate of the work available under it, should be taken into account when deciding whether or not to execute a goal concurrently as a separate task. This paper describes a method for estimating the granularity of a goal at compile time. The runtime overhead associated with our approach is usually quite small, and the performance improvements resulting from the incorporation of grainsize control can be quite good. This is shown by means of experimental results.
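
The decision the abstract describes can be illustrated schematically. This sketch is not the paper's actual cost functions: the quadratic estimate and the overhead constant are assumed values, standing in for what the compile-time analysis would derive.

```python
# Schematic grain-size control: spawn a goal as a separate task only when
# its estimated work exceeds the task-creation overhead.
# Both the cost model and the threshold below are hypothetical.

SPAWN_OVERHEAD = 50  # assumed cost units for creating/scheduling a task

def estimated_cost(n: int) -> int:
    """Stand-in for a compile-time-derived estimate, e.g. for a goal
    whose work grows quadratically with its input size."""
    return n * n

def should_spawn(n: int) -> bool:
    """The cheap test the compiler would emit at the parallelization point."""
    return estimated_cost(n) > SPAWN_OVERHEAD

print([n for n in (2, 5, 8, 12) if should_spawn(n)])  # → [8, 12]
```

The point of doing the estimation at compile time is that only this cheap comparison remains at run time, which is why the abstract can claim a small runtime overhead.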

Relevância:

10.00%

Publicador:

Resumo:

The aim of program specialization is to optimize programs by exploiting certain knowledge about the context in which the program will execute. There exist many program manipulation techniques which allow specializing the program in different ways. Among them, one of the best known techniques is partial evaluation, often referred to simply as program specialization, which optimizes programs by specializing them for (partially) known input data. In this work we describe abstract specialization, a technique whose main features are: (1) specialization is performed with respect to "abstract" values rather than "concrete" ones, and (2) abstract interpretation rather than standard interpretation of the program is used in order to propagate information about execution states. The concept of abstract specialization is at the heart of the specialization system in CiaoPP, the Ciao system preprocessor. In this paper we present a unifying view of the different specialization techniques used in CiaoPP and discuss their potential applications by means of examples. The applications discussed include program parallelization, optimization of dynamic scheduling (concurrency), and integration of partial evaluation techniques.
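
The classic example of partial evaluation, which the abstract contrasts with abstract specialization, is specializing a generic power function for a statically known exponent. The sketch below (in Python rather than Ciao, for self-containment) unfolds the recursion "at compile time", leaving a residual program over the unknown argument alone; abstract specialization generalizes this idea by propagating abstract values instead of concrete ones.

```python
# Partial evaluation by hand: specialize power(x, n) for a known n.

def power(x, n):
    """Generic program: both arguments unknown until run time."""
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """'Compile-time' phase: unfold the recursion on the known n,
    returning a residual function of x alone."""
    if n == 0:
        return lambda x: 1
    inner = specialize_power(n - 1)
    return lambda x: x * inner(x)

cube = specialize_power(3)   # residual program: x * (x * (x * 1))
print(cube(4), power(4, 3))  # → 64 64
```

The residual `cube` contains no test on `n` at all: the specializer has paid that cost once, ahead of time, which is exactly the optimization partial evaluation delivers.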

Relevância:

10.00%

Publicador:

Resumo:

The advantages of tabled evaluation regarding program termination and reduction of complexity are well known —as are the significant implementation, portability, and maintenance efforts that some proposals (especially those based on suspension) require. This implementation effort is reduced by program transformation-based continuation call techniques, at some efficiency cost. However, the traditional formulation of this proposal by Ramesh and Cheng limits the interleaving of tabled and non-tabled predicates and thus cannot be used as-is for arbitrary programs. In this paper we present a complete translation for the continuation call technique which, using the runtime support needed for the traditional proposal, solves these problems and makes it possible to execute arbitrary tabled programs. We present performance results which show that CCall offers a useful tradeoff that can be competitive with state-of-the-art implementations.
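
The termination benefit of tabling that the abstract mentions can be shown with a small example. The sketch below is only an analogy in Python, not the CCall translation: reachability in a cyclic graph, where a naive SLD-like depth-first search would loop forever on the cycle a→b→a, but recording each subgoal in a table makes the query terminate.

```python
# Why tabling aids termination: transitive closure over a cyclic graph.
# Each visited node is tabled once, so the cycle a→b→a cannot loop.

EDGES = {"a": ["b"], "b": ["a", "c"], "c": []}

def reachable(start: str, table=None) -> set:
    """Tabled reachability: the table doubles as the answer set."""
    if table is None:
        table = set()
    if start in table:          # answer already tabled: do not recompute
        return table
    table.add(start)
    for nxt in EDGES.get(start, []):
        reachable(nxt, table)
    return table

print(sorted(reachable("a")))  # → ['a', 'b', 'c']
```

Tabled Prolog systems apply the same idea to arbitrary goals and answer substitutions, which is where the suspension-based and continuation-call implementation strategies compared in the paper come in.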

Relevância:

10.00%

Publicador:

Resumo:

El Análisis de Consumo de Recursos o Análisis de Coste trata de aproximar el coste de ejecutar un programa como una función dependiente de sus datos de entrada. A pesar de que existen trabajos previos a esta tesis doctoral que desarrollan potentes marcos para el análisis de coste de programas orientados a objetos, algunos aspectos avanzados, como la eficiencia, la precisión y la fiabilidad de los resultados, todavía deben ser estudiados en profundidad. Esta tesis aborda estos aspectos desde cuatro perspectivas diferentes: (1) Las estructuras de datos compartidas en la memoria del programa son una pesadilla para el análisis estático de programas. Trabajos recientes proponen una serie de condiciones de localidad para poder mantener de forma consistente información sobre los atributos de los objetos almacenados en memoria compartida, reemplazando éstos por variables locales no almacenadas en la memoria compartida. En esta tesis presentamos dos extensiones a estos trabajos: la primera es considerar, no sólo los accesos a los atributos, sino también los accesos a los elementos almacenados en arrays; la segunda se centra en los casos en los que las condiciones de localidad no se cumplen de forma incondicional, para lo cual, proponemos una técnica para encontrar las precondiciones necesarias para garantizar la consistencia de la información acerca de los datos almacenados en memoria. (2) El objetivo del análisis incremental es, dado un programa, los resultados de su análisis y una serie de cambios sobre el programa, obtener los nuevos resultados del análisis de la forma más eficiente posible, evitando reanalizar aquellos fragmentos de código que no se hayan visto afectados por los cambios. Los analizadores actuales todavía leen y analizan el programa completo de forma no incremental. 
Esta tesis presenta un análisis de coste incremental, que, dado un cambio en el programa, reconstruye la información sobre el coste del programa de todos los métodos afectados por el cambio de forma incremental. Para esto, proponemos (i) un algoritmo multi-dominio y de punto fijo que puede ser utilizado en todos los análisis globales necesarios para inferir el coste, y (ii) una novedosa forma de almacenar las expresiones de coste que nos permite reconstruir de forma incremental únicamente las funciones de coste de aquellos componentes afectados por el cambio. (3) Las garantías de coste obtenidas de forma automática por herramientas de análisis estático no son consideradas totalmente fiables salvo que la implementación de la herramienta o los resultados obtenidos sean verificados formalmente. Llevar a cabo el análisis de estas herramientas es una tarea titánica, ya que se trata de herramientas de gran tamaño y complejidad. En esta tesis nos centramos en el desarrollo de un marco formal para la verificación de las garantías de coste obtenidas por los analizadores en lugar de analizar las herramientas. Hemos implementado esta idea mediante la herramienta COSTA, un analizador de coste para programas Java y KeY, una herramienta de verificación de programas Java. De esta forma, COSTA genera las garantías de coste, mientras que KeY prueba la validez formal de los resultados obtenidos, generando de esta forma garantías de coste verificadas. (4) Hoy en día la concurrencia y los programas distribuidos son clave en el desarrollo de software. Los objetos concurrentes son un modelo de concurrencia asentado para el desarrollo de sistemas concurrentes. En este modelo, los objetos son las unidades de concurrencia y se comunican entre ellos mediante llamadas asíncronas a sus métodos. La distribución de las tareas sugiere que el análisis de coste debe inferir el coste de los diferentes componentes distribuidos por separado. 
En esta tesis proponemos un análisis de coste sensible a objetos que, utilizando los resultados obtenidos mediante un análisis de apunta-a, mantiene el coste de los diferentes componentes de forma independiente. Abstract Resource Analysis (a.k.a. Cost Analysis) tries to approximate the cost of executing programs as functions on their input data sizes and without actually having to execute the programs. While a powerful resource analysis framework on object-oriented programs existed before this thesis, advanced aspects to improve the efficiency, the accuracy and the reliability of the results of the analysis still need to be further investigated. This thesis tackles this need from the following four different perspectives. (1) Shared mutable data structures are the bane of formal reasoning and static analysis. Analyses which keep track of heap-allocated data are referred to as heap-sensitive. Recent work proposes locality conditions for soundly tracking field accesses by means of ghost non-heap allocated variables. In this thesis we present two extensions to this approach: the first extension is to consider arrays accesses (in addition to object fields), while the second extension focuses on handling cases for which the locality conditions cannot be proven unconditionally by finding aliasing preconditions under which tracking such heap locations is feasible. (2) The aim of incremental analysis is, given a program, its analysis results and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to (re-)analyze fragments of code that are not affected by the changes. During software development, programs are permanently modified but most analyzers still read and analyze the entire program at once in a non-incremental way. 
This thesis presents an incremental resource usage analysis which, after a change in the program is made, is able to reconstruct the upper bounds of all affected methods in an incremental way. To this purpose, we propose (i) a multi-domain incremental fixed-point algorithm which can be used by all the global analyses required to infer the cost, and (ii) a novel form of cost summaries that allows us to incrementally reconstruct only those components of the cost functions affected by the change. (3) Resource guarantees that are automatically inferred by static analysis tools are generally not considered completely trustworthy unless the tool implementation or the results are formally verified. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this thesis we focus on the development of a formal framework for the verification of the resource guarantees obtained by the analyzers, instead of verifying the tools. We have implemented this idea using COSTA, a state-of-the-art cost analyzer for Java programs, and KeY, a state-of-the-art verification tool for Java source code. COSTA is able to derive upper bounds for Java programs, while KeY proves the validity of these bounds and provides a certificate. The main contribution of our work is to show that the proposed tool cooperation can be used to automatically produce verified resource guarantees. (4) Distribution and concurrency are mainstream today. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units, and they communicate via asynchronous method calls. Distribution suggests that the analysis must infer the cost of the diverse distributed components separately. In this thesis we propose a novel object-sensitive cost analysis which, by using the results gathered by a points-to analysis, can keep the cost of the diverse distributed components separate.
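As a rough illustration of the incremental idea (this is not COSTA's actual algorithm, and all names below are hypothetical), re-analysis after a change only needs to touch the changed methods and their transitive callers, which can be found by walking a reverse call graph:

```python
# Minimal sketch of incremental re-analysis scoping (hypothetical names):
# after a change, only the changed methods and their transitive callers
# need their cost summaries recomputed; everything else is reused.

from collections import defaultdict

def affected_methods(changed, callers):
    """Return the changed methods plus everything that transitively calls them."""
    affected, stack = set(), list(changed)
    while stack:
        m = stack.pop()
        if m in affected:
            continue
        affected.add(m)
        stack.extend(callers[m])  # walk the reverse call graph
    return affected

# call graph: main -> f -> g, and h -> g; a change to g affects g, f, h, main
calls = {"main": ["f"], "f": ["g"], "h": ["g"], "g": []}
callers = defaultdict(list)
for src, dsts in calls.items():
    for dst in dsts:
        callers[dst].append(src)

print(sorted(affected_methods({"g"}, callers)))  # ['f', 'g', 'h', 'main']
```

A real analysis would then recompute cost summaries bottom-up over this affected set only, reusing the stored summaries of every untouched method.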

Relevância:

10.00%

Publicador:

Resumo:

The advantages of tabled evaluation regarding program termination and reduction of complexity are well known, as are the significant implementation, portability, and maintenance efforts that some proposals (especially those based on suspension) require. This implementation effort is reduced by program transformation-based continuation call techniques, at some efficiency cost. However, the traditional formulation of this approach by Ramesh and Chen limits the interleaving of tabled and non-tabled predicates and thus cannot be used as-is for arbitrary programs. In this paper we present a complete translation for the continuation call technique which, using the runtime support needed for the traditional proposal, solves these problems and makes it possible to execute arbitrary tabled programs. We present performance results which show that CCall offers a useful trade-off that can be competitive with state-of-the-art implementations.
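To illustrate the termination benefit that tabling provides (a sketch of the idea only; real tabling engines such as the CCall translation suspend and resume consumers rather than iterating), consider reachability over a cyclic graph, where plain SLD recursion on a left-recursive `path/2` predicate would loop forever, but accumulating answers in a table until a fixpoint terminates:

```python
# Illustrative only: tabling is what lets a logic program like
#   path(X,Y) :- edge(X,Y).
#   path(X,Y) :- path(X,Z), edge(Z,Y).
# terminate on cyclic graphs. A minimal answer-table fixpoint in Python:

def tabled_path(edges):
    """Compute the transitive closure of `edges` by iterating to a fixpoint,
    the way a tabling engine accumulates answers instead of recursing forever."""
    table = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, z) in list(table):
            for (z2, y) in edges:
                if z == z2 and (x, y) not in table:
                    table.add((x, y))  # new answer joins the table
                    changed = True
    return table

# cyclic graph: a -> b, b -> a, b -> c
edges = {("a", "b"), ("b", "a"), ("b", "c")}
print(sorted(tabled_path(edges)))
```

The cycle between `a` and `b` simply produces the answers `(a, a)` and `(b, b)` and then stops, instead of diverging.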

Relevância:

10.00%

Publicador:

Resumo:

Due to recent scientific and technological advances in information systems, it is now possible to perform almost every application on a mobile device. The need to make such devices more intelligent opens an opportunity to design data mining algorithms that are able to execute autonomously on local devices to provide the device with knowledge. The problem behind autonomous mining is the proper configuration of the algorithm to produce the most appropriate results. Contextual information, together with resource information of the device, has a strong impact both on the feasibility of a particular execution and on the production of the proper patterns. On the other hand, the performance of the algorithm, expressed in terms of efficacy and efficiency, highly depends on the features of the dataset to be analyzed, together with the values of the parameters of a particular implementation of an algorithm. However, few existing approaches deal with the autonomous configuration of data mining algorithms, and in any case they do not deal with contextual or resource information. Both issues are of particular significance for social network applications. In fact, the widespread use of social networks, and consequently the amount of information shared, have made modeling context in social applications a priority. Resource consumption also plays a crucial role in such platforms, as users access social networks mainly on their mobile devices. This PhD thesis addresses the aforementioned open issues, focusing on (i) analyzing the behavior of algorithms, (ii) mapping contextual and resource information to find the most appropriate configuration, and (iii) applying the model to the case of a social recommender. Four main contributions are presented:
- The EE-Model: predicts the behavior of a data mining algorithm in terms of the resources consumed and the accuracy of the mining model it will obtain.
- The SC-Mapper: maps a situation, defined by the context and resource state, to a data mining configuration.
- SOMAR: a social activity (event and informal ongoings) recommender for mobile devices.
- D-SOMAR: an evolution of SOMAR which incorporates the configurator in order to provide updated recommendations.
Finally, the experimental validation of the proposed contributions using synthetic and real datasets allows us to achieve the objectives and answer the research questions proposed for this dissertation.
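The situation-to-configuration mapping idea can be caricatured as a function from the device's current context and resource state to algorithm parameters. The sketch below is purely illustrative: the thresholds, parameter names, and the choice of k-means are invented for the example, not taken from the thesis:

```python
# Hypothetical sketch of a situation -> configuration mapper
# (all thresholds and parameter names are invented for illustration).

def map_to_config(battery_pct, free_mem_mb, on_the_move):
    """Return a clustering configuration adapted to the current situation."""
    if battery_pct < 20 or free_mem_mb < 64:
        # scarce resources: cheap, coarse mining
        return {"algorithm": "k-means", "k": 3, "max_iter": 10}
    if on_the_move:
        # changing context: favor fast refresh over fine-grained accuracy
        return {"algorithm": "k-means", "k": 5, "max_iter": 25}
    # idle and well-resourced: spend the budget on accuracy
    return {"algorithm": "k-means", "k": 8, "max_iter": 100}

print(map_to_config(battery_pct=15, free_mem_mb=512, on_the_move=False))
```

A learned mapper (as opposed to this rule-based toy) would derive such decisions from a model of each algorithm's resource/accuracy behavior.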

Relevância:

10.00%

Publicador:

Resumo:

Objective: This research focuses on the creation and validation of a solution to the inverse kinematics problem for a 6-degrees-of-freedom human upper limb. This system is intended to work within a real-time dysfunctional motion prediction system that allows anticipatory actuation in physical neurorehabilitation under the assisted-as-needed paradigm. For this purpose, a multilayer perceptron-based and an ANFIS-based solution to the inverse kinematics problem are evaluated. Materials and methods: Both the multilayer perceptron-based and the ANFIS-based inverse kinematics methods have been trained with three-dimensional Cartesian positions corresponding to the end-effector of healthy human upper limbs executing two different activities of daily life: "serving water from a jar" and "picking up a bottle". Validation of the proposed methodologies has been performed by a 10-fold cross-validation procedure. Results: Once trained, the systems are able to map 3D positions of the end-effector to the corresponding healthy biomechanical configurations. A high mean correlation coefficient and a low root mean squared error have been found for both the multilayer perceptron-based and ANFIS-based methods. Conclusions: The obtained results indicate that both systems effectively solve the inverse kinematics problem, but, due to its low computational load (crucial in real-time applications) along with its high performance, a multilayer perceptron-based solution, consisting of 3 input neurons, 1 hidden layer with 3 neurons, and 6 output neurons, has been considered the most appropriate for the target application.
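The selected 3-3-6 topology is small enough to sketch directly. The forward pass below only illustrates the architecture: the weights are random placeholders and tanh hidden units are assumed as a common choice; the actual trained weights and activation functions are not given in the abstract:

```python
# Sketch of the selected architecture only: 3 inputs (x, y, z of the
# end-effector), one hidden layer of 3 neurons, 6 outputs (one joint
# angle per degree of freedom). Weights are random placeholders, NOT
# a trained inverse-kinematics model.

import math
import random

random.seed(0)

def mlp_forward(xyz, w_hidden, w_out):
    """Forward pass: 3D end-effector position -> 6 joint-angle outputs."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, xyz)) + b)
              for *row, b in w_hidden]                      # 3 tanh hidden units
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for *row, b in w_out]                           # 6 linear outputs

# each row holds 3 weights plus a bias term
w_hidden = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
w_out = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(6)]

angles = mlp_forward([0.3, -0.1, 0.5], w_hidden, w_out)
print(len(angles))  # 6
```

With only 3 + 1 weights per hidden neuron and 3 + 1 per output, a single evaluation is a few dozen multiply-adds, which is what makes this topology attractive for the real-time constraint mentioned above.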

Relevância:

10.00%

Publicador:

Resumo:

Collaborative filtering recommender systems help alleviate the information overload that exists on the Internet as a result of the mass use of Web 2.0 applications. The use of an adequate similarity measure is a determining factor in the quality of the prediction and recommendation results of a recommender system, as well as in its performance. In this paper, we present a memory-based collaborative filtering similarity measure that provides extremely high-quality and balanced results, complemented by a low processing time (high performance) similar to that required by traditional similarity metrics. The experiments have been carried out on the MovieLens and Netflix databases, using a representative set of information retrieval quality measures.
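The paper's specific similarity measure is not reproduced in the abstract. As a reference point, a classic memory-based baseline of the kind such measures are compared against is Pearson correlation over the items two users have both rated:

```python
# Classic memory-based collaborative filtering baseline (NOT the paper's
# proposed measure): Pearson correlation over co-rated items.

import math

def pearson_sim(u, v):
    """Similarity of two users; u and v map item-id -> rating.
    Returns 0.0 when there are fewer than two co-rated items."""
    common = u.keys() & v.keys()
    if len(common) < 2:
        return 0.0
    mu = sum(u[i] for i in common) / len(common)
    mv = sum(v[i] for i in common) / len(common)
    num = sum((u[i] - mu) * (v[i] - mv) for i in common)
    den = (math.sqrt(sum((u[i] - mu) ** 2 for i in common))
           * math.sqrt(sum((v[i] - mv) ** 2 for i in common)))
    return num / den if den else 0.0

alice = {"Matrix": 5, "Titanic": 1, "Up": 4}
bob   = {"Matrix": 4, "Titanic": 2, "Up": 5}
print(round(pearson_sim(alice, bob), 3))  # → 0.839
```

Such neighborhood similarities are computed user-against-user over the whole ratings matrix, which is why the processing time of the measure itself matters for overall system performance.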