11 results for performance data
at Universidad Politécnica de Madrid
Abstract:
This paper addresses the issue of the practicality of global flow analysis in logic program compilation, in terms of speed of the analysis, precision, and usefulness of the information obtained. To this end, design and implementation aspects are discussed for two practical abstract interpretation-based flow analysis systems: MA3, the MCC And-parallel Analyzer and Annotator; and Ms, an experimental mode inference system developed for SB-Prolog. The paper also provides performance data obtained from these implementations and, as an example of an application, a study of the usefulness of the mode information obtained in reducing run-time checks in independent and-parallelism. Based on the results obtained, it is concluded that the overhead of global flow analysis is not prohibitive, while the results of analysis can be quite precise and useful.
Abstract:
This paper addresses the issue of the practicality of global flow analysis in logic program compilation, in terms of both speed and precision of analysis. It discusses design and implementation aspects of two practical abstract interpretation-based flow analysis systems: MA3, the MCC And-parallel Analyzer and Annotator; and Ms, an experimental mode inference system developed for SB-Prolog. The paper also provides performance data obtained from these implementations. Based on these results, it is concluded that the overhead of global flow analysis is not prohibitive, while the results of analysis can be quite precise and useful.
Abstract:
Abstract interpretation has been widely used for the analysis of object-oriented languages and, in particular, Java source and bytecode. However, while most existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation-based) fixpoint algorithms rely on relatively inefficient techniques for solving inter-procedural call graphs or are specific and tied to particular analyses. We also argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric, in the sense that it is independent of the abstract domain used and different domains can be applied as "plug-ins"; it is also multivariant and flow-sensitive. In addition, it is based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies analysis. Detailed descriptions of decompilation solutions are given and discussed with an example. We also provide some performance data from a preliminary implementation of the analysis.
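As a rough illustration of what "parametric in the abstract domain" means here, the following is a minimal worklist-style fixpoint sketch in Python; the function names and the flat block/successor representation are illustrative assumptions, not the paper's algorithm or API.

```python
# Minimal sketch of a domain-parametric worklist fixpoint iteration.
# All names (blocks, transfer, join, ...) are illustrative placeholders.

def fixpoint(blocks, successors, transfer, bottom, join, entry, entry_state):
    """Propagate abstract states until stabilization.

    blocks     -- iterable of block ids
    successors -- dict: block id -> list of successor block ids
    transfer   -- function (block id, in_state) -> out_state
    bottom     -- least element of the abstract domain (the "plug-in")
    join       -- least upper bound on two abstract states
    """
    state = {b: bottom for b in blocks}
    state[entry] = entry_state
    worklist = [entry]
    while worklist:
        b = worklist.pop()
        out = transfer(b, state[b])
        for s in successors[b]:
            merged = join(state[s], out)
            if merged != state[s]:      # state grew: re-analyze successor
                state[s] = merged
                worklist.append(s)
    return state
```

Plugging in a different bottom/join/transfer triple (say, a sign domain instead of a nullness domain) changes the analysis computed without touching the driver, which is the sense in which such algorithms are domain-independent.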
Abstract:
The goal of this doctoral thesis is the development and implementation of a system to improve the methodology for extracting the geometric information needed in the documentation of entities of heritage interest, based on the information provided by laser sensors, both aerial and terrestrial. The thesis first presents and justifies the background and the problems involved in recording geometric information for heritage, detailing the systems currently used for recording and analyzing such information; this analysis allows a comparison with laser-based recording systems and yields suggestions on which system to use in each specific case. Laser-based recording systems are then described in detail, starting with airborne sensors and concluding with a thorough analysis of terrestrial sensors in both static and mobile operation. The technical characteristics and operation of each are described, together with their fields of application and the products they generate, and the error sources that determine the precision each system can achieve are analyzed.
After presenting the characteristics of LiDAR systems, the thesis details the processing required to turn the extracted data into the information needed for the different types of objects analyzed, with emphasis on the risks that can arise in some delicate phases, and analyzes the different point filtering and classification algorithms, which are fundamental in LiDAR processing. An alternative is then proposed to optimize existing processing models, based on new algorithms and software tools that improve performance in the management of LiDAR information. The implementation takes into account the particular characteristics and needs of heritage documentation, as well as the different contexts, aerial and terrestrial, in which LiDAR is used. The result is a flowchart of the tasks to be carried out from the LiDAR point cloud to the computation of digital terrain and surface models. To realize this proposal, 19 different algorithms have been developed, covering 2.5D and 3D modeling, visualization, editing, filtering and classification of LiDAR data, incorporation of information from passive sensors, and computation of derived maps, both raster and vector, such as contour maps and orthophotos.
Finally, to validate the proposed developments and give them consistency, tests have been carried out covering the different scenarios that can arise in a heritage documentation process, ranging from projects with airborne sensors and projects with static terrestrial sensors at medium and short range to a project with a mobile terrestrial sensor. These tests have made it possible to define the parameters required for the proposed algorithms to work properly. In addition, the benchmark tests published by the ISPRS for evaluating and comparing LiDAR classification algorithms have been run; they have provided performance and effectiveness data for the classification algorithm presented, allowing it to be compared with other well-established algorithms, and the results confirm that the tool performs satisfactorily. This thesis is framed within the Consolider-Ingenio 2010 project "Programa de investigación en tecnologías para la valoración y conservación del patrimonio cultural" (ref. CSD2007-00058), carried out by the Consejo Superior de Investigaciones Científicas and the Universidad Politécnica de Madrid.
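As background for the filtering step mentioned above, here is a minimal grid-minimum ground filter in Python; it is a generic textbook-style baseline under assumed units and thresholds, not one of the 19 algorithms developed in the thesis.

```python
# Illustrative grid-minimum ground filter for a LiDAR point cloud.
# A generic baseline for exposition only, not the thesis's method.
import numpy as np

def ground_filter(points, cell=2.0, dz=0.4):
    """Label points as ground if they lie within dz metres of the lowest
    return in their (cell x cell) planimetric grid cell.

    points -- (N, 3) array of x, y, z coordinates (metres assumed)
    """
    ij = np.floor(points[:, :2] / cell).astype(int)   # cell index per point
    zmin = {}
    for key, z in zip(map(tuple, ij), points[:, 2]):  # lowest z per cell
        if key not in zmin or z < zmin[key]:
            zmin[key] = z
    floor_z = np.array([zmin[key] for key in map(tuple, ij)])
    return points[:, 2] - floor_z <= dz               # boolean ground mask
```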
Abstract:
In the near future, wireless sensor networks (WSNs) will experience large-scale deployment (millions of nodes across a national territory), with multiple information sources per node and very specific signal processing requirements. In parallel, such broad deployment of WSNs makes it possible to define and run ambitious studies with large input data sets and high computational complexity. The required computing resources, very often heterogeneous and provisioned on demand, can only be supplied by high-performance Data Centers (DCs). The high economic and environmental impact of energy consumption in DCs calls for aggressive energy optimization policies; the need for such policies has been identified, but effective ones have yet to be proposed. In this context, this paper presents the following ongoing research lines and the results obtained. In the field of WSNs: energy optimization in the processing nodes at different abstraction levels, including reconfigurable application-specific architectures, efficient customization of the memory hierarchy, energy-aware management of the wireless interface, and design automation for signal processing applications. In the field of DCs: energy-optimal workload assignment policies for heterogeneous DCs, energy-conscious resource management policies, and efficient cooling mechanisms that together will minimize the electricity bill of the DCs that process the data provided by the WSNs.
Abstract:
This study develops a calculation method useful for estimating the energy produced by a grid-connected PV system, making use of irradiance-domain integrals and the definition of statistical moments. Validation against a database of real PV plant performance data shows that acceptable energy estimates can be obtained from the first to fourth statistical moments and a few basic system parameters. In this way, simple calculations within the reach of pocket calculators are enough to estimate AC energy.
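A minimal sketch of the idea, assuming the AC power curve is approximated by a polynomial in irradiance so that the irradiance-domain integral collapses to a weighted sum of raw moments; the polynomial coefficients stand in for the paper's system parameters and are not its actual model.

```python
# Sketch: if p(G) = sum_k a_k G^k approximates the AC power curve, then
# mean AC power = E[p(G)] = sum_k a_k m_k, with m_k the k-th raw moment
# of the irradiance samples. Coefficients are illustrative placeholders.
import numpy as np

def ac_energy_estimate(irradiance, power_coeffs, hours):
    """irradiance   -- samples of in-plane irradiance G (W/m^2)
    power_coeffs -- a_0..a_4 of the fitted AC power curve p(G), in kW
    hours        -- length of the period the samples represent
    """
    g = np.asarray(irradiance, dtype=float)
    moments = [np.mean(g ** k) for k in range(len(power_coeffs))]  # m_0..m_4
    mean_power = sum(a * m for a, m in zip(power_coeffs, moments))  # kW
    return mean_power * hours                                       # kWh
```

Because only the moments m_0..m_4 and five coefficients enter the sum, the whole estimate is within reach of a pocket calculator, as the abstract claims.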
Abstract:
Various researchers have developed models of conventional H2O–LiBr absorption machines with the aim of predicting their performance. In this paper, the methodology of characteristic equations developed by Hellmann et al. (1998) is applied. This model is able to represent the capacity of single effect absorption chillers and heat pumps by means of simple algebraic equations. An extended characteristic equation based on a characteristic temperature difference has been obtained, considering the facility features. As a result, it is concluded that for adiabatic absorbers a subcooling temperature must be specified. The effect of evaporator overflow has been characterized. Its influence on cooling capacity has been included in the extended characteristic equation. Taking into account the particular design and operation features, a good agreement between experimental performance data and those obtained through the extended characteristic equation has been achieved at off-design operation. This allows its use for simulation and control purposes.
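For readers unfamiliar with the method of characteristic equations, the following Python sketch shows its general shape: a capacity linear in a Dühring-based characteristic temperature difference, with an added subcooling term of the kind the abstract says adiabatic absorbers require. All coefficient values are illustrative placeholders, not the paper's fitted values.

```python
# Hedged sketch of the characteristic-equation method (Hellmann et al., 1998):
# cooling capacity as a linear function of a characteristic temperature
# difference. B (Duehring slope), s, r and dt_sub are placeholder values
# that would be fitted to test-bench data in practice.

def cooling_capacity(t_gen, t_abs, t_cond, t_evap,
                     B=1.15, s=0.42, r=-1.7, dt_sub=5.0):
    """External mean temperatures in degC; returns capacity in kW."""
    # Absorber temperature shifted by a subcooling term, as required
    # when the absorber is adiabatic.
    ddt = t_gen - (t_abs + dt_sub) - B * (t_cond - t_evap)
    return s * ddt + r
```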
Abstract:
Over the past few years, common practice within air traffic management has been for commercial aircraft to fly along a set of predefined routes to reach their destination. Aircraft operators are now requesting more flexibility to fly according to their preferences in order to achieve their business objectives, and for this reason much research effort is being invested in techniques for evaluating optimal aircraft trajectories and synchronising traffic. A further problem is the inefficient use of airspace caused by barometric altitude, above all in the landing and takeoff phases and in Continuous Descent Approach (CDA) trajectories, where the appropriate reference setting (QNH or QFE) currently has to be introduced. This research was born from the wish to solve this problem and to permit better airspace management; its main goals are to evaluate the impact, weaknesses and strengths of using geometric altitude instead of barometric altitude. This dissertation also proposes the design of a simplified trajectory simulator able to predict aircraft trajectories. The model is based on a three-degrees-of-freedom aircraft point-mass model that can take aircraft performance data from the Base of Aircraft Data (BADA) together with meteorological information. A feature of this trajectory simulator is that it supports improved strategic and pre-tactical trajectory planning in the future Air Traffic Management (ATM) system. To this end, the error of the tool is measured by comparing its performance variables with actual flown trajectories obtained from Flight Data Recorder information. The trajectory simulator is validated by analysing the performance of different aircraft types over different routes; a fuel consumption estimation error was identified, and a correction is proposed for each aircraft model. In the future ATM system, the trajectory becomes the fundamental element of a new set of operating procedures collectively referred to as Trajectory-Based Operations (TBO). Thus, governmental institutions, academia and industry have shown renewed interest in applying trajectory optimisation techniques in commercial aviation. The trajectory optimisation problem can be solved using optimal control methods. In this research we present and discuss the existing methods for solving optimal control problems, focusing on direct collocation, which has received recent attention from the scientific community. In particular, two families of collocation methods are analysed: Hermite-Legendre-Gauss-Lobatto collocation and pseudospectral collocation. They are first compared on a benchmark case study, the minimum-fuel trajectory problem with fixed arrival time. To test scalability to more realistic problems, the methods are also applied to a real Airbus A319 Cairo-Madrid flight. Results show that pseudospectral collocation, which proved numerically more accurate and computationally much faster, is suitable for the type of problems arising in trajectory optimisation applied to ATM; fast and accurate optimal trajectories can contribute to meeting the new challenges of the future ATM system. Since atmospheric uncertainty is one of the most important issues in trajectory planning, the final objective of this dissertation is to establish an order of magnitude for how much fuel consumption differs under different atmospheric conditions.
It is important to note that in the strategic planning phase the optimal trajectories are determined from meteorological predictions that differ from the conditions at the time of flight. The optimal trajectories showed savings of at least 500 kg in most of the atmospheric conditions considered (different pressure and temperature at Mean Sea Level, and different temperature lapse rates) with respect to the conventional procedure simulated under the same conditions. These results show that implementing optimal profiles is beneficial under the current ATM system.
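A minimal sketch of the three-degrees-of-freedom point-mass dynamics on which such a simulator rests; the thrust, drag and fuel-flow inputs, supplied in the real tool by BADA performance data, are left as parameters here, and the variable names are illustrative.

```python
# Standard 3-DOF point-mass aircraft dynamics (flat-Earth form); inputs
# that BADA would supply (thrust, drag, fuel flow) are left as parameters.
import math

G0 = 9.80665  # gravitational acceleration, m/s^2

def point_mass_derivatives(state, thrust, drag, fuel_flow, gamma, heading):
    """state = (x, y, h, v, m): horizontal position (m), altitude (m),
    true airspeed (m/s), mass (kg); gamma = flight-path angle (rad)."""
    x, y, h, v, m = state
    dx = v * math.cos(gamma) * math.cos(heading)      # ground track, east
    dy = v * math.cos(gamma) * math.sin(heading)      # ground track, north
    dh = v * math.sin(gamma)                          # climb rate
    dv = (thrust - drag) / m - G0 * math.sin(gamma)   # longitudinal dynamics
    dm = -fuel_flow                                   # mass decreases with burn
    return dx, dy, dh, dv, dm
```

Integrating these derivatives forward (e.g. with a Runge-Kutta scheme) yields a predicted trajectory; the collocation methods discussed above instead impose them as equality constraints at the collocation points of the discretised optimal control problem.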
Abstract:
ISSIS is the instrument for imaging and slitless spectroscopy on-board WSO-UV. In this article, a detailed comparison between ISSIS expected radiometric performance and other ultraviolet instruments is shown. In addition, we present preliminary information on the performance verification tests and on the foreseen procedures for in-flight operation and data handling.
Abstract:
Multiple indicators are of interest in smart cities at different scales and for different stakeholders. In open environments, such as the Web, or when indicator information has to be interchanged across systems, contextual information (e.g., unit of measurement, measurement method) should be transmitted together with the data; the lack of such information may cause undesirable effects. Describing the data by means of ontologies increases interoperability among datasets and applications. However, methodological guidance is crucial during ontology development in order to turn the art of modelling into an engineering activity. In this paper, we present a methodological approach for modelling data about Key Performance Indicators and their context, together with an application example of these guidelines.
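As a hedged illustration of why context must travel with the bare number, the following Python/rdflib snippet attaches unit, method and area to a KPI value as RDF triples; the ex: vocabulary is a made-up placeholder, not the ontology or guidelines the paper presents.

```python
# Illustrative encoding of a KPI observation with its context as RDF triples.
# The ex: vocabulary is invented for this sketch.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/kpi#")
g = Graph()
g.bind("ex", EX)

obs = EX.waterConsumption_2015_Madrid
g.add((obs, RDF.type, EX.KeyPerformanceIndicatorValue))
g.add((obs, EX.value, Literal(132.4)))
# Contextual information that must accompany the bare number:
g.add((obs, EX.unitOfMeasurement, Literal("litres per capita per day")))
g.add((obs, EX.measurementMethod, Literal("utility meter aggregation")))
g.add((obs, EX.referenceArea, Literal("Madrid")))

print(g.serialize(format="turtle"))
```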