462 results for compiler backend
Abstract:
The purpose of this thesis is the analysis, design, and development of a prototype cloud infrastructure able to handle a large flow of events generated by mobile devices. These devices use information such as their position and the readings of the local sensors they may be equipped with in order to carry out their tasks. The information thus obtained is transmitted so as to create a network of devices able to autonomously acquire information about the environment and self-organize. The construction of this infrastructure belongs to a wider research effort that aims to integrate methods for proximity-based communication with the cloud, so that nearby devices can communicate in any situation that could arise in a real-world setting. The specifications of the infrastructure were defined by the supervisor, Prof. Mirko Viroli, who acted as customer, while development was carried out by me and by the co-supervisor, Ing. Pietro Brunetti. Given his previous work on cloud computing in the area of complex distributed systems, Brunetti contributed most to the problem analysis and design phases, while the parts concerning the actual event handling, the cloud computations, and data storage were mainly addressed by me. In particular, I worked on the study and implementation of the computational backend, based on Apache Storm, of the data storage component, based on Neo4j, and on the construction of a visualization panel based on AJAX and Linkurious. To this must be added the study of Apache Kafka, used as the technology for high-performance asynchronous communication between the components. A simulator had to be built in order to run tests verifying the behavior of the prototype infrastructure and probing its actual scalability, given that the number of devices to support can range from tens to thousands. The most important challenge concerns managing the proximity between devices and the ability to scale the computation across several machines. For this reason it was necessary to use technologies for storing, computing, and transmitting data that can run on a cluster and guarantee acceptable fault tolerance. From this point of view, the work that led to the construction of the infrastructure turned out to be an excellent opportunity to become familiar with previously unknown technologies. Almost all the technologies used belong to the Apache ecosystem and, as discussed in the thesis, are currently receiving considerable attention from major players, especially Apache Storm and Kafka. The software produced for the infrastructure is entirely developed in Java, plus the web visualization component developed in JavaScript.
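As an illustration of the kind of backend described above, here is a minimal sketch (not the thesis code) of a Java topology for Apache Storm that consumes device position events. The Kafka-backed spout is replaced by a random stand-in, since its configuration is deployment-specific, and the Storm 2.x API is assumed; all class and stream names are illustrative:

```java
import java.util.Map;
import java.util.Random;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class PositionTopology {

    // Stand-in for the Kafka-backed spout: emits random device positions.
    public static class DeviceSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private final Random rnd = new Random();

        @Override
        public void open(Map<String, Object> conf, TopologyContext ctx,
                         SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            Utils.sleep(100); // throttle the demo stream
            collector.emit(new Values("device-" + rnd.nextInt(100),
                    rnd.nextDouble() * 90, rnd.nextDouble() * 180));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("id", "lat", "lon"));
        }
    }

    // Bolt where a neighborhood computation over device positions would live.
    public static class ProximityBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple t, BasicOutputCollector collector) {
            System.out.printf("%s at (%.3f, %.3f)%n", t.getStringByField("id"),
                    t.getDoubleByField("lat"), t.getDoubleByField("lon"));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // this sketch emits nothing downstream
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("device-events", new DeviceSpout());
        builder.setBolt("proximity", new ProximityBolt(), 4) // 4 parallel tasks
               .shuffleGrouping("device-events");
        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("positions", new Config(), builder.createTopology());
            Thread.sleep(10_000); // let the local demo run briefly
        }
    }
}
```

In a deployment like the one the thesis describes, the stand-in spout would be replaced by a Kafka consumer spout and the topology submitted to a Storm cluster rather than a LocalCluster, which is how the computation scales across machines.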
Abstract:
The aim of this thesis is to describe and compare three different languages, and therefore three approaches, to server-side and backend programming: PHP, Python, and JavaScript, the latter used for server-side programming and therefore paired with the NodeJS framework. The comparison aims to highlight the distinctive characteristics of each language and the purposes each is best suited to, and to provide a guide of sorts for understanding which of the three most widely used backend languages best fits a given goal.
Abstract:
In this paper a new 22 GHz water vapor spectro-radiometer, specifically designed for profile measurement campaigns of the middle atmosphere, is presented. The instrument has a compact design and a simple setup procedure. It can be operated as a standalone instrument, as it maintains its own weather station and a calibration scheme that does not rely on other instruments or the use of liquid nitrogen. The optical system of MIAWARA-C combines a choked Gaussian horn antenna with a parabolic mirror, which reduces the size of the instrument in comparison with currently existing radiometers. For the data acquisition a correlation receiver is used together with a digital cross-correlating spectrometer. The complete backend section, including the computer, is located in the same housing as the instrument. The receiver section is temperature stabilized to minimize gain fluctuations. Calibration of the instrument is achieved through a balancing scheme with the sky used as the cold load, and the tropospheric properties are determined by performing regular tipping curves. Since MIAWARA-C is used in measurement campaigns, it is important to be able to determine the elevation pointing in a simple manner, as this is a crucial parameter in the calibration process. Here we present two different methods: scanning the sky and scanning the Sun. Finally, we report on the first spectra and retrieved water vapor profiles acquired during the Lapbiat campaign at the Finnish Meteorological Institute Arctic Research Centre in Sodankylä, Finland. The performance of MIAWARA-C is validated by comparison of the presented profiles against the equivalent profiles from the Microwave Limb Sounder on the EOS/Aura satellite.
Abstract:
This project addresses the unreliability of operating system code, in particular in device drivers. Device driver software is the interface between the operating system and the device's hardware. Device drivers are written in low-level code, making them difficult to understand. Almost all device drivers are written in the programming language C, which allows for direct manipulation of memory. Due to the complexity of manual movement of data, most mistakes in operating systems occur in device driver code. The programming language Clay can be used to check device driver code at compile time. Clay does most of its error checking statically to minimize the overhead of run-time checks, in order to stay competitive with C's performance. The Clay compiler can detect many more kinds of errors than the C compiler, such as buffer overflows, kernel stack overflows, NULL pointer uses, uses of freed memory, and aliasing errors. Clay code that successfully compiles is guaranteed to run without failing on errors that Clay can detect. Even though C is unsafe, most device drivers are currently written in it. Not only are device drivers the part of the operating system most likely to fail, they are also the largest part of the operating system. As rewriting every existing device driver in Clay by hand would be impractical, this thesis is part of a project to automate translation of existing drivers from C to Clay. Although C and Clay both allow low-level manipulation of data and fill the same niche for developing low-level code, they have different syntax, type systems, and paradigms. This paper explores how C can be translated into Clay. It identifies which parts of C device drivers cannot be translated into Clay and what information drivers in Clay will require that C cannot provide. It also explains how these translations will occur, by explaining how each C structure is represented in the compiler and how these structures are changed to represent a Clay structure.
Abstract:
An optimizing compiler's internal representation fundamentally affects the clarity, efficiency, and feasibility of the optimization algorithms the compiler employs. Static Single Assignment (SSA), the state-of-the-art program representation, has great advantages but can still be improved. This dissertation explores the domain of single assignment beyond SSA and presents two novel program representations: Future Gated Single Assignment (FGSA) and Recursive Future Predicated Form (RFPF). Both FGSA and RFPF embed control flow and data flow information, enabling efficient traversal of program information and thus leading to better and simpler optimizations. We introduce the future value concept, the design basis of both FGSA and RFPF, which permits a consumer instruction to be encountered before the producer of its source operand(s) in a control flow setting. We show that FGSA is efficiently computable by using a series of T1/T2/TR transformations, yielding an expected linear-time algorithm that combines the construction of the pruned single assignment form with liveness analysis, for both reducible and irreducible graphs. As a result, the approach yields an average reduction of 7.7%, with a maximum of 67%, in the number of gating functions compared to the pruned SSA form on the SPEC2000 benchmark suite. We present a solid and near-optimal framework for performing the inverse transformation from single assignment programs. We demonstrate the importance of unrestricted code motion and present RFPF. We develop algorithms which enable instruction movement in acyclic as well as cyclic regions, and show how easily optimizations such as Partial Redundancy Elimination can be performed on RFPF.
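For reference, the single-assignment discipline that FGSA and RFPF generalize can be sketched in a few lines of Java (an illustrative example, not taken from the dissertation): every variable version is assigned exactly once, and the join point selects among versions, which is the role of the phi (or gating) function.

```java
// Classic SSA illustration: the source program
//   x = 1; if (c) x = 2; use(x);
// becomes single-assignment by versioning x; the merge at the
// control-flow join is modeled here with a ternary standing in for phi.
public class SsaExample {
    static int use(int v) { return v; }

    public static void main(String[] args) {
        boolean c = args.length > 0;
        int x1 = 1;           // x = 1
        int x2 = 2;           // if (c) x = 2
        int x3 = c ? x2 : x1; // x3 = phi(x2, x1) at the join point
        System.out.println(use(x3));
    }
}
```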
Abstract:
Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
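A reuse distance profile of the kind discussed can be computed from an address trace with an LRU stack. The following minimal Java sketch (illustrative, not the report's tooling) prints, for each access, the number of distinct addresses touched since the previous access to the same address:

```java
import java.util.LinkedList;

// Exact reuse distance over a memory trace: the depth of an address in an
// LRU stack equals the number of distinct addresses accessed since the
// previous access to it ("inf" marks a first-time access).
public class ReuseDistance {
    public static void main(String[] args) {
        long[] trace = {1, 2, 3, 2, 1, 4, 1}; // toy address trace
        LinkedList<Long> lruStack = new LinkedList<>(); // most recent first
        for (long addr : trace) {
            int depth = lruStack.indexOf(addr);
            System.out.println(addr + " -> " + (depth < 0 ? "inf" : depth));
            if (depth >= 0) lruStack.remove(depth); // remove old position
            lruStack.addFirst(addr);                // now the most recent
        }
    }
}
```

The hardware-based approach the report proposes would approximate this exact profile by sampling, which is why linking the sampled profile to the true reuse behavior is the central difficulty identified above.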
Abstract:
With today's prevalence of Internet-connected systems storing sensitive data and the omnipresent threat of technically skilled malicious users, computer security remains a critically important field. Because of today's multitude of vulnerable systems and security threats, it is vital that computer science students be taught techniques for programming secure systems, especially since many of them will work on systems with sensitive data after graduation. Teaching computer science students proper design, implementation, and maintenance of secure systems is a challenging task that calls for the use of novel pedagogical tools. This report describes the implementation of a compiler that converts the Domain-Type Enforcement Language, a mandatory access control specification language, to the Java Security Manager, primarily for pedagogical purposes. The implementation of the Java Security Manager was explored in depth, and various techniques to work around its inherent limitations were investigated and partially implemented, although some of these workarounds do not appear in the current version of the compiler because they would have compromised cross-platform compatibility. The current version of the compiler and implementation details of the Java Security Manager are discussed in depth.
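To make the compilation target concrete, here is a minimal hand-written sketch of the kind of restriction the Java Security Manager can express: a custom manager that denies file reads outside a sandbox directory. The class name and path are illustrative rather than taken from the report, and note that the SecurityManager API is deprecated in recent JDKs:

```java
import java.io.FilePermission;
import java.security.Permission;

// A toy SecurityManager enforcing one mandatory rule: file reads are only
// allowed under a sandbox directory; everything else is permitted.
public class DenyingManager extends SecurityManager {
    private static final String ALLOWED_PREFIX = "/tmp/sandbox/"; // hypothetical policy

    @Override
    public void checkPermission(Permission perm) {
        if (perm instanceof FilePermission
                && perm.getActions().contains("read")
                && !perm.getName().startsWith(ALLOWED_PREFIX)) {
            throw new SecurityException("read denied: " + perm.getName());
        }
        // all other permissions are allowed in this sketch
    }

    public static void main(String[] args) {
        System.setSecurityManager(new DenyingManager());
        // From this point on, reads outside /tmp/sandbox/ throw SecurityException.
    }
}
```

A DTE-to-Security-Manager compiler of the kind described would generate such checks from a declarative policy instead of hand-coding them.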
Abstract:
Construction of a federated service landscape in the Ruhr region based on SAML, with the goal of enabling cross-organizational use of web-based IT services.
Abstract:
Integrating physical objects (smart objects) with enterprise IT systems is still a labor-intensive, mainly manual task done by domain experts. On the one hand, enterprise IT backend systems are based on service-oriented architectures (SOA) and driven by business rule engines or business process execution engines. Smart objects, on the other hand, are often programmed at a very low level. In this paper we describe an approach that makes the integration of smart objects with such backend systems easier. We introduce semantic endpoint descriptions based on Linked USDL. Furthermore, we show how different communication patterns can be integrated into these endpoint descriptions. The strength of our endpoint descriptions is that they can be used to automatically create REST or SOAP endpoints for enterprise systems, even if those systems are not able to talk to the smart objects directly. We evaluate our proposed solution with CoAP, UDP and 6LoWPAN, as we anticipate that industry will converge towards these standards. Nonetheless, our approach also allows easy integration with backend systems even if no standardized protocol is used.
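As a rough sketch of the kind of REST endpoint such descriptions could generate, the following standalone Java server (using the JDK's built-in HTTP server; the names, port, and URL path are illustrative, and the actual CoAP read of the device is stubbed out) fronts a smart object for backend systems that cannot talk to it directly:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpServer;

// Hypothetical generated REST facade for a smart object: a GET on
// /sensors/temperature returns the latest reading as JSON.
public class SensorRestEndpoint {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/sensors/temperature", exchange -> {
            // Stand-in for a CoAP/6LoWPAN read of the actual device.
            byte[] body = "{\"celsius\": 21.5}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```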
Abstract:
Linking the physical world to the Internet, also known as the Internet of Things, has increased the information and services available in everyday life and in the enterprise world. In enterprise IT, an increasing share of communication takes place between IT backend systems and small IoT devices, for example sensor networks or RFID readers. This introduces some challenges in terms of complexity and integration. We are working on the integration of IoT devices into enterprise IT by leveraging SOA techniques and Semantic Web technologies. We present a SOA-based integration platform for connecting WSNs and large enterprise business processes. To ensure interoperability, our platform is based on Linked Services: thoroughly described, machine-readable, machine-reasonable service descriptions.
Abstract:
Since November 1994, the GROund-based Millimeter-wave Ozone Spectrometer (GROMOS) has been measuring stratospheric and lower-mesospheric ozone in Bern, Switzerland (46.95° N, 7.44° E). GROMOS is part of the Network for the Detection of Atmospheric Composition Change (NDACC). In July 2009, a Fast Fourier Transform spectrometer (FFTS) was added as a backend to GROMOS. The new FFTS and the original filter bench (FB) measured in parallel for over two years. In October 2011, the FB was turned off and the FFTS is now used to continue the ozone time series. For a consolidated ozone time series in the frame of NDACC, the quality of the stratospheric ozone profiles obtained with the FFTS has to be assessed. The FFTS results from July 2009 to December 2011 are compared to ozone profiles retrieved by the FB. The FFTS and FB of the GROMOS microwave radiometer agree within 5% above 20 hPa. A later harmonization of both time series will be realized by taking the FFTS as the benchmark for the FB. Ozone profiles from the FFTS are also compared to coincident lidar measurements from the Observatoire de Haute-Provence (OHP), France. For the time period studied, a maximum mean difference (lidar – GROMOS FFTS) of +3.8% at 3.1 hPa and a minimum mean difference of +1.4% at 8 hPa are found. Further intercomparisons with ozone profiles from other independent instruments are performed: satellite measurements include MIPAS onboard ENVISAT, SABER onboard TIMED, MLS onboard EOS Aura, and ACE-FTS onboard SCISAT-1. Additionally, ozonesondes launched from Payerne, Switzerland, are used in the lower stratosphere. Mean relative differences between GROMOS FFTS and these independent instruments are less than 10% between 50 and 0.1 hPa.
Abstract:
Not all Latin American composers have enjoyed recognition and dissemination, and many of their works have been lost or never published. In line with the objectives of the Master's in Performance of Twentieth-Century Latin American Music, the "Raíces" project seeks to collect works and insert them into an interactive database. This paper reports on the background and development of that project.
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication mechanisms can transfer data at high speed. The concept of distributed systems emerged to describe systems whose parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities, and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302). Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind, such as predictability and reliability of the timing behavior and of the resource usage. The design starts with the definition of a computational model which identifies, among other things: the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations.
Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust the memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out, with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block), and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
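For context, the plain RMI model on which the middleware builds can be sketched as follows in standard Java; the real-time attributes the thesis attaches to remote references are not part of standard RMI and are therefore omitted here, and the service names are illustrative:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Minimal standard-RMI service: a remote interface, an implementation,
// and registration in an RMI registry so clients can invoke it remotely.
public class RmiSketch {

    // Every remote method must declare RemoteException.
    public interface Sensor extends Remote {
        double read() throws RemoteException;
    }

    public static class SensorImpl implements Sensor {
        @Override
        public double read() { return 42.0; } // placeholder reading
    }

    public static void main(String[] args) throws Exception {
        // Export the servant and obtain its remote stub.
        Sensor stub = (Sensor) UnicastRemoteObject.exportObject(new SensorImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("sensor", stub);
        System.out.println("sensor bound; awaiting remote invocations");
    }
}
```

The middleware described above extends exactly this invocation path with non-functional parameters, a two-phase resource-allocation scheme, and predictable serialization.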
Abstract:
In this paper, we describe a complete development platform that features different innovative acceleration strategies, not included in any other current platform, that simplify and speed up the definition of the different elements required to design a spoken dialog service. The proposed accelerations are mainly based on using the information from the backend database schema and contents, as well as cumulative information produced throughout the different steps of the design. Thanks to these accelerations, the interaction between the designer and the platform is improved, and in most cases the design is reduced to simple confirmations of the "proposals" that the platform dynamically provides at each step. In addition, the platform provides several other accelerations: configurable templates that can be used to define the different tasks in the service or the dialogs to obtain information from or show information to the user; automatic proposals for the best way to request slot contents from the user (i.e., using mixed-initiative or directed forms); an assistant that offers the set of most probable actions required to complete the definition of the different tasks in the application; and another assistant for solving modality-specific details such as confirmations of user answers or how to present the lists of retrieved results after querying the backend database. Additionally, the platform allows the creation of speech grammars and prompts, database access functions, and the use of mixed-initiative and over-answering dialogs. In the paper we also describe each assistant in the platform in detail, emphasizing the methodologies followed to facilitate the design process in each one. Finally, we describe the results obtained in both a subjective and an objective evaluation with different designers, which confirm the viability, usefulness, and functionality of the proposed accelerations. Thanks to the accelerations, the design time is reduced by more than 56% and the number of keystrokes by 84%.
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid-80s there has been significant progress in the development of parallelizing compilers for logic programming (and, more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers), etc.
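As a concrete illustration of the granularity control and work-stealing style of dynamic task allocation mentioned above, the following Java sketch uses the JDK's ForkJoinPool, a work-stealing scheduler (an example of the technique in general, not of these compilers' own runtime):

```java
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Parallel array sum: tasks below a granularity threshold run sequentially;
// larger tasks split, and forked halves may be stolen by idle workers.
public class SumTask extends RecursiveTask<Long> {
    private final long[] a;
    private final int lo, hi;

    SumTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= 1_000) { // granularity control threshold
            long s = 0;
            for (int i = lo; i < hi; i++) s += a[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(a, lo, mid);
        left.fork();                                   // may be stolen by an idle worker
        long right = new SumTask(a, mid, hi).compute(); // this worker keeps the other half
        return right + left.join();
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        Arrays.fill(data, 1L);
        System.out.println(new ForkJoinPool().invoke(new SumTask(data, 0, data.length)));
    }
}
```

The granularity threshold plays the same role as the task granularity control discussed in the abstract: below it, spawning a parallel task would cost more than it gains.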