961 results for: Laboratorio remoto, robotica, mobile, web applications, model driven software architecture
Abstract:
This summary presents a methodology for supporting the development of AOSAs following the MDD paradigm. This new methodology, called PRISMA, enables code generation from models that specify both functional and non-functional requirements.
Abstract:
As society advances, the amount of data stored in information systems and processed by software applications and servers grows exponentially. Moreover, new technologies have entrusted their development to the internationally connected network: the Internet. As a consequence, machine-to-machine (M2M) connections over the Internet have been exploited, giving rise to the concept of the "Internet of Things": a network of devices and terminals in which any everyday object can connect to other objects or to a smartphone through the services deployed on that network. However, these new data and events must be processed in real time and efficiently in order to react to any situation. Event-driven architectures address the understanding of real-time message exchange. An EDA (Event-Driven Architecture) thus makes it possible to implement a software architecture with an exhaustive definition of the messages, notifying the user of the events that have occurred around them and of the actions taken in response. This final-year project (TFG) focuses on the study of event-driven architectures, contrasting them with the other main architectural patterns. This comparison has been carried out according to the non-functional requirements of each, such as security against external threats. The main objective is the study of EDAs and their relationship with the Internet of Things network, which allows any device to access the services deployed on that network via the Internet. The aim of the TFG is to observe and verify the advantages of this architecture, owing to its immediacy, through the sending and receiving of messages in real time and asynchronously. A state-of-the-art survey of these software architecture patterns, as well as of the IoT (Internet of Things) network and its services, has also been carried out. In addition, alongside this TFG a simulation of a complete EDA has been developed, with all of its elements: producers, consumers and a complex event processor, plus data visualization. To highlight the services provided by the IoT network and their relationship with an EDA, a simulation of a personalized tele-assistance service has been implemented. This proof of concept has helped to reinforce learning and to understand more precisely the knowledge acquired through the theoretical study of an EDA. It has been implemented in the Java programming language using the open-source solutions RabbitMQ and Esper, with the AMQP standard binding them together so that the transfer completes correctly.
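The abstract names Java, RabbitMQ, Esper and AMQP but includes no code. As a minimal sketch of the producer/consumer side of such an EDA, assuming a local broker and an invented queue name, the following uses the RabbitMQ Java client (Esper's complex event processing is omitted):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

// Minimal EDA sketch: one producer and one consumer sharing a queue.
// Queue name and broker host are illustrative assumptions.
public class TeleAssistDemo {
    private static final String QUEUE = "teleassist.events";

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a local RabbitMQ broker

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.queueDeclare(QUEUE, false, false, false, null);

            // Consumer: reacts asynchronously to each incoming event.
            channel.basicConsume(QUEUE, true,
                (tag, delivery) -> System.out.println(
                    "event received: " + new String(delivery.getBody(), StandardCharsets.UTF_8)),
                tag -> { });

            // Producer: emits one simulated sensor event.
            String event = "{\"sensor\":\"fall-detector\",\"value\":\"alarm\"}";
            channel.basicPublish("", QUEUE, null, event.getBytes(StandardCharsets.UTF_8));

            Thread.sleep(1000); // give the consumer time to fire before closing
        }
    }
}
```

In the simulation described above, a complex event processor would sit between producers and consumers to derive higher-level tele-assistance alarms; this sketch shows only the AMQP transport.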
Abstract:
This TFG is framed within the context of synthetic biology (more specifically, protocol automation) and represents a step forward in this sector. It consists of a management platform for autonomous laboratories. The resulting tool will help the operator coordinate the machines available in a laboratory when executing an experiment based on a synthetic-biology protocol. Nowadays, biological experiments have a very low success rate in conventional laboratories because of the number of external factors that interfere during the protocol. These experiments are also expensive and require an operator to monitor every phase of the protocol. Laboratory automation can increase the success rate while reducing costs and risks for workers in the laboratory environment. The present proposal divides the different entities of a laboratory into functional units, which are the elements to be coordinated by the tool produced in this TFG. To give the tool flexibility, a service-oriented architecture (SOA) is used: each functional unit deploys a web service exposing its functionality to the rest of the laboratory. SOA is essential for machine-to-machine communication because it abstracts away the type of machine involved and how its functionality is implemented. The main difficulty of the TFG lies in the integration and coordination of the different functional units so as to properly manage the life cycle of an experiment. To this end, an analysis of available open-source tools was carried out, and Apache Camel was chosen as the framework on which to build the tool proposed in the TFG. Apache Camel plays a central role in this project: it establishes the connection layers to the various services and routes the appropriate messages to each service based on the content of the input file. For the prototype, a series of web services was developed to allow testing and proof-of-concept demonstrations of the tool itself, along with a preliminary version of the web application that the laboratory operator will use to manage requests, deciding which protocol runs next and following the experiment's task flow.
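Apache Camel's role described above (connecting services and routing messages based on the input file's content) corresponds to its content-based router pattern. The following is a minimal sketch under assumed endpoints and an assumed protocol-file layout, not the project's actual routes:

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class LabRouter {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Content-based router: inspect the incoming protocol file
                // and forward each request to the matching functional unit.
                // Endpoint URLs and the XML layout are hypothetical.
                from("file:protocols?noop=true")
                    .choice()
                        .when(xpath("/protocol/step[1]/@unit = 'incubator'"))
                            .to("http://incubator.lab.local/api/run")
                        .when(xpath("/protocol/step[1]/@unit = 'pipetting'"))
                            .to("http://pipetting.lab.local/api/run")
                        .otherwise()
                            .to("log:unknownUnit");
            }
        });
        context.start();
        Thread.sleep(10_000); // let the route poll the directory briefly
        context.stop();
    }
}
```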
Abstract:
The purpose of this thesis is to study the approximation to thermal transport phenomena in glazed buildings through their scale replicas. The central task is therefore the comparison of the thermal behaviour of scale models with the corresponding thermal behaviour of their full-scale prototypes, with indoor air temperatures as the main data for comparison. The first chapter of the State of the Art gives a historical review of the uses of scale models from antiquity to the present day. Within it, the State of the Technique presents the benefits of their use and the difficulties they entail, and the State of the Research reviews current scientific papers and theses on scale models. We focus specifically on functional scale models: scale models that additionally replicate one or more functions of their prototypes. Scale models may be distorted or undistorted. Distorted scale models have intentional changes in dimensions or in constructive characteristics and materials in order to obtain a specific response, for example to replicate a particular thermal behaviour. Undistorted scale models preserve, as far as possible, the dimensional proportions and constructive characteristics of their reference prototypes. These functional, undistorted scale models are especially useful for architects because they can serve simultaneously as functional elements of analysis and as decision-making elements in constructive design. Despite their versatility, it is notable that such models have been used very little for the study of the thermal behaviour of buildings. The thesis then presents the theories for analysing the thermal data collected from scale models and their applicability to the corresponding full-scale prototypes, and describes the experiments carried out both in the laboratory and outdoors. First, experiments were performed with simple cubic models at different scales subjected to the same environmental conditions. From these simple models we moved to a reduced model of a relatively simple lightweight glazed building: simultaneous outdoor tests of the full-scale prototype, the Prototype Workshop of the School of Architecture (ETSAM) of the Technical University of Madrid (UPM), and its undistorted scale model. For the analysis of the experimental data we applied the known theories and resources: direct comparisons, statistical analyses and Dimensional Analysis. Finally, simulations allow flexible comparisons with the experimental data; for this reason we used both the simulation software EnergyPlus and a simulation algorithm developed ad hoc for this research. The thesis closes with the discussion and conclusions of the research.
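The abstract invokes Dimensional Analysis without naming the dimensionless groups involved. A standard choice for transient heat conduction, assumed here purely for illustration, is the Fourier number, which fixes how time must be scaled between model and prototype:

```latex
% Fourier number: dimensionless time for transient conduction
% \alpha = thermal diffusivity, t = time, L = characteristic length
\[
  \mathrm{Fo} \;=\; \frac{\alpha\, t}{L^{2}}
\]
% Matching Fo between model (m) and prototype (p) built from the same
% materials (\alpha_m = \alpha_p) gives the time-scaling rule
\[
  \frac{t_m}{t_p} \;=\; \left(\frac{L_m}{L_p}\right)^{2}
\]
% e.g. a 1:10 undistorted model responds 100 times faster than its prototype.
```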
Abstract:
Poster and abstract of the paper presented at the VI Congreso Internacional de Docencia Universitaria e Innovación (CIDUI), Barcelona, 30 June - 2 July 2010.
Abstract:
Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Here, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on the analysis problem via conceptual data-mining models instead of low-level programming tasks related to the underlying platform's technical details. These tasks are now entrusted to the model-transformation scaffolding.
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Bearing in mind this situation, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying platform's technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario where a time-series analysis is required.
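Both abstracts stay at the conceptual-model level. As a hedged illustration of the kind of platform-specific analysis model such transformations could generate, the sketch below lowers a hypothetical conceptual clustering model onto the Weka Java API; the dataset path and parameter values are assumptions, and Weka is only one possible target platform:

```java
import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Hypothetical output of a model-to-code transformation: a conceptual
// "customer segmentation" mining model lowered onto the Weka platform.
public class GeneratedClusteringModel {
    public static void main(String[] args) throws Exception {
        // Dataset path is an assumption; any ARFF/CSV source would do.
        Instances data = DataSource.read("warehouse/customers.arff");

        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(3);          // parameter from the conceptual model
        kmeans.setSeed(42);                // fixed seed for reproducibility
        kmeans.buildClusterer(data);

        System.out.println(kmeans);        // centroids and cluster sizes
    }
}
```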
Abstract:
Integrity assurance of configuration data has a significant impact on the reliability of microcontroller-based systems. This is especially true when running event-driven applications whose behavior is tightly coupled to this kind of data. This work proposes a new hybrid technique that combines hardware and software resources for detecting and recovering from soft errors in system configuration data. Our approach is based on a common built-in microcontroller resource (a timer) working jointly with a software-based technique that periodically refreshes the configuration data. The experiments demonstrate that non-destructive single-event effects can be effectively mitigated with reduced overheads. Results show an important increase in fault coverage for SEUs and SETs, of about one order of magnitude.
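The paper targets microcontroller firmware, where this technique would be written in C against a hardware timer peripheral. Purely to illustrate the scrubbing idea (a timer that periodically compares live configuration against a protected golden copy and restores it on mismatch), here is a conceptual Java sketch with invented names:

```java
import java.util.Arrays;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Conceptual sketch of timer-driven configuration scrubbing.
// On a real microcontroller the "timer" is a hardware peripheral and the
// golden copy would live in protected (e.g. ECC or flash) memory.
public class ConfigScrubber {
    private final byte[] golden;   // protected reference copy
    private final byte[] live;     // working copy exposed to soft errors

    ConfigScrubber(byte[] initial) {
        golden = initial.clone();
        live = initial.clone();
    }

    void startScrubbing(ScheduledExecutorService timer, long periodMs) {
        timer.scheduleAtFixedRate(() -> {
            if (!Arrays.equals(live, golden)) {
                System.out.println("soft error detected; restoring configuration");
                System.arraycopy(golden, 0, live, 0, golden.length); // recovery
            }
        }, periodMs, periodMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        ConfigScrubber s = new ConfigScrubber(new byte[] {1, 2, 3});
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        s.startScrubbing(timer, 100);
        s.live[1] ^= 0x04;          // inject a bit flip (simulated SEU)
        Thread.sleep(300);
        timer.shutdown();
    }
}
```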
Abstract:
Software development methodologies are becoming increasingly abstract, progressing from low level assembly and implementation languages such as C and Ada, to component based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model driven approaches emphasise the role of higher level models and notations, and embody a process of automatically deriving lower level representations and concrete software implementations. The relationship between data and software is also evolving. Modern data formats are becoming increasingly standardised, open and empowered in order to support a growing need to share data in both academia and industry. Many contemporary data formats, most notably those based on XML, are self-describing, able to specify valid data structure and content, and can also describe data manipulations and transformations. Furthermore, while applications of the past have made extensive use of data, the runtime behaviour of future applications may be driven by data, as demonstrated by the field of dynamic data driven application systems. The combination of empowered data formats and high level software development methodologies forms the basis of modern game development technologies, which drive software capabilities and runtime behaviour using empowered data formats describing game content. While low level libraries provide optimised runtime execution, content data is used to drive a wide variety of interactive and immersive experiences. This thesis describes the Fluid project, which combines component based software development and game development technologies in order to define novel component technologies for the description of data driven component based applications. The thesis makes explicit contributions to the fields of component based software development and visualisation of spatiotemporal scenes, and also describes potential implications for game development technologies. The thesis also proposes a number of developments in dynamic data driven application systems in order to further empower the role of data in this field.
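As a small illustration of self-describing data driving runtime behaviour in the sense discussed above, the following assumed sketch reads an XML descriptor and instantiates the components it names via reflection; the element and attribute names are invented for the example:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Data-driven assembly: the XML content, not the code, decides which
// components run. Element and attribute names are illustrative only.
public class DataDrivenLoader {
    public static void main(String[] args) throws Exception {
        String descriptor =
            "<scene>" +
            "  <component class='java.util.ArrayList'/>" +
            "  <component class='java.lang.StringBuilder'/>" +
            "</scene>";

        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(descriptor.getBytes(StandardCharsets.UTF_8)));

        NodeList components = doc.getElementsByTagName("component");
        for (int i = 0; i < components.getLength(); i++) {
            String className = ((Element) components.item(i)).getAttribute("class");
            Object instance = Class.forName(className)
                                   .getDeclaredConstructor().newInstance();
            System.out.println("instantiated " + instance.getClass().getName());
        }
    }
}
```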
Abstract:
The current INFRAWEBS European research project aims at developing an ICT framework enabling software and service providers to generate and establish open and extensible development platforms for Web Service applications. One of the concrete project objectives is developing a full-life-cycle software toolset for creating and maintaining Semantic Web Services (SWSs) supporting specific applications based on the Web Service Modelling Ontology (WSMO) framework. According to WSMO, functional and behavioural descriptions of a SWS may be represented by means of complex logical expressions (axioms). The paper describes a specialized user-friendly tool for constructing and editing such axioms: the INFRAWEBS Axiom Editor. After discussing the main design principles of the Editor, its functional architecture is briefly presented. The tool is implemented on the Eclipse Graphical Editing Framework (GEF) and the Eclipse Rich Client Platform.
Abstract:
Никола Вълчанов, Тодорка Терзиева, Владимир Шкуртов, Антон Илиев - One of the main application areas of computer informatics is the automation of mathematical computations. Information systems cover various domains such as accounting, e-learning/e-testing, simulation environments, etc. They work with computational libraries that are specific to the scope of the system. Even though such systems may be perfect and work flawlessly, they become obsolete if they are not maintained. In this work we describe a mechanism that uses the computational libraries dynamically and decides at run time (intelligently or interactively) how and when they are to be used. The aim of this paper is to present an architecture for computation-driven systems. It focuses on the benefits of using the right design patterns in order to provide extensibility and reduce complexity.
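The paper describes the architecture only in general terms. As an assumed sketch of the design-pattern idea it advocates (choosing a computational back-end at run time behind a stable interface), consider a strategy pattern in Java:

```java
import java.util.Map;
import java.util.function.DoubleUnaryOperator;

// Strategy pattern: computational back-ends are interchangeable at run time.
// The registry keys and back-ends are illustrative assumptions.
public class ComputationDriven {
    interface Integrator {
        double integrate(DoubleUnaryOperator f, double a, double b);
    }

    // Two back-ends with the same contract but different cost/accuracy.
    static final Integrator MIDPOINT = (f, a, b) -> (b - a) * f.applyAsDouble((a + b) / 2);
    static final Integrator TRAPEZOID = (f, a, b) ->
        (b - a) * (f.applyAsDouble(a) + f.applyAsDouble(b)) / 2;

    static final Map<String, Integrator> REGISTRY =
        Map.of("fast", MIDPOINT, "accurate", TRAPEZOID);

    public static void main(String[] args) {
        // Which back-end to use is decided at run time, e.g. from
        // configuration or user interaction.
        String choice = args.length > 0 ? args[0] : "fast";
        Integrator integrator = REGISTRY.get(choice);
        System.out.println(integrator.integrate(x -> x * x, 0.0, 1.0));
    }
}
```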
Abstract:
ACM Computing Classification System (1998): D.2.5, D.2.9, D.2.11.
Abstract:
Hardware vendors make an important effort creating low-power CPUs that keep battery duration and durability above acceptable levels. In order to achieve this goal and provide a good performance-energy balance for a wide variety of applications, ARM designed the big.LITTLE architecture. This heterogeneous multi-core architecture features two different types of cores: big cores oriented to performance, and little cores, slower and aimed at saving energy. As all the cores have access to the same memory, multi-threaded applications must resort to some mutual-exclusion mechanism to coordinate the access to shared data by the concurrent threads. Transactional Memory (TM) represents an optimistic approach to shared-memory synchronization. To take full advantage of the features offered by software TM, but also benefit from the characteristics of heterogeneous big.LITTLE architectures, our focus is to propose TM solutions that take into account the power/performance requirements of the application and what the architecture offers. In order to understand the current state of the art and obtain useful information for future power-aware software TM solutions, we have analysed a popular TM library running on top of an ARM big.LITTLE processor. Experiments show, in general, better scalability on the LITTLE cores for most of the applications, except for one that requires the computing performance the big cores offer.
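The TM library analysed in the paper is not shown here; as a stand-in, the JDK's StampedLock offers, on a small scale, the same optimistic pattern (read without blocking, validate, retry on conflict) that TM generalises to whole transactions:

```java
import java.util.concurrent.locks.StampedLock;

// Not the software TM library from the paper: a JDK StampedLock sketch
// of the optimistic idea underlying TM-style synchronization.
public class OptimisticCounter {
    private final StampedLock lock = new StampedLock();
    private long value;

    long read() {
        long stamp = lock.tryOptimisticRead();   // no blocking, like a TM read
        long snapshot = value;
        if (!lock.validate(stamp)) {             // conflict detected: retry pessimistically
            stamp = lock.readLock();
            try {
                snapshot = value;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return snapshot;
    }

    void increment() {
        long stamp = lock.writeLock();
        try {
            value++;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public static void main(String[] args) {
        OptimisticCounter c = new OptimisticCounter();
        c.increment();
        System.out.println(c.read()); // 1
    }
}
```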
Abstract:
In the metal industry, and more specifically in forging, scrap material is a crucial issue, and reducing it is an important goal. Not only would this help companies be more environmentally friendly and more sustainable, it would also reduce energy use and lower costs. At the same time, Industry 4.0 techniques and advances in Artificial Intelligence (AI), especially in the field of Deep Reinforcement Learning (DRL), can play an important role in achieving this objective. This document presents the thesis work, a contribution to the SmartForge project, carried out during a semester abroad at Karlstad University (Sweden). The project aims at solving the aforementioned problem for a business case of the company Bharat Forge Kilsta, located in Karlskoga (Sweden). The thesis work includes the design and subsequent development of an event-driven architecture with microservices to support the processing of data coming from sensors set up in the company's industrial plant, and finally the implementation of an algorithm with DRL techniques to control the electrical power used in the plant.
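The thesis pairs an event-driven microservice pipeline with a DRL controller. As a deliberately simplified stand-in for the learning component, the sketch below uses tabular epsilon-greedy Q-learning over discretised power levels; every state, reward and parameter here is invented for illustration, and the actual thesis uses deep RL over real sensor data:

```java
import java.util.Random;

// Toy stand-in for the DRL power controller: epsilon-greedy Q-learning
// over a handful of discrete states and power levels.
public class PowerControlSketch {
    static final int STATES = 4, ACTIONS = 3;   // e.g. temperature bands x power levels
    static final double ALPHA = 0.1, GAMMA = 0.9, EPSILON = 0.1;
    static final Random RNG = new Random(42);
    static final double[][] q = new double[STATES][ACTIONS];

    static int chooseAction(int state) {
        if (RNG.nextDouble() < EPSILON) return RNG.nextInt(ACTIONS); // explore
        int best = 0;
        for (int a = 1; a < ACTIONS; a++) if (q[state][a] > q[state][best]) best = a;
        return best;                                                 // exploit
    }

    static void update(int s, int a, double reward, int next) {
        double bestNext = Math.max(q[next][0], Math.max(q[next][1], q[next][2]));
        q[s][a] += ALPHA * (reward + GAMMA * bestNext - q[s][a]);
    }

    public static void main(String[] args) {
        // One simulated step: observe state, act, receive reward, learn.
        int state = 1;
        int action = chooseAction(state);
        double reward = -Math.abs(action - 1);  // fictitious: level 1 is optimal
        update(state, action, reward, state);
        System.out.println("chose power level " + action + ", reward " + reward);
    }
}
```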
Abstract:
The BP (Bundle Protocol) version 7 has recently been standardized by the IETF in RFC 9171, but it is the whole DTN (Delay-/Disruption-Tolerant Networking) architecture, of which BP is the core, that is gaining renewed interest thanks to its planned adoption in future space missions. This is obviously positive, but at the same time it seems to make space agencies more interested in deployment than in research, with new BP implementations that may challenge the central role played until now by the historical BP reference implementations, such as ION and DTNME. To make Unibo research on DTN independent of space-agency decisions, the development of an in-house BP implementation was in order. This is the goal of this thesis, which deals with the design and implementation of Unibo-BP: a novel, research-driven BP implementation, to be released as Free Software. Unibo-BP is fully compliant with RFC 9171, as demonstrated by a series of interoperability tests with ION and DTNME, and presents a few innovations, such as the ability to manage remote DTN nodes by means of the BP itself. Unibo-BP is compatible with pre-existing Unibo implementations of CGR (Contact Graph Routing) and LTP (Licklider Transmission Protocol), thanks to interfaces designed during the thesis. The thesis project also includes an implementation of TCPCLv3 (TCP Convergence Layer version 3, RFC 7242), which can be used as an alternative to LTPCL to connect with proximate nodes, especially in terrestrial networks. In summary, Unibo-BP is at the heart of a larger project, Unibo-DTN, which aims to implement the main components of a complete DTN stack (BP, TCPCL, LTP, CGR). Moreover, Unibo-BP is compatible with all DTNsuite applications, thanks to an extension of the Unified API library on which DTNsuite applications are based. The hope is that Unibo-BP and all the ancillary programs developed during this thesis will contribute to the growth of DTN's popularity in academia and among space agencies.
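For readers unfamiliar with BPv7, a bundle's primary block (RFC 9171) carries the fields sketched below. This is an illustrative Java record, not Unibo-BP's actual data structures, and it omits the CBOR encoding and optional CRC that a real implementation must produce:

```java
// Hedged sketch of the RFC 9171 primary-block fields as a Java record.
public record PrimaryBlock(
        int version,                 // always 7 for BPv7
        long controlFlags,           // bundle processing control flags
        int crcType,                 // 0 = none, 1 = CRC-16, 2 = CRC-32C
        String destinationEid,       // e.g. "dtn://dest/app" (invented EIDs)
        String sourceNodeEid,        // e.g. "dtn://unibo/src"
        String reportToEid,
        long creationTimeMs,         // ms since 2000-01-01T00:00:00 UTC (DTN epoch)
        long sequenceNumber,         // disambiguates bundles with equal creation time
        long lifetimeMs) {           // bundle expires at creationTime + lifetime

    public static void main(String[] args) {
        PrimaryBlock pb = new PrimaryBlock(7, 0, 2,
                "dtn://dest/app", "dtn://unibo/src", "dtn://unibo/src",
                770_000_000_000L, 0, 86_400_000L);
        System.out.println(pb);
    }
}
```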