953 results for component isolation, system call interpositioning, hardware virtualization, application isolation


Relevance:

40.00%

Publisher:

Abstract:

Autonomous systems require, in most cases, reasoning and decision-making capabilities, and the decision process has to occur in real time. Real-time computing means that every situation or event must be answered before a temporal deadline. In complex applications, these deadlines are usually on the order of milliseconds, or even microseconds if the application is very demanding. To comply with these timing requirements, computing tasks have to be performed as fast as possible. The problem arises when computations are no longer simple but very time-consuming operations. A good example can be found in autonomous navigation systems with visual-tracking submodules, where Kalman filtering is the most widespread solution. However, in recent years some interesting new approaches have been developed. Particle filtering, given its more general problem-solving features, has reached an important position in the field. The aim of this thesis is to design, implement and validate a hardware platform that constitutes, in itself, an embedded intelligent system. The proposed system combines particle filtering and evolutionary computation algorithms to generate intelligent behavior. Traditional approaches to particle filtering or evolutionary computation have been developed on software platforms, including parallel capabilities to some extent. In this work, an additional goal is to fully exploit the advantages of a hardware implementation. By using the computational resources available in an FPGA device, better performance in terms of computation time is expected. These hardware resources are in charge of the extensive repetitive computations. With this hardware-based implementation, real-time behavior is also expected.
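As a reference for the kind of repetitive computation the abstract alludes to, here is a minimal bootstrap particle filter sketch in Python (NumPy); the 1-D random-walk model, noise levels and particle count are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def particle_filter(observations, n_particles=1000,
                    process_std=0.5, obs_std=1.0, rng=None):
    """Bootstrap particle filter for a 1-D random-walk state model.

    The propagate/weight/resample loop is the repetitive kernel that
    a hardware (FPGA) implementation would parallelize.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)  # initial prior
    estimates = []
    for z in observations:
        # Propagate: sample each particle from the transition model.
        particles += rng.normal(0.0, process_std, n_particles)
        # Weight: likelihood of the observation under each particle.
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        # Estimate: weighted posterior mean.
        estimates.append(np.dot(w, particles))
        # Resample: multinomial resampling to avoid degeneracy.
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)
```

The per-particle propagate and weight steps are independent, which is what makes the algorithm a natural fit for the parallel hardware resources the thesis targets.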

Relevance:

40.00%

Publisher:

Abstract:

Cloud computing and, more particularly, private IaaS, is seen as a mature technology with a myriad of solutions to choose from. However, this disparity of solutions and products has instilled in potential adopters the fear of vendor and data lock-in. Several competing and incompatible interfaces and management styles have given even more voice to these fears. On top of this, cloud users might want to work with several solutions at the same time, an integration that is difficult to achieve in practice. In this paper, we propose a management architecture that tries to tackle these problems; it offers a common way of managing several cloud solutions and an interface that can be tailored to the needs of the user. The architecture is designed in a modular way, using a generic information model. We have validated our approach by implementing the components needed for this architecture to support a sample private IaaS solution: OpenStack.
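To make the idea of a modular layer over a generic information model concrete, here is a minimal sketch of how a common management entry point might dispatch to per-cloud drivers; the class and method names are illustrative assumptions, not the paper's actual interface.

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Adapter for one IaaS solution (e.g. OpenStack)."""
    @abstractmethod
    def list_instances(self) -> list[dict]: ...
    @abstractmethod
    def create_instance(self, image: str, flavor: str) -> dict: ...

class OpenStackDriver(CloudDriver):
    def list_instances(self):
        # Would call the OpenStack API here; stubbed for the sketch.
        return [{"id": "vm-1", "provider": "openstack"}]
    def create_instance(self, image, flavor):
        return {"id": "vm-2", "image": image, "flavor": flavor}

class CloudManager:
    """Common entry point: one generic model, many back ends."""
    def __init__(self):
        self.drivers: dict[str, CloudDriver] = {}
    def register(self, name: str, driver: CloudDriver):
        self.drivers[name] = driver
    def list_all(self):
        # Normalize every provider's answer into one generic model.
        return {n: d.list_instances() for n, d in self.drivers.items()}
```

Supporting a new cloud solution then reduces to writing one more driver, which is the modularity argument the abstract makes.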

Relevance:

40.00%

Publisher:

Abstract:

Virtualization techniques have received increased attention in the field of embedded real-time systems. Such techniques provide a set of virtual machines that run on a single hardware platform, thus allowing several application programs to be executed as though they were running on separate machines, with isolated memory spaces and a fraction of the real processor time available to each of them. This paper deals with some problems that arise when implementing real-time systems written in Ada on a virtual machine. The effects of virtualization on the performance of the Ada real-time services are analysed, and requirements for the virtualization layer are derived. Virtual-machine time services are also defined in order to properly support Ada real-time applications. The implementation of the ORK+ kernel on the XtratuM hypervisor is used as an example.
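The need for virtual-machine time services follows from the gap between a partition's execution (CPU) time and wall-clock time. A small illustrative model in Python, with invented scheduling numbers, of how far the two clocks can drift for a partition that receives a fixed share of each scheduling slot:

```python
import math

def wall_clock_needed(cpu_ms: float, share: float, slot_ms: float) -> float:
    """Upper bound on the wall-clock time needed for `cpu_ms` of CPU work
    in a partition granted `share` of each `slot_ms` scheduling slot.

    Illustrative model only: the partition runs at the start of each slot,
    and in the worst case the job arrives just after its window closes.
    """
    windows = math.ceil(cpu_ms / (share * slot_ms))
    return (1 - share) * slot_ms + (windows - 1) * slot_ms + share * slot_ms

# A 5 ms computation in a partition with 25% of each 10 ms slot:
# a deadline checked against virtual (CPU) time is met after 5 ms of
# execution, yet up to 20 ms of wall-clock time may have elapsed.
print(wall_clock_needed(5.0, 0.25, 10.0))  # -> 20.0
```

A timer armed on the wrong clock therefore fires early or late, which is exactly why the paper derives requirements on the time services the virtualization layer must expose to the Ada runtime.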

Relevance:

40.00%

Publisher:

Abstract:

With the advent of cloud computing, many applications have embraced the ensuing paradigm shift towards modern distributed key-value data stores, like HBase, in order to benefit from the elastic scalability on offer. However, many applications still hesitate to make the leap from the traditional relational database model simply because they cannot compromise on the standard transactional guarantees of atomicity, isolation, and durability. To get the best of both worlds, one option is to integrate an independent transaction management component with a distributed key-value store. In this paper, we discuss the implications of this approach for durability. In particular, if the transaction manager provides durability (e.g., through logging), then we can relax durability constraints in the key-value store. However, if a component fails (e.g., a client or a key-value server), then we need a coordinated recovery procedure to ensure that commits are persisted correctly. In our research, we integrate an independent transaction manager with HBase. Our main contribution is a failure recovery middleware for the integrated system, which tracks the progress of each commit as it is flushed down by the client and persisted within HBase, so that we can recover reliably from failures. During recovery, commits that were interrupted by the failure are replayed from the transaction management log. Importantly, the recovery process does not interrupt transaction processing on the available servers. Using a benchmark, we evaluate the impact of component failure, and subsequent recovery, on application performance.
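A rough sketch of the commit-tracking idea: a durable log entry in the transaction manager marks a commit as logged, and recovery replays any commit not yet confirmed as persisted in the key-value store. All names here are hypothetical Python stand-ins; the paper's middleware is HBase-specific and this is only the generic shape of the protocol.

```python
from enum import Enum

class CommitState(Enum):
    LOGGED = 1      # durable in the transaction manager's log
    PERSISTED = 2   # confirmed flushed into the key-value store

class RecoveryTracker:
    """Tracks each commit from TM log entry to key-value-store persistence."""
    def __init__(self):
        self.commits: dict[int, CommitState] = {}
        self.log: dict[int, list[tuple[str, bytes]]] = {}

    def on_commit_logged(self, txn_id, writes):
        # Called once the transaction manager's log entry is durable.
        self.log[txn_id] = writes
        self.commits[txn_id] = CommitState.LOGGED

    def on_flush_acknowledged(self, txn_id):
        # Called when the store acknowledges the commit's writes.
        self.commits[txn_id] = CommitState.PERSISTED
        del self.log[txn_id]

    def recover(self, store_put):
        """Replay commits interrupted between logging and persistence.
        Runs against the available servers without stopping new
        transactions, mirroring the non-blocking recovery described."""
        for txn_id, state in list(self.commits.items()):
            if state is CommitState.LOGGED:
                for key, value in self.log[txn_id]:
                    store_put(key, value)   # idempotent replay
                self.on_flush_acknowledged(txn_id)
```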

Relevance:

40.00%

Publisher:

Abstract:

This paper presents the development of the robotic multi-agent system SMART. In this system, the agent concept is applied to both hardware and software entities. Hardware agents are robots, with three and four legs, and an IP camera that takes images of the scene where the cooperative task is carried out. Hardware agents cooperate closely with software agents; the latter can be classified into image processing, communications, task management and decision making, and planning and trajectory generation agents. To model, control and evaluate the performance of cooperative tasks among agents, a kind of Petri net, called a Work-Flow Petri Net, is used. Experimental results show the good performance of the system.
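As a reminder of the mechanism a Petri net contributes here, this is a minimal place/transition simulator in Python; the two-place net is an invented toy, not the SMART workflow itself.

```python
class PetriNet:
    """Minimal place/transition net: marking, enabling, firing."""
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy workflow: a robot agent and the camera agent must both be ready
# before the cooperative step "grasp" can fire.
net = PetriNet({"robot_ready": 1, "image_ready": 1})
net.add_transition("grasp", {"robot_ready": 1, "image_ready": 1},
                   {"task_done": 1})
net.fire("grasp")
print(net.marking)  # {'robot_ready': 0, 'image_ready': 0, 'task_done': 1}
```

Tokens model agent readiness and transitions model cooperative steps, which is how such a net lets the system both control and evaluate a multi-agent task.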

Relevance:

40.00%

Publisher:

Abstract:

A method that provides a three-dimensional representation of the basin of attraction of a dynamical system from experimental data was applied to the problem of dynamic balance restoration. The method is based on the density of the data on the phase space of the system under study and makes use of modeling and numerical curve-fitting tools. For the dynamical system of balance restoration, the shape and the size of the basin of attraction depend on the dynamics of the postural restoring mechanisms and contain important information regarding the biomechanical as well as the neuromuscular condition of the individual. The aim of this work was to examine the ability of the method to detect, through the observed changes in the shape and/or the size of the calculated basins of attraction, (a) the inherent differences between different systems (in the current application, postural restoring systems of different individuals) and (b) induced changes in the same system (the postural restoring system of an individual). The results of the study confirm the validity of the method and furthermore justify its robustness.
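A minimal sketch of the density idea: simulate trajectories, histogram the visited phase-space points, and take the high-density region as an estimate of the basin's extent. The damped-pendulum system and the thresholds below are illustrative assumptions, not the paper's biomechanical model.

```python
import numpy as np

def simulate(x0, v0, steps=2000, dt=0.01, damping=0.4):
    """Damped pendulum: a toy stand-in for a postural restoring system."""
    x, v = x0, v0
    path = []
    for _ in range(steps):
        a = -np.sin(x) - damping * v
        v += a * dt
        x += v * dt
        path.append((x, v))
    return path

# Density of visited points over the (angle, velocity) phase plane.
rng = np.random.default_rng(1)
points = []
for _ in range(200):
    points += simulate(rng.uniform(-1.5, 1.5), rng.uniform(-1.5, 1.5))
pts = np.array(points)
hist, xedges, vedges = np.histogram2d(pts[:, 0], pts[:, 1], bins=50)

# Cells above a density threshold approximate the basin of attraction;
# the occupied area gives a crude scalar "size" of the basin, the kind
# of quantity whose change the method tracks between individuals.
occupied = hist > hist.max() * 0.01
cell_area = np.diff(xedges)[0] * np.diff(vedges)[0]
print("estimated basin area:", occupied.sum() * cell_area)
```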

Relevance:

40.00%

Publisher:

Abstract:

One of the main concerns of evolvable and adaptive systems is the need for a training mechanism, which is normally addressed by using a training reference and a test input. The fitness function to be optimized during the evolution (training) phase is obtained by comparing the output of the candidate systems against the reference. The adaptivity that this type of system may provide by re-evolving during operation is especially important for applications with runtime-variable conditions. However, fully automated self-adaptivity poses additional problems. For instance, in some cases it is not possible to have such a reference, because the changes in the environment conditions are unknown, so it becomes difficult to autonomously identify which problem needs to be solved and, hence, which conditions should be representative for an adequate re-evolution. In this paper, a solution that removes this dependency is presented and analyzed. The system consists of an image filter application mapped on an evolvable hardware platform, able to evolve using two consecutive frames from a camera as both test and reference images. The system is entirely mapped in an FPGA, and native dynamic and partial reconfiguration is used for evolution. It is also shown that using such images, both of them being noisy, as input and reference images in the evolution phase is equivalent to, or even better than, evolving the filter with offline images. The combination of both techniques yields the completely autonomous, noise-type/level-agnostic filtering system, with no reference-image requirement, described throughout the paper.
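The self-referential fitness trick is easy to state in code: filter frame t and score it against frame t+1, so two noisy observations of (nearly) the same scene substitute for a clean reference. A hedged NumPy sketch; the mean-absolute-error fitness and 3x3 kernel representation are assumptions for illustration.

```python
import numpy as np

def convolve2d(img, k):
    """Tiny same-size 3x3 convolution (edges zero-padded)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * k)
    return out

def fitness(candidate_kernel, frame_t, frame_t1):
    """Score a candidate filter without any clean reference: the filtered
    current frame is compared against the *next* noisy frame of the same
    scene. Lower is better."""
    return np.mean(np.abs(convolve2d(frame_t, candidate_kernel) - frame_t1))

# Two noisy views of the same scene stand in for consecutive camera frames.
rng = np.random.default_rng(0)
scene = np.ones((32, 32)); scene[8:24, 8:24] = 5.0
f0 = scene + rng.normal(0, 0.5, scene.shape)
f1 = scene + rng.normal(0, 0.5, scene.shape)

mean_kernel = np.full((3, 3), 1 / 9)               # a smoothing candidate
identity = np.zeros((3, 3)); identity[1, 1] = 1.0  # the do-nothing candidate
print("mean filter:", fitness(mean_kernel, f0, f1))
print("identity:   ", fitness(identity, f0, f1))
```

Because the noise in the two frames is independent, a candidate can only lower this score by actually removing noise, not by memorizing it; that is the intuition behind evolving with two noisy frames instead of an offline reference.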

Relevance:

40.00%

Publisher:

Abstract:

The WCTR is a conference of recognized international prestige in the field of transport research; although its published proceedings are digital-only and carry neither an ISSN nor an ISBN, we consider it important enough to be counted in the indicators. This paper develops a model based on agency theory to analyze road management systems (under the different contract forms available today) that employ a mechanism of performance indicators to establish the payment of the agent. The base assumption is that of asymmetric information between the principal (public authorities) and the agent (contractor), together with the risk aversion of the latter. It is assumed that the principal may only measure the agent's performance indirectly, by means of certain performance indicators that can be verified by the authorities. The model presumes a relation between the efforts made by the agent and the performance level measured by the corresponding indicators, though it also allows for dispersion between the two variables, which introduces a certain degree of randomness into the contract. An analysis of the optimal contract has been made on the basis of this model, in accordance with a series of parameters that characterize the economic environment and the particular conditions of road infrastructure. As a result of this analysis, an optimal contract should generally combine a fixed component and a payment in accordance with the performance level obtained. The higher the risk aversion of the agent and the greater the marginal cost of public funds, the lower the weight of this performance-based payment. By way of conclusion, the system of performance indicators should be as broad as possible but should not overweight those indicators that carry greater randomness in their results.
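The structure described (a fixed component plus a performance-contingent payment whose weight falls with risk aversion and indicator randomness) matches the standard linear principal-agent contract. As a hedged reference point, not necessarily the paper's exact specification, the textbook Holmström-Milgrom form reads:

```latex
% Linear contract: payment = fixed fee + incentive weight times indicator.
% q = e + \varepsilon : measured performance, effort e plus noise
%                       \varepsilon \sim N(0, \sigma^2);
% r : agent's risk aversion;  c(e) = \tfrac{k}{2}e^2 : effort cost.
\[
  w(q) = \alpha + \beta q, \qquad
  \beta^{*} = \frac{1}{1 + r k \sigma^{2}}
\]
% The optimal incentive weight \beta^{*} falls as risk aversion r or
% indicator noise \sigma^2 grows, matching the abstract's conclusion
% that noisier indicators should carry less performance-based weight.
```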

Relevance:

40.00%

Publisher:

Abstract:

The current deficit situation of the Spanish airport system suggests the need to manage it in a more efficient and profitable way. One possible option is private management, which can be introduced through Public-Private Partnerships (PPP). This study analyzes the situation of the sector and its economic importance, as well as the different possibilities for introducing private management into a public company, detailing the situation in the case of airports, presenting the advantages and disadvantages of these possibilities, and pointing to results obtained elsewhere where they have been applied. It is proposed that the ideal model for the introduction of private management would be PPP models tailored to each airport, but with common characteristics according to the group the airport belongs to. Finally, we observe that not all airports are commercially attractive, so the PPP concept does not apply to all of them; in some cases even the operation itself is not viable at all. These cases should be considered separately in order to avoid creating a private monopoly while trying to enhance competition among airports.

Relevance:

40.00%

Publisher:

Abstract:

New concepts in air navigation have been introduced recently, among them trajectory optimization, 4D trajectories, RBT (Reference Business Trajectory), TBO (Trajectory-Based Operations), CDA (Continuous Descent Approach) and ACDA (Advanced CDA), conflict resolution, arrival management (AMAN), and the introduction of new aircraft (UAVs, UASs) into the airspace. Although some of these concepts are new, future Air Traffic Management will maintain the four ATM key performance areas: safety, capacity, efficiency, and environmental impact. To a large extent, the performance of the ATM system is directly related to the accuracy with which the future evolution of the traffic can be predicted. In this sense, future air traffic management will require a variety of support tools to provide suitable help to users and engineers involved in airspace management. Most of these tools are built around an appropriate trajectory-prediction module as their main component; their purpose is the testing and evaluation of air navigation concepts before they become fully operational. The aim of this paper is to provide an overview of the design of a software tool for estimating aircraft trajectories adapted to these air navigation concepts. Other uses of the tool, such as controller design, vertical navigation assessment, procedure validation, and hardware- and software-in-the-loop simulation, are also available. The paper shows the process followed to design the tool, the software modules needed for accurate performance, and the process followed to validate the output data.
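As an indication of what a trajectory-prediction core does, here is a hedged point-mass sketch integrating along-track distance during a constant-rate descent; the aircraft numbers are invented, and the model ignores wind, turns and thrust/drag detail that a real ATM tool built on aircraft performance data would include.

```python
from dataclasses import dataclass

@dataclass
class State:
    t: float      # time, s
    x: float      # along-track distance, m
    h: float      # altitude, m
    v: float      # true airspeed, m/s

def predict_descent(state: State, target_alt: float,
                    rod: float = 5.0, dt: float = 1.0) -> list[State]:
    """Point-mass prediction of a constant-speed, constant-rate descent.
    rod: rate of descent in m/s. Returns the sampled 4D trajectory."""
    traj = [state]
    while state.h > target_alt:
        state = State(t=state.t + dt,
                      x=state.x + state.v * dt,
                      h=max(target_alt, state.h - rod * dt),
                      v=state.v)
        traj.append(state)
    return traj

# From roughly FL350 (~10670 m) at 220 m/s down to 3000 m:
path = predict_descent(State(0.0, 0.0, 10670.0, 220.0), 3000.0)
print(len(path), "samples; reaches", path[-1].h, "m at t =", path[-1].t, "s")
```

Concepts like CDA or 4D trajectories are then evaluated by swapping in different vertical and speed profiles and comparing the predicted trajectories against required arrival times.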

Relevance:

40.00%

Publisher:

Abstract:

Evolvable hardware (EH) is an interesting alternative to conventional digital circuit design, since the autonomous generation of solutions for a given task permits self-adaptivity of the system to changing environments, and such systems present inherent fault tolerance when evolution is performed intrinsically. Systems based on FPGAs that use Dynamic and Partial Reconfiguration (DPR) for evolving the circuit are an example. Thanks to DPR, these systems can also be provided with scalability, a feature that allows a system to change the number of allocated resources at run time in order to vary some property, such as performance. The combination of both aspects leads to scalable evolvable hardware (SEH), which changes in size as an extra degree of freedom when trying to reach the optimal solution by means of evolution. The main contributions of this paper are an architecture for a scalable and evolvable hardware processing array system, some preliminary evolution strategies that take scalability into consideration, and experimental results showing the benefits of combining evolution and scalability. A digital image filtering application is used as a use case.
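One way to read "size as an extra degree of freedom" is that the array size becomes part of the chromosome, and a resource cost term makes evolution trade quality against area. A minimal (1+1) evolution-strategy sketch in Python; the toy fitness and mutation scheme are assumptions for illustration, not the paper's strategy.

```python
import random

random.seed(0)

def evaluate(genome, size):
    """Toy stand-in for circuit fitness: quality from the configuration
    bits, minus a cost per processing element, so bigger is not always
    better."""
    quality = sum(genome) / len(genome)
    return quality * min(size, 6) / 6 - 0.02 * size

def mutate(genome, size):
    g = genome[:]
    g[random.randrange(len(g))] ^= 1            # flip one config bit
    if random.random() < 0.2:                   # occasionally resize
        size = max(1, size + random.choice([-1, 1]))
    return g, size

# (1+1) evolution strategy over (configuration, array size) pairs.
genome, size = [random.randint(0, 1) for _ in range(32)], 2
best = evaluate(genome, size)
for _ in range(500):
    g2, s2 = mutate(genome, size)
    f2 = evaluate(g2, s2)
    if f2 >= best:
        genome, size, best = g2, s2, f2
print("best fitness", round(best, 3), "with array size", size)
```

In the hardware system, resizing corresponds to reconfiguring more or fewer array elements via DPR rather than changing an integer, but the search dynamics are the same.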

Relevance:

40.00%

Publisher:

Abstract:

In many engineering fields, the integrity and reliability of structures are extremely important, and they are controlled through adequate knowledge of existing damage. Typically, achieving the level of knowledge necessary to characterize structural integrity involves the use of non-destructive testing techniques, which are often expensive and time-consuming. Nowadays, many industries seek to increase the reliability of the structures they use. By using leading-edge techniques it is possible to monitor these structures and, in some cases, to detect incipient damage that could trigger catastrophic failures. Unfortunately, as the complexity of structures, components and systems increases, the risk of damage and failure also increases, and at the same time the detection of such failures and defects becomes more difficult. In recent years, the aerospace industry has made great efforts to integrate sensors within structures and to develop algorithms for determining structural integrity in real time. This philosophy has been called Structural Health Monitoring, and such structures are known as smart structures: they integrate materials, sensors, actuators and algorithms to detect, quantify and locate damage within themselves. A novel methodology for damage detection in structures is proposed in this work. The methodology is based on strain measurements and consists of developing pattern-recognition techniques for the strain field, based on PCA (Principal Component Analysis) and other dimensionality-reduction techniques. The use of fiber Bragg gratings and distributed sensing as strain sensors is proposed. The methodology was validated through laboratory-scale tests and full-scale tests with complex structures. The effects of variable load conditions were studied, and several experiments were performed under static and dynamic load conditions, demonstrating that the methodology is robust under unknown load conditions.
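A compact sketch of PCA-based strain-field damage detection: fit principal components on baseline (healthy) strain measurements, then flag new measurements whose reconstruction residual (Q statistic) is abnormally large. The simulated strain data, component count and threshold are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Baseline: 200 strain snapshots from 12 sensors, driven by 2 latent
# load patterns (healthy structure under variable load conditions).
patterns = rng.normal(size=(2, 12))
baseline = rng.normal(size=(200, 2)) @ patterns \
           + 0.05 * rng.normal(size=(200, 12))

# Fit PCA on the healthy data: mean-center, SVD, keep 2 components.
mean = baseline.mean(axis=0)
_, _, vt = np.linalg.svd(baseline - mean, full_matrices=False)
components = vt[:2]

def q_statistic(x):
    """Squared reconstruction residual of one strain snapshot."""
    c = x - mean
    r = c - c @ components.T @ components
    return float(r @ r)

# Threshold from the healthy distribution (e.g. its 99th percentile).
threshold = np.percentile([q_statistic(x) for x in baseline], 99)

healthy = rng.normal(size=2) @ patterns + 0.05 * rng.normal(size=12)
damaged = healthy.copy()
damaged[5] += 0.8   # a local stiffness change distorts one sensor
print("healthy flagged:", q_statistic(healthy) > threshold)  # usually False
print("damaged flagged:", q_statistic(damaged) > threshold)  # usually True
```

Because the subspace is learned from data spanning many load cases, load variation stays inside the model while damage shows up as residual, which is the robustness property the abstract claims.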

Relevance:

40.00%

Publisher:

Abstract:

This paper describes ExperNet, an intelligent multi-agent system that was developed under an EU-funded project to assist in the management of a large-scale data network. ExperNet assists network operators at various nodes of a WAN in detecting and diagnosing hardware failures and network traffic problems, and suggests the most feasible solution through a web-based interface. ExperNet is composed of intelligent agents, capable of both local problem solving and social interaction for coordinating problem diagnosis and repair. The current network state is captured and maintained by conventional network management and monitoring software components, which have been smoothly integrated into the system through sophisticated information-exchange interfaces. For the implementation of the agents, a distributed Prolog system enhanced with networking facilities was developed. The agents' knowledge base is built on an extensible and reactive knowledge-base system capable of handling multiple types of knowledge representation. ExperNet has been developed, installed and tested successfully in an experimental network zone in Ukraine.

Relevance:

40.00%

Publisher:

Abstract:

EPICS (Experimental Physics and Industrial Control System) is a set of software tools and applications that provide an infrastructure for building distributed data acquisition and control systems. The use of such systems is increasing in large physics experiments like ITER, ESS, and FREIA, where advanced data acquisition systems using FPGA-based technology like FlexRIO are employed ever more frequently. In the particular case of ITER (International Thermonuclear Experimental Reactor), the instrumentation and control system is supported by CCS (CODAC Core System), based on the RHEL (Red Hat Enterprise Linux) operating system, and by the plant design specifications, in which every CCS element, whether hardware, firmware or software, is defined. This final degree project applies the methodology proposed in "Implementation of Intelligent Data Acquisition Systems for Fusion Experiments using EPICS and FlexRIO Technology" (Sanz et al. [1]). The objective is to produce a series of examples covering the complete design cycle, proposed as use cases of these technologies, together with a document describing the process followed and the source code of the resulting data acquisition system. The methodology comprises two distinct stages. In the first, the hardware is modeled with graphical design tools such as LabVIEW FPGA and then synthesized onto the FlexRIO device. In the second, the design cycle is completed by creating an EPICS controller that manages the device through a generic device-support layer named NDS (Nominal Device Support); this layer integrates the developed data acquisition system into CCS as its EPICS interface. The use of FlexRIO technology entails the use of LabVIEW and LabVIEW FPGA as the programming and hardware-description languages, respectively.
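For readers unfamiliar with the EPICS side of this design cycle, the sketch below shows how a client might read and monitor a process variable (PV) exposed by such a controller, using the pyepics library; the PV name is a made-up placeholder, not one from the project.

```python
import time
import epics  # pyepics: Python client for EPICS Channel Access

# Hypothetical PV name; an NDS-based controller would expose PVs for
# the acquired waveforms and for device configuration and state.
PV_NAME = "DAQ:CH0:WAVEFORM"

# One-shot read of the process variable.
value = epics.caget(PV_NAME)
print("current value:", value)

# Monitor: EPICS pushes updates and the callback fires on each change.
def on_update(pvname=None, value=None, **kwargs):
    print(f"{pvname} changed -> {value}")

pv = epics.PV(PV_NAME)
pv.add_callback(on_update)
time.sleep(10)  # keep the program alive while updates arrive
```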

Relevance:

40.00%

Publisher:

Abstract:

We integrate and start up the set of tools needed to deploy the design cycle of embedded systems based on Embedded Linux on a Cyclone V SoC made by Altera. First, we analyze the tools available for designing the hardware system of the SoCkit development kit, made by Arrow, which carries a Cyclone V SoC (based on an ARM Cortex-A9 MPCore architecture). As part of the SoCkit hardware design, we create a new peripheral and integrate it into the hardware system, so that it can be used like any other existing, already-configured resource of the board. Next, we analyze the tools for generating an Embedded Linux distribution adapted to the SoCkit board. To generate the Linux distribution we use, on the one hand, a Yocto software package recommended by Altera and, on the other, Altera's own programs and tools, the Embedded Development Suite. We integrate all the components needed to build the Embedded Linux distribution, creating a complete and functional system that can be used for developing software applications. Finally, we study the programs for developing and debugging applications in C or C++ to be executed on this hardware platform, and we program an example Linux application to illustrate the use of the SoCkit board's most commonly used resources.
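As an illustration of that last step, a user-space Linux application on such a SoC often reaches a custom FPGA peripheral by mapping its physical address window through /dev/mem. The real application here would be written in C or C++; the sketch below shows the same idea in Python for brevity, and the base address and register layout are hypothetical placeholders, since the actual offsets come from the Qsys/Platform Designer configuration of the created peripheral.

```python
import mmap
import os
import struct

# Hypothetical values: verify against your own hardware design. The
# HPS-to-FPGA lightweight bridge is commonly mapped at 0xFF200000 on
# Cyclone V, and the peripheral's register offsets within it are set
# by the Qsys/Platform Designer configuration.
LW_BRIDGE_BASE = 0xFF200000
PERIPH_OFFSET = 0x0000        # offset of the new peripheral's registers
PAGE = mmap.PAGESIZE

fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
mem = mmap.mmap(fd, PAGE, mmap.MAP_SHARED,
                mmap.PROT_READ | mmap.PROT_WRITE,
                offset=LW_BRIDGE_BASE)

# Write a 32-bit control word, then read a 32-bit status word back
# (register meanings are part of the hypothetical peripheral).
mem[PERIPH_OFFSET:PERIPH_OFFSET + 4] = struct.pack("<I", 0x1)
status, = struct.unpack("<I", mem[PERIPH_OFFSET + 4:PERIPH_OFFSET + 8])
print(f"status register: {status:#010x}")

mem.close()
os.close(fd)
```

Running this requires root privileges (it opens /dev/mem), which is typical for quick bring-up tests before a proper kernel driver is written.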