193 results for Testbed
Abstract:
This work aims to present an overview of topics related to the definition, configuration, and management of the services enabled by the Differentiated Services architecture with respect to traffic prioritization. We describe the need for QoS in current and future networks, as well as the direction taken by Internet2 regarding QoS, focusing specifically on the DiffServ mechanism. We also highlight the QoS mechanisms for IP networks from Cisco Systems, the vendor with the widest range of mechanisms deployed to date. Finally, we identify the requirements for participating in QBone, a testbed for Differentiated Services.
Abstract:
In recent years, the number of industrial applications for Augmented Reality (AR) and Virtual Reality (VR) environments has increased significantly. Optical tracking systems are an important component of AR/VR environments. In this work, a low-cost optical tracking system with attributes adequate for professional use is proposed. The system operates in the infrared spectral region to reduce optical noise. A high-speed camera, equipped with a daylight-blocking filter and infrared flash strobes, transfers uncompressed grayscale images to a regular PC, where image pre-processing software and the PTrack tracking algorithm recognize a set of retro-reflective markers and extract its 3D position and orientation. Also included in this work is a comprehensive survey of image pre-processing and tracking algorithms. A testbed was built to perform accuracy and precision tests. Results show that the system reaches accuracy and precision levels slightly worse than, but still comparable to, professional systems. Due to its modularity, the system can be expanded by using several one-camera tracking modules linked by a sensor fusion algorithm, in order to obtain a larger working range. A setup with two modules was built and tested, resulting in performance similar to that of the stand-alone configuration.
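The abstract does not detail the PTrack pipeline itself; as a hedged illustration of the pre-processing stage it describes, the sketch below thresholds an infrared grayscale frame and extracts candidate marker centroids. The function name, threshold, and area cutoff are hypothetical, and OpenCV 4 is assumed.

```python
import cv2

def extract_marker_centroids(gray_frame, threshold=200, min_area=5):
    """Find bright retro-reflective markers in an infrared grayscale frame.

    Returns a list of (x, y) sub-pixel centroids. The threshold and area
    values are illustrative, not those used by PTrack.
    """
    # Bright markers stand out against the daylight-filtered background.
    _, binary = cv2.threshold(gray_frame, threshold, 255, cv2.THRESH_BINARY)
    contours, _hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                            cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] >= min_area:  # reject single-pixel noise
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```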
Abstract:
The current Internet has been suffering from several problems in terms of scalability, performance, mobility, and so on, owing to the steep growth in the number of users and the emergence of new services with new demands, giving rise to the Future Internet. New proposals for content-oriented networks, such as the Entity Title Architecture (ETArch), provide new services for this type of scenario, implemented over the software-defined networking paradigm. However, the ETArch transport model is equivalent to the best-effort model of the current Internet, which has been limiting the reliability of its communications. In this work, ETArch is redesigned following the resource over-provisioning paradigm to achieve advanced resource allocation integrated with OpenFlow. As a result, the SMART framework (Support of Mobile Sessions with High Transport Network Resource Demand) allows the network to define the qualitative requirements of sessions semantically, so as to manage Quality of Service control while maintaining the best possible Quality of Experience. The data- and control-plane evaluation took place on the OFELIA project testbed island, showing support for mobile multimedia applications with high transport resource demand, with QoS and QoE guaranteed through a leaner signalling scheme than that of legacy ETArch.
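SMART's implementation is not given in the abstract; the sketch below only illustrates the general OpenFlow mechanism it builds on, namely an SDN controller steering a session into a pre-provisioned queue to enforce QoS. It uses the Ryu controller framework for concreteness; the match fields and queue id are hypothetical.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class QoSReserver(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # Steer a (hypothetical) multimedia session, identified here by
        # UDP destination port 5004, into pre-provisioned queue 1.
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=17, udp_dst=5004)
        actions = [parser.OFPActionSetQueue(1),
                   parser.OFPActionOutput(dp.ofproto.OFPP_NORMAL)]
        inst = [parser.OFPInstructionActions(
            dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```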
Abstract:
Due to the constantly increasing use of wireless networks in domestic, business, and industrial environments, new challenges have emerged. The prototyping of new protocols in these environments is typically restricted to simulation, which demands a double implementation: one in the simulation environment, where an initial proof of concept is performed, and another in a real environment. Moreover, when real environments are used, it is not trivial to build a testbed for high-density wireless networks, given the need for a large amount of real equipment as well as attenuators and power reducers to shrink the physical space required by such laboratories. In this context, the LVWNet (Linux Virtual Wireless Network) project was originally designed to create completely virtual testbeds for IEEE 802.11 networks on the Linux operating system. This paper extends the current LVWNet project with several features: the ability to interact with real wireless hardware; an initial mobility capability based on node positions in a coordinate space measured in meters, with loss calculations based on free-space attenuation; improved scalability through a dedicated protocol that enables communication between nodes without an intermediate host; and dynamic node registration, allowing new nodes to join a network already in operation.
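As a point of reference for the free-space attenuation calculation mentioned above, here is the standard free-space path loss formula in Python; whether LVWNet uses exactly this form and these units is an assumption made for illustration.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def free_space_path_loss_db(distance_m, freq_hz):
    """Free-space path loss: FSPL(dB) = 20*log10(4*pi*d*f/c).

    Standard textbook formula; distance in meters, frequency in hertz.
    """
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

# Example: two nodes 25 m apart on an 802.11 channel at 2.412 GHz.
print(f"{free_space_path_loss_db(25, 2.412e9):.1f} dB")  # ~68.1 dB
```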
Abstract:
This dissertation presents a technique for the detection and diagnosis of incipient faults. Such faults cause changes in the behavior of the system under investigation, which are reflected in changes in the parameter values of its representative mathematical model. As a test platform, a model of an industrial system was built in the Matlab/Simulink environment, consisting of a dynamic plant composed of two interconnected tanks. The plant was modeled from the physical equations that describe the system dynamics. The fault applied to the system represents a gradual constriction of the outlet pipe of one of the tanks, slowly reducing the cross-section of that pipe by up to 20%. Fault detection was performed through real-time estimation of the parameters of Auto-Regressive with Exogenous Inputs (ARX) models, using fuzzy and Recursive Least Squares (RLS) estimators. The diagnosis of the pipe clogging percentage, in turn, was obtained by a fuzzy parameter-tracking system fed back by the integral of the detection residual. Using this methodology, it was possible to detect and diagnose the simulated fault at three different operating points of the system. Of the two techniques tested, the RLS method performed well only at detecting the fault, whereas the fuzzy-supervised estimation method performed better at both detecting and diagnosing the faults applied to the system, confirming the proposal of this work.
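For readers unfamiliar with the estimator, the sketch below shows a generic Recursive Least Squares update for ARX parameters with a forgetting factor; the model orders and forgetting factor are illustrative choices, not those of the dissertation.

```python
import numpy as np

def rls_arx(u, y, na=2, nb=2, lam=0.98):
    """Recursive Least Squares estimation of ARX model parameters.

    Model: y[k] = -a1*y[k-1] - ... - a_na*y[k-na]
                  + b1*u[k-1] + ... + b_nb*u[k-nb] + e[k]
    """
    u = np.asarray(u, dtype=float)
    y = np.asarray(y, dtype=float)
    n = na + nb
    theta = np.zeros(n)        # parameter estimates [a1..a_na, b1..b_nb]
    P = np.eye(n) * 1e4        # large initial covariance
    history = []
    for k in range(max(na, nb), len(y)):
        # Regressor: past outputs (negated) and past inputs, newest first.
        phi = np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]])
        err = y[k] - phi @ theta                 # prediction residual
        gain = P @ phi / (lam + phi @ P @ phi)   # Kalman-style gain
        theta = theta + gain * err
        P = (P - np.outer(gain, phi @ P)) / lam  # covariance update
        history.append(theta.copy())
    return np.array(history)  # parameter trajectory for fault tracking
```

A slow drift of the estimated parameters away from their nominal values is what such a detector flags as an incipient fault.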
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs in design, realization, and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations together with one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottled water or soda and buy boxed products such as food or cigarettes. The complexity of the domain is also reflected in the fact that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large share of the manufacturing machine industry, and notably the packaging machine industry, is located in Italy; a particularly high concentration is found in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in defining the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties assigned to it. Apart from the activities inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, in support of machine maintenance operations. The facilities that designers can find directly on the market, in terms of software component libraries, provide adequate support for implementing both top-level and bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers model and structure their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed empirically, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted for a long time, at least for multivariable and/or nonlinear controllers of complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives, and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way: no clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control, discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to clarify the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly tied to the adopted implementation technology (relays in the past, software nowadays), leading again to deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability, and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has lately been receiving this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years, a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems could be observed. As far as logic control design is concerned, Model-Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability, and safety for technological systems. In other words, the control system should deal not only with the nominal behaviour but also with other important duties, such as diagnosis and fault isolation, recovery, and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMS contain, alongside reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in designing an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function with the desired level of reliability and safety; the next step is to prevent faults and eventually reconfigure the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis, and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture that help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the art of software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of Generalized Actuator and shows its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. Chapter 5 presents a new approach, based on Discrete Event Systems, to the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems that should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4, and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
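As a minimal illustration of the Discrete Event Systems view that underlies Chapter 5, the sketch below models a toy actuator as a finite automaton and flags events that are not enabled in the current state; all states, events, and names are hypothetical, not taken from the thesis.

```python
from typing import Dict, Tuple

class Automaton:
    """Finite automaton over discrete events: (state, event) -> next state."""

    def __init__(self, transitions: Dict[Tuple[str, str], str],
                 initial: str, marked: set):
        self.transitions = transitions
        self.state = initial
        self.marked = marked  # accepting / "task complete" states

    def step(self, event: str) -> bool:
        """Fire an event; return False if it is not enabled (a violation)."""
        key = (self.state, event)
        if key not in self.transitions:
            return False
        self.state = self.transitions[key]
        return True

# Toy actuator: it must be commanded before it can report completion.
actuator = Automaton(
    transitions={("idle", "start_cmd"): "moving",
                 ("moving", "done_sensor"): "idle",
                 ("moving", "fault"): "failed"},
    initial="idle",
    marked={"idle"},
)
for e in ["start_cmd", "done_sensor", "done_sensor"]:
    print(e, "->", "ok" if actuator.step(e) else "violation")
```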
Abstract:
Skype is a well-known application that has guided the evolution of real-time video streaming and has become one of the most widely used programs in everyday life. It provides VoIP audio/video calls as well as messaging and file transfer. Versions are available for all the principal operating systems, including Windows, Macintosh, and Linux, as well as for mobile platforms. Voice quality has driven Skype's success since its birth in 2003, and its peer-to-peer architecture has enabled worldwide diffusion. After the introduction of video calls in 2006, Skype became a complete solution for communication between two or more people. As a primarily video-conferencing application, Skype assumes certain characteristics of the delivered video in order to optimize its perceived quality. In recent years, however, and with the release of SkypeKit, many new Skype video-enabled devices have come out, especially in the mobile world. This has forced a change to the traditional recording, streaming, and receiving settings, allowing for a wide range of network and content dynamics. Video calls are no longer limited to static 'chatting': mobile devices have opened new possibilities and can be used in several scenarios. For instance, lecture streaming or one-to-one mobile video conferences exhibit more dynamics, as both caller and callee might be on the move. Most of these cases differ from 'head-and-shoulders' content only. Therefore, Skype needs to optimize its video streaming engine to cover more video types. Heterogeneous connections require different behaviors and solutions, and Skype must cope with this variety to maintain a certain quality independently of the connection used. Part of the present work focuses on analyzing Skype's behavior depending on video content. Since the Skype protocol is proprietary, most studies so far have tried to characterize its traffic and to reverse-engineer its protocol. However, questions related to the behavior of Skype, especially the quality perceived by users, remain unanswered. We study Skype's video codec capabilities and video quality assessment. Another motivation of our work is the design of a mechanism that estimates the cost of network conditions on Skype video delivery. To this extent, we try to assess in an objective way the impact of network impairments on the perceived quality of a Skype video call. Traditional video streaming schemes lack the flexibility and adaptivity that Skype tries to achieve at the edge of the network. Our contribution rests on a testbed and the consequent objective video quality analysis carried out on input videos: we stream raw video files with Skype via an impaired channel, record them at the receiver side, and analyze them with objective quality-of-experience metrics.
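As an example of the kind of objective full-reference metric such an analysis can use, the sketch below computes PSNR between a sent and a received frame; PSNR is a generic metric chosen here for illustration, not necessarily the one adopted in the work, and the frame-alignment step needed for a real sender/receiver comparison is omitted.

```python
import numpy as np

def psnr(reference: np.ndarray, received: np.ndarray, peak=255.0) -> float:
    """Peak Signal-to-Noise Ratio: PSNR = 10*log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) -
                   received.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# A per-video score is typically the mean PSNR over aligned frame pairs.
```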
Abstract:
Carbon has a unique ability to form networks of differently hybridized atoms that can generate various allotropes and may also exist as nanoscale materials. The emergence of carbon nanostructures initially occurred through the serendipitous discovery of fullerenes and then through experimental advances that led to carbon nanotubes, nanohorns, and graphene. The structural diversity of carbon nanoscopic allotropes and their unique and unprecedented properties give rise to countless applications and have been intensively exploited in nanotechnology, since they may address the need for smarter optoelectronic devices, smaller in size and with better performance. The versatile properties of carbon nanomaterials are reflected in the multidisciplinary character of my doctoral research, in which I take advantage of the opportunities offered by fullerenes and carbon nanotubes in constructing novel functional materials. In this work, carbon nanostructures are incorporated into novel photoactive functional systems constructed through different types of interactions: covalent bonds, ion pairing, or self-assembly. The variety of properties exhibited by carbon nanostructures is successfully explored by assigning them different roles in specific arrays: fullerenes are employed as electron or energy acceptors, whereas carbon nanotubes behave as optically inert scaffolds for luminescent materials or as nanoscale substrates in sonication-induced self-assembly. All the presented systems serve as a testbed for exploring the properties of carbon nanostructures in multicomponent arrays, which may prove advantageous for the production of new photovoltaic or optoelectronic devices, as well as in the design and control of self-assembly processes.
Abstract:
This thesis aims to provide an up-to-date analysis of the recent evolution of Cloud Computing and of the new architectural models that support the continuously growing demand for computing, storage, and network resources within data centers, followed by an experimental phase of single and concurrent live migrations of virtual machines, studying their performance in terms of application and network resources within the open-source virtualization platform QEMU-KVM, which today underlies cloud-based systems such as OpenStack. The first chapter surveys the state of the art of Cloud Computing, its current limits, and the prospects offered by a Cloud Federation model in the immediate future. The second chapter discusses in detail the live migration techniques recently considered by the international scientific community and the possible optimizations in inter- and intra-data-center scenarios, with the aim of laying the theoretical groundwork for an in-depth study of the actual implementation of the migration process on the QEMU-KVM platform, which is addressed in the third chapter. In particular, the latter describes the architectural and operating principles of the hypervisor and defines the design model and the algorithm underlying the migration process. Finally, the fourth chapter presents the work carried out, the configuration and design choices made to create a testbed environment suitable for studying concurrent live migration sessions, and discusses the results of the performance measurements and of the system behaviour observed in the experiments.
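For concreteness, a single QEMU-KVM live migration can be triggered through the libvirt Python bindings as sketched below; the host URIs and VM name are placeholders, and the thesis' actual testbed configuration and measurement harness are not reproduced here.

```python
import libvirt

SRC_URI = "qemu:///system"                      # placeholder source host
DST_URI = "qemu+ssh://host2.example.org/system" # placeholder destination

src = libvirt.open(SRC_URI)
dst = libvirt.open(DST_URI)
dom = src.lookupByName("testvm")                # placeholder VM name

# VIR_MIGRATE_LIVE keeps the guest running while memory pages are
# iteratively copied (pre-copy) to the destination host.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

print("state on destination:", dst.lookupByName("testvm").state())
dst.close()
src.close()
```

Concurrent-migration experiments like those in the fourth chapter would launch several such migrations in parallel and measure their durations and network footprints.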
Abstract:
Mobile devices are now capable of supporting a wide range of applications, many of which demand ever-increasing computational power. To this end, mobile cloud computing (MCC) has been proposed to address the limited computation power, memory, storage, and energy of such devices. An important challenge in MCC is to guarantee seamless discovery of services. This thesis therefore proposes an architecture that provides user-transparent and low-latency service discovery, as well as automated service selection. Experimental results on a real cloud computing testbed demonstrate that the proposed work outperforms state-of-the-art approaches by achieving extremely low discovery delay.
Abstract:
Since the appearance of downsized and simplified TCP/IP stacks, single nodes of Wireless Sensor Networks (WSNs) have become directly accessible from the Internet with commonly used networking tools and applications (e.g., Telnet or SMTP). However, TCP has been shown to perform poorly in wireless networks, especially across multiple wireless hops. This paper examines TCP performance optimizations based on distributed caching and local retransmission strategies at intermediate nodes of a TCP connection, and proposes extensions to these strategies. The paper studies the impact of different radio duty-cycling MAC protocols on end-to-end TCP performance when using the proposed TCP optimization strategies, in an extensive experimental evaluation on a real-world sensor network testbed.
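The paper's exact strategies are not spelled out in the abstract; the sketch below only illustrates the general idea of distributed caching with local retransmission at an intermediate node, which retransmits a cached segment after overhearing repeated duplicate ACKs instead of waiting for an end-to-end retransmission. All structures, names, and thresholds are illustrative.

```python
class CachingRelay:
    """Toy model of an intermediate WSN node on a TCP path."""

    def __init__(self, cache_size=8):
        self.cache = {}        # seq number -> cached segment payload
        self.cache_size = cache_size
        self.dup_acks = {}     # ack number -> duplicate-ACK count

    def forward_data(self, seq, segment):
        """Cache a forwarded data segment, evicting the oldest if full."""
        if len(self.cache) >= self.cache_size:
            self.cache.pop(min(self.cache))
        self.cache[seq] = segment
        return segment  # pass downstream toward the sink

    def forward_ack(self, ack_no):
        """On the third duplicate ACK, retransmit locally if cached."""
        self.dup_acks[ack_no] = self.dup_acks.get(ack_no, 0) + 1
        if self.dup_acks[ack_no] >= 3 and ack_no in self.cache:
            return ("local_retransmit", self.cache[ack_no])
        return ("pass_upstream", ack_no)
```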
Abstract:
The increasing usage of wireless networks creates new challenges for wireless access providers. On the one hand, providers want to satisfy user demands; on the other hand, they try to reduce operational costs by decreasing energy consumption. In this paper, we evaluate the trade-off between energy efficiency and quality of experience in a wireless mesh testbed. The results show that, through intelligent service control, resources can be better utilized and energy can be saved by reducing the number of active network components. However, care has to be taken, because the channel bandwidth varies in wireless networks. In the second part of the paper, we analyze the trade-off between energy efficiency and quality of experience at the end user. The results reveal that a provider's service control measures not only reduce the operational costs of the network but also bring a second benefit: they help maximize the battery lifetime of the end-user device.
Abstract:
During a project, managers encounter numerous contingencies and face the challenging task of making decisions that will effectively keep the project on track. This task is very challenging because construction projects are non-prototypical and their processes are irreversible. It is therefore critical to apply a methodological approach to develop a few alternative management decision strategies during the planning phase, which can be deployed to manage alternative scenarios resulting from expected and unexpected disruptions in the as-planned schedule. Such a methodology should have the following features, which are missing from existing research: (1) looking at the effects of local decisions on global project outcomes; (2) studying how a schedule responds to decisions and disruptive events, because the risk in a schedule is a function of the decisions made; (3) establishing a method to assess and improve management decision strategies; and (4) developing project-specific decision strategies, because each construction project is unique and the lessons from a particular project cannot easily be applied to projects with different contexts. The objective of this dissertation is to develop a schedule-based simulation framework to design, assess, and improve sequences of decisions for the execution stage. The contribution of this research is the introduction of decision strategies for managing a project and the establishment of an iterative methodology to continuously assess and improve decision strategies and schedules. Project managers or schedulers can implement the methodology to develop and identify schedules, accompanied by suitable decision strategies, for managing a project at the planning stage. The developed methodology also lays the foundation for an algorithm for continuously and automatically generating satisfactory schedules and strategies throughout the construction life of a project. Unlike studies of isolated daily decisions, the proposed framework introduces the notion of decision strategies to manage the construction process. A decision strategy is a sequence of interdependent decisions determined by resource allocation policies, such as labor, material, equipment, and space policies. The schedule-based simulation framework consists of two parts: experiment design and result assessment. The core of the experiment design is the establishment of an iterative method to test and improve decision strategies and schedules, based on the introduction of decision strategies and the development of a schedule-based simulation testbed. The simulation testbed used is the Interactive Construction Decision Making Aid (ICDMA). ICDMA, developed in prior work, has an emulator that duplicates the construction process and a random event generator that allows the decision-maker to respond to disruptions in the emulation. It is used to study how the schedule responds to these disruptions and to the corresponding decisions made over the duration of the project, while accounting for cascading impacts and dependencies between activities. The dissertation is organized into two parts. The first part presents the existing research, identifies the departure points of this work, and develops a schedule-based simulation framework to design, assess, and improve decision strategies. In the second part, the proposed schedule-based simulation framework is applied to investigate specific research problems.
Abstract:
This paper studies the energy efficiency and service characteristics of a recently developed energy-efficient MAC protocol for wireless sensor networks, both in simulation and on a real sensor hardware testbed. This opportunity is seized to illustrate how simulation models can be verified by cross-comparing simulation results with real-world experiment results. The paper demonstrates that, through careful calibration of simulation model parameters, the inevitable gap between simulation models and real-world conditions can be reduced. It concludes with guidelines for a methodology for calibrating and validating sensor network simulation models.
Abstract:
This article describes current research on the use of energy harvesting and ultra-low-power devices in material flow systems. Particular attention is paid to the inBin platform, to energy harvesting, and to their effects on performance availability. To this end, the hardware platform and architecture of the inBin platform, as well as the construction of a test field, are explained in detail. Furthermore, an approach to modelling and simulating systems with a large number of inBin platforms is presented. Finally, the results of two simulated scenarios and possible consequences for the planning of future material flow systems are discussed.