927 results for "Cyber physical system"
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable, subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to the case in which the cost functions are unknown and must be estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach substantially advances the state of the art, making it a valuable ingredient for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. This framework is then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem, for which we design a distributed method inspired by the approach for general MILPs, and demonstrate its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
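For reference, the "constraint-coupled" structure described above is commonly written in the following generic form from the distributed-optimization literature (symbols chosen here for illustration, not taken from the thesis):

    \min_{x_1,\dots,x_N} \ \sum_{i=1}^{N} f_i(x_i)
    \quad \text{subject to} \quad \sum_{i=1}^{N} g_i(x_i) \le 0,
    \qquad x_i \in X_i, \quad i = 1,\dots,N,

where agent i owns the decision variable x_i and the local constraint set X_i, while the single inequality couples all agents. Primal decomposition assigns each agent an allocation y_i of the shared resource, with \sum_i y_i = 0, so that agent i can solve \min f_i(x_i) subject to g_i(x_i) \le y_i independently while the allocations are negotiated over the graph.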
Abstract:
Recent technological advancements have played a key role in seamlessly integrating cloud, edge, and Internet of Things (IoT) technologies, giving rise to the Cloud-to-Thing Continuum paradigm. This cloud model connects many heterogeneous resources that generate large amounts of data and collaborate to deliver next-generation services. While it has the potential to reshape several application domains, the number of connected entities considerably broadens the security attack surface. One of the main problems is the lack of security measures able to adapt to the dynamic and evolving conditions of the Cloud-to-Thing Continuum. To address this challenge, this dissertation proposes novel adaptable security mechanisms. Adaptable security is the capability of security controls, systems, and protocols to dynamically adjust to changing conditions and scenarios. Since the design and development of novel security mechanisms can be explored from different perspectives and levels, we focus on threat modeling and access control. The contributions of the thesis can be summarized as follows. First, we introduce a model-based methodology that secures the design of edge and cyber-physical systems; this solution identifies threats, security controls, and moving-target-defense techniques based on system features. Then, we focus on access control management. Since access control policies are subject to modification, we evaluate how they can be efficiently shared among distributed areas, highlighting the effectiveness of distributed ledger technologies. Furthermore, we propose a risk-based authorization middleware that adjusts permissions based on real-time data, and a federated learning framework that enhances trustworthiness by weighting each client's contribution according to the quality of its partial model. Finally, since authorization revocation is another critical concern, we present an efficient revocation scheme for verifiable credentials in IoT networks that is decentralized and demands minimal storage and computing capabilities. All the mechanisms have been evaluated under different conditions, demonstrating their adaptability to the Cloud-to-Thing Continuum landscape.
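As a very rough sketch of the risk-based authorization idea mentioned above, the following Python fragment adjusts a permission decision from a real-time risk score. Every name, signal, and threshold here is hypothetical, invented for illustration rather than taken from the dissertation:

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        subject: str
        resource: str
        action: str

    def risk_score(request: AccessRequest, context: dict) -> float:
        """Combine real-time signals into a risk value in [0, 1].
        The signals and weights are illustrative only."""
        score = 0.0
        if context.get("unusual_location"):
            score += 0.4
        if context.get("failed_logins", 0) > 3:
            score += 0.3
        if context.get("device_untrusted"):
            score += 0.3
        return min(score, 1.0)

    def authorize(request: AccessRequest, context: dict) -> str:
        """Map the risk score to a permission level (hypothetical policy)."""
        score = risk_score(request, context)
        if score < 0.3:
            return "grant"
        if score < 0.7:
            return "grant-with-mfa"   # step-up authentication
        return "deny"

    # Example: a read request from an untrusted device triggers step-up auth
    req = AccessRequest("alice", "sensor-42", "read")
    print(authorize(req, {"device_untrusted": True, "failed_logins": 1}))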
Abstract:
This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach to deal with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space ℝ^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter ε > 0) and a slow behavior. Using an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea of this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in each class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as ε goes to zero. This convergence is obtained by, roughly speaking, showing that the liminf and limsup of the value functions satisfy two optimality inequalities as ε goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
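Schematically, the convergence result is of the form (notation chosen here for illustration)

    \lim_{\epsilon \to 0} V^{\epsilon}(x, i) \;=\; \bar{V}(x, \bar{\imath}),

where V^\epsilon is the value function of the discounted problem with perturbed fast/slow regimes, \bar{V} is the value function of the averaged limit problem, and \bar{\imath} is the aggregated regime (the class containing i), obtained by weighting the regimes of each class with its quasi-stationary distribution.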
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
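For context, the qubit mentioned above ("two optical modes containing one photon") is the standard dual-rail encoding, and a beam splitter acts on it as a rotation; this is the textbook convention rather than notation from the paper itself:

    |0\rangle_L = |1,0\rangle, \qquad |1\rangle_L = |0,1\rangle, \qquad
    B(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},

where |m,n\rangle denotes m photons in the first mode and n in the second. A 50:50 beam splitter corresponds to \theta = \pi/4, and beam splitters combined with phase shifters generate arbitrary single-qubit rotations on the dual-rail qubit; the difficult part, which the scheme addresses with photo-detection and feedback, is the entangling two-qubit gate.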
Abstract:
This research and development work is founded on the concept of fuzzy logic control. Using the tools of the Matlab software, a controller based on fuzzy inference was developed that can control any type of real physical system, regardless of its characteristics. Fuzzy control is a very particular type of control, since it allows numerical data to be used simultaneously with linguistic variables grounded in heuristic knowledge of the systems to be controlled. In this way one can quantify, for example, whether a glass is "half full" or "half empty", whether a person is "tall" or "short", whether it is "cold" or "very cold". PID control is, without any doubt, the most widely used controller in systems control. Owing to its simple construction, its low application and maintenance costs, and the results it achieves, this controller is the first option when implementing a control loop in a given system. It is characterized by three tuning parameters, namely the proportional, integral, and derivative components, which together allow effective tuning of any type of system. To automate the controller-tuning process, and taking advantage of the best of fuzzy control and PID control, the two controllers were combined; together, as shown later, they produced results that meet the stated objectives. With the aid of Matlab's Simulink, the block diagram of the control system was developed, in which the fuzzy controller supervises the response of the PID controller, correcting it over the course of the simulation. The resulting controller is called the FuzzyPID controller. During the practical development of the work, the response of several systems to a unit step input was simulated. The systems studied are mostly real physical systems, representing mechanical, thermal, pneumatic, electrical, and other systems, which can easily be described by first-, second-, and higher-order transfer functions, with and without delay.
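A minimal sketch of the supervisory arrangement described above: the thesis implements it with Matlab/Simulink fuzzy inference, whereas the Python below replaces the rule base with two crude threshold rules, and all gains and thresholds are invented for illustration:

    def fuzzy_supervisor(error: float) -> float:
        """Toy stand-in for the fuzzy rule base: return extra proportional
        gain when the error is 'large' or 'medium' (thresholds invented)."""
        if abs(error) > 0.5:      # rule: error is "large" -> push harder
            return 1.0
        if abs(error) > 0.1:      # rule: error is "medium"
            return 0.5
        return 0.0                # error is "small" -> plain PID

    def fuzzy_pid_step(error, prev_error, integral, dt,
                       kp=2.0, ki=1.0, kd=0.1):
        """One step of a PID controller corrected by the supervisor."""
        integral += error * dt
        d_error = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * d_error
        return u + fuzzy_supervisor(error) * error, integral

    # Unit-step response of a first-order plant dx/dt = -x + u
    x, integral, prev_e, dt = 0.0, 0.0, 1.0, 0.01
    for _ in range(500):
        e = 1.0 - x
        u, integral = fuzzy_pid_step(e, prev_e, integral, dt)
        x += (-x + u) * dt            # Euler integration of the plant
        prev_e = e
    print(f"output after 5 s: {x:.3f}")   # approaches the setpoint 1.0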
Abstract:
Dissertation presented at the Faculdade de Ciência e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering.
Abstract:
The world is increasingly a global community. Rapid technological development in communication and information technologies allows knowledge to be transmitted in real time. In this context, the most developed countries must devise their own strategies to stimulate the industrial sector to stay up to date and remain competitive in a dynamic and volatile global market, thereby preserving their competitive capacity and, in consequence, the social stability needed to meet the human and social needs of the nation. The path of competitiveness through technological differentiation in industrialization opens a wide and innovative field of research. We are already facing a new phase of industrial organization and technology that is beginning to change how we relate to industry, society, and human interaction in the world of work. This thesis develops an analysis of the Industrie 4.0 framework, its challenges, and its perspectives. It also analyses how Germany is approaching this future challenge: the competition it expects to face in future global markets and the domestic concerns felt in its industrial fabric, and it proposes recommendations for a more effective implementation of its strategy. The research method consisted of a comprehensive review and strategic analysis of the existing global literature on the topic, directly or indirectly related, in parallel with the analysis of questionnaires and data produced by entities representing industry at the national and global level. The results of this multilevel analysis support the conclusion that the construction of the platform to bring the future Internet of Things into the industrial environment, Industrie 4.0, is only beginning. The dissertation highlights the need for a more strategic and operational approach within society as a whole to address the existing weaknesses in this area, so that the national strategy can be implemented with effective approaches and planned actions, including a direct training plan and a more efficient path in education for the theme.
Abstract:
Authors working on "industrial metabolism" or "social metabolism" look at the economy in terms of flows of energy and materials. Together with the ecological economists, they see the economy as a subsystem of a larger physical system. Marx and Engels followed, with a few years' delay, many of the remarkable scientific and technical novelties of their time.
Abstract:
In this paper we present a prototype control flow for a posteriori drug dose adaptation for Chronic Myelogenous Leukemia (CML) patients. The control flow is modeled using the Timed Automata extended with Tasks (TAT) model. The feedback loop of the control flow includes the decision-making process for drug dose adaptation, which is based on the outputs of a body-response model, represented by a Support Vector Machine (SVM) algorithm for drug concentration prediction. The decision is further checked for conformity with the dose-level rules of a medical guideline. We have also developed an automatic code synthesizer for the icycom platform as an extension of the TIMES tool.
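The loop described above can be sketched in Python, with scikit-learn's SVM regression standing in for the authors' trained body-response model. The data, dose levels, and therapeutic window below are all invented for illustration and carry no clinical meaning:

    import numpy as np
    from sklearn.svm import SVR

    # Toy training data: (dose in mg, hours since dose) -> plasma concentration
    X_train = np.array([[400, 2], [400, 12], [600, 2], [600, 12],
                        [800, 2], [800, 12]])
    y_train = np.array([2.1, 0.9, 3.0, 1.4, 4.2, 1.9])   # invented values

    body_response = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)

    GUIDELINE_DOSES = [300, 400, 600, 800]   # hypothetical permitted levels (mg)
    TARGET_RANGE = (1.0, 3.0)                # hypothetical therapeutic window

    def adapt_dose(current_dose: float, hours: float) -> float:
        """One iteration of the feedback loop: predict the concentration,
        propose a dose step, then check it against the guideline levels."""
        predicted = body_response.predict([[current_dose, hours]])[0]
        if predicted < TARGET_RANGE[0]:
            proposal = current_dose + 200       # under-exposed: step up
        elif predicted > TARGET_RANGE[1]:
            proposal = current_dose - 200       # over-exposed: step down
        else:
            proposal = current_dose
        # Conformity check: snap to the nearest dose permitted by the guideline
        return min(GUIDELINE_DOSES, key=lambda d: abs(d - proposal))

    print(adapt_dose(400, 12))   # returns a guideline-conforming dose level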
Abstract:
In the quest to completely describe entanglement in the general case of a finite number of parties sharing a physical system with a finite-dimensional Hilbert space, an entanglement magnitude is introduced for its pure and mixed states: robustness. It corresponds to the minimal amount of mixing with locally prepared states that washes out all entanglement. It quantifies, in a sense, the endurance of entanglement against noise and jamming. Its properties are studied comprehensively. Analytical expressions for the robustness are given for pure states of two-party systems, and analytical bounds for mixed states of two-party systems. Specific results are obtained mainly for the qubit-qubit system (qubit denotes quantum bit). As by-products, local pseudomixtures are generalized, a lower bound for the relative volume of separable states is deduced, and arguments are put forward for considering convexity a necessary condition of any entanglement measure.
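The robustness mentioned above has a standard definition in the entanglement literature, restated here:

    R(\rho) \;=\; \min\big\{\, s \ge 0 \;:\; \tfrac{1}{1+s}\,(\rho + s\,\sigma) \in \mathcal{S} \ \text{for some} \ \sigma \in \mathcal{S} \,\big\},

where \mathcal{S} denotes the set of separable (locally preparable) states; R(\rho) is thus the smallest amount of separable noise whose admixture washes out all the entanglement of \rho.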
Abstract:
Maintaining and developing many software systems with a long development history is difficult, because their documentation is incomplete or outdated. This master's thesis seeks a solution for describing such software and the system behind it. The goals are to support the maintenance of the current software and the orientation of new personnel. A further goal is to lay the groundwork for the design of new replacement software by documenting the application-domain expertise embedded in the current system. The thesis develops a description method for describing the system hierarchically, from a hardware-level overview down to the software's class structure and functionality. The hardware and class-structure descriptions are structural descriptions whose purpose is to explain the composition of the system and its parts. The descriptions of functionality are realized as use-case descriptions. The work focused in particular on describing the target system's core software and database. The most important parts of the software, and those containing the most application-domain know-how, were selected, and example descriptions were created for them. Using the developed method, the descriptions are easy to extend as needed, not only to other parts of the software but also to deeper descriptions of the hardware and the system as a whole.
Abstract:
The adiabatic approximation in quantum mechanics states that if a quantum system evolves slowly enough, it will remain in the same eigenstate. Recently, a flaw in the application of the adiabatic approximation was discovered. The limits of the theorem are explained in the course of its derivation. The goal of this thesis is to optimize the probability of remaining in the same eigenstate, given the initial system, the final system, and the total evolution time. This constraint on the time prevents the system from being slow enough to be adiabatic. To solve this problem, a variational method is used. The method assumes the optimal evolution is known and adds a small variation to it. This variation is then inserted into the expression for the probability of being adiabatic, which is expanded as a series. Since the series is expanded around an optimum, the first-order term must vanish. This yields a criterion for the most adiabatic possible evolution and allows it to be determined. Time-dependent quantum systems are very complex, so we begin with systems whose eigenenergies are time-independent. Systems without constraints and with free initial and final wave functions are then studied.
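Schematically, the variational step described above amounts to the usual first-order optimality condition for a functional (notation invented here for illustration): with P[H] the probability of remaining in the instantaneous eigenstate under the Hamiltonian path H(t) over the total time T,

    P[H + \delta H] \;=\; P[H] + \int_0^T \frac{\delta P}{\delta H(t)}\,\delta H(t)\,\mathrm{d}t + O(\delta H^2),
    \qquad \frac{\delta P}{\delta H(t)} = 0 \ \text{along the optimal path},

so requiring the first-order term to vanish for every admissible variation singles out the most adiabatic evolution compatible with the fixed endpoints and total time.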
Abstract:
The study of stability problems is relevant to the study of the structure of a physical system. It is particularly important when it is not possible to probe the system's interior and obtain information on its structure by a direct method. The thesis concerns stability theory, which has become of dominant importance in the study of dynamical systems and has many applications in basic fields such as meteorology, oceanography, astrophysics, and geophysics, to mention a few. The classical definition of stability has proved useful in many situations but inadequate in many others, so that a host of other important concepts have been introduced over the past many years, more or less related to the first definition and to the common-sense meaning of stability. In recent years the theoretical developments in the study of instabilities and turbulence have been as profound as the developments in experimental methods. The study here points to a new direction for stability studies, based on a Lagrangian formulation instead of the Hamiltonian formulation used by other authors.
Abstract:
KAM is a computer program that can automatically plan, monitor, and interpret numerical experiments with Hamiltonian systems with two degrees of freedom. The program has recently helped solve an open problem in hydrodynamics. Unlike other approaches to qualitative reasoning about the dynamics of physical systems, KAM embodies a significant amount of knowledge about nonlinear dynamics. KAM's ability to control numerical experiments arises from the fact that it not only produces pictures for us to see, but also looks at (sic---in its mind's eye) the pictures it draws to guide its own actions. KAM is organized in three semantic levels: orbit recognition, phase-space searching, and parameter-space searching. Within each level, spatial properties and relationships that are not explicitly represented in the initial representation are extracted by applying three operations iteratively: (1) aggregation, (2) partition, and (3) classification.
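The three operations can be illustrated with a toy Python sketch on 2-D phase-space points; this is only a loose illustration of the idea, not KAM's actual machinery:

    import math

    def aggregate(points, radius):
        """Greedily group points: each point joins the first existing group
        containing a point within `radius`, else it starts a new group."""
        groups = []
        for p in points:
            for g in groups:
                if any(math.dist(p, q) <= radius for q in g):
                    g.append(p)
                    break
            else:
                groups.append([p])
        return groups

    def partition(group, axis=0, cut=0.0):
        """Split a group by a coordinate threshold, dropping empty halves."""
        left = [p for p in group if p[axis] < cut]
        right = [p for p in group if p[axis] >= cut]
        return [g for g in (left, right) if g]

    def classify(group):
        """Label a group of phase-space points by its spatial spread."""
        spread = max(math.dist(p, q) for p in group for q in group)
        return "island-like" if spread < 1.0 else "band-like"

    # Example: two tight clusters of orbit points are found and labeled
    orbit = [(0.10, 0.20), (0.15, 0.25), (3.0, 3.1)]
    for g in aggregate(orbit, radius=0.5):
        print(len(g), classify(g))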
Abstract:
A full assessment of para-virtualization is important, because without knowledge of the various overheads users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as on para-virtualization, and also in the overheads of turning on monitoring and logging. The knowledge gained from assessing these benchmarks on different systems will help a range of users understand the use of virtualization systems. We assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1); these virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then on the para-virtualized systems, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all servers available at the University of Reading.

A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources; each guest operating system schedules its applications accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run; no modifications are needed in the guest OS or application, i.e. the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization, by contrast, requires modification of the guest operating systems that run on the virtual machines: these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both approaches can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made in the guest operating system to enable better performance; the guest operating system is aware that it is running on virtualized rather than bare hardware, and its device drivers coordinate with the device drivers of the host operating system to reduce the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.

It has been shown [0] that para-virtualization does not impose significant performance overhead in high-performance computing, and this in turn has implications for the use of cloud computing to host HPC applications. The apparent improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
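As an illustration of the measurement being described, a micro-benchmark in the spirit of the Netlib Linpack test can be timed once on bare metal and once inside a guest VM, and the two rates compared to estimate the virtualization overhead. The sketch below is ours, not the paper's harness:

    import time
    import numpy as np

    def linpack_like(n=1000, repeats=5):
        """Tiny dense-solve micro-benchmark (illustrative only; the paper
        uses the real Netlib suites). Returns the best rate in GFLOP/s."""
        a = np.random.rand(n, n)
        b = np.random.rand(n)
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            np.linalg.solve(a, b)          # LU factorization + solve
            best = min(best, time.perf_counter() - t0)
        flops = (2 / 3) * n**3 + 2 * n**2  # standard Linpack flop count
        return flops / best / 1e9

    # Run on bare metal, then inside the guest, and compare the two numbers
    print(f"{linpack_like():.2f} GFLOP/s")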