964 resultados para cyber-physical system (CPS)


Relevância:

100.00% 100.00%

Publicador:

Resumo:

Hard real-time systems are a class of computer control systems that must react to demands of their environment by providing `correct' and timely responses. Since these systems are increasingly being used in systems with safety implications, it is crucial that they are designed and developed to operate in a correct manner. This thesis is concerned with developing formal techniques that allow the specification, verification and design of hard real-time systems. Formal techniques for hard real-time systems must be capable of capturing the system's functional and performance requirements, and previous work has proposed a number of techniques which range from the mathematically intensive to those with some mathematical content. This thesis develops formal techniques that contain both an informal and a formal component because it is considered that the informality provides ease of understanding and the formality allows precise specification and verification. Specifically, the combination of Petri nets and temporal logic is considered for the specification and verification of hard real-time systems. Approaches that combine Petri nets and temporal logic by allowing a consistent translation between each formalism are examined. Previously, such techniques have been applied to the formal analysis of concurrent systems. This thesis adapts these techniques for use in the modelling, design and formal analysis of hard real-time systems. The techniques are applied to the problem of specifying a controller for a high-speed manufacturing system. It is shown that they can be used to prove liveness and safety properties, including qualitative aspects of system performance. The problem of verifying quantitative real-time properties is addressed by developing a further technique which combines the formalisms of timed Petri nets and real-time temporal logic. A unifying feature of these techniques is the common temporal description of the Petri net. 
A common problem with Petri net based techniques is the complexity associated with generating the reachability graph. This thesis addresses this problem by using concurrency sets to generate a partial reachability graph pertaining to a particular state. These sets also allow each state to be checked for the presence of inconsistencies and hazards. The problem of designing a controller for the high-speed manufacturing system is also considered. The approach adopted involves the use of a model-based controller: this type of controller uses the Petri net models developed, thus preserving the properties already proven of the controller. It also contains a model of the physical system which is synchronised to the real application to provide timely responses. The various ways of forming the synchronisation between these processes are considered and the resulting nets are analysed using concurrency sets.
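The core mechanics described above can be sketched in a few lines: a marking fires a transition when every input place holds enough tokens, and a (partial) reachability graph is built by breadth-first search from a chosen state. This is an illustrative toy, not the thesis's formalism; the places, transitions and `max_states` cut-off are hypothetical.

```python
from collections import deque

# A minimal place/transition Petri net. Places are indexed 0..n-1; a marking
# is a tuple of token counts; each transition is a (pre, post) pair of
# token vectors (hypothetical manufacturing steps).
transitions = {
    "load":    ((1, 0, 0), (0, 1, 0)),
    "process": ((0, 1, 0), (0, 0, 1)),
    "unload":  ((0, 0, 1), (1, 0, 0)),
}

def enabled(marking, pre):
    return all(m >= p for m, p in zip(marking, pre))

def fire(marking, pre, post):
    return tuple(m - p + q for m, p, q in zip(marking, pre, post))

def reachability(initial, max_states=1000):
    """Breadth-first construction of the reachability graph from one state,
    truncated at max_states to keep the graph partial."""
    seen, frontier, edges = {initial}, deque([initial]), []
    while frontier and len(seen) < max_states:
        m = frontier.popleft()
        for name, (pre, post) in transitions.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                edges.append((m, name, m2))
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return seen, edges

states, edges = reachability((1, 0, 0))   # 3 reachable markings, one cycle
```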

Resumo:

The development of increasingly powerful computers, which has enabled the use of windowing software, has also opened the way for the computer study, via simulation, of very complex physical systems. In this study, the main issues related to the implementation of interactive simulations of complex systems are identified and discussed. Most existing simulators are closed in the sense that there is no access to the source code and, even if it were available, adaptation to interaction with other systems would require extensive code re-writing. This work aims to increase the flexibility of such software by developing a set of object-oriented simulation classes, which can be extended, by subclassing, at any level, i.e., at the problem domain, presentation or interaction levels. A strategy, which involves the use of an object-oriented framework, concurrent execution of several simulation modules, use of a networked windowing system and the re-use of existing software written in procedural languages, is proposed. A prototype tool which combines these techniques has been implemented and is presented. It allows the on-line definition of the configuration of the physical system and generates the appropriate graphical user interface. Simulation routines have been developed for the chemical recovery cycle of a paper pulp mill. The application, by creation of new classes, of the prototype to the interactive simulation of this physical system is described. Besides providing visual feedback, the resulting graphical user interface greatly simplifies the interaction with this set of simulation modules. This study shows that considerable benefits can be obtained by application of computer science concepts to the engineering domain, by helping domain experts to tailor interactive tools to suit their needs.
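The subclassing idea can be illustrated with a minimal sketch (hypothetical class and attribute names, not the prototype's actual API): the base class fixes the stepping loop, and domain behaviour is added by overriding a single hook.

```python
# Sketch of the extensible simulation-class idea: behaviour is refined by
# subclassing at the problem-domain level while the driver loop is inherited.
class SimulationModule:
    """Base class: owns the clock and the generic stepping loop."""
    def __init__(self, dt=1.0):
        self.t = 0.0
        self.dt = dt
        self.state = {}

    def step(self):               # problem-domain hook, overridden below
        raise NotImplementedError

    def run(self, n_steps):
        for _ in range(n_steps):
            self.step()
            self.t += self.dt

class TankModule(SimulationModule):
    """Domain subclass: a tank with constant inflow and outflow (an
    illustrative stand-in for the recovery-cycle simulation routines)."""
    def __init__(self, level=0.0, inflow=2.0, outflow=1.0, **kw):
        super().__init__(**kw)
        self.state = {"level": level}
        self.inflow, self.outflow = inflow, outflow

    def step(self):
        self.state["level"] += (self.inflow - self.outflow) * self.dt

tank = TankModule(level=10.0)
tank.run(5)                       # level rises by (2 - 1) * dt per step
```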

Resumo:

This thesis addresses data assimilation, which typically refers to the estimation of the state of a physical system given a model and observations, and its application to short-term precipitation forecasting. A general introduction to data assimilation is given, both from a deterministic and a stochastic point of view. Data assimilation algorithms are reviewed, first in the static case (when no dynamics are involved), then in the dynamic case. A double experiment on two non-linear models, the Lorenz 63 and the Lorenz 96 models, is run and the comparative performance of the methods is discussed in terms of quality of the assimilation, robustness in the non-linear regime and computational time. Following the general review and analysis, data assimilation is discussed in the particular context of very short-term rainfall forecasting (nowcasting) using radar images. An extended Bayesian precipitation nowcasting model is introduced. The model is stochastic in nature and relies on the spatial decomposition of the rainfall field into rain "cells". Radar observations are assimilated using a Variational Bayesian method in which the true posterior distribution of the parameters is approximated by a more tractable distribution. The motion of the cells is captured by a 2D Gaussian process. The model is tested on two precipitation events, the first dominated by convective showers, the second by precipitation fronts. Several deterministic and probabilistic validation methods are applied and the model is shown to retain reasonable prediction skill at up to 3 hours lead time. Extensions to the model are discussed.
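As a small illustration of sequential assimilation on one of the models mentioned, the following sketch integrates the Lorenz 63 system and periodically blends the model state with (perfect, full-state) observations of the truth. The gain, step size and update interval are illustrative choices, not the thesis's configuration.

```python
import numpy as np

# Lorenz 63 with a simple sequential analysis step.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz63(v):
    x, y, z = v
    return np.array([SIGMA * (y - x),
                     x * (RHO - z) - y,
                     x * y - BETA * z])

def rk4_step(v, dt):
    k1 = lorenz63(v)
    k2 = lorenz63(v + 0.5 * dt * k1)
    k3 = lorenz63(v + 0.5 * dt * k2)
    k4 = lorenz63(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def analysis(background, obs, gain=0.5):
    """Blend the background state with an observation of the full state:
    gain=0 keeps the background, gain=1 replaces it by the observation."""
    return background + gain * (obs - background)

dt = 0.01
v_true = np.array([1.0, 1.0, 1.0])
v0 = v_true + 1.0                 # perturbed initial condition
v_free, v_assim = v0.copy(), v0.copy()
for i in range(1, 101):
    v_true = rk4_step(v_true, dt)
    v_free = rk4_step(v_free, dt)      # free-running forecast
    v_assim = rk4_step(v_assim, dt)
    if i % 10 == 0:                    # assimilate every 10 steps
        v_assim = analysis(v_assim, v_true)

err_free = np.linalg.norm(v_free - v_true)
err_assim = np.linalg.norm(v_assim - v_true)  # much smaller than err_free
```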

Resumo:

Using a fiber laser system as a specific illustrative example, we introduce the concept of intermediate asymptotic states in finite nonlinear optical systems. We show that intermediate asymptotics of nonlinear equations (e.g., coherent structures with a finite lifetime or distance) can be used in applications similar to those of truly stable asymptotic solutions, such as, e.g., solitons and dissipative nonlinear waves. Applying this general idea to a particular, albeit practically important, physical system, we demonstrate a novel type of nonlinear pulse-shaping regime in a mode-locked fiber laser leading to the generation of linearly chirped pulses with a triangular distribution of the intensity.
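Pulse propagation in such systems is conventionally modelled by the nonlinear Schrödinger equation, which the split-step Fourier method integrates by alternating nonlinear and dispersive sub-steps. The sketch below is a generic single-step implementation with illustrative parameter values; it omits the gain and saturable-absorption terms a full laser model would need.

```python
import numpy as np

# One symmetric split-step Fourier step for the scalar NLSE
#   dA/dz = -i*(beta2/2)*d2A/dt2 + i*gamma*|A|^2*A
# (generic sketch; beta2 and gamma values are illustrative).
def ssfm_step(A, dz, dt, beta2=-0.02, gamma=1.3):
    """Advance envelope A(t) by dz: nonlinear half-step, full linear
    (dispersive) step in the Fourier domain, nonlinear half-step."""
    omega = 2 * np.pi * np.fft.fftfreq(A.size, d=dt)
    A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz / 2)
    A = np.fft.ifft(np.fft.fft(A) * np.exp(1j * beta2 / 2 * omega ** 2 * dz))
    return A * np.exp(1j * gamma * np.abs(A) ** 2 * dz / 2)

dt = 0.05
t = (np.arange(256) - 128) * dt
A0 = (1 / np.cosh(t)).astype(complex)   # sech test pulse
A1 = ssfm_step(A0, dz=0.01, dt=dt)      # pulse energy is conserved exactly
```

Each sub-step is a pure phase multiplication (in time or frequency), so the step conserves pulse energy to machine precision, a useful sanity check on any implementation.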


Resumo:

The multicore fiber (MCF) is a physical system of high practical importance. In addition to standard exploitation, MCFs may support discrete vortices that carry orbital angular momentum suitable for spatial-division multiplexing in high-capacity fiber-optic communication systems. These discrete vortices may also be attractive for high-power laser applications. We present the conditions of existence, stability, and coherent propagation of such optical vortices for two practical MCF designs. Through optimization, we found stable discrete vortices that were capable of transferring high coherent power through the MCF.
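For a ring of N identical, nearest-neighbour-coupled cores, the linear coupled-mode equations dA_n/dz = iC(A_{n-1} + A_{n+1}) admit discrete-vortex eigenmodes A_n = exp(i·2πmn/N) with propagation constant β = 2C·cos(2πm/N). The sketch below verifies this numerically; it is a linear idealisation, ignoring the nonlinearity and the realistic core layouts treated in the paper.

```python
import numpy as np

# Discrete vortex on a ring of N coupled cores (linear coupled-mode model).
N, C, m = 6, 1.0, 1                  # cores, coupling coefficient, charge
n = np.arange(N)
vortex = np.exp(2j * np.pi * m * n / N)   # phase winds by 2*pi*m around ring

# Coupling matrix: nearest neighbours with periodic boundary conditions.
M = np.zeros((N, N), complex)
for k in range(N):
    M[k, (k - 1) % N] = M[k, (k + 1) % N] = C

beta = 2 * C * np.cos(2 * np.pi * m / N)  # predicted propagation constant
residual = np.linalg.norm(M @ vortex - beta * vortex)   # ~0: an eigenmode
```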

Resumo:

This study presents the essential aspects of Noether's theorem and discusses the interpretation of Lie symmetries, with the aim of applying the Lagrangian-based theory to economic processes as well. The identification of the Lie symmetries of dynamical systems and the characterisation of their behaviour are among the most recent research results in this field. For example, Sen and Tabor (1990) examined Edward Lorenz's (1963) 3D model, which plays a significant role in the study of complex chaotic dynamics; Baumann and Freyberger (1992) examined the two-dimensional Lotka-Volterra dynamical system; and finally Almeida and Moreira (1992) examined the three-wave interaction problem with the help of the corresponding Lie symmetries. For empirical analysis we have chosen an economic dynamical system, namely Goodwin's (1967) cycle model, which we set out to examine by determining the Lie symmetries of the system to be described. / === / The dynamic behavior of a physical system can frequently be described very concisely by the least action principle. At the centre of its mathematical presentation is a specific function of coordinates and velocities, i.e., the Lagrangian. If the integral of the Lagrangian is stationary, then the system is moving along an extremal path through the phase space, and vice versa. It can be seen that each Lie symmetry of a Lagrangian in general corresponds to a conserved quantity, and the conservation principle is explained by a variational symmetry related to a dynamic or geometrical symmetry. Briefly, that is the meaning of Noether's theorem. This paper scrutinizes the substantial characteristics of Noether's theorem, interprets the Lie symmetries by a PDE system and calculates the generators (symmetry vectors) on R. H. Goodwin's cyclical economic growth model. At first it will be shown that the Goodwin model also has a Lagrangian structure, therefore Noether's theorem can also be applied here. 
Then it is proved that the cyclical motion in his model derives from its Lie symmetries, i.e., its dynamic symmetry. All these proofs are based on the investigation of the less complicated Lotka-Volterra model and are then extended to the Goodwin model, since the two models are one-to-one maps of each other. The main achievement of this paper is the following: Noether's theorem also plays a crucial role in the mechanics of the Goodwin model. This also means that its cyclical motion is optimal. Generalizing this result, we can assert that the solutions of all dynamic systems described by a first-order nonlinear ODE system are optimal by the least action principle, if they have a Lagrangian.
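The cyclical motion rests on the existence of a conserved quantity, which for the Lotka-Volterra system dx/dt = ax − bxy, dy/dt = dxy − gy is V(x, y) = dx − g·ln x + by − a·ln y. A quick numerical check, with illustrative parameter values:

```python
import numpy as np

# Lotka-Volterra: dx/dt = a*x - b*x*y, dy/dt = d*x*y - g*y conserves
# V(x, y) = d*x - g*ln(x) + b*y - a*ln(y), which is why the orbits close.
a, b, d, g = 1.0, 0.5, 0.2, 0.8      # illustrative coefficients

def rhs(v):
    x, y = v
    return np.array([a * x - b * x * y, d * x * y - g * y])

def rk4(v, dt):
    k1 = rhs(v); k2 = rhs(v + dt / 2 * k1)
    k3 = rhs(v + dt / 2 * k2); k4 = rhs(v + dt * k3)
    return v + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def V(v):
    x, y = v
    return d * x - g * np.log(x) + b * y - a * np.log(y)

v = np.array([3.0, 1.0])
V0 = V(v)
for _ in range(5000):                # integrate several full cycles
    v = rk4(v, 0.01)
drift = abs(V(v) - V0)               # stays near machine/integration error
```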

Resumo:

This thesis explores the co-existence of embodied and disembodied computation in modern software systems, taking as a case study the recent trend towards ever more cohesive and integrated Internet of Things and Cloud-based systems. The main communication models, communication protocols and situated architectures are analysed. In addition, a cloud-based IoT Middleware platform is implemented to show how computation can be distributed between the embodied and the disembodied side.

Resumo:

The BlackEnergy malware targeting critical infrastructures has a long history. It evolved over time from a simple DDoS platform into a quite sophisticated plug-in-based malware. The plug-in architecture has a persistent malware core with easily installable attack-specific modules for DDoS, spamming, info-stealing, remote access, boot-sector formatting, etc. BlackEnergy has been involved in several high-profile cyber-physical attacks, including the recent Ukraine power grid attack in December 2015. This paper investigates the evolution of BlackEnergy and its cyber attack capabilities. It presents a basic cyber attack model used by BlackEnergy for targeting industrial control systems. In particular, the paper analyzes cyber threats of BlackEnergy for synchrophasor-based systems, which are used for real-time control and monitoring functionalities in the smart grid. Several BlackEnergy-based attack scenarios have been investigated by exploiting the vulnerabilities in two widely used synchrophasor communication standards: (i) IEEE C37.118 and (ii) IEC 61850-90-5. Specifically, the paper addresses reconnaissance, DDoS, man-in-the-middle and replay/reflection attacks on IEEE C37.118 and IEC 61850-90-5. Further, the paper also investigates protection strategies for the detection and prevention of BlackEnergy-based cyber-physical attacks.
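As a toy illustration of one replay-detection idea (not the paper's actual protection strategy): IEEE C37.118 data frames carry a second-of-century (SOC) timestamp plus a fractional-second field, so a naively replayed frame re-uses an already-seen timestamp. The checker below simply flags frames whose timestamp does not advance; the frame values are hypothetical, and a real defence would also authenticate the frames themselves.

```python
# Toy timestamp-monotonicity check for a synchrophasor stream.
def detect_replays(frames):
    """frames: iterable of (soc, fracsec) pairs in arrival order.
    Returns indices of frames whose timestamp does not advance."""
    suspicious, last = [], None
    for i, stamp in enumerate(frames):
        if last is not None and stamp <= last:
            suspicious.append(i)   # possible replayed/reflected frame
        else:
            last = stamp
    return suspicious

# Hypothetical stream at 120 frames/s; frame 3 repeats an earlier timestamp.
stream = [(100, 0), (100, 8333), (100, 16666), (100, 8333), (100, 25000)]
flagged = detect_replays(stream)
```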

Resumo:

With the proliferation of software systems and the rise of paradigms such as the Internet of Things, Cyber-Physical Systems and Smart Cities, to name a few, the energy consumed by software applications is emerging as a major concern. Hence, it has become vital that software engineers have a better understanding of the energy consumed by the code they write. At the software level, work so far has focused on measuring energy consumption at the function and application level. In this paper, we propose a novel approach to measuring energy consumption at the feature level, cross-cutting multiple functions, classes and systems. We argue the importance of such measurement and the new insight it provides to non-traditional stakeholders such as service providers. We then demonstrate, using an experiment, how the measurement can be done with a combination of tools, namely our program slicing tool (PORBS) and the energy measurement tool Jolinar.
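The attribution step can be sketched as follows: given per-function energy readings (as an energy profiler such as Jolinar might report) and the set of functions a slicing tool such as PORBS associates with each feature, a feature's energy is the sum over its slice. All function names and numbers below are hypothetical.

```python
# Feature-level energy attribution: sum per-function energy over each
# feature's slice. A feature may cross-cut many functions, and one function
# may contribute to several features.
def feature_energy(per_function_joules, feature_slices):
    return {feature: sum(per_function_joules.get(fn, 0.0) for fn in fns)
            for feature, fns in feature_slices.items()}

measured = {"parse_input": 1.2, "render": 3.4, "encrypt": 2.0, "log": 0.4}
slices = {
    "secure-upload": {"parse_input", "encrypt", "log"},
    "preview":       {"parse_input", "render"},
}
energy = feature_energy(measured, slices)
```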

Resumo:

A natural way to generalize tensor network variational classes to quantum field systems is via a continuous tensor contraction. This approach is first illustrated for the class of quantum field states known as continuous matrix-product states (cMPS). As a simple example of the path-integral representation we show that the state of a dynamically evolving quantum field admits a natural representation as a cMPS. A completeness argument is also provided that shows that all states in Fock space admit a cMPS representation when the number of variational parameters tends to infinity. Beyond this, we obtain a well-behaved field limit of projected entangled-pair states (PEPS) in two dimensions that provide an abstract class of quantum field states with natural symmetries. We demonstrate how symmetries of the physical field state are encoded within the dynamics of an auxiliary field system of one dimension less. In particular, the imposition of Euclidean symmetries on the physical system requires that the auxiliary system involved in the class' definition must be Lorentz-invariant. The physical field states automatically inherit entropy area laws from the PEPS class, and are fully described by the dissipative dynamics of a lower dimensional virtual field system. Our results lie at the intersection of many-body physics, quantum field theory and quantum information theory, and facilitate future exchanges of ideas and insights between these disciplines.
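For reference, the standard cMPS definition referred to above can be written as:

```latex
\[
  \lvert \Psi[Q,R] \rangle
  = \operatorname{Tr}_{\mathrm{aux}}\!\left[
      \mathcal{P}\exp\!\left(
        \int_{0}^{L} \mathrm{d}x \,
        \bigl( Q(x)\otimes \mathbb{1}
             + R(x)\otimes \hat{\psi}^{\dagger}(x) \bigr)
      \right)
    \right] \lvert \Omega \rangle ,
\]
```

where Q(x) and R(x) are D×D matrices acting on a D-dimensional auxiliary system, ψ̂†(x) is the field creation operator, |Ω⟩ is the Fock vacuum, and 𝒫exp denotes a path-ordered exponential; D plays the role of the bond dimension, i.e. the number of variational parameters that tends to infinity in the completeness argument.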

Resumo:

Different types of base fluids, such as water, engine oil, kerosene, ethanol, methanol and ethylene glycol, are usually used to increase the heat transfer performance in many engineering applications. But these conventional heat transfer fluids often have several limitations. One major limitation is that the thermal conductivity of each of these base fluids is very low, and this results in a lower heat transfer rate in thermal engineering systems. This limitation also affects the performance of equipment used in different heat transfer process industries. To overcome this important drawback, researchers over the years have considered a new generation of heat transfer fluid, known simply as nanofluid, with higher thermal conductivity. This new generation heat transfer fluid is a mixture of nanometre-size particles and a base fluid. Different researchers suggest that adding spherical or cylindrical uniform/non-uniform nanoparticles to a base fluid can remarkably increase the thermal conductivity of the nanofluid. Such augmentation of thermal conductivity could play a more significant role in enhancing the heat transfer rate than the base fluid alone. Nanoparticle diameters used in nanofluids are usually considered to be less than or equal to 100 nm, and the nanoparticle concentration usually varies from 5% to 10%. Different researchers have mentioned that smaller nanoparticle concentrations with a diameter of 100 nm could enhance the heat transfer rate more significantly compared to base fluids. But it is not obvious what effect nanofluids containing small nanoparticles of less than 100 nm at different concentrations will have on the heat transfer performance. Besides, the effect of static versus moving nanoparticles on the heat transfer of a nanofluid is also not known. Moving nanoparticles introduce the effect of the Brownian motion of nanoparticles on the heat transfer. 
The aim of this work is, therefore, to investigate the heat transfer performance of nanofluids using a combination of smaller nanoparticle sizes at different concentrations, considering the Brownian motion of the nanoparticles. A horizontal pipe has been considered as the physical system within which the above-mentioned nanofluid performance is investigated under transition-to-turbulent flow conditions. Three different types of numerical models, namely the single-phase model, the Eulerian-Eulerian multi-phase mixture model and the Eulerian-Lagrangian discrete phase model, have been used to investigate the performance of nanofluids. The most commonly used is the single-phase model, which is based on the assumption that nanofluids behave like a conventional fluid. The other two models are used when the interaction between solid and fluid particles is considered. In the Eulerian-Eulerian multi-phase mixture model, two different phases, fluid and solid, are considered; these phases form a fluid-solid mixture. In the Eulerian-Lagrangian discrete phase model, the two phases are independent: one is a solid phase and the other is a fluid phase. In addition, RANS (Reynolds-Averaged Navier-Stokes) based Standard κ-ω and SST κ-ω transitional models have been used for the simulation of transitional flow, while the RANS-based Standard κ-ϵ, Realizable κ-ϵ and RNG κ-ϵ turbulence models are used for the simulation of turbulent flow. The hydrodynamic as well as the temperature behaviour of transition-to-turbulent flows of nanofluids through the horizontal pipe is studied under a uniform heat flux boundary condition applied to the wall, with temperature-dependent thermo-physical properties for both water and nanofluids. 
Numerical results characterising the performance of the velocity and temperature fields are presented in terms of velocity and temperature contours, turbulent kinetic energy contours, surface temperature, local and average Nusselt numbers, Darcy friction factor, thermal performance factor and total entropy generation. New correlations are also proposed for the calculation of the average Nusselt number for both the single- and multi-phase models. The results reveal that the combination of small nanoparticle sizes and higher nanoparticle concentrations, together with the Brownian motion of the nanoparticles, shows higher heat transfer enhancement and a higher thermal performance factor than those of water. The literature suggests that the flow of nanofluids in an inclined pipe in the transition-to-turbulent regime has been ignored despite its significance in real-life applications. Therefore, a particular investigation has been carried out in this thesis with a view to understanding the heat transfer behaviour and performance of an inclined pipe under transition flow conditions. It is found that the heat transfer rate decreases as the pipe inclination angle increases. Also, a higher heat transfer rate is found for a horizontal pipe under forced convection than for an inclined pipe under mixed convection.
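For orientation, the classical Maxwell model gives the baseline effective thermal conductivity of a dilute suspension of spherical particles; the thesis's models add Brownian and other effects on top of such estimates. A minimal sketch with illustrative values for water and copper nanoparticles:

```python
# Maxwell model for the effective thermal conductivity of a dilute
# suspension of spherical particles in a base fluid.
def maxwell_k_eff(k_f, k_p, phi):
    """k_f: base-fluid conductivity (W/m.K), k_p: particle conductivity,
    phi: particle volume fraction (e.g. 0.05 for 5%)."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

# Water (~0.613 W/m.K) with 5% copper nanoparticles (~400 W/m.K):
k_eff = maxwell_k_eff(0.613, 400.0, 0.05)
ratio = k_eff / 0.613            # roughly 1.16, i.e. ~16% enhancement
```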

Resumo:

Developments in theory and experiment have raised the prospect of an electronic technology based on the discrete nature of electron tunnelling through a potential barrier. This thesis deals with novel design and analysis tools developed to study such systems. Possible devices include those constructed from ultrasmall normal tunnelling junctions. These exhibit charging effects including the Coulomb blockade and correlated electron tunnelling. They allow transistor-like control of the transfer of single carriers, and present the prospect of digital systems operating at the information theoretic limit. As such, they are often referred to as single electronic devices. Single electronic devices exhibit self quantising logic and good structural tolerance. Their speed, immunity to thermal noise, and operating voltage all scale beneficially with junction capacitance. For ultrasmall junctions the possibility of room temperature operation at sub-picosecond timescales seems feasible. However, they are sensitive to external charge; whether from trapping-detrapping events, externally gated potentials, or system cross-talk. Quantum effects such as charge macroscopic quantum tunnelling may degrade performance. Finally, any practical system will be complex and spatially extended (amplifying the above problems), and prone to fabrication imperfection. This summarises why new design and analysis tools are required. Simulation tools are developed, concentrating on the basic building blocks of single electronic systems: the tunnelling junction array and the gated turnstile device. Three main points are considered: the best method of estimating capacitance values from the physical system geometry; the mathematical model which should represent electron tunnelling based on these data; and the application of this model to the investigation of single electronic systems. (DXN004909)
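The scaling argument above can be made concrete with two orthodox-theory formulas: the charging energy E_C = e²/2C and the single-junction tunnelling rate Γ(ΔF) = (ΔF/e²R)/(1 − exp(−ΔF/k_BT)). A back-of-envelope sketch with illustrative junction values (not the thesis's simulation tools):

```python
import math

E_CHARGE = 1.602176634e-19       # elementary charge, C
K_B = 1.380649e-23               # Boltzmann constant, J/K

def charging_energy(C):
    """E_C = e^2 / 2C for a junction of capacitance C (farads)."""
    return E_CHARGE ** 2 / (2 * C)

def tunnel_rate(dF, R, T):
    """Orthodox-theory tunnelling rate for free-energy gain dF (J) through
    a junction of resistance R (ohms) at temperature T (K)."""
    if dF == 0:
        return K_B * T / (E_CHARGE ** 2 * R)   # limiting value as dF -> 0
    return (dF / (E_CHARGE ** 2 * R)) / (1 - math.exp(-dF / (K_B * T)))

# A 1 aF junction: charging energy ~80 meV, about 3x the thermal energy at
# 300 K -- why ultrasmall junctions make room-temperature operation plausible.
E_c = charging_energy(1e-18)
ratio = E_c / (K_B * 300.0)
```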

Resumo:

Crossing the Franco-Swiss border, the Large Hadron Collider (LHC), designed to collide 7 TeV proton beams, is the world's largest and most powerful particle accelerator, the operation of which was originally intended to commence in 2008. Unfortunately, due to an interconnect discontinuity in one of the main dipole circuit's 13 kA superconducting busbars, a catastrophic quench event occurred during initial magnet training, causing significant physical system damage. Furthermore, investigation into the cause found that such discontinuities were present not only in the circuit in question but throughout the entire LHC. This prevented further magnet training and ultimately resulted in the maximum sustainable beam energy being limited to approximately half of the design nominal, 3.5-4 TeV, for the first three years of operation (Run 1, 2009-2012), and a major consolidation campaign being scheduled for the first long shutdown (LS 1, 2012-2014). Throughout Run 1, a series of studies attempted to predict the number of post-installation training quenches still required to qualify each circuit to nominal-energy current levels. With predictions in excess of 80 quenches (each having a recovery time of 8-12+ hours) just to achieve 6.5 TeV, and close to 1000 quenches for 7 TeV, it was decided that for Run 2 all systems be qualified for at least 6.5 TeV operation. However, even with all interconnect discontinuities scheduled to be repaired during LS 1, numerous other concerns regarding circuit stability arose: in particular, observations of erratic behaviour of magnet bypass diodes and of the degradation of other potentially weak busbar sections, as well as of seemingly random millisecond spikes in beam losses, known as unidentified falling object (UFO) events, which, if they persist at 6.5 TeV, may eventually deposit sufficient energy to quench adjacent magnets. 
In light of the above, the thesis hypothesis states that, even with the observed issues, the LHC main dipole circuits can safely support and sustain near-nominal proton beam energies of at least 6.5 TeV. Research into minimising the risk of magnet training led to the development and implementation of a new qualification method, capable of providing conclusive evidence that all aspects of all circuits, other than the magnets and their internal joints, can safely withstand a quench event at near-nominal current levels, allowing magnet training to be carried out both systematically and without risk. This method has become known as the Copper Stabiliser Continuity Measurement (CSCM). Results were a success, with all circuits eventually being subjected to a full current decay from 6.5 TeV equivalent current levels with no measurable damage occurring. Research into UFO events led to the development of a numerical model capable of simulating typical UFO events, reproducing the entire set of measured Run 1 event data and extrapolating to 6.5 TeV to predict the likelihood of UFO-induced magnet quenches. The results provided interesting insights into the phenomena involved and confirmed the possibility of UFO-induced magnet quenches. The model was also capable of predicting whether such events, if left unaccounted for, are likely to be commonplace, resulting in significant long-term issues for 6.5+ TeV operation. Addressing the thesis hypothesis, the following written works detail the development and results of all CSCM qualification tests and subsequent magnet training, as well as the development and simulation results of both 4 TeV and 6.5 TeV UFO event modelling. The thesis concludes, post-LS 1, with the LHC successfully sustaining 6.5 TeV proton beams, but with UFO events, as predicted, resulting in otherwise uninitiated magnet quenches and being at the forefront of system availability issues.

Resumo:

Manufacturing companies have moved from selling purely tangible products to adopting a service-oriented approach in order to generate steady and continuous revenue streams. Nowadays, equipment and machine manufacturers possess technologies to track and analyze product-related data, obtaining relevant information about how customers use the product after it is sold. The Internet of Things in industrial environments will allow manufacturers to leverage lifecycle product traceability to innovate towards an information-driven services approach, commonly referred to as "Smart Services", achieving improvements in support, maintenance and usage processes. The aim of this study is to conduct a literature review and empirical analysis in order to present a framework that describes a customer-oriented approach for developing information-driven services leveraged by the Internet of Things in manufacturing companies. The empirical study employed tools for the assessment of customer needs to analyse the case company in terms of information requirements and digital needs. The literature review supported the empirical analysis with deep research on product lifecycle traceability and the digitalization of product-related services within manufacturing value chains, as well as the role of simulation-based technologies in supporting the "Smart Service" development process. The results obtained from the case company analysis show that the customers mainly demand information that allows them to monitor machine conditions, machine behaviour under different geographical conditions, machine-implement interactions, and resource and energy consumption. Put simply, they demand information outputs that allow them to increase machine productivity to maximize yields, save time and optimize resources in the most sustainable way. 
Based on customer needs assessment, this study presents a framework to describe the initial phases of a “Smart Service” development process, considering the requirements of Smart Engineering methodologies.