888 results for Supervisory Control and Data Acquisition (SCADA) Topology
Abstract:
Attacks on information networks are increasingly sophisticated and demand constant evolution and improvement of detection techniques. To that end, this project designs and implements a cooperative platform for network-based intrusion detection. First, a theoretical study of the related technological framework was carried out, describing and characterising the software used to attack systems (malware) as well as the methods used to deliver that software (attack vectors). The document also covers so-called APTs, targeted attacks backed by a large investment of money and time, which may combine any of the existing malware and attack vectors. To counter these attacks, intrusion detection and prevention systems are studied, briefly describing the algorithms most commonly used today. Second, a networked platform dedicated to the analysis of packets and connections was proposed and developed to detect possible intrusions. The system is aimed at SCADA (Supervisory Control And Data Acquisition) systems, although it works on any IPv4/IPv6 network; what a SCADA system is, and what its main parts are, is therefore defined beforehand. The platform is implemented on low-power devices (Raspberry Pi) placed between the network and the end device to be analysed. These devices run two purpose-built client-server applications (the central Raspberry Pi runs the server application and the slave devices run the client application) that cooperate using the Hadoop distributed framework, which is explained earlier in the document; this technology makes the system fully scalable. The server application provides a graphical interface for administering the analysis platform centrally, showing the alarms raised by each device and rating each packet according to how dangerous it is. The algorithm implemented in the application computes the rate of packets per unit of time entering and leaving the end device, processing the packets and analysing their signalling information, and builds databases that progressively improve the robustness of the system, thereby reducing the likelihood of external attacks. Finally, the initial project included running the main application in the cloud, so that several infrastructures could be administered concurrently; given the extra work required, the system has instead been left ready for this functionality to be implemented. In the current experimental setup the server application runs on the main Raspberry Pi, yielding a scalable, fast and fault-tolerant system.
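A minimal sketch of the packet-rate heuristic described above, in Python; the window length, alarm threshold and class names are illustrative assumptions rather than the project's actual implementation.

```python
# Sliding-window packets-per-second monitor: raise an alarm when traffic
# exceeds a learned baseline. Thresholds and names are illustrative only.
import time
from collections import deque

class PacketRateMonitor:
    """Flags traffic whose packet rate exceeds a learned baseline."""

    def __init__(self, window_s=10.0, alarm_factor=3.0):
        self.window_s = window_s          # sliding-window length in seconds
        self.alarm_factor = alarm_factor  # alarm if rate > factor * baseline
        self.timestamps = deque()
        self.baseline_pps = None          # learned packets-per-second baseline

    def observe(self, t=None):
        """Record one packet arrival and return True if an alarm is raised."""
        t = time.time() if t is None else t
        self.timestamps.append(t)
        # Drop packets that fall outside the sliding window.
        while self.timestamps and t - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        rate = len(self.timestamps) / self.window_s
        if self.baseline_pps is None:
            self.baseline_pps = rate
            return False
        alarm = rate > self.alarm_factor * self.baseline_pps
        if not alarm:
            # Slowly update the baseline with benign traffic only.
            self.baseline_pps = 0.95 * self.baseline_pps + 0.05 * rate
        return alarm
```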
Abstract:
This dissertation develops an interactive control platform for intelligent buildings based on a SCADA (Supervisory Control And Data Acquisition) system. The SCADA system integrates different types of information coming from the various technologies present in modern buildings (control of ventilation, temperature, lighting, etc.). The control strategy implements a hierarchical cascade controller in which the inner loops are executed by local PLCs (Programmable Logic Controllers) and the outer loop is managed by the centralized SCADA system, which interacts with the entire local PLC network. A predictive controller is implemented on the centralized SCADA platform, and tests of temperature and luminosity control in large-area rooms are presented. The predictive controller tries to optimize the satisfaction of explicit user preferences, entered through several distributed user interfaces, subject to constraints that minimize energy waste. To run the predictive controller on the SCADA platform, a communication channel was developed to allow communication between the SCADA application and the MATLAB application in which the predictive controller runs.
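A minimal receding-horizon (predictive) control sketch in the spirit of the controller described above; the first-order thermal model, its coefficients and the cost weights are illustrative assumptions, not the dissertation's actual controller.

```python
# Receding-horizon temperature control sketch: pick the heating level that best
# trades off tracking the user's preferred temperature against energy use.
# Model and weights are illustrative assumptions.

def predict(T0, u_seq, T_out, a=0.1, b=0.05):
    """Simulate T[k+1] = T[k] + a*(T_out - T[k]) + b*u[k] over the horizon."""
    T, traj = T0, []
    for u in u_seq:
        T = T + a * (T_out - T) + b * u
        traj.append(T)
    return traj

def mpc_step(T0, T_pref, T_out, horizon=10, u_levels=range(0, 101, 5), lam=0.01):
    """Evaluate constant heating levels over the horizon and return the best one."""
    best_u, best_cost = 0, float("inf")
    for u in u_levels:
        traj = predict(T0, [u] * horizon, T_out)
        cost = sum((T - T_pref) ** 2 for T in traj) + lam * u * horizon
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u  # only the first move is applied; the loop repeats each sample

# Example: room at 17 °C, user prefers 22 °C, outside temperature 10 °C.
print(mpc_step(T0=17.0, T_pref=22.0, T_out=10.0))
```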
Abstract:
This thesis work arose from the need to develop a new module for estimating energy variables to be added to the On.Energy software, giving every user a way to understand how far the values observed in practice deviate from a theoretical model built for this purpose, and thus providing an additional analysis tool for keeping the system under control within a continuous-improvement approach. The result is a tool that will be tested experimentally at the plants of two companies that are leaders in Italy in widely different manufacturing sectors (Amadori, food; Pfizer, pharmaceutical), but that share the need to monitor, analyse and improve the efficiency of their energy consumption.
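A minimal sketch of the kind of comparison between observed consumption and a theoretical model that such a module supports; the straight-line baseline and all numbers are illustrative assumptions, not the On.Energy implementation.

```python
# Fit a simple baseline (energy vs. production volume) and report how far a
# new observation deviates from it. Model form and data are assumptions.
import numpy as np

def fit_baseline(production, energy):
    """Least-squares fit: energy ≈ slope * production + intercept."""
    slope, intercept = np.polyfit(production, energy, deg=1)
    return slope, intercept

def deviation(observed_energy, production, slope, intercept):
    """Percentage deviation of an observation from the baseline prediction."""
    expected = slope * production + intercept
    return 100.0 * (observed_energy - expected) / expected

# Historical data used to build the theoretical model (made-up numbers).
prod = np.array([100, 120, 150, 170, 200])
en = np.array([520, 600, 730, 810, 940])
slope, intercept = fit_baseline(prod, en)

# A new observation well above the model prediction would be flagged for review.
print(f"{deviation(900, 170, slope, intercept):+.1f}% vs. baseline")
```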
Abstract:
Data acquisition systems used in the diagnostics of thermonuclear fusion devices face important challenges posed by long-pulse devices. Even in short-pulse devices, where data are analysed after the discharge, a large amount of data remains unanalysed, meaning that a great deal of knowledge still lies undiscovered in the existing databases. Over the last decade the fusion community has made a strong effort to improve off-line analysis methods to alleviate this problem, but it has not been fully solved, because some of these methods need to run in real time. This paradigm implies that long-pulse devices will have to include data acquisition systems with local processing capabilities, able to run advanced analysis algorithms. The research carried out in this thesis aims to determine whether the local real-time processing capacity of such systems can be increased by using GPUs. To this end, during the experimentation period several proposals were evaluated through real use cases developed for some of the most representative fusion devices, such as ITER, JET and TCV. The conclusions and experience gained in that phase made it possible to propose a model and a development methodology for including this technology in acquisition systems for diagnostics of different natures. The model defines not only the optimal hardware architecture for this integration but also the incorporation of the new processing resource into the Supervisory Control and Data Acquisition (SCADA) framework used in the fusion community (EPICS), providing a complete solution. The proposal is complemented by a methodology that addresses the weaknesses detected and lays out a path for integrating the solution into existing hardware and software standards. The final evaluation was carried out by developing a use case representative of diagnostics that require image acquisition and processing, in the context of the international ITER device, and it was successfully tested at the ITER premises. The solution proposed in this work has been included by the ITER IO in its catalogue of standard solutions for the development of its future diagnostics. In addition, as a result of the research in this thesis, a technology-transfer agreement was reached with National Instruments, which will allow the data acquisition systems used in fusion devices to be updated.
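A minimal sketch of the kind of per-frame processing an acquisition node might off-load to a GPU. NumPy is used here; the CuPy library mirrors much of this API, so equivalent array operations can be moved to a GPU. The frame size, background subtraction and threshold are illustrative assumptions, not the ITER use case.

```python
# Background-subtract a camera frame and count pixels above a threshold,
# as a crude stand-in for the image processing a diagnostic might run per frame.
import numpy as np

def process_frame(frame, background, threshold):
    """Return the number of above-threshold pixels and the corrected frame."""
    corrected = frame.astype(np.float32) - background
    mask = corrected > threshold
    return int(mask.sum()), corrected

# Simulated 512x512 frames (random data standing in for a real diagnostic).
rng = np.random.default_rng(0)
background = rng.normal(1000, 5, size=(512, 512)).astype(np.float32)
frame = background + rng.normal(0, 5, size=(512, 512)).astype(np.float32)
frame[100:110, 200:210] += 500  # injected "event" region

count, _ = process_frame(frame, background, threshold=100)
print("pixels above threshold:", count)
```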
Abstract:
Each plasma physics laboratory has its own proprietary control and data acquisition system, usually different from one laboratory to another, which means that each laboratory has its own way of controlling the experiment and retrieving data from the database. Fusion research relies to a great extent on international collaboration, and such private systems make it difficult to follow the work remotely. The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote-participation technologies. The choice of MDSplus (Model Driven System plus) is justified by the fact that it is widely used: scientists from different institutions can use the same system in experiments on different tokamaks without needing to know how each machine handles its acquisition and data analysis. Another important point is that MDSplus provides a library system that allows communication between different languages (Java, Fortran, C, C++, Python) and programs such as MATLAB, IDL and OCTAVE. In the case of the TCABR tokamak, interfaces (the subject of this paper) were developed between the system already in use and MDSplus, instead of adopting MDSplus at every stage from control and data acquisition through to data analysis. This was done to preserve a complex system already in operation, which would otherwise have taken a long time to migrate. The implementation also allows new components to be added that use MDSplus fully at all stages.
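A minimal sketch of how a remote collaborator could read a signal through the MDSplus thin-client Python interface; the server name, tree name, shot number and node path are hypothetical, not TCABR's actual configuration.

```python
# Read one signal and its time base from a remote MDSplus server.
# Server, tree, shot and node path below are assumptions for illustration.
from MDSplus import Connection

conn = Connection("mdsplus.example.org")   # hypothetical MDSplus data server
conn.openTree("tcabr", 12345)              # tree name and shot number assumed
signal = conn.get("\\TOP.DIAGNOSTICS:IP")  # hypothetical node (plasma current)
times = conn.get("DIM_OF(\\TOP.DIAGNOSTICS:IP)")

print(len(signal.data()), "samples between",
      times.data()[0], "and", times.data()[-1], "s")
```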
Abstract:
This technical report describes a Light Detection and Ranging (LiDAR) augmented methodology for optimal path planning at low-level flight for remote sensing and sampling Unmanned Aerial Vehicles (UAVs). The UAV performs remote air sampling and data acquisition from a network of sensors on the ground. The terrain information, in the form of 3D point-cloud maps, is processed by the algorithms to find an optimal path. The results show that the method and algorithm are able to use the LiDAR data to avoid obstacles when planning a path from a start point to a target point. The report compares the performance of the method as the resolution of the LiDAR map is increased and when a Digital Elevation Model (DEM) is included. From a practical point of view, the optimal path plan loads and works seamlessly with the UAV ground station, and the report also shows the ground station software augmented with the more accurate LiDAR data.
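A minimal sketch of obstacle-aware planning on an occupancy grid derived from a point cloud, here using a 4-connected A* search; the grid resolution, flight altitude and synthetic point cloud are illustrative assumptions, not the report's algorithm or data.

```python
# Build a blocked-cell grid from point-cloud returns above the flight altitude,
# then search for a path around them with A*.
import heapq

def build_occupancy(points, cell=1.0, altitude=10.0, size=20):
    """Mark a cell as blocked if any point at or above the altitude falls in it."""
    blocked = set()
    for x, y, z in points:
        if z >= altitude:
            blocked.add((int(x // cell), int(y // cell)))
    return blocked, size

def astar(start, goal, blocked, size):
    """4-connected A* with a Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in seen):
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no obstacle-free path found

# A synthetic "wall" of tall returns between the start and the target.
cloud = [(10.0, float(y), 15.0) for y in range(0, 15)]
blocked, size = build_occupancy(cloud)
print(astar((0, 0), (19, 19), blocked, size))
```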
Abstract:
The object of analysis in the present text is the issue of operational control and data retention in Poland. The analysis follows from the critical stance taken by NGOs and state institutions on the scope of operational control wielded by the Polish police and special services; it concerns, in particular, the use of itemized phone bills and so-called phone tapping. Besides a quantitative analysis of operational control and the scope of data retention, the text presents the conclusions that the Human Rights Defender referred to the Constitutional Tribunal in 2011. It must be noted that the main problems with the use of operational control and data retention are caused by: (1) a lack of specification of the technical means which can be used by individual services; (2) a lack of specification of what kind of information and evidence is in question; and (3) an open catalogue of information and evidence which can be clandestinely acquired in an operational mode. Furthermore, with regard to the access to teleinformation data granted by the Telecommunications Act, attention should be drawn to the wide array of data made available to particular services. The text also draws on so-called open interviews, conducted mainly with former police officers, to point out some informal reasons for phone tapping in Poland; these findings are presented in summary form.
Abstract:
As the demands of physics experiments grow, it is important to use advanced computer technology to upgrade existing experimental equipment. For this reason we designed and built a data acquisition electronics system for spectral measurement and energy-level lifetime measurement. This paper describes the overall structure of the system. The system is controlled by a microcomputer: an interface circuit based on the ISA bus of a Pentium PC controls the data acquisition process, making the measurement equipment intelligent while providing high speed and high reliability. Together with the accompanying software, the experiment can run completely unattended. The first part of the paper presents the background and significance of the system's development. The second part covers the composition, hardware structure and performance of the system, whose main functions are: (1) acquiring and processing data; (2) controlling the motion of the external motors; (3) displaying the system status; a sketch of such a loop is given after this abstract. The third part describes the debugging process and the solutions and preventive measures adopted for the practical problems that appeared during debugging. After the design work was completed, the system was tested in a simulated experiment in the accelerator experimental hall of the Institute of Modern Physics in Lanzhou, and some data were collected. Processing of the acquired data gave satisfactory results. The last part of the paper presents the experimental results obtained after the system was put into operation and the aspects of the system that could still be improved.
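A minimal sketch of the acquisition loop implied by the three functions listed above (acquire and process data, drive the external motor, display status); the read_sample(), step_motor() and show_status() helpers are hypothetical placeholders for the ISA-bus I/O described in the paper, and all numbers are made up.

```python
# Step through measurement positions, average samples at each one, report status.
import random
import time

def read_sample():
    """Placeholder for reading one value from the acquisition hardware."""
    return random.gauss(100.0, 5.0)

def step_motor(steps):
    """Placeholder for advancing the external positioning motor."""
    print(f"motor advanced by {steps} step(s)")

def show_status(position, value):
    """Placeholder for the status display."""
    print(f"position {position:4d}: mean counts {value:7.2f}")

def scan(n_positions=5, samples_per_position=10):
    """Acquire, average and log data at each position, then step the motor."""
    results = []
    for pos in range(n_positions):
        samples = [read_sample() for _ in range(samples_per_position)]
        mean = sum(samples) / len(samples)
        results.append((pos, mean))
        show_status(pos, mean)
        step_motor(1)
        time.sleep(0.01)  # stand-in for hardware settling time
    return results

scan()
```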
Abstract:
The treatment of wastewater contaminated with oil is of great practical interest and is fundamental to environmental protection. A relevant process that has been studied for the continuous treatment of oil-contaminated water is the equipment called MDIF® (a mixer-settler based on phase inversion). An important variable during the operation of the MDIF® is the water-solvent interface level in the separation section. Controlling this level is essential both to avoid dragging solvent out during water removal and to improve the efficiency with which the solvent extracts the oil. In-line measurement of the oil-water interface level is still a hard task: few sensors can measure it reliably, and for lab-scale systems there are no interface sensors of compatible dimensions. The objective of this work was to implement a control system for the organic solvent/water interface level in the MDIF® equipment. The interface level is detected by acquiring and processing images obtained dynamically with a standard camera (webcam). The control strategy operates in feedback mode: the level measured by image detection is compared with the desired level, and a control valve is actuated according to an implemented PID law. A control and data acquisition program was developed in Fortran to perform the following tasks: image acquisition; identification of the water-solvent interface; decision-making and sending of control signals; and recording of data in files. Open-loop experimental runs were carried out on the MDIF®, applying random pulse disturbances to the input variable (water outlet flow). The interface-level responses allowed the process to be identified with transfer-function models, from which the parameters of a PID controller were tuned by direct synthesis, and closed-loop tests were then performed. Preliminary results for the feedback loop showed that the sensor and the control strategy developed in this work are suitable for controlling the organic solvent-water interface level.
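A minimal sketch of the feedback loop described above: locating the interface in one image column and correcting a valve with a PID law. The original program was written in Fortran; this Python sketch, its threshold and gains are illustrative assumptions, not the authors' code.

```python
# Detect the interface row in a grayscale column, then compute a PID correction
# for the control valve from the error between setpoint and measured level.

def interface_row(column, threshold=128):
    """Index of the first pixel darker than the threshold, taken as the
    solvent/water interface position in one image column."""
    for i, pixel in enumerate(column):
        if pixel < threshold:
            return i
    return len(column)

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One control step: a fake image column (bright solvent above dark water),
# interface detected at row 60, setpoint at row 50.
column = [200] * 60 + [50] * 40
level = interface_row(column)
controller = PID(kp=0.8, ki=0.1, kd=0.05, dt=1.0)
valve_signal = controller.update(setpoint=50, measurement=level)
print(f"interface at row {level}, valve correction {valve_signal:+.2f}")
```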
Abstract:
This dissertation presents the development of an autonomous inertial platform with three degrees of freedom for the stabilization of sensors, for example stationary and vehicle-mounted gravimetric sensors, and it can also be used to stabilize cameras. The system consists of an Inertial Measurement Unit (IMU), built around a micro-electromechanical (MEMS) sensor containing accelerometers, gyroscopes and magnetometers on the three orientation axes, and a microcontroller that acquires, processes and sends the data to the control and data acquisition system. To control the platform's tilt and orientation angles, a digital PID controller was implemented on a microcontroller; it receives the IMU data and produces control signals on PWM outputs that drive the motors controlling the platform's position. To monitor the platform, a real-time data acquisition program was developed in the MATLAB environment, through which the IMU signals, the tilt angles and the angular velocity can be displayed and recorded. A radio-frequency data link between the IMU and the data acquisition and control system was tested to assess whether slip rings or wires between the rotation axis and the platform frames could be avoided; however, the transmission proved unfeasible because of its low speed and the noise picked up by the radio-frequency receiver during platform movements. Two twisted pairs of wires were therefore used to connect the inertial sensor to the acquisition and processing system.
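A minimal sketch of one common way to estimate a tilt angle from the accelerometer and gyroscope of a MEMS IMU like the one described above, using a complementary filter; the filter coefficient, sample rate and simulated data are illustrative assumptions, and the dissertation's own estimation scheme may differ.

```python
# Complementary filter: blend the gyroscope integral (good short-term) with the
# accelerometer tilt angle (good long-term) to estimate one tilt angle.
import math

def accel_tilt(ax, az):
    """Tilt angle (rad) about one axis from the gravity components."""
    return math.atan2(ax, az)

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """samples: iterable of (gyro_rate_rad_s, ax, az); returns the angle history."""
    angle, history = 0.0, []
    for gyro_rate, ax, az in samples:
        angle = alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_tilt(ax, az)
        history.append(angle)
    return history

# Simulated data: platform held at a constant 0.1 rad tilt, with a small gyro bias.
samples = [(0.002, math.sin(0.1), math.cos(0.1)) for _ in range(500)]
print(f"estimated tilt after 5 s: {complementary_filter(samples)[-1]:.3f} rad")
```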
Abstract:
A detailed investigation has been undertaken into the field-induced electron emission (FIEE) mechanism that occurs at microscopically localised 'sites' on uncoated and dielectric-coated metallic electrodes. These processes were investigated using two dedicated experimental systems developed for this study. The first is a novel combined photo/field emission microscope, which employs a UV source to stimulate photo-electrons from the sample surface in order to generate a topographical image. The system uses an electrostatic lens column that provides identical optical properties under the different operating conditions required for purely topographical and combined photo/field imaging, and it has been demonstrated to have a resolution approaching 1 µm. Emission images obtained from carbon emission sites with this system reveal that emission may occur either at the edge triple junction or from the bulk of the carbon particle. An existing UHV electron spectrometer was extensively rebuilt to incorporate a computer control and data acquisition system, improved sample handling and manipulation, and a specimen heating stage. A comprehensive study is described of the effects of sample heating on the emission process under conditions of both bulk and transient heating; similar studies were also performed under both zero and high applied field. These show that the properties of emission sites are strongly temperature- and field-dependent, indicating that the emission process is 'non-metallic' in nature. The results are shown to be consistent with an existing hot-electron emission model.