27 results for bigdata, data stream processing, dsp, apache storm, cyber security
at Instituto Politécnico do Porto, Portugal
Abstract:
This paper presents an electricity medium voltage (MV) customer characterization framework supported by knowledge discovery in databases (KDD). The main idea is to identify typical load profiles (TLP) of MV consumers and to develop a rule set for the automatic classification of new consumers. To achieve our goal a methodology is proposed consisting of several steps: data pre-processing; application of several clustering algorithms to segment the daily load profiles; selection of the best partition, corresponding to the best consumers' segmentation, based on the assessment of several clustering validity indices; and finally, a classification model is built based on the resulting clusters. To validate the proposed framework, a case study which includes a real database of MV consumers is performed.
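The abstract names partition selection by clustering validity indices without fixing the index; the sketch below is a minimal Java illustration that scores candidate partitions of daily load profiles with the Davies-Bouldin index (one common validity index, assumed here, not necessarily the one the paper uses) and keeps the best-scoring one.

```java
import java.util.*;

/** Scores candidate partitions of daily load profiles and keeps the one with the lowest Davies-Bouldin index. */
public class PartitionSelector {

    // Euclidean distance between two load profiles (e.g. 24-point daily curves).
    static double distance(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    // Mean profile of one cluster.
    static double[] centroid(List<double[]> cluster) {
        double[] c = new double[cluster.get(0).length];
        for (double[] p : cluster)
            for (int i = 0; i < c.length; i++) c[i] += p[i] / cluster.size();
        return c;
    }

    // Davies-Bouldin index of one partition (lower means more compact, better separated clusters).
    static double daviesBouldin(List<List<double[]>> partition) {
        int k = partition.size();
        double[][] centroids = new double[k][];
        double[] scatter = new double[k];
        for (int i = 0; i < k; i++) {
            centroids[i] = centroid(partition.get(i));
            for (double[] p : partition.get(i))
                scatter[i] += distance(p, centroids[i]) / partition.get(i).size();
        }
        double db = 0;
        for (int i = 0; i < k; i++) {
            double worst = 0;
            for (int j = 0; j < k; j++)
                if (j != i)
                    worst = Math.max(worst, (scatter[i] + scatter[j]) / distance(centroids[i], centroids[j]));
            db += worst / k;
        }
        return db;
    }

    // Returns the index of the best candidate partition.
    static int selectBest(List<List<List<double[]>>> candidates) {
        int best = 0;
        double bestScore = Double.MAX_VALUE;
        for (int p = 0; p < candidates.size(); p++) {
            double score = daviesBouldin(candidates.get(p));
            if (score < bestScore) { bestScore = score; best = p; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Two toy partitions of four 2-point "profiles": the first groups similar profiles together.
        List<List<double[]>> good = List.of(
                List.of(new double[]{1.0, 1.0}, new double[]{1.2, 0.9}),
                List.of(new double[]{5.0, 5.0}, new double[]{5.1, 4.8}));
        List<List<double[]>> poor = List.of(
                List.of(new double[]{1.0, 1.0}, new double[]{5.0, 5.0}),
                List.of(new double[]{1.2, 0.9}, new double[]{5.1, 4.8}));
        System.out.println("best partition: " + selectBest(List.of(good, poor))); // prints 0
    }
}
```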
Abstract:
The study of a transistor's characteristic curves yields a set of parameters that are essential to its use both in signal amplification and in switching circuits. This study makes it possible to obtain data under conditions that are often not covered by the documentation supplied by manufacturers. The work presented here consists of the development of a system that allows the characteristic curves of a transistor (bipolar junction, junction field-effect and metal-oxide-semiconductor field-effect) to be obtained in a simple, efficient and inexpensive way; it can also be used as a teaching instrument for introducing the study of semiconductor devices or for the design of transistor amplifiers. The system consists of a signal-conditioning unit, a data-processing unit (hardware) and a computer program that handles the graphical processing of the acquired data, i.e. plots the transistor's characteristic curves. Its operating principle is the use of a digital-to-analogue converter (DAC) as a variable voltage source feeding the base (BJT) or the gate (JFET and MOSFET) of the device under test. A second converter provides the variation of the VCE or VDS voltage needed to obtain each of the curves. The process is controlled by a local processing unit, based on a microcontroller of the 8051 family, responsible for reading the current and voltage values through analogue-to-digital converters (ADCs). Once processed, the data are transmitted over a USB link to a computer, where a program plots the output characteristic curves and determines other characteristic parameters of the semiconductor device under test. The use of conventional components and the constructive simplicity of the design make this system inexpensive, easy to use and flexible, since with small modifications it allows…
Abstract:
Project work presented to the Instituto de Contabilidade e Administração do Porto for the degree of Master in Specialised Translation and Interpretation, under the supervision of Dr. Sara Cerqueira Pascoal.
Abstract:
Consider the problem of designing an algorithm for acquiring sensor readings. Consider specifically the problem of obtaining an approximate representation of sensor readings where (i) sensor readings originate from different sensor nodes, (ii) the number of sensor nodes is very large, (iii) all sensor nodes are deployed in a small area (dense network) and (iv) all sensor nodes communicate over a communication medium where at most one node can transmit at a time (a single broadcast domain). We present an efficient algorithm for this problem, and our novel algorithm has two desired properties: (i) it obtains an interpolation based on all sensor readings and (ii) it is scalable, that is, its time-complexity is independent of the number of sensor nodes. Achieving these two properties is possible thanks to the close interlinking of the information processing algorithm, the communication system and a model of the physical world.
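The abstract does not spell out the interpolation itself, so the following is only a minimal sketch, assuming a simple inverse-distance-weighted interpolation over (position, reading) pairs; the scalability property claimed in the paper comes from the way the communication system is interlinked with the processing, which this sketch abstracts away.

```java
/** Illustrative inverse-distance-weighted estimate of a sensed field at a query point. */
public class FieldInterpolator {

    // xs[i], ys[i] are node coordinates; readings[i] is the value sensed at node i.
    static double estimate(double[] xs, double[] ys, double[] readings, double qx, double qy) {
        double num = 0, den = 0;
        for (int i = 0; i < readings.length; i++) {
            double d2 = (xs[i] - qx) * (xs[i] - qx) + (ys[i] - qy) * (ys[i] - qy);
            if (d2 == 0) return readings[i];   // query point coincides with a node
            double w = 1.0 / d2;               // closer nodes weigh more
            num += w * readings[i];
            den += w;
        }
        return num / den;
    }

    public static void main(String[] args) {
        double[] xs = {0, 1, 0, 1}, ys = {0, 0, 1, 1};
        double[] temp = {20.0, 21.5, 19.5, 22.0};  // e.g. temperature readings from four nodes
        System.out.println(estimate(xs, ys, temp, 0.5, 0.5));
    }
}
```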
Abstract:
Networked control systems (NCSs) are spatially distributed systems in which the communication between sensors, actuators and controllers occurs through a shared band-limited digital communication network. However, the use of a shared communication network, in contrast to using several dedicated independent connections, introduces new challenges, which are even more acute in large-scale and dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network to be used in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data offers a good trade-off between accuracy in the measurement of the input signals and the delay to actuation, both of which are important aspects for the quality of control. We introduce a variation of the state-of-the-art algorithms which we prove to perform better because it takes into account the changes of the input signal over time within the process of obtaining the approximate interpolation.
Abstract:
Cooperating objects (COs) is a recently coined term used to signify the convergence of classical embedded computer systems, wireless sensor networks and robotics and control. We present essential elements of a reference architecture for scalable data processing for the CO paradigm.
Abstract:
Beyond the classical statistical approaches (determination of basic statistics, regression analysis, ANOVA, etc.), a new set of applications of different statistical techniques has increasingly gained relevance in the analysis, processing and interpretation of data concerning the characteristics of forest soils. This can be seen in some of the recent publications in the context of Multivariate Statistics. These new methods require additional care that is not always included or referred to in some approaches. In the particular case of geostatistical applications it is necessary, besides geo-referencing all data acquisition, to collect the samples in regular grids and in sufficient quantity so that the variograms can reflect the spatial distribution of soil properties in a representative manner. As for the great majority of Multivariate Statistics techniques (Principal Component Analysis, Correspondence Analysis, Cluster Analysis, etc.), although in most cases they do not require the assumption of a normal distribution, they nevertheless need a proper and rigorous strategy for their use. In this work we present some reflections on these methodologies and, in particular, on the main constraints that often occur during the data collection process and on the various ways in which these different techniques can be linked. At the end, illustrations of some particular cases of the application of these statistical methods are also presented.
Abstract:
Master's degree in Mechanical Engineering – Specialisation in Industrial Management
Abstract:
Nowadays, data centers are large energy consumers, and this consumption is expected to increase further in the coming years given the growth of cloud services. A large portion of this power consumption is due to the control of physical parameters of the data center (such as temperature and humidity). However, these physical parameters are tightly coupled with computations, and even more so in upcoming data centers, where the location of workloads can vary substantially due, for example, to workloads being moved within the cloud infrastructure hosted in the data center. Therefore, managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering the physical parameters of a large data center at a very high temporal and spatial resolution of the sensor measurements. We believe this is an important characteristic for enabling more accurate heat-flow models of the data center and, with them, finding opportunities to optimize energy consumption. Having a high-resolution picture of the data center conditions also makes it possible to minimize local hot-spots, perform more accurate predictive maintenance (failures in all infrastructure equipment can be detected more promptly) and bill more accurately. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
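The abstract defines the structure of the underlying messaging system but does not reproduce it here; the sketch below shows one possible message layout and an in-memory stand-in for the broker, purely as an assumption of how sensor readings could be tagged and distributed (topic names and fields are illustrative, not the paper's schema).

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Illustrative sensor message and in-memory stand-in for the data-center messaging layer. */
public class SensorPublisher {

    static final class Reading {
        final String sensorId;   // e.g. "rack12/inlet-temp" (hypothetical naming scheme)
        final long timestampMs;  // high temporal resolution: one message per sample
        final double value;      // physical parameter (temperature, humidity, ...)
        Reading(String sensorId, long timestampMs, double value) {
            this.sensorId = sensorId;
            this.timestampMs = timestampMs;
            this.value = value;
        }
    }

    // In a real deployment this queue would be a topic in a messaging middleware.
    static final BlockingQueue<Reading> topic = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        topic.put(new Reading("rack12/inlet-temp", System.currentTimeMillis(), 24.7));
        Reading r = topic.take();  // a consumer, e.g. a heat-flow model, picks it up
        System.out.println(r.sensorId + " " + r.timestampMs + " " + r.value);
    }
}
```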
Abstract:
Although we have many electric devices at home, there are only a few systems to evaluate, monitor and control them. Sometimes users go out and leave their electric devices turned on, which can waste energy and cause dangerous situations. Therefore, most users may want to know the usage state of their electrical appliances through their mobile devices in a pervasive way. In this paper, we propose an Intelligent Supervisory Control System to evaluate, monitor and control the use of electric devices at home, from outside. The data transferred for evaluation, monitoring and control, such as the user's location and the state of the home (e.g. nobody at home), may be exposed to attacks leading to dangerous situations. Our model therefore includes a location privacy module and an encryption module to protect the user's location and data. The Intelligent Supervisory Control System gives the user the ability to manage electricity loads by means of a multi-agent system involving evaluation, monitoring, control and energy resource agents.
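The abstract mentions an encryption module for the transferred location and home-state data without naming a cipher; the sketch below assumes symmetric AES-GCM from the standard javax.crypto API as one plausible way such a module could protect a state message before it leaves the home gateway.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

/** Illustrative encryption of a home-state message before it is sent to the user's mobile device. */
public class StateEncryptor {
    public static void main(String[] args) throws Exception {
        // Symmetric key shared by the home gateway and the mobile device (key exchange not shown).
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        // Fresh random nonce for AES-GCM.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));

        // Hypothetical home-state payload; field names are illustrative.
        String state = "{\"device\":\"oven\",\"on\":true,\"nobodyHome\":true}";
        byte[] ciphertext = cipher.doFinal(state.getBytes(StandardCharsets.UTF_8));

        System.out.println(Base64.getEncoder().encodeToString(ciphertext));
    }
}
```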
Abstract:
Cyber-Physical Intelligence is a new concept integrating Cyber-Physical Systems and Intelligent Systems. The paradigm is centered on incorporating intelligent behavior in cyber-physical systems, which until now have been too oriented towards operational technological aspects. In this paper we describe the use of Cyber-Physical Intelligence in the context of Power Systems, namely the use of Intelligent SCADA (Supervisory Control and Data Acquisition) systems at different levels of the Power System, from the Power Generation, Transmission and Distribution Control Centers down to the customers' houses.
Abstract:
This paper presents a methodology supported by the knowledge discovery in databases (KDD) process, in order to determine the failure probability of electrical equipment belonging to a real high-voltage electrical network. Data Mining (DM) techniques are used to discover a set of failure probabilities and, therefore, to extract knowledge concerning the unavailability of electrical equipment such as power transformers and high-voltage power lines. The framework includes several steps: the analysis of the real database, data pre-processing, the application of DM algorithms and, finally, the interpretation of the discovered knowledge. To validate the proposed methodology, a case study which includes real databases is used. These data carry heavy uncertainty due to climate conditions; for this reason, fuzzy logic was used to determine the set of failure probabilities of the electrical components in order to re-establish the service. The results reflect the interesting potential of this approach and encourage further research on the topic.
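The abstract states that fuzzy logic handles the climate-related uncertainty but gives no membership functions; the sketch below uses hypothetical triangular fuzzy sets over a single climate variable (wind speed) only to illustrate how crisp measurements could be turned into degrees that fuzzy rules can act on.

```java
/** Illustrative triangular fuzzy sets over a climate variable; labels and break-points are hypothetical. */
public class FuzzyClimate {

    // Triangular membership function with feet at a and c and peak at b.
    static double triangular(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x <= b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    public static void main(String[] args) {
        double windKmH = 55.0;
        double low      = triangular(windKmH, -1, 0, 40);
        double moderate = triangular(windKmH, 20, 50, 80);
        double severe   = triangular(windKmH, 60, 100, 140);
        // These degrees would feed fuzzy rules that adjust the equipment failure probabilities.
        System.out.printf("low=%.2f moderate=%.2f severe=%.2f%n", low, moderate, severe);
    }
}
```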
Abstract:
Over time, the XML markup language has acquired considerable importance in application development, standards definition and the representation of large volumes of data, such as databases. Today, processing XML documents in a short period of time is a critical activity in a wide range of applications, which requires choosing the most appropriate mechanism to parse XML documents quickly and efficiently. When using a programming language for XML processing, such as Java, it becomes necessary to use effective mechanisms, e.g. APIs, which allow large documents to be read and processed in an appropriate manner. This paper presents a performance study of the main existing Java APIs that deal with XML documents, in order to identify the most suitable one for processing large XML files.
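As a rough illustration of how such a comparison can be set up, the sketch below times two of the standard Java XML APIs, DOM (javax.xml.parsers.DocumentBuilderFactory) and SAX (javax.xml.parsers.SAXParserFactory), on one file; the file name and the single-run timing are placeholders, not the benchmark protocol used in the paper.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;
import java.io.File;

/** Rough timing of DOM (whole tree in memory) versus SAX (streaming, event-based) parsing. */
public class XmlParseBenchmark {
    public static void main(String[] args) throws Exception {
        File xml = new File("large-document.xml");  // placeholder file name

        long t0 = System.nanoTime();
        DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(xml);
        long domMs = (System.nanoTime() - t0) / 1_000_000;

        long t1 = System.nanoTime();
        SAXParserFactory.newInstance().newSAXParser().parse(xml, new DefaultHandler());
        long saxMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.println("DOM: " + domMs + " ms, SAX: " + saxMs + " ms");
    }
}
```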
Abstract:
Master's degree in Electrical and Computer Engineering