964 results for Ethernet distributed Data Acquisition


Relevance: 100.00%

Abstract:

Future distribution systems will have to deal with an intensive penetration of distributed energy resources while ensuring reliable and secure operation according to the smart grid paradigm. SCADA (Supervisory Control and Data Acquisition) is an essential infrastructure for this evolution. This paper proposes a new conceptual design of an intelligent SCADA with a decentralized, flexible approach that is adaptive to the context (context awareness). This SCADA model is used to support the energy resource management undertaken by a distribution network operator (DNO). Resource management considers all the involved costs, power flows, and electricity prices, allowing the use of network reconfiguration and load curtailment. Locational Marginal Prices (LMP) are evaluated and used in specific situations to apply Demand Response (DR) programs on a global or a local basis. The paper includes a case study using a 114-bus distribution network and load demand based on real data.
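
As a hedged illustration of how LMP values might trigger demand response in such a scheme (the bus names, price threshold, and curtailment fraction below are assumptions for illustration, not values or logic from the paper), a minimal Python sketch:

```python
# Illustrative sketch only: request demand response at buses whose locational
# marginal price exceeds a limit. All numbers and names are hypothetical.
LMP_LIMIT = 90.0  # EUR/MWh, assumed threshold

lmp_by_bus = {"bus_17": 72.5, "bus_52": 104.3, "bus_88": 95.1}

def dr_requests(lmp: dict, limit: float, curtail_fraction: float = 0.1) -> dict:
    """Return, per congested bus, the fraction of load to curtail."""
    return {bus: curtail_fraction for bus, price in lmp.items() if price > limit}

print(dr_requests(lmp_by_bus, LMP_LIMIT))  # {'bus_52': 0.1, 'bus_88': 0.1}
```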

Relevance: 100.00%

Abstract:

Applications involving biosignals, such as Electrocardiography (ECG), are becoming more pervasive with the extension towards non-intrusive scenarios, helping to target ambulatory healthcare monitoring, emotion assessment, and many other uses. In this study we introduce a new type of silver/silver chloride (Ag/AgCl) electrode based on a paper substrate and produced using an inkjet printing technique. This type of electrode can broaden the applications of biosignal acquisition technologies in everyday life, given the advantages, such as cost reduction and easier recycling, that result from the approach explored in our work. We performed a comparison study to assess the quality of this new electrode type, in which ECG data were collected with three types of Ag/AgCl electrodes: i) gelled; ii) dry; iii) paper-based inkjet-printed. We also compared the performance of each electrode when acquired with a professional-grade, gold-standard device and with a low-cost platform. Experimental results showed that data acquired using our proposed inkjet-printed electrode are highly correlated with data obtained through conventional electrodes. Moreover, the electrodes are robust to both high-end and low-end data acquisition devices.
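
A minimal sketch of the kind of correlation analysis used to compare electrode types; the signal arrays and sampling rate below are synthetic placeholders, not the study's data:

```python
# Quantify agreement between two simultaneously acquired ECG traces, e.g. one
# from a gelled Ag/AgCl electrode and one from a paper-based inkjet-printed
# electrode. The signals here are stand-ins generated for illustration.
import numpy as np

fs = 1000                                   # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                # 10 s of data

ecg_gelled = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
ecg_paper = np.sin(2 * np.pi * 1.2 * t) + 0.08 * np.random.randn(t.size)

def normalize(x):
    """Remove the DC offset and scale to unit variance before comparison."""
    return (x - x.mean()) / x.std()

# Pearson correlation between the normalized traces: values close to 1
# indicate that the paper electrode reproduces the reference morphology.
r = np.corrcoef(normalize(ecg_gelled), normalize(ecg_paper))[0, 1]
print(f"correlation between electrodes: {r:.3f}")
```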

Relevance: 100.00%

Abstract:

Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.

Relevance: 100.00%

Abstract:

The increasing and intensive integration of distributed energy resources into distribution systems requires adequate methodologies to ensure secure operation according to the smart grid paradigm. In this context, SCADA (Supervisory Control and Data Acquisition) systems are an essential infrastructure. This paper presents a conceptual design of a communication and resource management scheme based on an intelligent SCADA with a decentralized, flexible approach that is adaptive to the context (context awareness). The methodology is used to support energy resource management, considering all the involved costs, power flows, and electricity prices, and leading to network reconfiguration. The methodology also addresses the definition of the information access permissions of each player to each resource. The paper includes a 33-bus network used in a case study that considers an intensive use of distributed energy resources in five distinct operation contexts.
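
A hedged sketch of how per-player, per-resource access permissions could be represented and checked in such a scheme; the player names, resources, and contexts are illustrative assumptions, not the paper's model:

```python
# Illustrative sketch only: a permission table an intelligent SCADA could
# consult before granting read or control access, where the allowed actions
# depend on the current operation context.
PERMISSIONS = {
    # (player, resource): {context: allowed actions}
    ("DNO", "feeder_switch_12"): {"normal": {"read", "control"},
                                  "emergency": {"read", "control"}},
    ("aggregator_A", "pv_unit_7"): {"normal": {"read"},
                                    "emergency": {"read", "control"}},
}

def allowed(player: str, resource: str, context: str, action: str) -> bool:
    """Return True if the player may perform the action in this context."""
    return action in PERMISSIONS.get((player, resource), {}).get(context, set())

print(allowed("aggregator_A", "pv_unit_7", "normal", "control"))     # False
print(allowed("aggregator_A", "pv_unit_7", "emergency", "control"))  # True
```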

Relevance: 100.00%

Abstract:

Thesis submitted to Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfilment of the requirements for the degree of Master in Computer Science

Relevance: 100.00%

Abstract:

Nowadays, data centers are large energy consumers, and this trend is expected to increase further in the coming years given the growth of cloud services. A large portion of this power consumption is due to the control of the physical parameters of the data center (such as temperature and humidity). However, these physical parameters are tightly coupled with computation, and even more so in upcoming data centers, where the location of workloads can vary substantially, for example because workloads are moved within the cloud infrastructure hosted in the data center. Therefore, managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). In this paper, we describe a data collection and distribution architecture that enables gathering the physical parameters of a large data center at very high temporal and spatial resolution of the sensor measurements. We consider this an important characteristic for enabling more accurate heat-flow models of the data center and, with them, for finding opportunities to optimize energy consumption. Having a high-resolution picture of the data center conditions also enables minimizing local hot spots, performing more accurate predictive maintenance (failures in all infrastructure equipment can be detected more promptly), and more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
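
A minimal sketch of the kind of per-reading record such a collection layer might publish on a messaging backbone; the field names and JSON encoding are assumptions for illustration, not the paper's actual message format:

```python
# Hypothetical sensor-reading record for a high-resolution sensing layer.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    sensor_id: str        # unique id of the temperature/humidity node
    rack: str             # physical location within the data center
    timestamp: float      # seconds since the epoch
    temperature_c: float
    humidity_pct: float

def publish(reading: SensorReading) -> bytes:
    """Serialize a reading for the messaging backbone (e.g. a pub/sub topic)."""
    return json.dumps(asdict(reading)).encode("utf-8")

msg = publish(SensorReading("node-042", "rack-B07", time.time(), 27.4, 41.0))
print(msg)
```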

Relevance: 100.00%

Abstract:

Large-scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow for concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle Trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, which is necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, proportionally to the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
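
For intuition, a short sketch of the causality check that logical clocks enable, using classic version vectors rather than the paper's Bitmapped Version Vectors or Dotted Causal Containers:

```python
# Two versions of a key are concurrent (conflicting) when neither logical
# clock dominates the other. Clocks are maps from node id to event count.
def dominates(a: dict, b: dict) -> bool:
    """True if clock `a` has seen every event that clock `b` has seen."""
    return all(a.get(node, 0) >= count for node, count in b.items())

def compare(a: dict, b: dict) -> str:
    if dominates(a, b) and dominates(b, a):
        return "equal"
    if dominates(a, b):
        return "a happened after b"
    if dominates(b, a):
        return "b happened after a"
    return "concurrent (conflicting writes)"

# Two replicas updated the same key independently:
print(compare({"n1": 2, "n2": 1}, {"n1": 1, "n2": 3}))  # concurrent
```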

Relevance: 100.00%

Abstract:

Integrated master's dissertation in Biomedical Engineering (specialization in Medical Informatics).

Relevance: 100.00%

Abstract:

A computerized handheld procedure is presented in this paper. It is intended as a complementary database tool to enhance prospective risk analysis in the field of occupational health. The Pendragon Forms software (version 3.2) has been used to implement acquisition procedures on Personal Digital Assistants (PDAs) and to transfer data to a computer in MS Access format. The proposed data acquisition strategy relies on the risk assessment method practiced at the Institute of Occupational Health Sciences (IST). It involves the use of a systematic hazard list and semi-quantitative risk assessment scales. A set of 7 modular forms has been developed to cover the basic needs of field audits. Despite the minor drawbacks observed, the results obtained so far show that handhelds are adequate to support field risk assessment and follow-up activities. Further improvements must still be made in order to increase the tool's effectiveness and field adequacy.
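
A hedged sketch of a generic semi-quantitative risk rating (severity times probability on ordinal scales); the scales and labels below are illustrative and not taken from the IST method:

```python
# Generic semi-quantitative risk matrix; all scale values are assumptions.
SEVERITY = {"minor": 1, "moderate": 2, "serious": 3, "major": 4}
PROBABILITY = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}

def risk_score(severity: str, probability: str) -> int:
    """Combine the two ordinal ratings into a single priority score."""
    return SEVERITY[severity] * PROBABILITY[probability]

# Example hazard entry as it might be captured on a handheld form:
hazard = {"hazard": "manual handling of loads",
          "severity": "serious",
          "probability": "likely"}
print(risk_score(hazard["severity"], hazard["probability"]))  # 9
```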

Relevance: 100.00%

Abstract:

Nowadays, there are several services and applications that allow users to locate and move around different tourist areas using a mobile device. These systems can be used either over the internet or by downloading an application at specific places such as a visitor centre. Although such applications are able to facilitate the location of and search for points of interest, in most cases these services and applications do not meet the needs of each user. This paper aims to provide a solution by studying the main projects, services, and applications, their routing algorithms, and their treatment of real geographical data on Android mobile devices, focusing on data acquisition and processing to improve routing searches in off-line environments.
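
For illustration, a minimal sketch of the shortest-path search (Dijkstra's algorithm) that an off-line routing engine can run over locally stored map data; the toy road graph below is a placeholder for real geographical data:

```python
# Dijkstra's algorithm over a small weighted road graph.
import heapq

def dijkstra(graph: dict, start: str, goal: str) -> float:
    """Return the cost of the cheapest path from start to goal."""
    queue = [(0.0, start)]
    best = {start: 0.0}
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        for neighbour, weight in graph.get(node, {}).items():
            new_cost = cost + weight
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return float("inf")

roads = {"A": {"B": 2.0, "C": 5.0}, "B": {"C": 1.0, "D": 4.0}, "C": {"D": 1.0}}
print(dijkstra(roads, "A", "D"))  # 4.0 via A-B-C-D
```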

Relevance: 100.00%

Abstract:

This final-year project presents the design principles and prototype implementation of BIMS (Biomedical Information Management System), a flexible software system that provides an infrastructure to manage all information required by biomedical research projects. The BIMS project was initiated with the motivation of solving several limitations in the medical data acquisition of some research projects in which Universitat Pompeu Fabra takes part. These limitations, stemming from the lack of control mechanisms to constrain the information submitted by clinicians, degrade data quality. BIMS can easily be adapted to manage information from a wide variety of clinical studies and is not limited to a given clinical specialty. The software can manage both textual information, such as clinical data (measurements, demographics, diagnostics, etc.), and several kinds of medical images (magnetic resonance imaging, computed tomography, etc.). Moreover, BIMS provides a web-based graphical user interface and is designed to be deployed in a distributed and multiuser environment. It is built on top of open-source software products and frameworks. Specifically, BIMS has been used to represent all clinical data currently used within the CardioLab platform (an ongoing project managed by Universitat Pompeu Fabra), demonstrating that it is a solid software system that could fulfill the requirements of a real production environment.
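
A hedged sketch of the kind of input constraint such a system can enforce on clinician-submitted data; the field names and plausibility ranges below are illustrative assumptions, not the actual BIMS data model:

```python
# Reject clinician-submitted measurements with wrong units or implausible
# values; ranges and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Measurement:
    name: str
    value: float
    unit: str

VALID_RANGES = {"heart_rate": (20.0, 250.0, "bpm"),
                "systolic_bp": (50.0, 260.0, "mmHg")}

def validate(m: Measurement) -> None:
    """Raise ValueError for out-of-range values or mismatched units."""
    low, high, unit = VALID_RANGES[m.name]
    if m.unit != unit:
        raise ValueError(f"{m.name}: expected unit {unit}, got {m.unit}")
    if not low <= m.value <= high:
        raise ValueError(f"{m.name}: {m.value} outside [{low}, {high}]")

validate(Measurement("heart_rate", 72.0, "bpm"))    # accepted
# validate(Measurement("heart_rate", 720.0, "bpm")) # would raise ValueError
```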

Relevance: 100.00%

Abstract:

In recent years there has been explosive growth in the development of adaptive and data-driven methods. One efficient, data-driven approach is based on statistical learning theory (Vapnik 1998). The theory is based on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we are trying not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and to reduce the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust and nonlinear data models with excellent generalisation abilities, which is very important both for monitoring and forecasting. SVM are extremely good when the input space is high-dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVM use only support vectors to derive decision boundaries. This opens the way to sampling optimization, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
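
As a small illustration of these ideas (using scikit-learn, which is not the implementation referenced in the text), the sketch below fits an RBF-kernel SVM on synthetic data and reports how few training points end up as support vectors:

```python
# Fit a nonlinear SVM classifier and inspect the number of support vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # nonlinear boundary

model = SVC(kernel="rbf", C=1.0, gamma="scale")        # RBF-kernel SVM
model.fit(X, y)

# Only the support vectors define the decision boundary, which is what
# enables the sampling-optimization ideas mentioned in the text.
print("support vectors:", model.support_vectors_.shape[0], "of", X.shape[0])
```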

Relevance: 100.00%

Abstract:

Gaia is the most ambitious space astrometry mission currently envisaged and is a technological challenge in all its aspects. We describe a proposal for the payload data handling system of Gaia, as an example of a high-performance, real-time, concurrent, and pipelined data system. This proposal includes the front-end systems for the instrumentation, the data acquisition and management modules, the star data processing modules, and the payload data handling unit. We also review other payload and service module elements and we illustrate a data flux proposal.

Relevance: 100.00%

Abstract:

This report is divided into two volumes. This volume (Volume I) summarizes a structural health monitoring (SHM) system that was developed for the Iowa DOT to remotely and continuously monitor fatigue-critical bridges (FCB) to aid in the detection of crack formation. The developed FCB SHM system enables bridge owners to remotely monitor FCB for gradual or sudden damage formation. The SHM system utilizes fiber Bragg grating (FBG) fiber optic sensors (FOSs) to measure strains at critical locations. The strain-based SHM system is trained with measured performance data to identify typical bridge response when subjected to ambient traffic loads, and that knowledge is used to evaluate newly collected data. At specified intervals, the SHM system autonomously generates evaluation reports that summarize the current behavior of the bridge. The evaluation reports are collected and distributed to the bridge owner for interpretation and decision making.

Volume II summarizes the development and demonstration of an autonomous, continuous SHM system that can be used to monitor typical girder bridges. The developed SHM system can be grouped into two main categories: an office component and a field component. The office component is a structural analysis software program that can be used to generate thresholds, which are used for identifying isolated events. The field component includes hardware and field monitoring software that performs data processing and evaluation. The hardware system consists of sensors, data acquisition equipment, and a communication system backbone. The field monitoring software has been developed such that, once started, it will operate autonomously with minimal user interaction. In general, the SHM system features two key uses. First, the system can be integrated into an active bridge management system that tracks usage and structural changes. Second, the system helps owners identify damage and deterioration.
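
As a hedged illustration of the threshold-based evaluation of isolated events described above (the threshold and strain record below are arbitrary placeholders, not values from the Iowa DOT system), a short sketch:

```python
# Flag samples in a strain record that exceed an analysis-derived threshold.
import numpy as np

# Simulated microstrain record from one FBG sensor under ambient traffic,
# with a brief isolated event injected near the end.
strain = np.concatenate([np.random.normal(0, 20, 5000),
                         np.random.normal(0, 20, 50) + 180])

UPPER_THRESHOLD = 120.0   # microstrain, assumed to come from the office software

def exceedances(signal: np.ndarray, threshold: float) -> np.ndarray:
    """Return the sample indices where the threshold is exceeded."""
    return np.flatnonzero(np.abs(signal) > threshold)

events = exceedances(strain, UPPER_THRESHOLD)
print(f"{events.size} samples exceeded the threshold")
```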
