888 results for Supervisory Control and Data Acquisition (SCADA)


Relevance:

100.00%

Publisher:

Abstract:

We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter, while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first argue that for a dense collection of nodes confined to a small region, single cell operation is efficient for single user decoding transceivers. Then, operating the dense ad hoc network (described above) as a single cell, we study the optimal hop length and power control that maximizes the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Θ_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network, when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P̄_t) × Θ_opt, with d_opt scaling as P̄_t^(1/η), where P̄_t is the available time-average transmit power and η is the path loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.
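
As a quick numerical illustration of the scaling law reported above, the Python sketch below evaluates d_opt ∝ P̄_t^(1/η) and the resulting transport capacity d_opt(P̄_t) × Θ_opt for a few hypothetical power levels; the proportionality constant c, the value of Θ_opt, and the path-loss exponent are assumed placeholders, not values from the paper.

```python
# Illustrative sketch (not from the paper): evaluate the reported scaling
# d_opt ~ P_t^(1/eta) and transport capacity = d_opt(P_t) * Theta_opt
# for hypothetical parameter values.

def optimal_hop_length(p_avg: float, eta: float, c: float = 1.0) -> float:
    """Optimal hop length in meters, assuming d_opt = c * p_avg**(1/eta)."""
    return c * p_avg ** (1.0 / eta)

def transport_capacity(p_avg: float, eta: float, theta_opt: float, c: float = 1.0) -> float:
    """Transport capacity in bit-meters/s: d_opt(p_avg) * Theta_opt."""
    return optimal_hop_length(p_avg, eta, c) * theta_opt

if __name__ == "__main__":
    theta_opt = 5e6   # hypothetical aggregate rate carried by the cell, bits/s
    eta = 4.0         # hypothetical path-loss exponent
    for p_avg in (0.1, 1.0, 10.0):   # time-average transmit power, watts
        print(p_avg, transport_capacity(p_avg, eta, theta_opt))
```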

Relevance:

100.00%

Publisher:

Abstract:

This paper addresses the problem of detecting and resolving conflicts due to timing constraints imposed by features in real-time and hybrid systems. We consider systems composed of a base system with multiple features or controllers, each of which independently advises the system on how to react to input events so as to conform to its individual specification. We propose a methodology for developing such systems in a modular manner based on the notion of conflict-tolerant features, which are designed to continue offering advice even when their advice has been overridden in the past. We give a simple priority-based scheme for composing such features, which guarantees the maximal use of each feature. We provide a formal framework for specifying such features, and a compositional technique for verifying systems developed in this framework.
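
The abstract does not spell out the composition rule, so the Python sketch below is only one plausible reading of a priority-based composer of conflict-tolerant features: every feature keeps producing advice on each event, and the composer intersects that advice in priority order, overriding a feature for the current event only when honouring it would leave no acceptable action. The Feature interface and the example actions are hypothetical.

```python
# Illustrative sketch only: one plausible priority-based composition of
# "conflict-tolerant" features. Feature semantics here are hypothetical,
# not taken from the paper's formal framework.
from typing import Dict, List, Set


class Feature:
    def __init__(self, name: str, allowed: Dict[str, Set[str]]):
        self.name = name
        self.allowed = allowed  # event -> set of acceptable system actions

    def advise(self, event: str, all_actions: Set[str]) -> Set[str]:
        # A conflict-tolerant feature keeps advising on every event,
        # even if its advice was overridden on earlier events.
        return self.allowed.get(event, set(all_actions))


def compose(features: List[Feature], event: str, all_actions: Set[str]) -> Set[str]:
    """Intersect advice in priority order; override a feature for this event
    only if honouring it would leave no acceptable action."""
    acceptable = set(all_actions)
    for f in features:                      # listed highest priority first
        advice = f.advise(event, all_actions)
        if acceptable & advice:
            acceptable &= advice
        # else: f is overridden for this event but continues to run
    return acceptable


if __name__ == "__main__":
    actions = {"open", "close", "hold"}
    safety = Feature("safety", {"overheat": {"close"}})
    throughput = Feature("throughput", {"overheat": {"open"}, "idle": {"open"}})
    print(compose([safety, throughput], "overheat", actions))  # -> {'close'}
```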

Relevance:

100.00%

Publisher:

Abstract:

NMR spectroscopy has witnessed tremendous advancements in recent years with the development of new methodologies for structure determination and the availability of high-field-strength spectrometers equipped with cryogenic probes. Supported by these advancements, a new dimension of NMR research has emerged that aims to increase the speed with which data are collected and analyzed. Several novel methodologies have been proposed in this direction. This review focuses on the principles on which these different approaches are based, with an emphasis on G-matrix Fourier transform NMR spectroscopy.

Relevance:

100.00%

Publisher:

Abstract:

Opportunistic selection in multi-node wireless systems improves system performance by selecting the "best" node and using it for data transmission. In these systems, each node has a real-valued local metric, which is a measure of its ability to improve system performance. Our goal is to identify the best node, i.e., the one with the largest metric. We propose, analyze, and optimize a new distributed, yet simple, node selection scheme that combines the timer scheme with power control. In it, each node sets a timer and a transmit power level as a function of its metric. The power control is designed such that the best node is captured even if other nodes transmit simultaneously with it. We develop several structural properties of the optimal metric-to-timer-and-power mapping, which maximizes the probability of selecting the best node. These significantly reduce the computational complexity of finding the optimal mapping and yield valuable insights about it. We show that the proposed scheme is scalable and significantly outperforms the conventional timer scheme. We investigate the effect of the number of receive power levels, and find that the practical peak power constraint has a negligible impact on the performance of the scheme.
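
As a rough illustration of the kind of metric-to-timer-and-power mapping involved (the optimal mapping derived in the paper is not reproduced here), the Python sketch below gives higher-metric nodes earlier timer expiries and higher discrete power levels, so that the best node can be captured even when other timers expire almost simultaneously; the timer range, power levels, and capture threshold are all assumed values.

```python
import random

# Hypothetical illustration of a metric-to-(timer, power) mapping; the paper's
# optimal mapping is derived analytically and is not reproduced here.
T_MAX = 10e-3              # maximum timer value, seconds (assumed)
POWER_LEVELS = [1, 4, 16]  # discrete transmit power levels (assumed units)
CAPTURE_RATIO = 3.0        # a node is captured if its power exceeds the sum of
                           # simultaneous transmissions by this factor (assumed)

def timer_and_power(metric: float):
    """Map a metric in [0, 1] to a timer (larger metric -> earlier expiry)
    and a power level (larger metric -> higher power)."""
    timer = T_MAX * (1.0 - metric)
    power = POWER_LEVELS[min(int(metric * len(POWER_LEVELS)), len(POWER_LEVELS) - 1)]
    return timer, power

def select(metrics, slot=0.5e-3):
    """Group nodes whose timers expire within one vulnerability window and
    check whether the strongest of them is captured."""
    nodes = sorted(timer_and_power(m) + (m,) for m in metrics)
    first = [n for n in nodes if n[0] - nodes[0][0] < slot]   # simultaneous senders
    best = max(first, key=lambda n: n[1])
    interference = sum(p for _t, p, _m in first) - best[1]
    captured = interference == 0 or best[1] / interference >= CAPTURE_RATIO
    return best[2] if captured else None

if __name__ == "__main__":
    random.seed(1)
    metrics = [random.random() for _ in range(20)]
    print("selected metric:", select(metrics), "true best:", max(metrics))
```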

Relevance:

100.00%

Publisher:

Abstract:

On the materials scale, thermoelectric efficiency is defined by the dimensionless figure of merit zT. This value is made up of three material components in the form zT = Tα²/ρκ, where α is the Seebeck coefficient, ρ is the electrical resistivity, and κ is the total thermal conductivity. Improving zT therefore requires reducing κ and ρ while increasing α. However, due to the inter-relation of the electrical and thermal properties of materials, typical routes to thermoelectric enhancement come in one of two forms. The first is to isolate the electronic properties and increase α without negatively affecting ρ. Techniques like electron filtering, quantum confinement, and density-of-states distortions have been proposed to enhance the Seebeck coefficient in thermoelectric materials. However, it has been difficult to prove the efficacy of these techniques. More recently, efforts to manipulate the band degeneracy in semiconductors have been explored as a means to enhance α.

The other route to thermoelectric enhancement is through minimizing the thermal conductivity, κ. More specifically, the thermal conductivity can be broken into two parts, an electronic term and a lattice term, κ_e and κ_l respectively. From a functional-materials standpoint, a reduction in the lattice thermal conductivity should have a minimal effect on the electronic properties, so most routes incorporate techniques that focus on reducing the lattice thermal conductivity. The components that make up κ_l (κ_l = (1/3)Cνl) are the heat capacity (C), the phonon group velocity (ν), and the phonon mean free path (l). Since altering the heat capacity and group velocity is extremely difficult, the phonon mean free path is most often the target of reduction.
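
The two expressions above can be evaluated directly; the short Python sketch below computes zT from α, ρ, and κ, and κ_l from C, ν, and l, using hypothetical order-of-magnitude inputs purely for illustration.

```python
# Direct evaluation of zT = T*alpha^2/(rho*kappa) and kappa_l = (1/3)*C*v*l
# with hypothetical, order-of-magnitude input values (SI units).

def figure_of_merit(T, alpha, rho, kappa):
    """Dimensionless thermoelectric figure of merit zT."""
    return T * alpha**2 / (rho * kappa)

def lattice_thermal_conductivity(C, v, l):
    """kappa_l from volumetric heat capacity C (J/m^3/K), phonon group
    velocity v (m/s), and phonon mean free path l (m)."""
    return C * v * l / 3.0

if __name__ == "__main__":
    zT = figure_of_merit(T=300.0, alpha=200e-6, rho=1e-5, kappa=1.5)
    kl = lattice_thermal_conductivity(C=1.2e6, v=2000.0, l=1e-9)
    print(f"zT = {zT:.2f}, kappa_l = {kl:.2f} W/m/K")
```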

Past routes to decreasing the phonon mean free path have been alloying and grain size reduction. However, with these techniques the electron mobility is often negatively affected, because in alloying any perturbation to the periodic potential can cause additional adverse carrier scattering. Grain size reduction has been another successful route to enhancing zT because of the significant difference between electron and phonon mean free paths. However, grain size reduction is erratic in anisotropic materials due to the orientation-dependent transport properties. By contrast, microstructure formation in both equilibrium and nonequilibrium processing routines can be used to effectively reduce the phonon mean free path and thereby enhance the figure of merit.

This work starts with a discussion of several different deliberate microstructure varieties. Control of the morphology and, ultimately, of the structure size and spacing is discussed at length. Since the material example used throughout this thesis is anisotropic, a short primer on zone melting is presented as an effective route to growing homogeneous and oriented polycrystalline material. The resulting microstructure formation and control are presented specifically for In₂Te₃-Bi₂Te₃ composites, and the transport properties pertinent to thermoelectric materials are reported. Finally, the transport properties of iodine-doped Bi₂Te₃ are presented and discussed as a re-evaluation of the literature data in light of what is known today.

Relevance:

100.00%

Publisher:

Abstract:

We are at the cusp of a historic transformation of both communication systems and electricity systems. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
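
Balia's update rule is not given in the abstract, so the Python sketch below instead simulates a different, well-known coupled-increase rule (LIA, RFC 6356) on two subflows, only to illustrate the kind of coupled window dynamics such fluid models describe; the loss probabilities and round-trip times are arbitrary assumptions.

```python
import random

# Illustrative only: a coupled-increase multipath congestion controller in the
# style of LIA (RFC 6356), NOT the Balia rule proposed in the thesis.
# Loss probabilities and RTTs below are arbitrary.

def lia_alpha(w, rtt):
    """Coupling factor alpha from RFC 6356 (windows in packets, RTTs in s)."""
    total = sum(w)
    return total * max(wi / ri**2 for wi, ri in zip(w, rtt)) / \
           sum(wi / ri for wi, ri in zip(w, rtt)) ** 2

def simulate(rounds=2000, p=(0.01, 0.02), rtt=(0.05, 0.10), seed=0):
    random.seed(seed)
    w = [2.0, 2.0]                                   # congestion windows, packets
    for _ in range(rounds):
        a = lia_alpha(w, rtt)
        for i in range(len(w)):
            if random.random() < 1 - (1 - p[i]) ** w[i]:
                w[i] = max(w[i] / 2, 1.0)                   # multiplicative decrease
            else:
                w[i] += w[i] * min(a / sum(w), 1 / w[i])    # coupled increase
    return w

if __name__ == "__main__":
    print("final windows:", simulate())
```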

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm that is based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm, which incurs a loss in optimality of less than 3% on the test networks.
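
The abstract does not describe the heuristic itself; the Python sketch below only illustrates the general shape such a scheme could take: estimate branch flows with all switches closed (here just hypothetical numbers standing in for a relaxed-OPF solution) and keep the most heavily loaded branches that form a spanning tree, opening the rest to restore radiality.

```python
# Illustrative sketch of a loop-opening reconfiguration heuristic; the paper's
# algorithm is built on a convex relaxation of OPF, for which the flow
# estimates used here are hypothetical placeholders.

def reconfigure(nodes, branches):
    """branches: list of (u, v, estimated_flow). Returns (closed, opened)."""
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    closed, opened = [], []
    # Maximum-weight spanning tree on |flow|: Kruskal over branches sorted by
    # descending flow magnitude keeps heavily loaded branches energised.
    for u, v, flow in sorted(branches, key=lambda b: -abs(b[2])):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            closed.append((u, v))
        else:
            opened.append((u, v))       # opening this switch breaks a loop
    return closed, opened

if __name__ == "__main__":
    nodes = [0, 1, 2, 3]
    # hypothetical flow estimates from a relaxed solution with all switches closed
    branches = [(0, 1, 5.0), (1, 2, 3.0), (2, 3, 0.5), (3, 0, 4.0), (1, 3, 0.2)]
    print(reconfigure(nodes, branches))
```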

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results suggesting that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semi-definite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms that require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, which speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, and computation time is reduced by 100x compared with iterative methods.
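
To show why closed-form subproblem updates matter, the Python sketch below runs the standard scaled-form ADMM iteration (x-update, z-update, dual update) on a toy consensus problem in which every update has a closed form; it is the generic ADMM skeleton, not the OPF decomposition developed in the thesis.

```python
# Generic scaled-form ADMM skeleton with closed-form x-, z-, and dual updates,
# shown on a toy consensus problem: minimize sum_i (x_i - a_i)^2 s.t. x_i = z.
# This is NOT the OPF decomposition of the thesis; it only illustrates why
# closed-form subproblem updates (vs. inner iterative solvers) are fast.

def admm_consensus(a, rho=1.0, iters=100):
    n = len(a)
    x = [0.0] * n       # local variables (one per agent/bus)
    u = [0.0] * n       # scaled dual variables
    z = 0.0             # global consensus variable
    for _ in range(iters):
        # x-update: closed form per agent (can run in parallel)
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        # z-update: closed form (averaging)
        z = sum(x[i] + u[i] for i in range(n)) / n
        # dual update
        u = [u[i] + x[i] - z for i in range(n)]
    return z

if __name__ == "__main__":
    print(admm_consensus([1.0, 2.0, 6.0]))   # converges to the mean, 3.0
```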

Relevance:

100.00%

Publisher:

Abstract:

Large numbers of fishing vessels operating from ports in Latin America participate in surface longline fisheries in the eastern Pacific Ocean (EPO), and several species of sea turtles inhabit the grounds where these fleets operate. The endangered status of several sea turtle species, and the success of circle hooks (‘treatment’ hooks) in reducing turtle hookings in other ocean areas, as compared to J-hooks and Japanese-style tuna hooks (‘control’ hooks), prompted the initiation of a hook exchange program on the west coast of Latin America, the Eastern Pacific Regional Sea Turtle Program (EPRSTP)1. One of the goals of the EPRSTP is to determine if circle hooks would be effective at reducing turtle bycatch in artisanal fisheries of the EPO without significantly reducing the catch of marketable fish species. Participating fishers were provided with circle hooks at no cost and asked to replace the J/Japanese-style tuna hooks on their longlines with circle hooks in an alternating manner. Data collected by the EPRSTP show differences in longline gear and operational characteristics within and among countries. These aspects of the data, in addition to difficulties encountered with implementation of the alternating-hook design, pose challenges for analysis of these data.

Relevance:

100.00%

Publisher:

Abstract:

Catching methods and ways to improve them have engaged the attention of fishermen from time immemorial. Improvement was mostly by trial and error, as most of the earlier investigations were primarily directed towards the solution of biological problems related to fisheries. In recent years several fisheries laboratories have taken up studies on the working principles of many gears, such as trawls, gill nets, round haul nets, etc., with the aid of instruments developed for the purpose. The purpose of this article is to review the progress made in this field and in the development of telemetering instruments and continuous data acquisition systems.

Relevance:

100.00%

Publisher:

Abstract:

Infrastructure spatial data, such as the orientation, location, boundaries, and areas of in-place structures, play a very important role in many civil infrastructure development and rehabilitation applications, such as defect detection, site planning, on-site safety assistance, and others. To acquire these data, a number of modern optical-based spatial data acquisition techniques can be used. These techniques are based on stereo vision, optics, time of flight, etc., and have distinct characteristics, benefits, and limitations. The main purpose of this paper is to compare these optical-based spatial data acquisition techniques against civil infrastructure application requirements. To achieve this goal, the benefits and limitations of these techniques were identified. Subsequently, the techniques were compared according to application requirements such as spatial accuracy, automation of acquisition, portability of devices, and others. With the help of this comparison, unique characteristics of these techniques were identified so that practitioners will be able to select an appropriate technique for their own applications.

Relevance:

100.00%

Publisher:

Abstract:

To address the incompatibility between ARCNET networks and Ethernet, and to remedy the shortcomings of existing ARCNET network equipment monitoring and management systems, an ARCNET data acquisition and transmission system based on an embedded TCP/IP protocol stack is proposed. The principle and structure of the data acquisition system are analyzed, a hardware design scheme is given, and the software architecture for data acquisition and transmission as well as the embedded TCP/IP protocol stack are implemented. Tests of the system's real-time performance, reliability, and application results show that the system is easy to use and stable, offers good real-time behaviour and reliability, and outperforms existing ARCNET data acquisition systems overall.
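
As a loose illustration of the forwarding idea (acquired frames pushed to an Ethernet host over a TCP connection), here is a minimal Python sketch in which a hypothetical read_arcnet_frame() stands in for the ARCNET acquisition hardware; the host, port, and frame format are assumptions, not details of the system described above.

```python
import socket
import struct
import time

# Minimal illustrative relay: push acquired frames to a monitoring host over
# TCP. read_arcnet_frame() is a hypothetical stand-in for the real ARCNET
# acquisition hardware; host, port, and framing are assumptions.

def read_arcnet_frame() -> bytes:
    """Placeholder: pretend to read one acquisition frame from ARCNET."""
    time.sleep(0.1)
    return struct.pack("!Id", int(time.time()), 42.0)   # (timestamp, value)

def relay(host: str = "192.168.1.100", port: int = 5000) -> None:
    with socket.create_connection((host, port)) as sock:
        while True:
            frame = read_arcnet_frame()
            # Length-prefix each frame so the receiver can re-delimit the stream.
            sock.sendall(struct.pack("!H", len(frame)) + frame)

if __name__ == "__main__":
    relay()
```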

Relevance:

100.00%

Publisher:

Abstract:

A model is presented that deals with problems of motor control, motor learning, and sensorimotor integration. The equations of motion for a limb are parameterized and used in conjunction with a quantized, multi-dimensional memory organized by state variables. Descriptions of desired trajectories are translated into motor commands which will replicate the specified motions. The initial specification of a movement is free of information regarding the mechanics of the effector system. Learning occurs without the use of error correction when practice data are collected and analyzed.
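
A minimal sketch of the memory idea described above, under assumed interfaces: limb state variables are quantized into bins that index a table, each cell stores command parameters gathered from practice data, and commands are later recalled for states falling in the same cell. Neither the numbers nor the interface come from the paper.

```python
# Illustrative sketch (interface assumed, not taken from the paper): a
# quantized, state-indexed memory that stores motor-command parameters
# gathered from practice data and returns them for nearby states.

class StateSpaceMemory:
    def __init__(self, bins_per_dim=16, lo=-1.0, hi=1.0):
        self.bins = bins_per_dim
        self.lo, self.hi = lo, hi
        self.table = {}                     # quantized state -> command params

    def _quantize(self, state):
        step = (self.hi - self.lo) / self.bins
        return tuple(int((min(max(s, self.lo), self.hi) - self.lo) / step)
                     for s in state)

    def store(self, state, command):
        """Record the command observed at this state during practice."""
        self.table[self._quantize(state)] = command

    def recall(self, state):
        """Return the stored command for this region of state space, if any."""
        return self.table.get(self._quantize(state))

if __name__ == "__main__":
    mem = StateSpaceMemory()
    mem.store(state=(0.30, -0.10), command=(2.5, 0.7))   # e.g. joint torques
    print(mem.recall((0.31, -0.12)))                      # same cell -> (2.5, 0.7)
```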

Relevance:

100.00%

Publisher:

Abstract:

In this paper we examine a number of admission control and scheduling protocols for high-performance web servers based on a two-phase policy for serving HTTP requests. The first, "registration", phase involves establishing the TCP connection for the HTTP request and parsing/interpreting its arguments, whereas the second, "service", phase involves the service/transmission of data in response to the HTTP request. By introducing a delay between these two phases, we show that the performance of a web server could potentially be improved through the adoption of a number of scheduling policies that optimize the utilization of various system components (e.g. memory cache and I/O). In addition to its promise for improving the performance of a single web server, the delineation between the registration and service phases of an HTTP request may be useful for load-balancing purposes on clusters of web servers. We are investigating the use of such a mechanism as part of the Commonwealth testbed being developed at Boston University.
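
As a toy illustration of the two-phase structure (not the paper's actual protocols or the Commonwealth testbed code), the Python sketch below parses and queues requests in a 'registration' step and lets a separate 'service' step choose which registered request to serve next, so that a scheduling policy, here shortest-expected-response-first purely as an example, can be applied in between.

```python
import heapq

# Toy two-phase request handling: phase 1 ("registration") parses the request
# and queues it; phase 2 ("service") later chooses which registered request to
# serve. The shortest-expected-response-first policy and the size table are
# examples of the structure, not the paper's protocols.

FILE_SIZES = {"/index.html": 4_000, "/logo.png": 120_000, "/report.pdf": 2_000_000}

class TwoPhaseServer:
    def __init__(self):
        self._queue = []            # (priority, seq, request) min-heap
        self._seq = 0

    def register(self, raw_request: str):
        """Phase 1: parse the request line and enqueue it with a priority."""
        method, path, _version = raw_request.split()
        size = FILE_SIZES.get(path, 100_000)
        heapq.heappush(self._queue, (size, self._seq, (method, path)))
        self._seq += 1

    def serve_next(self):
        """Phase 2: serve the registered request the policy ranks first."""
        if not self._queue:
            return None
        _size, _seq, (method, path) = heapq.heappop(self._queue)
        return f"{method} {path} -> served"

if __name__ == "__main__":
    srv = TwoPhaseServer()
    for line in ("GET /report.pdf HTTP/1.0", "GET /index.html HTTP/1.0",
                 "GET /logo.png HTTP/1.0"):
        srv.register(line)
    while (resp := srv.serve_next()):
        print(resp)   # shortest file first: index.html, logo.png, report.pdf
```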