176 results for communication cost
Abstract:
In this paper, we present robust semi-blind (SB) algorithms for the estimation of beamforming vectors for multiple-input multiple-output (MIMO) wireless communication. The transmitted symbol block is assumed to comprise a known sequence of training (pilot) symbols followed by information-bearing blind (unknown) data symbols. Analytical expressions are derived for the robust SB estimators of the MIMO receive and transmit beamforming vectors. These robust SB estimators employ a preliminary estimate obtained from the pilot symbol sequence and leverage second-order statistical information from the blind data symbols. We employ the theory of Lagrangian duality to derive the robust estimate of the receive beamforming vector by maximizing an inner product, while constraining the channel estimate to lie in a confidence sphere centered at the initial pilot estimate. Two different schemes are then proposed for computing the robust estimate of the MIMO transmit beamforming vector. Simulation results illustrate the superior performance of the robust SB estimators.
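A minimal numerical sketch of two ingredients named above, assuming a narrowband flat-fading model: a least-squares pilot estimate of the MIMO channel with beamformers taken from its dominant singular pair, followed by the confidence-sphere step, where the inner-product maximizer over the sphere has a simple closed form. All dimensions, the noise level, and the radius eps are illustrative, not values from the paper.

```python
# Hedged sketch: pilot-based beamformer estimation plus the confidence-sphere
# step. Dimensions, noise level, and the sphere radius eps are assumptions.
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, Np = 4, 4, 16          # Tx antennas, Rx antennas, pilot length

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
X_p = np.exp(2j * np.pi * rng.random((Nt, Np)))     # unit-modulus pilot symbols
noise = 0.1 * (rng.standard_normal((Nr, Np)) + 1j * rng.standard_normal((Nr, Np)))
Y_p = H @ X_p + noise

H_hat = Y_p @ np.linalg.pinv(X_p)                   # least-squares pilot estimate
U, s, Vh = np.linalg.svd(H_hat)
w_tx, w_rx = Vh[0, :].conj(), U[:, 0]               # dominant singular pair

# Confidence-sphere step: over the sphere ||h - h_hat_eff|| <= eps, the
# maximizer of Re(u^H h) is h_hat_eff + eps * u / ||u|| (Cauchy-Schwarz).
eps = 0.2                                           # radius set from pilot noise level
h_hat_eff = H_hat @ w_tx                            # effective receive channel
h_robust = h_hat_eff + eps * w_rx / np.linalg.norm(w_rx)
w_rx_robust = h_robust / np.linalg.norm(h_robust)   # matched-filter combiner
```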
Abstract:
A team of unmanned aerial vehicles (UAVs) with limited communication ranges and limited resources is deployed in a region to search for and destroy stationary and moving targets. When a UAV detects a target, then, depending on the target's resource requirement, it is tasked to form a coalition over the dynamic network formed by the UAVs. In this paper, we develop a mechanism to find potential coalition members over the network using principles from the Internet Protocol, and introduce an algorithm using Particle Swarm Optimization to generate a coalition that destroys the target in minimum time. Monte Carlo simulations are carried out to study how coalitions are formed and the effects of coalition-process delays.
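The paper's coalition search uses Particle Swarm Optimization over the dynamic UAV network; as a rough illustration of the underlying objective (meet the target's resource requirement with the earliest possible completion time), here is a greedy stand-in with hypothetical names and a simplified cost model, not the paper's algorithm.

```python
# Illustrative stand-in for coalition formation: pick UAVs in order of
# arrival time until the target's resource requirement is met. The coalition
# time is then the latest member's arrival. All names/values are assumptions.
from dataclasses import dataclass

@dataclass
class UAV:
    uid: int
    eta: float        # time to reach the target
    resource: float   # resource the UAV can contribute

def form_coalition(candidates, required_resource):
    coalition, total = [], 0.0
    for uav in sorted(candidates, key=lambda u: u.eta):
        coalition.append(uav)
        total += uav.resource
        if total >= required_resource:
            return coalition, coalition[-1].eta
    return None, float("inf")    # not enough resources within range

uavs = [UAV(1, 4.0, 2.0), UAV(2, 2.5, 1.0), UAV(3, 6.0, 3.0)]
members, t_destroy = form_coalition(uavs, required_resource=3.0)
```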
Abstract:
We present noise measurements of a phase fluorometric oxygen sensor that set the limits of accuracy for this instrument. We analyze the phase-sensitive detection measurement system with the signal ''shot'' noise being the only significant contribution to the system noise. Based on the modulated optical power received by the photomultiplier, the analysis predicts a noise power spectral density that was within 3 dB of the measured one. Our results demonstrate that at a received optical power of 20 fW the noise level was low enough to permit the detection of a change in oxygen concentration of 1% at the sensor. We also present noise measurements of a new low-cost version of this instrument that uses a photodiode instead of a photomultiplier. These measurements show that the noise for this instrument was limited by noise generated in the preamplifier following the photodiode.
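A back-of-the-envelope version of the shot-noise calculation at the quoted 20 fW, under assumed values for the cathode responsivity and detection bandwidth (neither is given in the abstract):

```python
# Shot-noise arithmetic with illustrative numbers; only the 20 fW received
# optical power comes from the abstract, the rest are assumptions.
q = 1.602e-19            # electron charge, C
P_opt = 20e-15           # received optical power, W (20 fW)
R_cathode = 50e-3        # assumed PMT cathode responsivity, A/W
B = 1.0                  # assumed post-lock-in detection bandwidth, Hz

I_ph = R_cathode * P_opt                  # primary photocurrent
i_shot = (2 * q * I_ph * B) ** 0.5        # RMS shot-noise current in bandwidth B
snr = I_ph / i_shot                       # signal-to-shot-noise ratio
print(f"I_ph = {I_ph:.3e} A, shot noise = {i_shot:.3e} A, SNR = {snr:.1f}")
```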
Abstract:
We consider discrete-time versions of two classical problems in the optimal control of admission to a queueing system: (i) optimal routing of arrivals to two parallel queues and (ii) optimal acceptance/rejection of arrivals to a single queue. We extend the formulation of these problems to permit a k-step delay in the observation of the queue lengths by the controller. For geometric inter-arrival times and geometric service times, the problems are formulated as controlled Markov chains with expected total discounted cost as the minimization objective. For problem (i) we show that when k = 1, the optimal policy is to allocate an arrival to the queue with the smaller expected queue length (JSEQ: Join the Shortest Expected Queue). We also show that for this problem, for k ≥ 2, JSEQ is not optimal. For problem (ii) we show that when k = 1, the optimal policy is a threshold policy: there are, however, two thresholds m0 ≥ m1 > 0, such that m0 is used when the previous action was to reject, and m1 is used when the previous action was to accept.
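A minimal sketch of the JSEQ rule for k = 1: the controller sees queue lengths from one step ago, accounts for its own last routing action and for expected service completions, and joins the queue with the smaller expected length. The Bernoulli arrival/service bookkeeping below is an illustrative model, not the paper's exact controlled-Markov-chain formulation.

```python
# JSEQ with one-step-delayed observations (illustrative model).
def jseq(q_delayed, last_route, p_arrival, p_service):
    """q_delayed: queue lengths observed one step ago;
    last_route: index of the queue the previous arrival was sent to."""
    expected = []
    for i, q in enumerate(q_delayed):
        arr = p_arrival if last_route == i else 0.0   # last routed arrival
        dep = p_service if q > 0 else 0.0             # server completes only if busy
        expected.append(q + arr - dep)
    # Join the Shortest Expected Queue
    return min(range(len(q_delayed)), key=lambda i: expected[i])

route = jseq(q_delayed=[3, 2], last_route=1, p_arrival=0.6, p_service=0.5)
```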
Abstract:
In the past few years there have been attempts to develop subspace methods for DoA (direction of arrival) estimation using fourth-order cumulants, which are known to de-emphasize Gaussian background noise. To gauge the relative performance of the cumulant MUSIC (MUltiple SIgnal Classification) (c-MUSIC) and the standard MUSIC, based on the covariance function, an extensive numerical study has been carried out, in which a narrow-band signal source was considered and Gaussian noise sources, which produce a spatially correlated background noise, were distributed. These simulations indicate that, even though the cumulant approach is capable of de-emphasizing the Gaussian noise, both the bias and the variance of the DoA estimates are higher than those for MUSIC. To achieve comparable results the cumulant approach requires much more data, three to ten times that for MUSIC, depending upon the number of sources and how close they are. This is attributed to the fact that estimating the cumulant requires averaging a product of four random variables. Therefore, compared with the evaluation of the covariance function, there are more cross terms, which do not go to zero unless the data length is very large. It is felt that these cross terms contribute to the large bias and variance observed in c-MUSIC. However, the ability to de-emphasize Gaussian noise, white or colored, is of great significance, since the standard MUSIC fails when there is colored background noise. Through simulation it is shown that c-MUSIC does yield good results, but only at the cost of more data.
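For reference, a compact covariance-based MUSIC pseudospectrum for a uniform linear array, the baseline against which c-MUSIC is compared; array size, snapshot count, noise level, and source angles are illustrative.

```python
# Covariance-based MUSIC for a ULA (illustrative parameters throughout).
import numpy as np

M, d, N = 8, 0.5, 200                 # sensors, spacing (wavelengths), snapshots
angles_true = np.deg2rad([-10.0, 20.0])
rng = np.random.default_rng(1)

A = np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(angles_true))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

Rxx = X @ X.conj().T / N                      # sample covariance
eigval, eigvec = np.linalg.eigh(Rxx)
En = eigvec[:, :M - 2]                        # noise subspace (2 sources assumed)

scan = np.deg2rad(np.linspace(-90, 90, 721))
steer = np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(scan))
P_music = 1.0 / np.linalg.norm(En.conj().T @ steer, axis=0) ** 2

# pick the two largest local maxima of the pseudospectrum as DoA estimates
pk = (P_music[1:-1] > P_music[:-2]) & (P_music[1:-1] > P_music[2:])
idx = np.where(pk)[0] + 1
doa_est = np.rad2deg(scan[idx[np.argsort(P_music[idx])[-2:]]])
```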
Abstract:
Biomedical engineering solutions such as surgical simulators need High Performance Computing (HPC) to achieve real-time performance. Graphics Processing Units (GPUs) offer HPC capabilities at low cost and low power consumption. In this work, it is demonstrated that a liver discretized by about 2500 finite element nodes can be graphically simulated in real time by making use of a GPU. The present work takes into consideration the time needed for the data transfer from CPU to GPU and back from GPU to CPU. Although the behaviour of the liver is very complicated, the present computer simulation assumes linear elastostatics. The commercial software ANSYS is used to obtain the global stiffness matrix of the liver. Results show that GPUs are useful for the real-time graphical simulation of the liver, which in turn is needed in simulators used for training surgeons in laparoscopic surgery. Neither rendering, nor the time needed for rendering and displaying the liver on a screen, is considered in the present work. The present work is a demonstration of a concept; the concept is not fully implemented and validated. Future work is to develop software that can accomplish real-time and realistic graphical simulation of the liver, with the rendered image of the liver on the screen changing in real time according to the position of the surgical tool tip, approximated as the mouse cursor in 3D.
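The per-frame computation being timed is essentially one linear elastostatic solve K u = f against a precomputed factorization. Below is a scaled-down CPU sketch (500 nodes instead of 2500, and a random symmetric positive definite stand-in for the ANSYS stiffness matrix); a GPU version would additionally time the CPU-to-GPU and GPU-to-CPU transfers, which the paper includes.

```python
# Per-frame displacement update for linear elastostatics (scaled-down sketch).
import time
import numpy as np
from scipy.linalg import cho_factor, cho_solve

n_dof = 3 * 500                       # 500 FE nodes, 3 DoF each (scaled down)
rng = np.random.default_rng(0)
A = rng.standard_normal((n_dof, n_dof))
K = A @ A.T + n_dof * np.eye(n_dof)   # SPD stand-in for the global stiffness matrix

factor = cho_factor(K)                # factor once, offline

f = rng.standard_normal(n_dof)        # per-frame load from tool contact
t0 = time.perf_counter()
u = cho_solve(factor, f)              # per-frame solve K u = f
print(f"per-frame solve: {(time.perf_counter() - t0) * 1e3:.2f} ms")
```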
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder location gives rise to correlated sensor readings. A signal-space viewpoint is introduced, in which the noise-free sensor readings associated with intruder and clutter appear as surfaces f_s and f_g, and the problem reduces to one of determining, in distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f_s and f_g is identified using the Neyman-Pearson criterion. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings are on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The extension to the multi-party case is straightforward and is briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized to within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
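A toy version of the second approach: each sensor quantizes its reading to one bit with a threshold test and a fusion centre applies a counting rule. The threshold, noise level, and k-out-of-n rule here are illustrative assumptions, not the optimized quantities from the paper.

```python
# One-bit quantization at each sensor plus k-out-of-n fusion (illustrative).
import numpy as np

rng = np.random.default_rng(2)
n_sensors, tau, k_vote = 10, 0.5, 3        # assumed threshold and voting rule

def sense(intruder_present):
    signal = 1.0 if intruder_present else 0.0   # stand-in for the surfaces f_s / f_g
    return signal + 0.4 * rng.standard_normal(n_sensors)

bits = sense(intruder_present=True) > tau       # local two-level quantization
decision = bits.sum() >= k_vote                 # fusion rule at the fusion centre
```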
Abstract:
We address the optimal control problem of a very general stochastic hybrid system with both autonomous and impulsive jumps. The planning horizon is infinite and we use the discounted-cost criterion for performance evaluation. Under certain assumptions, we show the existence of an optimal control. We then derive the quasivariational inequalities satisfied by the value function and establish well-posedness. Finally, we prove the usual verification theorem of dynamic programming.
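For orientation, a discounted-cost quasivariational inequality of this kind typically has the schematic form below; the notation (extended generator A, running cost c, discount rate alpha, intervention operator M with intervention cost K and reset map Gamma) is generic, not the paper's.

```latex
% Schematic discounted-cost quasivariational inequality: at each state the
% value function either satisfies the continuation (HJB) relation or an
% immediate intervention is optimal, whichever term attains the minimum.
\min\left\{ \alpha V(x) - \mathcal{A}V(x) - c(x),\;
            V(x) - \mathcal{M}V(x) \right\} = 0,
\qquad
\mathcal{M}V(x) = \inf_{\xi} \left[ K(x,\xi) + V\bigl(\Gamma(x,\xi)\bigr) \right].
```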
Abstract:
A Wireless Sensor Network (WSN) powered by harvested energy is limited in its operation by the instantaneous power available. Since energy availability can differ across nodes in the network, network setup and collaboration are non-trivial tasks. At the same time, in the event of excess energy, exciting node-collaboration possibilities exist that are often not feasible with battery-driven sensor networks. Operations such as sensing, computation, storage and communication are required to achieve the common goal of any sensor network. In this paper, we design and implement a smart application that uses a Decision Engine and morphs itself into an energy-matched application. The results are based on measurements using IRIS motes running on solar energy. We have done away with batteries; instead, low-leakage supercapacitors are used to store harvested energy. The Decision Engine utilizes two pieces of data to provide its recommendations. First, a history-based energy prediction model assists the engine with information about incoming energy. The second input is the energy cost database for operations. The energy-driven Decision Engine calculates the energy budgets and recommends the best possible set of operations. Under excess-energy conditions, the Decision Engine promiscuously sniffs the neighborhood, looking for all possible data from neighbors. This data includes neighbors' energy levels and sensor data. Equipped with this data, nodes establish detailed data correlations and thus enhance collaboration, for example by filling data gaps on behalf of nodes hibernating under low-energy conditions. The results are encouraging: node and network lifetimes of the sensor nodes running the smart application are found to be significantly higher compared to the base application.
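A toy version of the Decision Engine's budgeting step: given the predicted incoming energy and a per-operation energy-cost table, pick a set of operations that fits the budget. The costs, utilities, and the greedy knapsack policy are assumptions for illustration, not the engine's actual rules.

```python
# Greedy energy-budget planner (illustrative stand-in for the Decision Engine).
def plan_operations(predicted_energy_mj, cost_table):
    """cost_table: {operation: (energy_cost_mJ, utility)}.
    Greedy by utility per mJ, a simple stand-in for the recommendation step."""
    budget, plan = predicted_energy_mj, []
    ranked = sorted(cost_table.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    for op, (cost, _utility) in ranked:
        if cost <= budget:
            plan.append(op)
            budget -= cost
    return plan, budget

cost_table = {"sense": (0.5, 5), "compute": (0.8, 4), "store": (0.3, 2), "tx": (2.0, 8)}
plan, leftover = plan_operations(predicted_energy_mj=3.0, cost_table=cost_table)
```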
Abstract:
In this paper, we study the problem of wireless sensor network design by deploying a minimum number of additional relay nodes (to minimize network design cost) at a subset of given potential relay locations, in order to convey the data from already existing sensor nodes (hereafter called source nodes) to a Base Station within a certain specified mean-delay bound. We formulate this problem in two different ways, and show that the problem is NP-Hard. For a problem in which the number of existing sensor nodes and potential relay locations is n, we propose an O(n) approximation algorithm of polynomial time complexity. Results show that the algorithm performs efficiently in various randomly generated network scenarios: in over 90% of the tested scenarios, it gave solutions that were either optimal or exceeded the optimum by just one relay.
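A naive greedy illustration of the underlying covering structure (repeatedly add the candidate relay that newly connects the most sources to the Base Station), with radio-range connectivity standing in for the mean-delay bound. This is not the paper's O(n) approximation algorithm, and the greedy rule can stall when several relays are needed jointly.

```python
# Greedy relay placement sketch; all geometry and names are assumptions.
from collections import deque

def reachable(sources, relays, bs, radio_range):
    """Sources that can reach the BS by multi-hop through the chosen relays."""
    hops = list(relays) + [bs]
    near = lambda a, b: (a[0]-b[0])**2 + (a[1]-b[1])**2 <= radio_range**2
    ok = set()
    for s in sources:
        seen, dq = {s}, deque([s])
        while dq:
            u = dq.popleft()
            if u == bs:
                ok.add(s)
                break
            for v in hops:
                if v not in seen and near(u, v):
                    seen.add(v)
                    dq.append(v)
    return ok

def greedy_relays(sources, candidates, bs, radio_range):
    chosen, pool = [], list(candidates)
    covered = reachable(sources, chosen, bs, radio_range)
    while len(covered) < len(sources) and pool:
        best = max(pool, key=lambda c: len(reachable(sources, chosen + [c], bs, radio_range)))
        gain = reachable(sources, chosen + [best], bs, radio_range)
        if len(gain) == len(covered):
            break                     # no single relay helps further
        chosen.append(best)
        pool.remove(best)
        covered = gain
    return chosen

sources = [(0.0, 0.0), (10.0, 0.0)]
cands = [(3.0, 0.0), (6.0, 0.0), (9.0, 0.0)]
print(greedy_relays(sources, cands, bs=(12.0, 0.0), radio_range=7.0))
```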
Abstract:
In this paper, we address the fundamental question concerning the limits on the network lifetime in sensor networks when multiple base stations (BSs) are deployed as data sinks. Specifically, we derive upper bounds on the network lifetime when multiple BSs are employed, and obtain optimum locations of the base stations that maximise these lifetime bounds. For the case of two BSs, we jointly optimise the BS locations by maximising the lifetime bound using a genetic algorithm. Joint optimisation for a larger number of BSs becomes prohibitively complex. We therefore propose a suboptimal approach for a higher number of BSs, the Individually Optimum method, in which we optimise the next BS location given the optimum locations of the previous BSs. The Individually Optimum method has the advantage of remaining tractable for a larger number of BSs, at the cost of slightly compromised accuracy. We show that the accuracy degradation is quite small for the case of three BSs.
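A miniature genetic algorithm in the spirit of the two-BS joint optimisation. The fitness used here (minimise the worst sensor-to-nearest-BS distance, as a crude lifetime proxy) and all GA parameters are assumptions, not the paper's lifetime bound.

```python
# Tiny GA for jointly placing two base stations (illustrative fitness).
import numpy as np

rng = np.random.default_rng(3)
sensors = rng.uniform(0, 100, size=(50, 2))           # sensor field, 100 m square

def fitness(chrom):                                   # chrom = (x1, y1, x2, y2)
    bs = chrom.reshape(2, 2)
    d = np.linalg.norm(sensors[:, None, :] - bs[None, :, :], axis=2).min(axis=1)
    return -d.max()                                   # higher is better

pop = rng.uniform(0, 100, size=(40, 4))
for _ in range(200):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-20:]]           # truncation selection
    cut = rng.integers(1, 4, size=20)
    kids = parents.copy()
    shuffled = parents[rng.permutation(20)]
    for i in range(20):                               # one-point crossover
        kids[i, cut[i]:] = shuffled[i, cut[i]:]
    kids += rng.normal(0, 2.0, size=kids.shape)       # Gaussian mutation
    pop = np.vstack([parents, np.clip(kids, 0, 100)])

best = pop[np.argmax([fitness(c) for c in pop])].reshape(2, 2)
```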
Abstract:
This paper presents the capability of neural networks as a computational tool for solving the constrained optimization problems arising in routing algorithms for present-day communication networks. The application of neural networks to the optimum routing problem in packet-switched computer networks, where the goal is to minimize the average delay in the communication, is addressed. The effectiveness of the neural network is shown by the results of simulation of a neural design to solve the shortest-path problem. The simulation model of the neural network is shown to be usable within an optimum routing algorithm known as the flow deviation algorithm. It is also shown that the model enables the routing algorithm to be implemented in real time and to be adaptive to changes in link costs and network topology.
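One iteration of the flow-deviation idea that the shortest-path module supports: compute marginal-delay link costs from the current flows, then find the shortest path under those costs (flow would then be shifted onto it). The M/M/1 marginal cost C/(C - f)^2 and the three-node topology are illustrative assumptions.

```python
# One flow-deviation step: Dijkstra over marginal M/M/1 link costs.
import heapq

cap = {("a", "b"): 10.0, ("b", "c"): 10.0, ("a", "c"): 6.0}   # directed links
flow = {e: 2.0 for e in cap}

def marginal_cost(e):
    c, f = cap[e], flow[e]
    return c / (c - f) ** 2            # d/df of the M/M/1 delay f / (C - f)

def shortest_path(src, dst):
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        for (x, y) in cap:
            if x == u and d + marginal_cost((x, y)) < dist.get(y, float("inf")):
                dist[y] = d + marginal_cost((x, y))
                prev[y] = x
                heapq.heappush(pq, (dist[y], y))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

print(shortest_path("a", "c"))         # route chosen under current marginal costs
```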
Abstract:
The existence of an optimal feedback law is established for the risk-sensitive optimal control problem with denumerable state space. The main assumptions imposed are irreducibility and a near-monotonicity condition on the one-step cost function. A solution can be found constructively using either value iteration or policy iteration under suitable conditions on the initial feedback law.
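A toy value-iteration sketch in the risk-sensitive (multiplicative) setting, on a finite truncation of the state space; the normalised multiplicative Bellman recursion below follows the standard risk-sensitive recipe, and all numbers are illustrative rather than taken from the paper.

```python
# Risk-sensitive relative value iteration on a 2-state, 2-action toy model.
import numpy as np

c = np.array([[1.0, 2.0], [0.5, 3.0]])          # c[x, a]: one-step cost
P = np.array([[[0.7, 0.3], [0.2, 0.8]],         # P[x, a, y]: transition kernel
              [[0.4, 0.6], [0.9, 0.1]]])

W = np.ones(2)
for _ in range(500):
    Q = np.exp(c) * (P @ W)                     # multiplicative Bellman operator
    W_new = Q.min(axis=1)
    growth = W_new[0]                           # reference state 0
    W = W_new / growth                          # relative (normalised) iteration
risk_sensitive_cost = np.log(growth)            # log of the dominant eigenvalue
policy = (np.exp(c) * (P @ W)).argmin(axis=1)   # greedy feedback law
```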
Abstract:
Motion estimation is one of the most power-hungry operations in video coding. While optimal search methods (e.g., full search) give the best quality, non-optimal methods are often used in order to reduce cost and power. Various algorithms that trade off quality against complexity have been used in practice. Global elimination is an algorithm based on pixel averaging that reduces the complexity of motion search while keeping performance close to that of full search. We propose an adaptive version of the global elimination algorithm that extracts individual macro-block features using the Hadamard transform to optimize the search. The performance achieved is close to that of the full-search method and of global elimination. Operational complexity, and hence power, is reduced by 30% to 45% compared to the global elimination method.
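A sketch of the (non-adaptive) global elimination baseline: rank all candidate displacements by a cheap SAD computed on 4x4 pixel averages, keep a few survivors, then run full SAD only on those. Block size, search range, and survivor count are assumptions; the proposed adaptive variant would additionally derive per-macro-block features from a Hadamard transform.

```python
# Two-stage global elimination motion search (illustrative parameters).
import numpy as np

def avg4(block):                               # 16x16 -> 4x4 pixel averages
    return block.reshape(4, 4, 4, 4).mean(axis=(1, 3))

def global_elimination(cur, ref, bx, by, rng_px=8, survivors=8):
    cand, cheap = [], []
    cur_blk = cur[by:by+16, bx:bx+16]
    cur_avg = avg4(cur_blk)
    for dy in range(-rng_px, rng_px + 1):
        for dx in range(-rng_px, rng_px + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - 16 and 0 <= x <= ref.shape[1] - 16:
                cand.append((dy, dx))
                cheap.append(np.abs(cur_avg - avg4(ref[y:y+16, x:x+16])).sum())
    keep = np.argsort(cheap)[:survivors]       # stage 1: eliminate most candidates
    best = min(keep, key=lambda i: np.abs(     # stage 2: full SAD on survivors
        cur_blk - ref[by+cand[i][0]:by+cand[i][0]+16,
                      bx+cand[i][1]:bx+cand[i][1]+16]).sum())
    return cand[best]

rng = np.random.default_rng(4)
ref = rng.integers(0, 256, (64, 64)).astype(float)
cur = np.roll(ref, (2, -3), axis=(0, 1))       # known shift to recover
print(global_elimination(cur, ref, bx=24, by=24))  # expect (-2, 3)
```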