Abstract:
Workstation clusters equipped with high-performance interconnects that have programmable network processors offer interesting opportunities to enhance the performance of parallel applications run on them. In this paper, we propose schemes where certain application-level processing in parallel database query execution is performed on the network processor. We evaluate the performance of TPC-H queries executing on a high-end cluster where all tuple processing is done on the host processor, using a timed Petri net model, and find that tuple processing costs on the host processor dominate the execution time. These results are validated using a small cluster. We therefore propose four schemes in which certain tuple processing activity is offloaded to the network processor. The first two schemes offload the tuple-splitting activity, i.e., the computation that identifies the node on which each tuple is to be processed, resulting in an execution-time speedup of 1.09 relative to the base scheme, but with the I/O bus becoming the bottleneck resource. The third scheme, in addition to offloading tuple processing activity, combines the disk and network interfaces to avoid the I/O bus bottleneck, which results in speedups of up to 1.16, but with high host-processor utilization. Our fourth scheme, where the network processor also performs a part of the join operation along with the host processor, gives a speedup of 1.47 along with balanced system resource utilization. Further, we observe that the proposed schemes perform equally well even in a scaled architecture, i.e., when the number of processors is increased from 2 to 64.
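The tuple-splitting step that the first two schemes offload is, in essence, a partitioning function applied per tuple. A minimal sketch, assuming hash partitioning on the join key; the hash choice, names, and node count are illustrative, not taken from the paper:

```python
# Minimal sketch of the tuple-splitting activity offloaded to the
# network processor: hash-partitioning each tuple's key to pick the
# cluster node that should process it. The CRC32 hash and key layout
# are assumptions for illustration.
import zlib

NUM_NODES = 2  # base configuration; the paper scales this up to 64

def target_node(tuple_row, key_index=0, num_nodes=NUM_NODES):
    """Return the index of the node that should process this tuple."""
    key = str(tuple_row[key_index]).encode()
    return zlib.crc32(key) % num_nodes

# Example: route a few (orderkey, price) tuples to nodes.
for row in [(1, 17.0), (2, 5.0), (42, 9.5)]:
    print(row, "->", target_node(row))
```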
Abstract:
In a dense multi-hop network of mobile nodes capable of adaptive power control, we consider the problem of finding the optimal hop distance that maximizes a certain throughput measure in bit-metres/sec, subject to average network power constraints. The mobility of each node is restricted to a circular area centered at its nominal location. We incorporate only the randomly varying path-loss characteristics of the channel gain due to the random motion of nodes, excluding any multi-path fading or shadowing effects. Computing the throughput metric in this scenario requires the probability density function of the random distance between points in two circles. Using numerical analysis, we discover that choosing the nearest node as the next hop is not always optimal; optimal throughput performance can also be attained at non-trivial hop distances, depending on the available average network power.
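The density of the distance between points in two circles, which the throughput computation relies on, is easy to approximate numerically. A Monte Carlo sketch, with illustrative radii and separation (not the paper's values):

```python
# Monte Carlo sketch of the density of the distance between a point
# uniform in a circle of radius r around one node's nominal location
# and a point uniform in a circle around another, separated by d.
import numpy as np

rng = np.random.default_rng(0)

def uniform_in_circle(center, radius, n):
    theta = rng.uniform(0.0, 2 * np.pi, n)
    rad = radius * np.sqrt(rng.uniform(0.0, 1.0, n))  # area-uniform radius
    return center + np.column_stack((rad * np.cos(theta),
                                     rad * np.sin(theta)))

n, r, d = 100_000, 1.0, 3.0
p = uniform_in_circle(np.array([0.0, 0.0]), r, n)
q = uniform_in_circle(np.array([d, 0.0]), r, n)
dist = np.linalg.norm(p - q, axis=1)

hist, edges = np.histogram(dist, bins=50, density=True)  # empirical pdf
print("empirical mean hop distance:", dist.mean())
```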
Abstract:
We consider evolving exponential random geometric graphs (RGGs) in one dimension and characterize the time-dependent behavior of some of their topological properties. We consider two evolution models, studying one of them in detail while providing a summary of the results for the other. In the first model, the inter-nodal gaps evolve according to an exponential AR(1) process that makes the stationary distribution of the node locations exponential. For this model we obtain the one-step conditional connectivity probabilities and extend them to the k-step case. Both finite and asymptotic analyses are given. We then obtain the k-step connectivity probability conditioned on the network being disconnected, and derive the probability mass function of the first passage time for a connected network to become disconnected. We then describe a random birth-death model in which, at each instant, the node locations evolve according to an AR(1) process and, in addition, a random node is allowed to die while giving birth to a node at another location. We derive properties similar to those above.
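One standard construction of an AR(1) process with an exponential stationary law is the Gaver-Lewis EAR(1) recursion; whether the paper uses exactly this recursion is an assumption here. A sketch that evolves the inter-nodal gaps this way and tracks connectivity of the 1-D network:

```python
# Sketch of evolving inter-nodal gaps via an exponential AR(1) process.
# Assumed construction (Gaver-Lewis EAR(1)): G_t = a*G_{t-1} + eps_t,
# where eps_t = 0 with probability a and Exp(lam) otherwise, giving an
# Exp(lam) stationary distribution. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
a, lam, n_gaps, steps, r = 0.7, 1.0, 20, 100, 1.5  # r: connectivity range

gaps = rng.exponential(1 / lam, n_gaps)  # start in stationarity
connected_fraction = 0.0
for _ in range(steps):
    innov = np.where(rng.random(n_gaps) < a,
                     0.0,
                     rng.exponential(1 / lam, n_gaps))
    gaps = a * gaps + innov
    # Nodes on a line are connected iff every gap is at most r.
    connected_fraction += np.all(gaps <= r) / steps

print("fraction of steps the 1-D network was connected:", connected_fraction)
```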
Abstract:
H.264 is a video codec standard that delivers high-resolution video even at low bit rates. To provide high throughput at low bit rates, hardware implementations are essential. In this paper, we propose hardware implementations of speed- and area-optimized DCT and quantizer modules, targeting these criteria with two architectures. The first architecture is speed-optimized: it gives high throughput and can meet the requirements of 4096x2304 frames at 30 frames/sec. The second architecture is area-optimized: it occupies 2009 LUTs in Altera's Stratix-II and can meet the requirements of 1080HD at 30 frames/sec.
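For context, the "DCT" in H.264 is a 4x4 integer core transform rather than a true DCT, which is what makes compact hardware implementations possible. A bit-exact software reference of the forward transform, of the kind usable as a golden model when verifying such a module; the quantizer (per-QP scaling and rounding) is omitted here:

```python
# Bit-exact reference of the H.264 4x4 forward integer core transform,
# Y = C @ X @ C.T, computed in exact integer arithmetic.
import numpy as np

C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int64)

def forward_transform_4x4(block):
    """Apply the integer core transform to one 4x4 residual block."""
    x = np.asarray(block, dtype=np.int64)
    return C @ x @ C.T

residual = np.arange(16).reshape(4, 4) - 8  # toy residual block
print(forward_transform_4x4(residual))
```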
Abstract:
Ad hoc networks are being used in applications ranging from disaster recovery to distributed collaborative entertainment, and have become one of the most attractive solutions for rapidly interconnecting large numbers of mobile personal devices. The users of these devices are demanding a variety of value-added multimedia entertainment services. Peer groups are increasingly popular, and one or more members of a peer group often need to send data to some or all of the other members; this growing demand for group-oriented value-added services calls for efficient multicast service over ad hoc networks. Access control mechanisms must be deployed to guarantee that unauthorized users cannot access the multicast content. In this paper, we present a topology-aware key management and distribution scheme for secure overlay multicast over mobile ad hoc networks (MANETs) that addresses node-mobility-related issues in multicast key management. We use an overlay approach for key distribution, with the objective of keeping the communication overhead of key management and distribution low, and we incorporate reliability into the key distribution scheme using explicit acknowledgments. Through simulations, we show that the proposed key management scheme has low communication overhead for rekeying and improves the reliability of key distribution.
Abstract:
We propose robust and scalable processes for the fabrication of floating-gate devices using ordered arrays of 7 nm gold nanoparticles as charge storage nodes. The proposed strategy can be readily adapted for fabricating next-generation (sub-20 nm node) non-volatile memory devices.
Abstract:
In this paper, we consider robust joint designs of relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a MIMO-relay node. The channel state information (CSI) available at the relay node is assumed to be imperfect. We consider robust designs for two models of CSI error. The first model is a stochastic error (SE) model, where the probability distribution of the CSI error is Gaussian. This model is applicable when the imperfect CSI is mainly due to errors in channel estimation. For this model, we propose robust minimum sum mean square error (SMSE), MSE-balancing, and relay transmit power minimizing precoder designs. The next model for the CSI error is a norm-bounded error (NBE) model, where the CSI error can be specified by an uncertainty set. This model is applicable when the CSI error is dominated by quantization errors. In this case, we adopt a worst-case design approach. For this model, we propose a robust precoder design that minimizes total relay transmit power under constraints on MSEs at the destination nodes. We show that the proposed robust design problems can be reformulated as convex optimization problems that can be solved efficiently using interior-point methods. We demonstrate the robust performance of the proposed design through simulations.
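A small sketch of the two CSI error models that the robust designs target; the dimensions and error sizes are illustrative assumptions, and the precoder optimizations themselves (reformulated as convex problems in the paper) are not shown:

```python
# Sketch of the two CSI-error models: the true channel is modeled as
# H = Hhat + E, with E Gaussian under the stochastic-error (SE) model
# and norm-bounded under the NBE model. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(2)
M = 4                                   # antennas (assumed)
Hhat = rng.standard_normal((M, M))      # estimated CSI at the relay

# SE model: i.i.d. Gaussian error entries; robust SMSE designs
# average the MSE over this distribution.
sigma_e = 0.1
E_se = sigma_e * rng.standard_normal((M, M))

# NBE model: E known only to satisfy ||E||_F <= eps; worst-case
# designs must hold for every member of this uncertainty set.
eps = 0.3
E_raw = rng.standard_normal((M, M))
E_nbe = eps * E_raw / np.linalg.norm(E_raw, 'fro')  # boundary point

print("SE sample  ||E||_F =", np.linalg.norm(E_se, 'fro'))
print("NBE sample ||E||_F =", np.linalg.norm(E_nbe, 'fro'))
```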
Abstract:
We consider the one-way relay-aided MIMO X fading channel, with two transmitters, two receivers, and a relay, each node having M antennas. Every transmitter wants to transmit messages to every receiver. The relay broadcasts to the receivers over a noisy link that is independent of the transmitters' channel; in the literature, this is referred to as a relay with orthogonal components. We derive an upper bound on the degrees of freedom of such a network, and then show that the upper bound is tight by proposing an achievability scheme based on signal space alignment for M = 2 antennas at every node.
Abstract:
A method for precise measurement of on-chip analog voltages in a mostly-digital manner, with minimal overhead, is presented. A pair of clock signals is routed to the node carrying the analog voltage. The analog voltage controls the delay between this pair of clock signals, and this delay is then measured in an all-digital manner using the technique of sub-sampling. The sub-sampling technique, which trades measurement time for accuracy, is well suited to low-bandwidth signals. The concept is validated by designing delay cells using current-starved inverters in a UMC 130 nm CMOS process. Sub-mV accuracy is demonstrated for a measurement time of a few seconds.
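The sub-sampling idea can be illustrated in simulation: sampling a clock of period T with a clock of period slightly longer than T advances T/N through the waveform per sample, so a small delay appears stretched by a factor of roughly N and can be read off by counting samples. A sketch with illustrative values (not the paper's circuit parameters):

```python
# Equivalent-time (sub-sampling) measurement of the delay between two
# copies of a clock, one delayed by the voltage-controlled amount d.
import numpy as np

T = 1e-9            # 1 GHz clock under measurement
N = 1000            # stretch factor; resolution is T/N = 1 ps here
Ts = T + T / N      # sampling period slightly longer than T
d_true = 37.4e-12   # delay to be measured

def square(t):
    return ((t % T) < T / 2).astype(int)

k = np.arange(20 * N)                 # a few stretched periods
ref = square(k * Ts)
dly = square(k * Ts - d_true)

# Locate the first rising edge of each sub-sampled stream.
edge_ref = np.flatnonzero(np.diff(ref) == 1)[0]
edge_dly = np.flatnonzero(np.diff(dly) == 1)[0]

# Each sample index corresponds to T/N of real delay (modulo T).
d_est = ((edge_dly - edge_ref) * (T / N)) % T
print(f"estimated delay: {d_est*1e12:.1f} ps (true {d_true*1e12:.1f} ps)")
```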
Abstract:
Energy harvesting (EH) nodes, which harvest energy from the environment in order to communicate over a wireless link, promise perpetual operation of a wireless network of battery-powered nodes. In this paper, we address the throughput optimization problem for a rate-adaptive EH node that chooses its rate from a set of discrete rates and adjusts its power depending on its channel gain and battery state. First, we show that the optimal throughput of an EH node is upper bounded by the throughput achievable by a node that is subject only to an average power constraint. We then propose a simple transmission scheme for an EH node that achieves an average throughput close to this upper bound. The scheme's parameters can be made to account for energy overheads such as battery non-idealities and the energy required for sensing and processing, and the effect of these overheads on the average throughput is analytically characterized.
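The comparison the abstract describes can be mimicked numerically. A deliberately simplified sketch, not the paper's scheme: the upper bound is a node spending exactly the average harvested power every slot, and the EH policy transmits at that power whenever its battery allows, both picking the best feasible rate from a discrete set. Fading, harvesting statistics, and the rate set are illustrative assumptions:

```python
# Toy comparison: average-power-constrained upper bound vs. a simple
# energy-harvesting policy with a finite battery and discrete rates.
import numpy as np

rng = np.random.default_rng(3)
slots, P_avg, B_max = 50_000, 1.0, 25.0
rates = np.array([0.5, 1.0, 2.0, 3.0])            # discrete rate set

h = rng.exponential(1.0, slots)                   # channel power gains
harvest = rng.binomial(1, 0.5, slots) * 2 * P_avg # mean harvest = P_avg

def best_rate(gain, power):
    """Largest discrete rate supported by the instantaneous capacity."""
    feasible = rates[rates <= np.log2(1 + gain * power)]
    return feasible.max() if feasible.size else 0.0

# Upper bound: spend P_avg every slot (only an average constraint).
bound = np.mean([best_rate(g, P_avg) for g in h])

# EH policy: transmit at P_avg whenever the battery holds enough energy.
battery, served = 0.0, 0.0
for g, e in zip(h, harvest):
    battery = min(B_max, battery + e)
    if battery >= P_avg:
        served += best_rate(g, P_avg)
        battery -= P_avg

print("upper bound:", bound, " EH scheme:", served / slots)
```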
Abstract:
We study a sensor node with an energy harvesting source, where the generated energy can be stored in a buffer. The sensor node periodically senses a random field and generates a packet; these packets are stored in a queue and transmitted using the energy available at the time. We obtain energy management policies that are throughput optimal, i.e., under which the data queue stays stable for the largest possible data rate. Next, we obtain energy management policies that minimize the mean delay in the queue. We also compare the performance of several easily implementable sub-optimal energy management policies, and identify a greedy policy which, in the low-SNR regime, is throughput optimal and also minimizes the mean delay.
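A sketch of the greedy policy the abstract identifies: in each slot the node spends all the energy currently in its buffer, with bits served following a log(1 + .) rate law, and the mean delay read off via Little's law. The arrival and harvesting statistics are illustrative assumptions:

```python
# Toy simulation of the greedy energy-management policy for a sensor
# node with a data queue and an energy buffer.
import numpy as np

rng = np.random.default_rng(4)
slots, lam = 200_000, 0.4
arrivals = rng.poisson(lam, slots)        # packets sensed per slot
energy_in = rng.exponential(0.5, slots)   # harvested energy per slot

queue, energy, total_q = 0.0, 0.0, 0.0
for a, e in zip(arrivals, energy_in):
    queue += a
    energy += e
    service = np.log2(1.0 + energy)       # greedy: spend all energy now
    queue -= min(queue, service)
    energy = 0.0
    total_q += queue

mean_queue = total_q / slots
mean_delay = mean_queue / lam             # Little's law: E[D] = E[Q]/lambda
print("mean queue:", mean_queue, " mean delay (slots):", mean_delay)
```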
Abstract:
For the analysis and design of pile foundations used in coastal structures, it is necessary to predict the cyclic response, which is influenced by the nonlinear behavior, gap (pile-soil separation), and degradation (reduction in strength) of the soil. To study the effect of these parameters, a nonlinear cyclic-load analysis program based on the finite element method is developed, incorporating the proposed gap and degradation models and adopting an incremental-iterative procedure. The pile is idealized using beam elements and the soil by a number of elastoplastic sub-element springs at each node. The effects of the gap and degradation on the load-deflection behavior, the elastoplastic sub-elements, and the resistance of the soil at the ground line are clearly depicted in this paper.
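To make the gap idea concrete, here is a minimal sketch of one elastoplastic soil sub-element spring with a gap, of the kind attached at each pile node: zero resistance while the pile travels inside the gap, elastic-perfectly-plastic contact outside it, and plastic flow that widens the gap on each cycle. This is a generic construction with illustrative parameters, not the paper's calibrated gap/degradation model, and strength degradation is omitted:

```python
# One elastoplastic soil spring with a gap, under cyclic displacement.
import numpy as np

k, f_y = 100.0, 5.0          # spring stiffness and yield force
g_neg, g_pos = 0.0, 0.0      # current gap boundaries

def soil_force(x):
    """Return soil resistance at pile displacement x, updating the gap."""
    global g_neg, g_pos
    if g_neg <= x <= g_pos:               # travelling inside the gap
        return 0.0
    if x > g_pos:                         # bearing on the positive side
        if k * (x - g_pos) > f_y:         # plastic flow widens the gap
            g_pos = x - f_y / k
        return k * (x - g_pos)
    if k * (g_neg - x) > f_y:             # bearing on the negative side
        g_neg = x + f_y / k
    return -k * (g_neg - x)

# Cyclic displacement history; the printed loop shows the gap growing.
for x in np.concatenate([np.linspace(0, 0.2, 5),
                         np.linspace(0.2, -0.2, 9),
                         np.linspace(-0.2, 0.2, 9)]):
    print(f"x = {x:+.3f}  F = {soil_force(x):+.2f}"
          f"  gap = [{g_neg:.3f}, {g_pos:.3f}]")
```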
Abstract:
Frequent episode discovery is a popular framework for temporal pattern discovery in event streams. An episode is a partially ordered set of nodes, with each node associated with an event type. Currently, algorithms exist for episode discovery only when the associated partial order is a total order (serial episodes) or trivial (parallel episodes). In this paper, we propose efficient algorithms for discovering frequent episodes with unrestricted partial orders when the associated event types are unique. These algorithms can easily be specialized to discover only serial or parallel episodes, and are flexible enough to be specialized for mining the space of certain interesting subclasses of partial orders. We point out that frequency alone is not a sufficient measure of interestingness in the context of partial-order mining, and propose a new interestingness measure for episodes with unrestricted partial orders which, when used along with frequency, results in an efficient scheme of data mining. Simulations are presented to demonstrate the effectiveness of our algorithms.
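For readers new to the framework, the simplest special case (which the general algorithms specialize to) is counting non-overlapped occurrences of a serial episode, i.e., a total order such as A -> B -> C, by scanning the stream once with a small automaton. A sketch with an illustrative event stream:

```python
# Count non-overlapped occurrences of a serial episode in one pass.
def count_serial_episode(stream, episode):
    """Count non-overlapped occurrences of `episode` in `stream`."""
    pos, count = 0, 0
    for event in stream:
        if event == episode[pos]:
            pos += 1
            if pos == len(episode):   # one complete occurrence seen
                count += 1
                pos = 0               # restart: non-overlapped counting
    return count

stream = list("ABXCABYBCACBC")
print(count_serial_episode(stream, list("ABC")))   # -> 3
```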
Abstract:
In this paper, we present a new algorithm for learning oblique decision trees. Most current decision tree algorithms rely on impurity measures to assess the goodness of hyperplanes at each node while learning the tree in a top-down fashion, and these impurity measures do not properly capture the geometric structure in the data. Motivated by this, our algorithm assesses hyperplanes in a way that takes the geometric structure of the data into account: at each node of the decision tree, we find the clustering hyperplanes for both classes and use their angle bisectors as the split rule at that node. We show through empirical studies that this idea leads to small decision trees and better performance. We also present analysis showing that the angle bisectors of the clustering hyperplanes used as the split rules are solutions of an interesting optimization problem, and hence argue that this is a principled method of learning a decision tree.
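The bisector construction itself is standard geometry: given two hyperplanes w1.x + b1 = 0 and w2.x + b2 = 0, the two angle bisectors are obtained by adding and subtracting the hyperplanes after normalizing each to a unit normal. A sketch of just this step (how the clustering hyperplanes are fitted per class is not shown):

```python
# Angle bisectors of two hyperplanes, used as the oblique split rule.
import numpy as np

def angle_bisectors(w1, b1, w2, b2):
    """Return the two bisecting hyperplanes of w1.x+b1=0 and w2.x+b2=0."""
    n1, n2 = np.linalg.norm(w1), np.linalg.norm(w2)
    u1, c1 = w1 / n1, b1 / n1
    u2, c2 = w2 / n2, b2 / n2
    return (u1 + u2, c1 + c2), (u1 - u2, c1 - c2)

w1, b1 = np.array([1.0, 0.0]), 0.0     # plane x = 0
w2, b2 = np.array([0.0, 1.0]), 0.0     # plane y = 0
for w, b in angle_bisectors(w1, b1, w2, b2):
    print("bisector:", w, b)           # x + y = 0 and x - y = 0
```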
Abstract:
Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. In addition, however, regenerating codes possess the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node; exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case d = n-1. This code has a particularly simple graphical description and, most interestingly, can carry out exact repair without any need to perform arithmetic operations; we term this ability to perform repair through mere transfer of data "repair by transfer." The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios, which we term "helper node pooling," and show that it is the necessity to satisfy such scenarios that overconstrains the system.
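The repair-by-transfer structure for d = n-1 admits a compact sketch: index one coded symbol by each unordered pair of nodes and store it on both nodes of the pair, so each node holds n-1 symbols and a failed node is rebuilt by every survivor transferring the single symbol it shares with it, with no arithmetic. The coded symbols themselves would come from an erasure code over the source data, which is abstracted away here:

```python
# Structural sketch of repair by transfer for d = n-1.
from itertools import combinations

n = 5
symbols = {pair: f"c{idx}" for idx, pair in
           enumerate(combinations(range(n), 2))}

def node_contents(i):
    """Symbols stored on node i: one per pair containing i."""
    return {pair: s for pair, s in symbols.items() if i in pair}

def repair_by_transfer(failed):
    """Each survivor sends exactly the one symbol it shares with `failed`."""
    received = {}
    for helper in range(n):
        if helper == failed:
            continue
        pair = tuple(sorted((helper, failed)))
        received[pair] = symbols[pair]   # pure transfer, no arithmetic
    return received

assert repair_by_transfer(2) == node_contents(2)
print("node 2 rebuilt from", n - 1, "helpers:", repair_by_transfer(2))
```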