49 results for Time Trade Off


Relevance:

80.00%

Publisher:

Abstract:

The Z' = 1 and Z' = 5 structures of quinoxaline are compared. The nature of the intermolecular interactions in the Z' = 5 structure is studied by means of variable-temperature single-crystal X-ray diffraction. The C-H···N and π···π interactions in these structures are of a stabilizing nature. The high Z' structure has the better interactions, whereas the low Z' structure has the better stability. This trade-off is a recurrent theme in molecular crystals and is a manifestation of the distinction between thermodynamically and kinetically favoured crystal forms.

Relevance:

80.00%

Publisher:

Abstract:

We consider a wireless sensor network whose main function is to detect certain infrequent alarm events and to forward alarm packets to a base station, using geographical forwarding. The nodes know their locations and employ sleep-wake cycling, waking up periodically but not synchronously. In this situation, when a node has a packet to forward to the sink, there is a trade-off between how long this node waits for a suitable neighbor to wake up and the progress the packet makes towards the sink once it is forwarded to this neighbor. Hence, in choosing a relay node, we consider the problem of minimizing average delay subject to a constraint on the average progress. By constraint relaxation, we formulate this next-hop relay selection problem as a Markov decision process (MDP). The exact optimal solution (BF (Best Forward)) can be found, but is computationally intensive. Next, we consider a mathematically simplified model for which the optimal policy (SF (Simplified Forward)) turns out to be a simple one-step-look-ahead rule. Simulations show that SF is very close in performance to BF, even for reasonably small node density. We then study the end-to-end performance of SF in comparison with two extremal policies, Max Forward (MF) and First Forward (FF), and an end-to-end delay-minimizing policy proposed by Kim et al. [1]. We find that, with an appropriate choice of the one-hop average progress constraint, SF can be tuned to provide a favorable trade-off between end-to-end packet delay and the number of hops in the forwarding path.
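To illustrate the kind of one-step-look-ahead rule the SF policy reduces to, the following sketch simulates a single hop: neighbours wake up at exponentially spaced instants with random progress values, and the node forwards to the first neighbour whose progress beats the expected net gain of waiting for exactly one more wake-up. All parameters (ETA, MEAN_WAKE_GAP, MEAN_PROGRESS) and the uniform progress distribution are illustrative assumptions, not values from the paper.

```python
import random

# Illustrative parameters, not values from the paper.
ETA = 0.2            # delay cost per unit time, in progress units
MEAN_WAKE_GAP = 1.0  # mean wait until the next neighbour wakes up
MEAN_PROGRESS = 0.5  # mean progress of a random neighbour (U[0,1) below)

def should_forward(progress):
    """One-step-look-ahead rule: forward to the neighbour that just
    woke up iff its progress beats the expected net gain of waiting
    for exactly one more wake-up."""
    return progress >= MEAN_PROGRESS - ETA * MEAN_WAKE_GAP

def one_hop(rng):
    """Wait for neighbours to wake one by one; apply the rule."""
    delay = 0.0
    while True:
        delay += rng.expovariate(1.0 / MEAN_WAKE_GAP)  # next wake-up
        progress = rng.random()                        # its progress
        if should_forward(progress):
            return delay, progress

rng = random.Random(42)
samples = [one_hop(rng) for _ in range(10000)]
print("mean delay   :", sum(d for d, _ in samples) / len(samples))
print("mean progress:", sum(p for _, p in samples) / len(samples))
```

Raising ETA weights delay more heavily, so the rule forwards sooner at the cost of smaller per-hop progress, which is the trade-off the abstract describes.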

Relevance:

80.00%

Publisher:

Abstract:

The steady-state throughput performance of distributed applications deployed in switched networks in the presence of end-system bottlenecks is studied in this paper. The effect of various limitations at an end system is modelled as an equivalent transmission capacity limitation. A class of distributed applications is characterised by a static traffic distribution matrix that determines the communication between the various components of the application. It is found that uniqueness of the steady-state throughputs depends only on the traffic distribution matrix and that some applications (e.g., broadcast applications) can yield non-unique values for the steady-state component throughputs. For a given switch capacity, with traffic distributions that yield fair, unique throughputs, the trade-off between the end-system capacity and the number of application components is brought out. With a proposed distributed rate control, it is shown that a unique solution can be obtained for certain traffic distributions for which it is otherwise impossible. Also, by proper selection of the rate control parameters, various throughput performance objectives can be realised.
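A minimal sketch of the kind of computation involved, under an assumed model that is not the paper's exact formulation: component i sends a fraction T[i, j] of its traffic to component j, each end system caps its combined send and receive rate at an equivalent capacity C, and a fixed-point iteration scales the send rates down until they fit.

```python
import numpy as np

# Assumed toy model: T[i, j] is the fraction of component i's traffic
# destined for component j; C caps send + receive rate per end system.
T = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])
C = 10.0  # equivalent end-system transmission capacity

def steady_state_throughputs(T, C, iters=200):
    """Fixed-point iteration: start every component at the cap and
    scale each send rate down until send + receive load fits in C."""
    x = np.full(T.shape[0], C)
    for _ in range(iters):
        recv = T.T @ x                                   # arriving traffic
        scale = np.minimum(1.0, C / np.maximum(x + recv, 1e-12))
        x = x * scale
    return x

print(steady_state_throughputs(T, C))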

Relevance:

80.00%

Publisher:

Abstract:

We study the trade-off between delivery delay and energy consumption in delay-tolerant mobile wireless networks that use two-hop relaying. The source may not have perfect knowledge of the delivery status at every instant. We formulate the problem as a stochastic control problem with partial information and study structural properties of the optimal policy. We also propose a simple suboptimal policy. We then compare the performance of the suboptimal policy against that of the optimal control with perfect information, which bounds the performance achievable by the proposed policy with partial information. Several other related open-loop policies are also compared against these bounds.
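The flavour of this delay-energy trade-off can be seen in a toy event-driven simulation of two-hop relaying. Here meetings are Poisson, each copy-holder independently meets the destination at a fixed rate, and a simple open-loop policy hands out copies only during an initial spraying window; all rates and the policy itself are illustrative assumptions, not the paper's model or its proposed policy.

```python
import random

# Illustrative rates and policy; not the paper's model or its policy.
LAMBDA_RELAY = 1.0  # rate at which the source meets fresh relays
LAMBDA_DEST = 0.2   # rate at which each copy-holder meets the destination

def simulate(policy, rng):
    """Event-driven run until delivery. policy(t, n) decides whether
    the source copies to a relay met at time t given n existing copies."""
    t, n = 0.0, 1  # the source itself always holds one copy
    while True:
        dt_meet = rng.expovariate(LAMBDA_RELAY)
        dt_deliver = rng.expovariate(n * LAMBDA_DEST)
        if dt_deliver < dt_meet:        # a copy reaches the destination
            return t + dt_deliver, n    # (delay, energy ~ copies made)
        t += dt_meet                    # memorylessness lets us resample
        if policy(t, n):
            n += 1

def spray_until(t_stop):
    """Open-loop policy: copy to every relay met before t_stop."""
    return lambda t, n: t < t_stop

rng = random.Random(1)
for t_stop in (0.0, 2.0, 5.0):
    runs = [simulate(spray_until(t_stop), rng) for _ in range(5000)]
    print(f"window {t_stop}: mean delay "
          f"{sum(r[0] for r in runs) / len(runs):.2f}, mean copies "
          f"{sum(r[1] for r in runs) / len(runs):.2f}")
```

Widening the spraying window lowers mean delay but raises mean energy, tracing out the trade-off curve.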

Relevance:

80.00%

Publisher:

Abstract:

Motion estimation is one of the most power-hungry operations in video coding. While optimal search methods (e.g., full search) give the best quality, non-optimal methods are often used in order to reduce cost and power. Various algorithms are used in practice that trade off quality against complexity. Global elimination is an algorithm based on pixel averaging that reduces the complexity of motion search while keeping performance close to that of full search. We propose an adaptive version of the global elimination algorithm that extracts individual macro-block features using the Hadamard transform to optimize the search. The performance achieved is close to that of the full search method and of global elimination, while operational complexity, and hence power, is reduced by 30% to 45% compared to the global elimination method.
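For concreteness, here is a sketch of the pixel-averaging pre-screen that global elimination is built on: each candidate displacement is first ranked by a cheap distance on 4x4 block means, and the exact SAD is computed only for the best few survivors. This shows the base algorithm only; the paper's adaptive Hadamard-transform feature extraction is not reproduced, and the block size, search range, and KEEP parameter are illustrative.

```python
import numpy as np

BLOCK, SEARCH, SUB, KEEP = 16, 8, 4, 8  # illustrative parameters

def means_4x4(block):
    """Average each 4x4 sub-block: the cheap feature used to screen
    candidates before computing the exact SAD."""
    return block.reshape(SUB, BLOCK // SUB, SUB, BLOCK // SUB).mean(axis=(1, 3))

def motion_search(cur, ref, bx, by):
    target = cur[by:by + BLOCK, bx:bx + BLOCK]
    tfeat = means_4x4(target)
    cands = []
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y + BLOCK <= ref.shape[0] \
                    and x + BLOCK <= ref.shape[1]:
                feat = means_4x4(ref[y:y + BLOCK, x:x + BLOCK])
                cands.append((np.abs(feat - tfeat).sum(), dy, dx))
    cands.sort(key=lambda c: c[0])  # cheap pre-screen ranks candidates

    def sad(c):                     # exact SAD, survivors only
        y, x = by + c[1], bx + c[2]
        return np.abs(ref[y:y + BLOCK, x:x + BLOCK].astype(int)
                      - target.astype(int)).sum()

    best = min(cands[:KEEP], key=sad)
    return best[1], best[2]

cur = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, (2, -3), axis=(0, 1))  # plant a known shift
print(motion_search(cur, ref, 24, 24))    # expect roughly (2, -3)
```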

Relevance:

80.00%

Publisher:

Abstract:

Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, the storage requirements of the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable flow-insensitive, context-sensitive, inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointer and object and between pointer and pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only leads to reduced precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open-source programs reveals that, with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision up to 99.7% on these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is not affected that often even when there is some loss of precision in the points-to representation: the NoModRef percentage is within 2% of the exact analysis while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it makes it possible to trade off precision for the memory usage of the analysis.
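A stripped-down sketch of the underlying idea: store (pointer, object) pairs in a Bloom filter, where hash collisions can only add spurious may-points-to facts, never remove real ones, so the analysis stays sound. This is a plain single Bloom filter for illustration; the paper's specially designed multi-dimensional structure and its probabilistic precision bound are not reproduced, and the sizes here are arbitrary.

```python
import hashlib

class BloomPointsTo:
    """Approximate may-points-to set as a Bloom filter over
    (pointer, object) pairs. False positives only add spurious
    points-to facts, so a may-analysis stays sound, just less precise."""

    def __init__(self, bits=1 << 16, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.bitset = bytearray(bits // 8)

    def _positions(self, ptr, obj):
        for i in range(self.hashes):
            h = hashlib.blake2b(f"{ptr}|{obj}|{i}".encode(),
                                digest_size=8).digest()
            yield int.from_bytes(h, "little") % self.bits

    def add(self, ptr, obj):
        for p in self._positions(ptr, obj):
            self.bitset[p // 8] |= 1 << (p % 8)

    def may_point_to(self, ptr, obj):
        return all(self.bitset[p // 8] & (1 << (p % 8))
                   for p in self._positions(ptr, obj))

pts = BloomPointsTo()
pts.add("p", "heap_obj_1")
print(pts.may_point_to("p", "heap_obj_1"))  # True
print(pts.may_point_to("q", "heap_obj_1"))  # False, with high probability
```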

Relevance:

80.00%

Publisher:

Abstract:

The memory subsystem is a major contributor to the performance, power, and area of the complex SoCs used in feature-rich multimedia products. Hence, the memory architecture of an embedded DSP is complex and usually custom designed, with multiple banks of single-ported or dual-ported on-chip scratch-pad memory and multiple banks of off-chip memory. Building software for such large, complex memories, with many of the software components being individually optimized software IPs, is a big challenge. In order to obtain good performance and a reduction in memory stalls, the data buffers of the application need to be placed carefully in the different types of memory. In this paper we present a unified framework (MODLEX) that combines different data layout optimizations to address complex DSP memory architectures. Our method models the data layout problem as a multi-objective genetic algorithm (GA), with performance and power as the objectives, and presents a set of solution points that is attractive from a platform-design viewpoint. While most of the work in the literature assumes that performance and power are non-conflicting objectives, our work demonstrates that a significant trade-off (up to 70%) is possible between power and performance.
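A toy version of the multi-objective search, with an assumed cost model standing in for real DSP memory timings and power numbers: each buffer is assigned to one of three memory types, a tiny GA mutates placements, and the Pareto front of the surviving population exposes the performance-power trade-off.

```python
import random

rng = random.Random(0)

# Assumed cost model: (cycles, energy) per access for three memory
# types; real DSP timings and power numbers are not reproduced here.
MEMS = [(1, 3.0),    # fast on-chip SPM bank: cheap cycles, costly energy
        (2, 2.0),    # slower SPM bank
        (10, 1.0)]   # off-chip memory: slow but low energy per access
ACCESSES = [rng.randint(1, 100) for _ in range(12)]  # per-buffer counts

def evaluate(layout):
    """Objectives to minimise: total cycles and total energy."""
    cyc = sum(a * MEMS[m][0] for a, m in zip(ACCESSES, layout))
    eng = sum(a * MEMS[m][1] for a, m in zip(ACCESSES, layout))
    return cyc, eng

def mutate(layout):
    return [rng.randrange(len(MEMS)) if rng.random() < 0.1 else g
            for g in layout]

def pareto(points):
    return sorted(p for p in points
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in points))

pop = [[rng.randrange(len(MEMS)) for _ in ACCESSES] for _ in range(60)]
for _ in range(300):                          # GA loop: mutate and cull
    pop += [mutate(rng.choice(pop)) for _ in range(20)]
    pop.sort(key=lambda l: sum(evaluate(l)))  # crude scalarised pressure
    pop = pop[:60]
for cyc, eng in pareto({evaluate(l) for l in pop}):
    print(f"cycles={cyc:5d}  energy={eng:7.1f}")
```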

Relevance:

80.00%

Publisher:

Abstract:

Multilevel inverters are an attractive solution in medium-voltage and high-power applications. They can, however, also be a better solution than two-level inverters in the low-power range if MOSFETs switching on the order of 100 kHz are used as the devices. The clamping diodes in diode-clamped multilevel inverters play an important role in determining efficiency. The power loss introduced by the reverse recovery of the MOSFET body diode prohibits the use of MOSFETs in hard-switched inverter legs. A technique for avoiding the reverse-recovery loss of the MOSFET body diode in a three-level neutral-point-clamped inverter is suggested. The multilevel inverter topology enables operation at high switching frequency without sacrificing efficiency, and high-frequency operation reduces the output filter requirement, which in turn helps reduce the size of the inverter. This study elaborates a trade-off analysis to quantify the suitability of multilevel inverters for low-power applications. The advantages of using a MOSFET-based three-level diode-clamped inverter for PM motor drives and UPS systems are discussed.

Relevance:

80.00%

Publisher:

Abstract:

Energy use in developing countries is heterogeneous across households. Present-day global energy models are mostly too aggregate to account for this heterogeneity. Here, a bottom-up model for residential energy use that starts from key dynamic concepts on energy use in developing countries is presented and applied to India. Energy use and fuel choice are determined for five end-use functions (cooking, water heating, space heating, lighting, and appliances) and for five income quintiles in rural and urban areas. The paper specifically explores the consequences of different assumptions on income distribution and rural electrification for residential-sector energy use and CO₂ emissions, finding that the results are clearly sensitive to variations in these parameters. As a result of population and economic growth, total Indian residential energy use is expected to increase by around 65-75% by 2050 compared to 2005, but residential carbon emissions may increase by up to 9-10 times the 2005 level. While a more equal income distribution and rural electrification enhance the transition to commercial fuels and reduce poverty, there is a trade-off in terms of higher CO₂ emissions via increased electricity use.
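The bottom-up structure described here, useful-energy demand per end-use function and income quintile divided by a conversion efficiency that shifts with income and electrification, can be sketched as follows. Every functional form and number below is an illustrative assumption; the paper's Indian calibration is not used.

```python
# Illustrative bottom-up aggregation; all numbers are assumptions.
END_USES = {  # assumed useful-energy demand at quintile 1 (GJ/hh/yr)
    "cooking": 3.0, "water_heating": 1.5, "space_heating": 1.0,
    "lighting": 0.5, "appliances": 0.8,
}

def useful_demand(end_use, quintile, rural):
    """Demand rises with income quintile; rural demand assumed lower."""
    return END_USES[end_use] * quintile ** 0.3 * (0.8 if rural else 1.0)

def conversion_efficiency(quintile, electrified):
    """Fuel switching: richer and electrified households use commercial
    fuels with higher conversion efficiency."""
    base = 0.15 + 0.10 * quintile       # traditional -> commercial fuels
    return base * (1.5 if electrified else 1.0)

def household_final_energy(quintile, rural, electrified):
    return sum(useful_demand(u, quintile, rural) /
               conversion_efficiency(quintile, electrified)
               for u in END_USES)

for q in range(1, 6):
    e = household_final_energy(q, rural=True, electrified=(q > 2))
    print(f"rural quintile {q}: {e:.1f} GJ final energy per household")
```

The same structure explains the abstract's trade-off: electrification raises conversion efficiency and cuts final energy, yet shifts demand onto electricity whose generation can raise CO₂ emissions.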

Relevance:

80.00%

Publisher:

Abstract:

A series of novel organic-inorganic hybrid membranes has been prepared employing Nafion and acid-functionalized meso-structured molecular sieves (MMS) with varying structures and surface areas. Acid-functionalized silica nanopowder with a surface area of 60 m²/g, silica meso-structured cellular foam (MSU-F) with a surface area of 470 m²/g, and a silica meso-structured hexagonal framework (MCM-41) with a surface area of 900 m²/g have been employed as potential filler materials to form hybrid membranes within a Nafion framework. The structural behavior, water uptake, proton conductivity, and methanol permeability of these hybrid membranes have been investigated. DMFCs employing Nafion-silica MSU-F and Nafion-silica MCM-41 hybrid membranes deliver peak power densities of 127 mW/cm² and 100 mW/cm², respectively, while a peak power density of only 48 mW/cm² is obtained with a DMFC employing a pristine recast Nafion membrane under identical operating conditions. The aforesaid characteristics of the hybrid membranes can be attributed to the presence of pendant sulfonic acid groups in the filler, which provide fairly continuous proton-conducting pathways between filler and matrix, facilitating proton transport without any trade-off between proton conductivity and methanol crossover.

Relevance:

80.00%

Publisher:

Abstract:

In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components (genetic circuits, biochemical cascades, and ion channels, among others) enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: an example can be found in neurons, where fluctuations in biophysical states limit the information they can encode, with some 20-60% of the total energy allocated to the brain used for signalling purposes, either via action potentials or via synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link the information-theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity-minimisation lemma.
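For orientation, the complexity term in question is the one appearing in the standard decomposition of variational free energy (a general result, quoted here independently of the paper's own derivation):

```latex
% Standard decomposition of variational free energy into
% complexity minus accuracy (general result, not paper-specific).
F[q] = \underbrace{D_{\mathrm{KL}}\bigl[q(\theta)\,\|\,p(\theta)\bigr]}_{\text{complexity}}
     \;-\; \underbrace{\mathbb{E}_{q(\theta)}\bigl[\ln p(y \mid \theta)\bigr]}_{\text{accuracy}}
```

Minimising F maximises accuracy while penalising complexity, the divergence of posterior beliefs from priors, and it is this complexity term (the cost of updating beliefs) that the abstract's complexity-minimisation lemma connects to thermodynamic cost.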

Relevance:

80.00%

Publisher:

Abstract:

Identifying the determinants of neuronal energy consumption and their relationship to information coding is critical to understanding neuronal function and evolution. Three of the main determinants are cell size, ion channel density, and stimulus statistics. Here we investigate their impact on neuronal energy consumption and information coding by comparing single-compartment spiking neuron models of different sizes with different densities of stochastic voltage-gated Na⁺ and K⁺ channels and different statistics of synaptic inputs. The largest compartments have the highest information rates but the lowest energy efficiency for a given voltage-gated ion channel density, and the highest signaling efficiency (bits spike⁻¹) for a given firing rate. For a given cell size, our models reveal that the ion channel density that maximizes energy efficiency is lower than that maximizing information rate. Low rates of small synaptic inputs improve energy efficiency, but the highest information rates occur with higher rates and larger inputs. These relationships produce a law of diminishing returns that penalizes costly excess information-coding capacity, promoting the reduction of cell size, channel density, and input stimuli to the minimum possible and suggesting that the trade-off between energy and information has influenced all aspects of neuronal anatomy and physiology.
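The channel-density result can be illustrated with a toy saturating-information, linear-cost model; the functional forms and constants below are assumptions for illustration, not the paper's biophysical models.

```python
import numpy as np

# Toy model: information rate saturates with channel density while
# metabolic cost keeps growing roughly linearly (assumed forms).
N = np.linspace(1, 500, 500)   # voltage-gated channel density (arb.)
info = 200 * N / (N + 100)     # bits/s: diminishing returns with N
energy = 0.5 + 0.02 * N        # energy/s (arb.): grows with N
efficiency = info / energy     # bits per unit energy

print("density maximising info rate :", N[np.argmax(info)])
print("density maximising efficiency:", N[np.argmax(efficiency)])
```

Because information saturates while cost keeps growing, bits per unit energy peaks well below the density that maximises the information rate (N = 50 versus N = 500 in this toy), mirroring the law of diminishing returns described above.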

Relevance:

80.00%

Publisher:

Abstract:

Subtle manipulation of the mutual repulsion and polarisation effects between polar and polarisable chromophores forced into close proximity allows a major (100%) enhancement of the first hyperpolarisability to be achieved together with increased transparency, breaking the well-known nonlinearity-transparency trade-off paradigm.

Relevance:

80.00%

Publisher:

Abstract:

Content Distribution Networks (CDNs) are widely used to distribute data to large numbers of users. Traditionally, content is replicated among a number of surrogate servers, leading to high operational costs. In this context, Peer-to-Peer (P2P) CDNs have emerged as a viable alternative. An issue of concern in P2P networks is that of free riders, i.e., selfish peers who download files and leave without uploading anything in return. Free riding must be discouraged. In this paper, we propose a criterion, the Give-and-Take (G&T) criterion, that disallows free riders. Incorporating the G&T criterion in our model, we study a problem that arises naturally when a new peer enters the system: viz., the problem of downloading a 'universe' of segments, scattered among other peers, at low cost. We analyse this hard problem and characterize the optimal download cost under the G&T criterion. We propose an optimal algorithm, and provide a sub-optimal algorithm that is nearly optimal but runs much more quickly, providing an attractive balance between running time and performance. Finally, we compare the performance of our algorithms with that of a few existing P2P downloading strategies in use. We also study, for various existing and proposed algorithms, the computation time needed to prescribe the initial segment- and peer-selection strategy for a newly arrived peer, and quantify the cost versus computation-time trade-offs.
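To give a flavour of the segment-gathering problem, here is a greedy weighted-set-cover sketch for collecting a universe of segments from peers at low contact cost. It ignores the G&T exchange constraint and the paper's actual cost model; the peer inventories and costs are made-up inputs.

```python
# Greedy set-cover sketch for gathering a universe of segments from
# peers at low cost (illustrative only; the Give-and-Take constraint
# and the paper's exact cost model are not encoded here).
UNIVERSE = set(range(10))
PEERS = {  # peer -> (segments held, contact cost): assumed inputs
    "A": ({0, 1, 2, 3}, 4.0),
    "B": ({2, 3, 4, 5, 6}, 5.0),
    "C": ({6, 7, 8, 9}, 4.0),
    "D": ({0, 5, 9}, 2.0),
}

def greedy_download(universe, peers):
    """Repeatedly pick the peer with the best new-segments-per-cost
    ratio: the classic logarithmic approximation for weighted set
    cover."""
    missing, plan, total = set(universe), [], 0.0
    while missing:
        name, (segs, cost) = max(
            peers.items(),
            key=lambda kv: len(kv[1][0] & missing) / kv[1][1])
        got = segs & missing
        if not got:
            raise ValueError("universe not coverable by these peers")
        missing -= got
        plan.append((name, sorted(got)))
        total += cost
    return plan, total

plan, cost = greedy_download(UNIVERSE, PEERS)
print(plan, "total cost:", cost)
```

The paper's optimal and near-optimal algorithms additionally have to respect the give-and-take exchanges, which is what makes the problem hard.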

Relevance:

80.00%

Publisher:

Abstract:

A deformable mirror (DM) is an important component of an adaptive optics system. It is known that an on-axis spherical/parabolic optical component placed at an angle to the incident beam introduces defocus as well as astigmatism in the image plane. Although the former can be compensated by changing the focal-plane position, the latter cannot be removed by mere optical realignment. Since the DM is to be used to compensate a turbulence-induced curvature term in addition to other aberrations, it is necessary to determine the aberrations induced by such an optical element (a curved DM surface) when placed at a non-zero angle of incidence in the optical path. To this effect, we estimate to first order the aberrations introduced by a DM as a function of the incidence angle and the deformation of the DM surface. We record images using a simple setup in which the incident beam is reflected by a 37-channel micro-machined membrane deformable mirror for various angles of incidence. Astigmatism is observed to be the dominant aberration, as determined by measuring the difference between the tangential and sagittal focal planes. We justify our results on the basis of theoretical simulations and discuss the feasibility of using such a system for adaptive optics, considering a trade-off between wavefront correction and deformation-induced astigmatism.
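The tangential-sagittal focal split measured here follows the classic oblique-incidence result for a spherical mirror of radius R at incidence angle θ: f_t = (R/2) cos θ and f_s = R/(2 cos θ). A quick numerical sketch (the 2 m radius is an assumed value, not the experiment's):

```python
import math

def focal_planes(R_m, theta_deg):
    """Tangential and sagittal focal lengths of a spherical mirror
    used off-axis (classic result: f_t = R cos(theta)/2,
    f_s = R / (2 cos(theta)))."""
    th = math.radians(theta_deg)
    return 0.5 * R_m * math.cos(th), 0.5 * R_m / math.cos(th)

# Assumed deformed-DM radius of curvature: 2 m (illustrative).
for deg in (0, 5, 10, 20, 30):
    f_t, f_s = focal_planes(2.0, deg)
    print(f"{deg:2d} deg: f_t={f_t:.4f} m, f_s={f_s:.4f} m, "
          f"astigmatic separation={f_s - f_t:.4f} m")
```

The separation grows as (R/2) sin θ tan θ, which is why astigmatism dominates once the curved DM surface is used at a non-zero angle of incidence.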